Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Man Hei Connie Siu

The speakers at the discussion highlighted the persistent disparities in access to care, despite the progress made in digital health. They argued that digital health has not necessarily improved health equity and mentioned two key factors contributing to this issue: the digital divide and low digital health literacy.

The digital divide refers to the gap between those who have access to digital technology and those who do not. This divide disproportionately affects disadvantaged communities, including low-income individuals, rural populations, and marginalized groups. As digital health relies on technology, those without access are unable to benefit from its potential advantages. This creates a further divide in healthcare, perpetuating existing health inequalities.

Low digital health literacy is another barrier to achieving health equity. Many individuals lack the necessary skills and knowledge to navigate digital health information and services effectively. This can prevent them from accessing healthcare resources, making informed decisions, and actively participating in their own care. Addressing this issue requires comprehensive frameworks and assessment tools that capture and assess various dimensions of digital health literacy. By understanding individuals’ abilities and needs in this area, tailored interventions can be developed to enhance digital health literacy and bridge the gap.

Policy solutions were proposed as a means to bridge the digital divide and ensure that digital health truly advances healthcare outcomes for all. It was emphasised that these solutions should be inclusive and consider the unique needs and challenges faced by marginalized communities. By actively addressing these disparities, policymakers can promote equity and ensure that the benefits of digital health are accessible to all.

Throughout the discussion, the importance of promoting inclusivity and equitable access to digital health resources was stressed. It was highlighted that this not only requires action at the policy level but also requires advocacy for strategies that effectively address the unique needs of marginalized communities. By prioritising inclusivity and equity, digital health initiatives can contribute to reducing health disparities and improving overall healthcare outcomes.

In conclusion, while progress has been made in digital health, disparities in access to care persist. The digital divide and low digital health literacy contribute to these disparities, hindering efforts to improve health equity. Policy solutions, comprehensive frameworks, and tailored strategies are needed to bridge this divide, enhance digital health literacy, and promote equitable access to digital health resources for all individuals and communities. By addressing these issues, digital health has the potential to play a significant role in advancing healthcare outcomes and reducing health inequalities.

Audience

The current state of digital health needs to be improved in order to effectively handle future pandemics, according to experts. With the potential for another pandemic like COVID-19, it is crucial to address the shortcomings of the existing digital health infrastructure. The main concerns revolve around overcrowded healthcare facilities during pandemics, which can lead to increased transmission rates and overwhelmed healthcare systems. To mitigate these challenges, it is essential for individuals to receive accurate and timely medical advice and treatment remotely.

There is a growing need to provide accessible treatment and advice without physical visits, especially for vulnerable populations such as the elderly or those with underlying health conditions, who may face higher risks during a pandemic. The reliance on telemedicine and digital healthcare services has become necessary to ensure their safety and well-being.

The argument for improving digital healthcare in pandemic response is compelling. The current system falls short of meeting the demands and implications of a crisis like COVID-19. Enhancing virtual consultations, remote monitoring, and telehealth services would allow individuals to access medical advice, receive prescriptions, and monitor their health from the comfort of their homes.

Additionally, digital health should aim to provide consistent and accurate medical advice and treatment. The decentralization of healthcare during a pandemic can result in inconsistencies and disparities in the quality of care received by individuals in different locations. By standardizing and improving digital healthcare services, individuals can have confidence in the advice and treatment they receive, regardless of where they are located.

In conclusion, the current state of digital health needs to be improved in order to effectively handle future pandemics. The concerns over overcrowded healthcare facilities, the need for individuals to receive accurate and remote medical advice and treatment, and the importance of providing accessible healthcare for vulnerable populations all highlight the urgency of enhancing digital healthcare services. By integrating telemedicine and digital health into the healthcare system, it is possible to enhance access, ensure consistent care, and improve overall pandemic response capabilities.

Geralyn Miller

The analysis examines the perspectives of various speakers on topics related to health, technology, and social determinants. One key point is the importance of addressing social determinants of health to improve health outcomes. It is emphasized that social determinants, including economic policy, development agendas, and social policies, account for an estimated 30 to 55% of health outcomes. The argument put forward is that tackling these determinants is crucial for achieving better health outcomes.

Another important theme is the use of data and technology to understand and address health disparities. The Microsoft AI for Good team has developed a health equity dashboard that provides insights into disparities and outcomes. Partnerships involving Microsoft's AI for Good Research Lab and Humanitarian Action Program, such as the collaboration with the Humanitarian OpenStreetMap Team using Bing Maps, are highlighted as a way to map areas vulnerable to natural disaster and poverty. The argument is that data and technology play a crucial role in addressing health disparities.
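The kind of geographic slicing such a dashboard performs can be sketched in a few lines of pandas. Everything below — the dataset, column names, and figures — is an invented placeholder for illustration, not Microsoft's actual data or tooling.

```python
# Hypothetical sketch of the kind of analysis a health equity
# dashboard performs: grouping a public health dataset by
# geography type and comparing an outcome such as life expectancy.
# All counties, columns, and values here are invented.
import pandas as pd

counties = pd.DataFrame({
    "county":          ["A", "B", "C", "D", "E", "F"],
    "geography":       ["rural", "urban", "suburban",
                        "rural", "urban", "suburban"],
    "life_expectancy": [74.1, 79.3, 78.0, 72.9, 80.1, 77.6],
})

# Slice the data by population type and summarize the outcome.
summary = (counties
           .groupby("geography")["life_expectancy"]
           .agg(["mean", "min", "max"])
           .round(1))
print(summary)

# The gap between the best- and worst-off groups is one
# simple disparity measure a dashboard might surface.
gap = summary["mean"].max() - summary["mean"].min()
print(f"mean life-expectancy gap between groups: {gap:.1f} years")
```

The same pattern extends to any public dataset with a geographic column: join, group, and compare outcomes across rural, suburban, and urban slices.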

The analysis also emphasizes the impact of partnerships on social determinants. LinkedIn’s Data for Impact program is mentioned as an example of a partnership that provides professional data to organizations like the World Bank Group. LinkedIn’s data has informed a $1.7 billion World Bank strategy for Argentina. The argument is that partnerships with various entities can have a significant impact on social determinants.

Additionally, the promotion of digital skilling is highlighted as a way to contribute to health equity. Microsoft’s Learn program offers free online learning resources, including role-based learning paths for AI engineers and data scientists. The argument is that digital skilling is important for advancing health equity.

Microsoft’s responsible AI initiatives are also highlighted, emphasizing their focus on fairness, transparency, accountability, reliability, privacy, security, and inclusion. It is crucial to ensure that AI systems and their outputs are understood and accountable to stakeholders, including patients and clinicians.

Furthermore, the analysis advocates for a policy of accountability in AI development, ensuring that products are safe before being released to the public. Brad Smith, Microsoft’s President, has testified in the US Senate Judiciary Subcommittee, stressing the importance of accountability and safe AI deployment. The argument is that technology creators should take responsibility for the impact of their technology.

The value of cross-sector partnerships is also highlighted, particularly during the pandemic. Different types of partnerships, including government-sponsored consortiums, privately funded consortiums, and community-driven groups, have played a crucial role. The argument is that cross-sector partnerships are invaluable in addressing health crises.

Moreover, the analysis recognizes the importance of standards work during the pandemic. The use of smart health cards to represent vaccine status, the development of smart health links encoding minimal clinical information, and the efforts of the International Patient Summary Group in standardizing clinical information for emergency services are underscored. The argument is that the momentum around this standards work should be maintained and expanded.

The analysis also acknowledges the challenge of keeping up with the pace of innovation. Additionally, it emphasizes the importance of gatherings and dialogue among people with similar interests for advancing the field, and advocates for the integration of technological training into the academic system.

In conclusion, the analysis highlights several key points relating to health, technology, and social determinants. It underscores the importance of addressing social determinants of health, utilizing data and technology to understand and address disparities, forming partnerships, promoting digital skilling, adhering to responsible AI initiatives, ensuring accountability in AI development, valuing cross-sector partnerships, acknowledging achievements in standards work during the pandemic, and addressing the challenges of innovation. It also recognizes the significance of gatherings and dialogue and the integration of technological training into the academic system.

Debbie Rogers

The analysis highlights the potential of mobile technology in Sub-Saharan Africa to improve health literacy, personal behavior change, and access to health services. In 2007, a higher proportion of people in Africa had access to mobile technology than in the so-called global north or western countries, demonstrating the widespread availability of mobile technology in the region. REACH’s maternal health program in South Africa has reached 4.5 million mothers, representing 60% of the mothers who have given birth in the public health system over the last eight years. The program has had several impacts, including improved uptake of breastfeeding and family planning.

Low-tech solutions, such as SMS and WhatsApp, can also empower individuals in their health. These low-tech solutions are highly scalable and can be designed with scale and context in mind. Given the ubiquitous nature of mobile technology in Africa, massive scale reach is possible, thereby increasing access to health information and services.
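Part of why such low-tech channels scale is that a health-information service can be driven by nothing more than plain keyword matching, so it works on any phone and over any channel. The sketch below illustrates the pattern; the menu, keywords, and messages are invented for illustration and are not Reach Digital Health's actual service.

```python
# Minimal sketch of a keyword-driven SMS/WhatsApp-style health line.
# Incoming texts are matched against simple keywords, so no smartphone,
# app, or data plan is required on the user's side.
# All keywords and responses are hypothetical placeholders.

RESPONSES = {
    "1": "Maternal health: reply WEEK followed by your pregnancy week for tips.",
    "2": "Clinic finder: reply with your area name to find the nearest clinic.",
    "3": "Family planning: reply FP for information on available methods.",
}

MENU = ("Welcome to the health line. Reply:\n"
        "1 - Maternal health\n"
        "2 - Find a clinic\n"
        "3 - Family planning")

def handle_message(text: str) -> str:
    """Return the reply for one incoming text message."""
    keyword = text.strip().lower()
    if keyword in ("hi", "hello", "menu", "start"):
        return MENU
    return RESPONSES.get(keyword,
                         "Sorry, we didn't understand. Reply MENU for options.")

print(handle_message("Hi"))
print(handle_message("2"))
```

Because each interaction is a single short text, the same logic can sit behind an SMS gateway, a WhatsApp business account, or a USSD session without modification.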

Additionally, designing digital health solutions with a human-centric approach and considering the larger system can enhance health literacy. By placing the human at the center and acknowledging their existence within a larger system, health literacy can be improved without widening the technology-related divide. Using appropriate language and literacy levels makes digital health services more user-friendly. Furthermore, making these services accessible for free or at a reduced cost decreases the barriers to access.

Ignoring the wider context and blindly implementing digital solutions can inadvertently increase the digital divide. It is important to develop a contextual understanding of how these solutions affect the existing system; otherwise, unintended consequences can exacerbate existing inequalities.

Addressing systemic issues is crucial for improving health in Sub-Saharan Africa. Currently, Sub-Saharan Africa has 10% of the world’s population, 24% of the disease burden, and only 3% of the health workers. Simply training more health workers without addressing these systemic issues will not improve the statistics and may even worsen the situation.

Telecommunication companies can play a role in promoting health equity and bridging the digital divide. The Facebook Free Basics model, for example, provides essential information free of charge; people given this free access to data then go on to use the internet more, making them more valuable customers. Collaborating with telecom companies to reduce message costs further enhances digital health access. As the reach of large-scale programs increases, the costs for telecom companies are reduced, benefiting both the companies and users’ access to health information.

Digital health solutions should work in harmony with the existing health system. Creating a digital health solution should not overburden the system, and feedback mechanisms are crucial to understand the impact of these solutions on the overall system.

Biases in creating digital health services can be reduced by having a diverse team. The biases that exist in these services are often a result of the people building them not being the ones using them. Having a team that is diverse in terms of gender and race can address these biases and ensure that digital health solutions are more inclusive and equitable.

During the COVID-19 pandemic, digital health played a crucial role in reducing the burden on healthcare professionals and empowering patients with information. Large-scale networks such as Facebook, WhatsApp, and SMS platforms provided quick and reliable information to people, proving the effectiveness and importance of digital health in times of crisis.

Long-term investment in digital health infrastructure is crucial for preparedness. Digital health platforms that served needs during the pandemic risk disappearing unless they are maintained for future use. Another pandemic is inevitable, so preparation is necessary to ensure a timely and effective response.

Technology can be utilized as a great enabler to decrease health inequalities and improve digital literacy. By leveraging technology, health services can reach marginalized populations and bridge the gap in access to information and services. Digital health is a mature field with the potential for large-scale implementation, as evidenced by numerous case studies of successful implementations.

There is excitement and a positive view towards the role of youth in the evolution of the digital health field. Engaging youth and integrating their perspectives can lead to innovative solutions and advancements in the field. This aligns with the broader goals of SDG 3 (Good Health and Well-being) and SDG 4 (Quality Education).

In conclusion, mobile technology, low-tech solutions, and digital health have the potential to significantly improve health outcomes in Sub-Saharan Africa. Designing solutions with a human-centric approach, addressing systemic issues, collaborating with telecommunications companies, and considering diversity can enhance the effectiveness and inclusivity of digital health services. The COVID-19 pandemic has further emphasized the importance of digital health in reducing burdens on healthcare professionals and empowering individuals with information. Long-term investment in digital health infrastructure and harnessing the potential of technology are vital for achieving health equity, reducing inequalities, and improving overall well-being.

Rajendra Gupta

The analysis highlights the importance of digital health training for various stakeholders in the healthcare sector. Firstly, it emphasises the need for policy makers to be adequately trained in digital health. The International Society of Telemedicine and E-Health, with memberships in 117 countries, is an influential body in promoting digital health training. Additionally, the World Health Organization (WHO) has established a capacity building department in 2019 to support policy makers in this area.

Moreover, it is essential for frontline health workers to receive affordable and accessible digital health training. In India, ASHA workers, who are the first responders in healthcare, will be provided with affordable £1 training in the next two months. This will enable them to effectively utilise digital health tools and technologies in their work.

Patients also need to be trained to use digital health technology effectively. They should be educated on how to open an app, use it, and understand privacy and security measures. The International Patients Union is actively involved in training patients to use digital technology, ensuring they can benefit from its potential in managing their health.

The analysis also highlights the role of governments in addressing health equity and the digital divide, particularly in low- and middle-income countries (LMICs). Governments, such as India, have launched initiatives to provide digital healthcare access to underprivileged populations. For instance, India offers free telemedicine services through 160,000 health and wellness centres across the country. Additionally, the government has rolled out 460 million health IDs with plans for 1 billion under the digital health mission. These efforts help bridge the gap in healthcare access and promote health equity.

A well-crafted policy and substantial government investment are deemed essential for the successful implementation of digital health programs. The Indian government, for instance, has established a national digital health mission and is investing in advanced systems like artificial intelligence and natural language processing to enhance telemedicine services. They are also rolling out the Ayushman Bharat Health Account number (ABHA number) to further support digital health initiatives.

Digital health is seen as a promising solution for health inequity and has the potential to bridge the gap between urban and rural healthcare service delivery. Technologies such as conversational AI and chatbots can offer basic health consultations for routine problems, while the creation of 460 million health records in India demonstrates the progress being made in digitising health information.

The analysis also acknowledges the role of technology during the COVID-19 pandemic. It highlights fast-track vaccine development through global collaborations and the use of artificial intelligence for repurposed drug use. The delivery of 2.2 billion vaccinations digitally through a COVID App further demonstrates the readiness of technology in responding to the pandemic.

The momentum of using technology in the health sector must be maintained, with government incentives and flexibility in telehealth during the pandemic playing a crucial role. Additionally, digital literacy is important for anyone in the health sector, with initiatives such as the Digital Health Academy collaborating with Google to create developers for health. Courses on robotics, artificial intelligence, and digital health are being developed to ensure that individuals at all levels of the healthcare sector possess the necessary skills.

It is further highlighted that those who do not understand digital health risk becoming professionally irrelevant. Therefore, it is crucial for healthcare professionals, including doctors, to stay updated on digital health developments to better serve informed patients.

The analysis points out that scalability is crucial in healthcare. This means that the ability to expand digital health initiatives and ensure they are accessible to all is of utmost importance in order to achieve the desired impact in improving healthcare delivery.

Overall, the analysis underscores the importance of digital health training for policymakers, frontline health workers, patients, and the broader healthcare sector. It highlights the role of various stakeholders, including private organisations, civil society, and governments, in promoting digital health literacy, addressing health equity, and bridging the digital divide. The analysis also highlights the potential of technology in managing healthcare, particularly during the COVID-19 pandemic. Moreover, it emphasises the need for digital literacy and scalability in order to maximise the benefits of digital health in the healthcare sector.

Yawri Carr

The analysis delves into several key topics related to digital health and technology. One of the main focuses is the Responsible Research and Innovation (RRI) Framework, which aims to harmonise technological progress with ethical principles. The framework advocates for policies that preserve digital rights and establish mechanisms of accountability. This is seen as crucial in guiding the development of digital health technologies, ensuring that they are ethically sound and aligned with societal values.

Ethical considerations in the development of digital health technologies are explored further. It is argued that in competitive environments, where efficiency, speed, and profit are prioritised, ethical concerns can be compromised. This tension between ethics and industry objectives highlights the need for a careful balance between technological advancements and ethical principles, ensuring that technology is developed in a responsible and sustainable manner.

The involvement of youth in digital health is highlighted as a significant factor in bridging the digital divide and enhancing digital health literacy. Youths can play a crucial role in the research process, ensuring that interventions are culturally sensitive and address the specific needs of their communities. Innovation challenges and mentorship programmes are seen as powerful tools for guiding youth in the development of their ideas. Additionally, digital health literacy programmes can be initiated to equip young individuals with the necessary skills and knowledge to navigate the digital health landscape effectively.

The analysis also emphasises the importance of youth participation in internet governance policies. By actively engaging in discussions and decision-making processes, young advocates can ensure equitable access to digital health resources. It is argued that youth coalitions can amplify their collective voice on topics such as digital health equity, ultimately driving positive change and promoting inclusivity in healthcare.

Innovation hubs are suggested as a collaborative platform where young innovators, healthcare professionals, and policymakers can come together to create solutions for digital health challenges. The involvement of supportive companies and resources can aid in filling innovation gaps and promoting meaningful advancements in the field.

During a pandemic, telemedicine and the implementation of robots are highlighted as crucial. Telemedicine enables the delivery of remote healthcare, minimising contact and reducing the risk of contagion for healthcare workers. Robots, on the other hand, can perform tasks considered dangerous or dirty, thus protecting the health of patients and medical professionals.

The analysis also supports the initiative of Open Science, emphasising the importance of open access to data and research. Costa Rica’s proposal for an open science initiative to the World Health Organization (WHO) is highlighted as a positive step towards facilitating collaboration and partnerships for the advancement of digital health technologies.

The role of technology in emergency situations is underscored in the analysis. It is argued that technology can help protect healthcare professionals and patients during emergencies, providing essential support and resources to mitigate risks and ensure effective healthcare delivery.

Finally, the analysis recognises the value of ethicists’ work and emphasises the importance of their active involvement in discussions about responsible AI. Ethicists are seen as vital in ensuring that the development and deployment of AI technologies align with ethical considerations and respect for human values.

In conclusion, the analysis provides a comprehensive examination of various aspects of digital health and technology. It highlights the importance of ethical considerations, youth engagement, innovation hubs, and the role of robots and telemedicine. The insights gained from this analysis further emphasise the need for responsible and inclusive development of digital health technologies, while recognising the value of collaboration, inclusivity, and ethics in driving positive advancements in the field.

Session transcript

Man Hei Connie Siu:
So, hi, everyone, both on-site and online. Welcome to our workshop titled Equi-Tech-ity: Closing the Gap with Digital Health Literacy. My name is Connie. I’m a 22-year-old biomedical engineering student and also a United Nations International Telecommunication Union Generation Connect youth envoy with a passion for internet governance. So, in the next 85 minutes, we’ll be exploring how digital technologies have transformed healthcare, especially during the pandemic. However, despite progress, digital health has not necessarily improved health equity. Low digital health literacy and the digital divide are still persisting, in turn creating disparities in access to care. So, in this session, we will discuss strategies to enhance digital health literacy and identify measures to promote equitable digital health access. Our goal is to find innovative policy solutions that bridge the digital divide and ensure that digital health truly advances healthcare outcomes for all. Thank you all for joining us on this important journey and let’s get started. We have three key policy questions that will guide our discussion today. How can comprehensive frameworks and assessment tools be developed to capture and assess different dimensions of digital health literacy, ensuring holistic understanding of individuals’ abilities in navigating digital health information and services? What strategies towards health equity can be adopted to ensure digital health literacy programs effectively address unique needs and challenges faced by marginalized communities, promote inclusivity and equitable access to digital health resources? And also, how can partnerships between key stakeholders, including healthcare providers, educational institutions, technology companies, and governments be leveraged to enhance digital health literacy skills, foster collaboration and knowledge sharing to advance health equity? Our panelists will be addressing these issues today.
So if you would like to ask a question towards the panel, we will have a Q&A session at the end for on-site participants. And online participants may use the Zoom chat to type and send in your questions. And my online moderator, Valerie, will be helping me with them. So without further ado, to kick off our discussion, I would like to introduce our esteemed panelists who will share their insights on these matters. First, joining us online, we have Ms. Geralyn Miller, an innovation leader driving change in healthcare and life sciences through AI. She is a senior director at Microsoft in product incubations, Microsoft Health and Life Sciences Cloud, Data and AI. And she’s also the co-founder and head of AI for Health, which is Microsoft AI for Good Research Lab. And then we have Professor Rajendra Gupta joining us on site here today, a leading public policy expert with vast experience in policymaking. And he’s been involved in major global initiatives on digital health and holds several key positions in the digital health arena. He’s also the founder and behind many pathbreaking initiatives like his Project Create and organizations working for digital health. And next we have Ms. Debbie Rogers joining us on site as well. She’s an experienced leader in the design and management of national digital mobile health programs and the CEO of Reach Digital Health, aiming to harness existing technologies to improve healthcare and create societal impact. And last but definitely not least, we have Ms. Yawri Carr joining us online. She’s an internet governance scholar, youth activist and AI advocate. And she’s also a digital youth envoy for the ITU like me and a global shaper with the World Economic Forum with her work centering on responsible AI and data science for social good. Now let’s begin section one of today’s workshop on low digital health literacy and strategies. And I would like Ms. Geralyn to take the floor first.
So what research and development initiatives, for example, including the creation of comprehensive frameworks and assessment tools, is Microsoft pursuing to address the multifaceted challenges of low digital health literacy? And additionally, can you highlight your thoughts and innovative strategies and partnerships that Microsoft is employing or supporting to enhance digital health literacy among marginalized populations with a focus on inclusivity and equitable access, especially in low income and rural areas? Ms. Geralyn, over to you.

Geralyn Miller:
Yeah, great, thanks. And thank you for inviting me today to participate in this. So the lens I’m gonna take from this is really based on something that is known as social determinants of health. So I wanna start by defining and sanity checking that social determinant of health is a non-medical factor that influences health outcomes. So this is the conditions that people are born, work and live in, and the wider set of forces that shape conditions of our daily lives, right? So this includes things like economic policy and development agendas, social norms, social policies, racism, even climate change and political systems. And from research, we know that about 30 to 55% of health outcomes are actually really dependent on social determinants of health. So when you want to think about health equity in digital literacy, it’s really important for two things. First, to understand the problem based on data. And I’ll share a little bit about what Microsoft Research is doing in that area. And the second is to open your mind and have a willingness to address the underlying, often systemic problems that affect health outcomes. And that includes social determinants of health. So Microsoft has some things that we’re doing to understand the problem with data, including the Microsoft AI for Good team has built something that we call a health equity dashboard. That is essentially a Power BI dashboard that takes a number of public datasets and allows one to look at them from a geography perspective, slice and dice the data by rural, suburban and urban populations, and then also examine different health outcomes, including things like life expectancy. So that’s the first thing, right? Is really being able to understand and visualize the problem itself. So I invite you to actually have a look at that information. There’s a number of other things that from a Microsoft perspective, we’re doing to look at on the social determinants of health side.
So I’ll point for example, to some of the work we’re doing on climate change. We announced a climate change research initiative that we call MCRI, which is really a multidisciplinary research initiative that is focusing on things like carbon accounting, carbon removal and environmental resilience. We also have our Microsoft AI for Good research lab and our humanitarian action program. They have, for example, worked with a group called Humanitarian Open Street Map Team or HOT, which partnered with Bing Maps to map areas vulnerable to natural disaster and poverty. So that’s an example of some of the work out of the research lab and the humanitarian action program coming together to help give relief teams information to respond better after disasters. There’s also a lot of work that we have happening from a Microsoft perspective that ties more directly to economic development and digital skilling. So we have some work out of LinkedIn, something called the Economic Graph, which is a perspective or a view based on data of more than 950 million professionals and 50 million companies. LinkedIn, which is a Microsoft company, also has a data for impact program. And this program makes this type of professional data available to partner entities, including entities like the World Bank Group, the European Bank and others. So it’s data on more than 180 countries and regions, and this is at no cost to the partner organizations. An example of the impact of this type of data, this data for impact information was able to advise and inform a $1.7 billion World Bank strategy for the country of Argentina. And then there’s also the Microsoft Learn program, which is a free online learning platform enabling students and job seekers to expand their skills. So role-based learning for things like AI engineers, data scientists and software developers, hundreds of learning paths and thousands of modules are localized in 23 different languages. 
So in summary, I just want to say that we take a holistic, broad perspective, treating digital health literacy and digital skills as part of the social determinants of health, and that is what the work I've described supports.

Man Hei Connie Siu:
Thank you very much, Ms. Miller. And now moving on to Ms. Debbie. As an experienced leader in the design and management of national mHealth programs and the CEO of Reach Digital Health, can you share your thoughts on digital health literacy, the digital divide and health equity, and effective strategies for enhancing digital health literacy among marginalized populations, particularly in resource-constrained settings? Additionally, how can partnerships between nonprofit organizations like Reach and private-sector mobile operators be strengthened to promote digital health literacy among women and marginalized communities, addressing gender-based barriers and limited resources while contributing to bridging the digital divide?

Debbie Rogers:
Thanks very much. So I think the first thing just to talk about is a little bit of the context. We work primarily in Africa. To give you an idea of inequality in health in Sub-Saharan Africa: we have 10% of the world's population, 24% of the disease burden, and only 3% of the health workers. So we really do have the odds stacked against us, at a time when we're supposed to be moving towards universal healthcare, which quite honestly is a pipe dream if you look at where things are at the moment. While we've made some progress in addressing maternal and child health and infectious diseases such as HIV, we are seeing an increased burden of non-communicable diseases. So the burden is increasing, not decreasing. And if we follow the same patterns over and over again, and keep just training more and more health workers without addressing the systemic issues or relieving the burden on the health system, then there's absolutely no way we're going to improve these stats. We're going to go backwards, not forwards. And so I'm actually fairly optimistic, because I think that digital, and particularly mobile, has the opportunity to address some of these issues in a way that many other interventions don't. Reach Digital Health was founded in 2007 on the idea that the massive increase in access to mobile technology in Africa (at the time, a higher percentage of people in Africa had access to mobile technology than in the so-called global north or western countries) was a way for us to leapfrog some of the challenges we've had in the global south and actually address these issues. And we really have been able to see that. We have seen how access to information and services through a small device in the palm of many people's hands can improve health, both from a personal behavior-change perspective and for health systems as a whole.
And so what we primarily focus on is using really low-tech but highly scalable technology: things like SMS and WhatsApp, the things that everybody uses every day to communicate with family and friends. We use those to empower people in their health, to help them practice healthy behaviors, stop unhealthy behaviors, and access the right services at the right time. And with the fairly ubiquitous nature of mobile technology in Africa, we've been able to reach people at massive scale. For example, we have a maternal health program with the Department of Health in South Africa that has been running since 2014. We've reached 4.5 million mothers on that platform, which represents about 60% of the mothers who have given birth in the public health system over the last eight years, and percentage-wise that is huge. We've seen impacts such as improved uptake of breastfeeding and family planning, and not just individual change but more systemic change, with the ability to understand the quality of care on a national scale for the Department of Health in South Africa. So we really do believe that if you harness the power of the simplest technology, if you design for scale with scale in mind, and if you design with an understanding of the context, then you can actually use digital to increase health literacy. So it's not all doom and gloom. It's not just about the fact that digital always excludes people. It can be an enabler, but only, of course, if we consider the wider context and don't go blindly into things, ignoring the fact that it could widen the divide. I'll talk a little later about some of the strategies that can be used, but two things to remember are: design with the human, not the patient.
I don't like the word patient, but in digital health we tend to use that word. Design with the human at the center of what you're trying to do, and design with the understanding that you are part of a bigger system; this is not something that exists by itself. If you do those two things, not only will you be able to improve health literacy, but you'll do so in a way that doesn't widen the divide that many technologies already create.

Man Hei Connie Siu:
Thank you very much, Ms. Debbie. Moving on to Professor Gupta. With your extensive experience in policy development, digital health education, and founding the world's first digital health university, can you share your thoughts and offer key policy recommendations that governments and international organizations should prioritize to comprehensively enhance digital health literacy, especially amongst marginalized populations? Additionally, can you share insights into successful and scalable educational strategies and approaches that have effectively improved digital health literacy, with a focus on adapting these methods globally to meet healthcare scaling needs for digital health?

Rajendra Gupta:
Thanks, Connie. Firstly, I congratulate you for picking up this very important topic. And secondly, I'm a little worried about such a long question, because after 5 p.m. I'm almost half asleep. It's been an engaging session throughout the day, but yes, it's a very important topic. It keeps me awake, but pardon me for any incoherence. Let me give you a little backdrop of why this topic is important. There is an international society called the International Society for Telemedicine and eHealth (ISfTeH). It's been around for a quarter of a century and has members in 117 countries. Way back in 2018, I said that digital health has two opportunities and two challenges. We have reached a stage of technical maturity: give me a challenge, I'll give you 100 solutions. But where we lack is organizational maturity. People are not trained enough to leverage the technology that's available, so I said, let's look at capacity building, which is the issue that you brought up. So in 2019 they formed the Capacity Building Working Group, which I chair, and since then we have done two papers on capacity building. One lists the kinds of people we need to train across digital health, and the second is a deep dive released in partnership with the World Health Organization. So for those who are looking at what kind of capacity we need, the ISfTeH website has those two papers on this topic. Then in 2019, WHO set up their capacity building department, which is a very recent thing. So I think there is a lot of focus. Now coming back to my experience. Having pushed various organizations to do that, we were still just doing policy papers, and policies take time to translate. People like Debbie need people to help them with technology; a policy paper can't help her. She needs people trained in digital health.
So in 2019, I set up the Digital Health Academy, which is now the Academy of Digital Health Sciences. We have started a course for doctors and for people in healthcare. It's a global course, fully online, as a digital course should be. But to your point, that alone would not solve the biggest overall challenge. I am training doctors, and it is so shocking, and I'll put this in context: we had a half-page advertisement in a leading newspaper in India, and a very senior doctor called me and asked, "Rajen, what's digital health?" I was shocked that even doctors first need to learn what the words "digital health" mean. I'll give you another example. There's a company that works exclusively in the data domain. So I called the founder, who is a doctor, and asked, do you do digital health? He said, no Raj, we don't do digital health. I said, do you use data? He said, we only use data. So I said, then you do only digital health. So the challenge is that first people should know the definition of digital health. That is the level we have to start at, and it is needed across the ecosystem. Right from the bureaucracies, the ministers and the ministries of health, they need to understand what digital health is, because they come for a fixed tenure or they get transferred. If they are sensitized at that level, then things flow down the line, because government makes policies which get implemented as programs. So that's one level of competencies that I've told WHO to look at, because my experience in WHO meetings is that bureaucrats come, spend two or three days in Geneva or New York, and then go back and forget it. So there has to be a course for policymakers at the highest level, which probably WHO or some such organization could do. The second level is courses for doctors and health professionals. And the third, and most important, which we are launching in the next two months, is frontline health workers.
But understand the challenge: frontline health workers are often doing voluntary service, like the ASHA workers in India, a million workers who are our first responders. Don't expect them to pay you $1,000 or even $100. So we had to innovate and convince one of the Institutes of National Importance that we need to bring out $1 trainings. We should train people for as low as $1, and we are doing this globally. If I'm able to train frontline health workers, I think I will have addressed the biggest challenge in healthcare. Now one of the government agencies has approached us to work with us. As such, on capacity building, governments tend to focus on the program minus capacity building, which is a serious lapse, and I think this is across the board. We are very focused on saying maternal health, mobile application; child health, mobile application; rural health, telemedicine. But who will do it? We don't know. The people who are going to use these tools often don't even know how to use a mobile phone or how to log in to an account. So we need basic training, and this is where private organizations and not-for-profits step in; government steps in very late, let me tell you that. They are not the ones who initiate, but once you go to them with a program and talk to them, they will partner. So as a policy matter, I'm glad, Connie, that you have put on a session on this, something that our Digital Health Dynamic Coalition should have done, but they only allow one session per Dynamic Coalition. We had our session, which we are doing tomorrow, but now that you have taken it up, it puts the spotlight on this important topic. At ISfTeH, there are policy papers; they have been given to WHO, and WHO set up the capacity building department, but honestly, not much has moved between 2019 and 2023, four years.
We are still waiting, and they are still forming a committee, so I think it's mostly going to be civil society organizations and the private sector that take the lead. On the policy side, I have not seen documents that talk about this so far, so we will have to wait for normative guidance from WHO, which I think is still a few years away; it takes time to build a document in WHO. Here is how this will happen fast. In India, we have a digital health mission, which has rolled out 460 million health IDs. This year, we will roll out one billion health IDs. Our teleconsultations have crossed 120 million. So I'm inverting the process: rather than policy first, let's first have implementation. When the government rolls out at such a level and scale, you automatically start feeling the need for trained people. And more than structured courses, it will be continuous upskilling that everyone needs, because technology is also changing. Until last year, no one talked about generative AI; now everyone has started talking about generative AI. So we need to keep the training fluid and make it a continuous upskilling program for people across healthcare. We are not waiting for government policies. We are rolling out, as the Academy of Digital Health Sciences, these global programs. We are making them really affordable: $1 trainings for frontline health workers, courses for doctors, and a postgraduate program for industry. And we will announce undergraduate programs as well, because this is where we need to build capacity. For now, I think policy interventions will happen; as part of health policy, everyone should include capacity building, and digital health is now an integral part of health. So digital upskilling is required for digital scaling.
So I think this is something that governments have to look at, and WHO should take a frontal role. So I would say: it is up to WHO, organizations like the one Debbie runs, and organizations like the ones I run with my team. More importantly, there are two people sitting in this room, Priya and Saptarshi, who run the International Patients Union. Even if you train doctors, industry, and frontline health workers, if patients are not trained, who will use digital? At the end of the day, they have to open an app and use it. They need to know what privacy is, what security is. So it's on us, and on people like them, to go and train patients in how to use digital technology. It's a multidimensional topic, and I'm happy that there's a session dedicated to it. Unless we address this from a complete ecosystem perspective, we have not done justice to this topic. Thank you.

Man Hei Connie Siu:
Thank you very much, Professor Gupta. And now to Yawri. As someone with expertise in responsible AI, digital rights, and a passion for the intersection of technology and society, how can policymakers craft regulations to ensure the responsible development and deployment of digital health technologies, especially for marginalized communities? And what role do you see for youth-led initiatives in enhancing digital health literacy, bridging the digital divide, and engaging with policymakers to drive policies that support equitable access to digital health resources? Over to you.

Yawri Carr:
Hello, everyone, dear organizers, participants, and guests. Thank you very much, Connie, for the organization, and thank you for inviting me. In a world where technology and healthcare are more intertwined than ever, the responsible development and deployment of digital health technologies are of paramount importance. This is especially true when considering marginalized communities, where equitable access to healthcare is not just a goal but a moral imperative. Here I would like to mention the Responsible Research and Innovation (RRI) framework as one of the guiding philosophies that serve as a roadmap for navigating the intricate terrain of AI in healthcare. At its core, RRI is a commitment to harmonizing technological progress with ethical principles. It places a premium on transparency and accountability, recognizing them as pivotal elements in the responsible development and deployment of AI technologies. In the realm of healthcare AI, RRI advocates for policies that not only uphold digital rights, safeguarding privacy and security, but also establish mechanisms to hold AI systems answerable for their decisions. It is a holistic approach that seeks to ensure that the benefits of innovation are realized without compromising ethical standards or jeopardizing individual rights. So who should be involved in a process of responsible research and innovation? Societal actors and innovators: scientists, business partners, research funders and policymakers, all stakeholders involved in research and innovation practice, as well as the wider public, from the early stages of R&I processes and through the process as a whole. And when? Through the entire innovation life cycle. And to do what?
It is important to anticipate risks and benefits; to reflect on prevailing conceptions, values, and beliefs; to engage stakeholders and members of the wider public; to respond to stakeholders, public values, and the changing circumstances present in these processes; to describe and analyze potential impacts, reflecting on underlying purposes, motivations, uncertainties, risks, assumptions, and the many dilemmas that can emerge; to remain open to reflection and collective deliberation, in a process of reflexivity; and to integrate these measures throughout the whole innovation process. In which ways should we do this? By working together, becoming mutually responsive to each other, and of course in an open, inclusive, and timely manner. And to what ends? What this framework proposes is the appropriate embedding of scientific and technological advances in society: to better align processes and outcomes with the values, needs, and expectations of society; to take care of the future; to ensure desirable and acceptable research outcomes; to solve a set of moral problems; to protect the environment and consider impacts on social and economic dimensions; and to promote creativity and opportunities for science and innovation that are socially desirable and taken in the public interest. How can this be applied specifically in the context of healthcare technologies? There are academic projects and also societal projects. One example of an academic project comes from the Technical University of Munich, where I am now studying. We have a project on AI-driven innovation, including a robotic arm exoprosthesis and an advanced version of a bimanual mobile service robot.
To ensure the responsible and ethical integration of these technologies into broader healthcare applications, the developers from the Machine Intelligence Institute have collaborated with the Institute of History and Ethics of Medicine, as well as the Munich Center for Technology and Society. These teams employ embedded ethics, incorporating ethicists, social scientists, and legal experts into the development process. After initial onboarding workshops, these experts became integral members of the development team. They actively participate in regular virtual meetings to discuss technological advancements, algorithmic development, and product design, collaboratively and across disciplines. When ethical challenges are raised, they are addressed as part of the regular development process, leading to adjustments in product design. One example involves the planning of model flats for a smart city, where initial designs focused on open-plan layouts. Embedded ethics highlighted potential challenges for elderly people unaccustomed to such arrangements, prompting a reconsideration of the layout, especially since this particular project targeted the elderly population. This is why it is very important to look at the target population and see whether they are prepared for, and could adapt to, these kinds of technologies. Insights from these discussions influence the design process, emphasizing the importance of directly seeking future inhabitants' perspectives in layout planning. Simultaneously, the project also involves interviews with various stakeholders, including developers, programmers, healthcare providers, and patients. Workshops, participant observation of development work, collaborative reflection, and case studies also contribute to active ethical consideration.
The project also aims to develop a toolbox to facilitate implementing embedded ethics in diverse settings in the future, but several unresolved issues remain, related to cultural settings and to corporate and organizational structures. Even in a research setting funded by public resources, the development of AI is predominantly situated in a fairly competitive landscape that prioritizes efficiency, speed, and profit. In the case of health, ethical considerations tend to be isolated, or not taken seriously, when they directly clash with profit-driven motives. Taking ethical concerns seriously often creates tension with industry objectives and faces the risk of being assimilated into broader corporate commitments to concepts like technological solutionism or market fundamentalism, which in the end prevent ethicists from actually doing their work and building responsible healthcare technology. Embedded ethicists may find themselves working within contexts characterized by pronounced power imbalances, particularly of a financial nature, and it is probable that some form of enforcement measures will become necessary in such environments: not just for the development of the technical aspects, but also for the work of the people charged with responsible development and deployment. Regulatory frameworks, certification processes, or even voluntary initiatives within the organization can raise awareness of the issues that arise in these situations. And well, I also needed to talk about youth-led initiatives, right? If I still have time. There are many ways in which youth-led initiatives and marginalized communities can engage with responsible research and innovation.
For example, youth-led initiatives could participate in events such as this one. Universities and centers of education could inspire the youth to learn about telemedicine and how to develop telemedicine initiatives in their countries, especially in rural areas, which, as the professor mentioned about India, do not have the same access. There are also community-based participatory research projects that involve communities in the research process, ensuring that interventions are culturally sensitive and address the specific needs of a population. There are digital health literacy programs. Innovation challenges could motivate students and youth to engage. And I also consider the mentorship that these students and youth can gain from experienced people to be very important, because they need guidance, foundations, and examples of how to develop their ideas. So thank you.

Man Hei Connie Siu:
Thank you very much, Yawri. So while low digital health literacy is a challenge for all populations, it is particularly harmful for marginalized communities. In this section, we'll discuss strategies for addressing health equity and the digital divide in the context of digital health. Let's start this off with Ms. Geralyn again. In light of the session's focus on health equity and the digital divide, could you share your thoughts and elaborate on specific policy measures and initiatives that Microsoft is advocating for or actively participating in to bridge the digital divide and promote equitable digital health access? And how is Microsoft addressing barriers faced by diverse populations, and how are these efforts contributing to advancing health equity? Over to you.

Geralyn Miller:
Yeah, thank you very much for the question. So I want to respond in this context to some of the comments that Dr. Gupta and Ms. Carr made, and really shine a light on the concept of artificial intelligence, generative AI, and what we at Microsoft call responsible AI as an example of policy. One of my favorite quotes in this area is by our Chief Legal Officer and President, Brad Smith. I'm going to paraphrase, because I don't have it exactly, but Brad has a quote that basically says that when you bring a technology into the world, and your technology changes the world, you bear a responsibility, as a creator of that technology, to help address the world that the technology helps create. So from a Microsoft perspective, we look at this under the lens of what we call responsible AI. Our responsible AI initiatives date back far before the birth of ChatGPT, generative AI, large foundation models and large language models, really back to about 2018 or 2019. We have established a set of principles around how you design solutions that are worthy of people's trust. These are what we call our responsible AI principles. Many people have different principles around responsible AI; I'll share ours, and I would offer that it's something worthy of thought. Very often, when I work with academic medical centers or healthcare providers who are starting to use AI or build and deploy AI models, I also tell them: you should have a position on responsible AI. Do your thought work, do your homework. You should have something that is consistent with your own values, your own entity's values. But going back to what, from a Microsoft perspective, we believe those principles are. The first is fairness: treating all stakeholders equitably and making sure that the models themselves don't reinforce any undesirable stereotypes or biases.
Transparency: AI systems and their outputs should be understandable to relevant stakeholders, and in the context of healthcare, relevant stakeholders means not only patients who may be receiving the output, but also clinicians who may be using these systems as decision-support tools or for prediction. Accountability: people who design and deploy AI systems have to be accountable for how those systems operate, and I'm going to drill down on accountability in a second. Reliability: systems should be designed to perform safely, even in worst-case scenarios. Privacy and security: those are, of course, underpinnings of any technology, and AI systems should likewise protect data from misuse and ensure privacy rights. And then inclusion: designing systems that empower everyone, regardless of ability, and engaging people in the feedback channel and in the creation of these tools; I will drill down a little on the inclusion front as well. As an example on accountability, I'd like to share some of the things our President Brad Smith offered when he testified before the U.S. Senate Judiciary Subcommittee, back around September 12th, at a hearing on the oversight of AI and legislating on artificial intelligence. Brad highlighted a few areas that he suggests should help shape and drive policy. One is about accountability in AI development and deployment: ensuring that products are safe before they're offered to the public; building systems that put security first; earning trust, through things like provenance technology and watermarks, so people know when they're looking at the output of an AI system; and disclosure of model limitations, including effects on fairness and bias.
He also talked about channeling research energy and funding into examining the societal risks associated with AI. And he suggested that we need what he terms safety brakes for AI that manages any type of critical infrastructure or critical scenario, including health. Think about it: today we have collision-avoidance systems in aircraft, and circuit breakers in buildings that help prevent fires due to, for example, power surges. AI systems should have safety brakes as well. This involves classifying systems so you know which ones are high-risk and require these safety brakes; testing and monitoring to make sure the human always remains in control; and licensing infrastructure for the deployment of critical systems. And then, from a policy perspective, ensuring that the regulatory framework actually maps to how these systems are designed, so that the two flow and work together. So that's an example of the policy-in-action side of things. From a Microsoft perspective, we put the responsible AI principles I mentioned into action through our commitments at a policy level, for example our voluntary alignment, here in the US, with commitments coming out of the White House around the safety, security, and trustworthiness of AI. On one last point, I want to go back to the responsible AI principles and talk about inclusion. We're doing some work in the health AI team, where I am a product manager, to look at how, when we have data that guides models, either custom AI models or data used to ground large foundation models or large language models, we make sure that we understand the distribution and makeup of that data, to ensure that bias doesn't creep in from the data perspective.
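One lightweight way to picture that kind of data-makeup check is to compare subgroup representation in a dataset against a population benchmark. This is purely illustrative; the group labels, counts, and benchmark shares below are hypothetical, not Microsoft's data or tooling.

```python
from collections import Counter

# Hypothetical demographic labels attached to records in a grounding dataset.
records = (["urban"] * 700) + (["suburban"] * 250) + (["rural"] * 50)

# Benchmark shares for the population the model is meant to serve
# (illustrative numbers, not real census figures).
benchmark = {"urban": 0.55, "suburban": 0.25, "rural": 0.20}

counts = Counter(records)
total = len(records)
for group, expected in benchmark.items():
    observed = counts[group] / total
    # Flag groups that are badly under-represented relative to the benchmark.
    flag = "UNDER-REPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: observed {observed:.2f} vs expected {expected:.2f} -> {flag}")
```

Here the rural group would be flagged (5% observed against a 20% benchmark), which is exactly the kind of imbalance that can let bias creep in from the data side.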
And we're also doing work, for example, on the deployment of models. How do you understand whether models are performing as intended? How do you monitor for something called model drift, when models start to perform in a manner you don't expect and accuracy starts to decline? And what do you do when models don't perform as they should? This last part, model monitoring and drift, is some of the work happening out of our research organization. So thank you.
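The model-drift monitoring described above can be sketched with a simple statistical check. This is an illustrative example, not Microsoft's actual tooling: it uses synthetic data and a two-sample Kolmogorov-Smirnov test as one common drift signal on a model's inputs.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference distribution the model was validated on, and a "live" stream
# whose mean has shifted; both are synthetic, for illustration only.
reference = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.6, scale=1.0, size=5000)

# A small p-value suggests the live inputs no longer match the
# distribution the model was trained and validated on, i.e. drift.
stat, p_value = ks_2samp(reference, live)
drift_detected = p_value < 0.01
print(f"KS statistic={stat:.3f}, drift detected: {drift_detected}")
```

In practice a check like this would run on a schedule per feature, with alerts feeding the kind of human-in-the-loop review the responsible AI principles call for.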

Man Hei Connie Siu:
Thank you very much, Ms. Geralyn. So now I want to move back to Ms. Debbie. Drawing from your experience in developing the digital strategy for a major telco in South Africa, how can telecommunication companies play a more significant role in advancing health equity and bridging the digital divide through innovative approaches and digital solutions? And what lessons from your work in South Africa can be applied globally to improve digital health access?

Debbie Rogers:
Thanks. I think one of the most interesting examples of how mobile network operators have had a big impact on decreasing inequities in health is the Facebook Free Basics model. You may not know what that was, but Facebook basically put together simple information through what looked like a little mobi site. This was essential information that they felt everybody should have access to, and they worked with mobile network operators to zero-rate access to just that portion of Facebook, not to everything. They were able to show that by providing essential information that is free to access, they improved people's literacy and use of data. Those users then went on to use more data and to use the internet more often, and therefore became more valuable customers to the MNOs. So by providing free access to essential information, there was also an increase in profit for the mobile network operators, and I think that's a really interesting model to look at. Very often we forget that it's just as important for mobile network operators to reach as many people as possible as it is for those of us trying to improve health through something like digital health. So if there are aligned priorities, there are very good ways to work together. One of the ways we've worked with mobile network operators in South Africa has been to reduce the cost of sending messages out to citizens of the country. That's been done not in a way that prevents the mobile network operators from making a profit, but it does make it completely free for the end user. If it's completely free for the end user, you're reducing the barriers to accessing this kind of information. And the reduced cost is something that can be brought to the table because of the increased scale of access.
So the more we scale out these programs, the more we’re able to see economies of scale, and the more worthwhile it then becomes for mobile network operators to engage with us. One of the very interesting models that’s been used was to reduce churn. If people can only access information using, say, an MTN SIM card, they’re less likely to switch to other SIM cards. And so being able to align the desires of a digital health organization or government with those of mobile network operators is incredibly important for ensuring that you’re working towards the same goal, but without anyone asking for any handouts, because that’s not going to work. When it comes to strategies for decreasing inequity, I think the one that we really need to talk about more is being human-centered. And that doesn’t just mean designing for people and occasionally having them attend a focus group. It means designing with them and ensuring that the service is actually something that they want to use, something that they love using. Make it easy and intuitive for them to use. No one starts a course on how to use Facebook before they use Facebook. We shouldn’t create services that need so much upskilling. We should create services that are simple and easy for people to use. You need to use appropriate language and literacy levels. And this is something that the medical fraternity often forgets, because it is a very patriarchal one. Make it something that is at least close to free for people to access. We find that access to a mobile device is less of a problem than the cost of data, for example. So just because somebody has access to a device doesn’t mean that they’re going to be able to go and look up information, because they may not have data on their phones. So you can work very closely to reduce the cost or make it zero cost, and that’s really going to ensure that you reduce the barrier to access.
And then you really have to try and think about the system that you’re in. By creating a digital health solution, are you overburdening the health system that already exists, for example, or are you reducing the burden on it? Are you creating feedback mechanisms that mean you can understand the impact you’re having on the system itself, rather than working within a vacuum? Are you making sure that where a digital health solution may not be accessible to somebody, there is an alternative in place that does not rely on it? We can’t just operate within silos; we have to think about the fact that digital health is just as much a part of health infrastructure as the physical facilities, for example. Until digital health is seen as just as much of an infrastructure, it’s going to be a fun project on the side and not something that’s going to bring systemic change. So it’s really important for us to think about that system. And then recognizing biases, and I think Geralyn mentioned this: very often the people who are creating digital health services are not the people using them. So this goes back to why human-centered design is so important, but it’s also important to understand that you will be introducing biases if the people who are building the system are not the people who are using the system. And so you have to look more systemically. Look at the makeup of your team. How diverse is it? I would assume, having been an electrical engineer myself, that it’s probably not particularly representative from a gender or race perspective. So look at the team that you have. How are you working to make your team more representative and therefore address some of the biases that will be put in place by having a non-representative team building out the systems? So there’s a bunch of things in there, but I guess in summary, build with the end user in mind.
Make it human-centered. Make it easy to use, appropriate, and intuitive. Design with the understanding that you work within a system and make sure that you don’t have unintended consequences and that you’re always feeding back to understand what the impact on the broader system is. And ensure that you think about the biases that are going to be inherent in the fact that the people building the system are not necessarily the people using the system.

Man Hei Connie Siu:
Thank you very much, Ms. Debbie. And now moving on to Professor Gupta. So based on your background in advising the Health Minister of India and drafting national policies, how can governments play a pivotal role in addressing the intersection of health equity and the digital divide, particularly in the context of health care access for marginalized communities and also what policy measures should be prioritized to ensure equitable digital health access?

Rajendra Gupta:
Thank you, Connie. This depends on the economic status of the country. When you have an LMIC like India, I’ll give you an example of what was done. We understand that there is a sizable population which is underprivileged, which is marginalized, so there was a scheme that was launched for 550 million people. And you have to understand that countries are at different phases of development; they require investments in infrastructure, they require investments in health and education, and it’s not possible to give these sectors the amounts they actually deserve. So what was done very carefully, and since I was involved in drafting the health policy I played a role in that, is we carefully trod the path of saying let’s first make primary care comprehensive, so first guarantee comprehensive primary care, which includes chronic disease management and all those things. Then let’s convert the sub-centres and primary health centres into health and wellness centres and put telemedicine as a part of it. So what happens is 160,000 health and wellness centres now across the country offer you telemedicine. Then we created the eSanjeevani programme, a telemedicine programme through which you can get a doctor consultation for free, across specialities; that’s why it has had 120 million consultations. And now we’re putting AI and NLP into it, so given that India has 36 states and union territories and people speak different languages and dialects, a patient speaking from a southern state to a doctor in a northern state will hear the doctor in his own language, and the doctor will hear the patient’s problem in his.
So I think India has planned its strategy for addressing the vulnerable and underprivileged sections as it charts its course of development. One part is to integrate technology into care delivery right from primary care, and that has proven itself: as I said, 460 million health records, and 550 million people given insurance of a very decent amount, I would say, of a kind a typical middle class could afford. So on the policy side, India, as we speak, is probably running the largest implementation of digital health happening anywhere. And I would bring up one point here: the government has not only to take stewardship, but also ownership of investing in digital health. Debbie would understand very well that digital health is still figuring out the business model. That’s why you see the largest companies have withdrawn from digital health, and as much as they can give talks on forums, their investments are in futuristic technologies, which are probabilistic technologies. But the companies that forayed into it years ago don’t exist on the map. So I think governments have to play a frontal role in investing, like the Indian government has done. They set up a National Digital Health Mission, rolling it out across states, ensuring that everyone has what you call the Ayushman Bharat Health Account number, the ABHA number. And we will probably be the first country to work towards what I have championed: let’s work to make digital health for all by 2028. And this is for those who work in health care, and more so in public health: forty-five years back in Alma-Ata, we promised health for all by 2000. Twenty-three years after the deadline, we’re still not close to that. At least we can champion digital health for all by 2028.
If that is one objective we pursue as governments across the world, I think a lot of issues will get addressed, because there is a whole lot of planning that will go into doing that. And it’s doable. That’s the only way you can address the issue of health equity. Because the practical part is that doctors who study in urban areas do not want to go to rural areas. They will not. I mean, even if you push them to, they will find a way to scuttle that. But what you can do is get technology into their hands with mobile phones. I think now the systems are fairly advanced. Tomorrow we are hosting a session on generative conversational AI in low-resource settings. So you can have chatbots interacting with people, addressing their basic problems. And 80% of the problems are routine, acute problems. So I think we need to leverage technology not only as a policy but as a program. And there are best practices available; India has some, parts of Africa have some. But these are like islands of excellence. I think forums like these are good places to discuss whether these islands of excellence can be mainstreamed into centres of excellence, so that we can replicate and scale those programs. So I think India probably has a good story as we speak about the scale-up of a digital health program, but again the key point is that the federal government has to be the funder for the program. Where do you start? A health helpline. If you really want to address the inequities, start a health helpline where people can pick up the phone, talk to a doctor or a paramedic and get a consultation free of cost. Get into projects like eSanjeevani, which I think the country is offering to other countries as a goodwill gesture, where you connect to district hospitals and tell doctors to allocate time for doing digital consultations. So these programs, along with health and wellness centres, actually help you bridge the digital divide.
It has been a phenomenal experience with the 160,000 health and wellness centres which have telemedicine facilities. So picking up the cue, I would say it’s time for implementation. Policy-wise, I think we all know that; we very clearly said it’s getting integrated. In fact, I go a step further and say, if you’re not into digital health, you’re not into health care. Don’t talk health care. That’s the truth, actually. Thank you.

Man Hei Connie Siu:
Thank you very much, Professor Gupta. Finally, to Yawri: drawing from your experiences speaking about youth in cyberspace and Internet governance, how can young advocates actively participate in shaping Internet governance policies to ensure that digital health resources are accessible and equitable for all, regardless of socioeconomic status or geographic location? And what are some successful examples of youth-driven initiatives in this context? Over to you.

Yawri Carr:
Thank you very much. Well, in the realm of youth in cyberspace and Internet governance, empowering young advocates to actively shape Internet governance policies is crucial for ensuring equitable access to digital health resources. Young advocates can play a transformative role in policy discussions by engaging in many ways. First, by participating in the IGF, because with this active participation we start to break the ice in how to discuss, how to have dialogues, how to ask questions. All of these activities, even though they seem very everyday to experienced people, are, for youth, ways to break the ice and to gain confidence in participating in public debates. They also gain insights into current challenges and opportunities in digital health governance. Second, the formation of youth coalitions: young advocates can form coalitions or networks dedicated to digital health equity, and these coalitions can amplify the collective voice of young people advocating for policies that prioritize accessibility and inclusivity in digital health. For example, in the Internet Society we have a youth group, and regionally we have different youth initiatives; a chapter about digital health could also be opened so that coalitions on this specific topic can go deeper into it. Third, engagement with multi-stakeholder processes: not just the IGF, but also other kinds of processes led by governments, NGOs, or industry stakeholders. Their participation ensures that diverse voices contribute to shaping policies that consider the needs of all. And it is also important in this circumstance that the public sector, industry and NGOs open these kinds of opportunities for youth and actively seek out youth who could participate in their processes as well.
Because if they don’t do it in such a direct way, youth, as I mentioned before, could feel intimidated and think that they are not experienced enough to participate. Fourth, youth-led policy research: young advocates can initiate research projects to understand the specific challenges faced by marginalized communities in accessing digital health resources, because evidence-based research can be a powerful tool for advocating targeted policy changes. And I think this is a real possibility in many countries that have the resources for research, but it is still very far behind in countries, for example, in Latin America, where we don’t have so much support from public foundations or from the government to do research, and we also don’t have such a big research focus in our universities. So maybe one professor can bring this kind of perspective and inspire the students to form a research group. For example, universities in Brazil have student groups which meet some day of the week, or monthly, and discuss specific topics. I think this is a good practice, so that youth can start to create, to discuss, and to bring these topics to their universities and to other colleagues and classmates. Of course, it would be great if some countries could also start to help other Global South countries, so that they can do more research and their students can participate more in these kinds of initiatives in their own countries. Also, innovation hubs for digital health.
For example, hubs in which young innovators, healthcare professionals and policymakers can create solutions together. It would also be good to have funding from an organization or a company that can collaborate, so that these kinds of innovations can have a starting amount of financial resources, and so that youth can feel that they are able to become innovators in this field. These kinds of innovations address gaps in digital health accessibility. Some examples of youth-driven initiatives are, for example, digital health task forces: in several regions, youth-led task forces focus on creating policy recommendations for integrating digital health into broader internet governance frameworks. Also youth-led data privacy campaigns, in which youth can, for example, create dialogues in various communities and raise awareness about the importance of robust data privacy measures in digital health technologies, so that people and ordinary patients can understand why it is important to protect their privacy when they access some kind of digital health tool. And global youth hackathons for health, in which there are health challenges to develop innovative apps and platforms addressing specific healthcare needs related to the communities of these youth. And I also consider another action: this movement should also have paid internships, so that students can have access to internships that are paid and can participate equally in the practical application of what they are learning at university.
So, well, I think that by actively participating in these initiatives, young advocates contribute fresh perspectives, innovative solutions and a commitment to digital health equity in internet governance policies, because they are digital natives. I consider that they can understand how rapidly technologies evolve and how these technologies can help them, but also their challenges and issues, and they can become more active, as they are not just the future but also the present.

Man Hei Connie Siu:
Thank you. Thank you very much, Yawri, and also thank you once again to the panel for their responses. And now we’ll move on to the Q&A session, so if any onsite participants would like to raise their questions, please feel free to walk up to the mic.

Audience:
Hello, I’m Nicole, and I’m a youth student in Hong Kong. In the case of another pandemic like COVID-19, how do you think current digital health can be developed and improved to contribute to society in recovering, and in ensuring that each individual can receive accurate and consistent medical advice and treatment without physically visiting a healthcare facility, as it will be crowded with a lot of people, or for the elderly? Thank you.

Debbie Rogers:
I think one of the things that has really been a challenge in the work that we do is that we speak directly to citizens and empower them in their own health; given that the medical fraternity is quite patriarchal, that’s not usually a priority. And so what we found is that when an issue is something that happens to somebody else, it isn’t seen as a need to provide people with the right information. But when COVID-19 happened, everybody was affected and nobody had the information. It didn’t matter if you were the president of the country or a student at a high school, no one had the information about the pandemic that was needed. And so we were able to use really large-scale networks and things that were already there, like Facebook, like WhatsApp, like SMS platforms, to be able to get information to people extremely quickly, at a time when the information was changing on a daily basis. This wasn’t something where you could take a lot of time, think through things, put up a website, and think about how things were going to be talked about. This was happening in real time, so you continually had to be updating things. People continually had to get the latest information, and without that, many more people would have died in the pandemic than did. I think what’s important, though, is for us not to forget the lessons of COVID-19. As human beings, when things go back to so-called normal, we very quickly forget the lessons that we learned. I think one of the really important things that needs to continue from COVID-19 is an understanding that knowledge is power in the patient’s or citizen’s hands, and this isn’t something that needs to be hoarded by the medical fraternity. By giving information to people at a really large scale, you can improve their health, and you actually make your own life easier at a time when you are most needed.
Digital health can’t replace a healthcare professional, but it certainly can reduce the burden on healthcare professionals, and so that’s a really important thing that we need to continue to consider as we move on from COVID-19. I think the other thing to remember is that we built up digital health platforms that solved problems during COVID-19: screening for symptoms, for example, gathering data that could be used for decision-making, sending out large-scale pieces of information to people. Many, many people in the digital health space reacted very quickly and created incredible platforms that could be used to solve the problems during COVID-19. Many of those no longer exist today. And so we need to remember that there needs to be long-term investment in digital health infrastructure, so that we don’t have to spin up new solutions every time there is a new pandemic, because there will be another one. It’s not something that is going anywhere. So how are we preparing so that when the next pandemic comes we’re not having to start from scratch all over again? I think that’s something that we have very quickly forgotten. I want to take a minute and address that as well, if you don’t mind.

Geralyn Miller:
A couple of things from the pandemic, and that’s a really great question, because as a society we want to learn from the past. There are two areas that I think are worth bringing forward from the pandemic. First, there is incredible value in cross-sector partnerships, so in public, private, and academic partnerships. We saw a lot of that during the pandemic, literally to light up research on understanding the virus and to do things like drug discovery. Some of these were government-sponsored consortia, others were more privately funded consortia, and a third class was similar groups of people coming together, what I would call almost community-driven groups. So really this cross-sector collaboration, that’s the first thing. Second, there is some good standards work that was done during the pandemic that could be brought forward. We saw the advent of something called smart health cards during the pandemic. Smart health cards are a digital representation of relevant clinical information. During the pandemic they were used to represent vaccine status, so think of them as information about your vaccine status encoded in a QR code. There has been an extension of that, something called smart health links, where you can encode a link to a source that holds a minimum set of clinical information. And it’s literally encoded in a QR code that can be put on a mobile device or printed on a card for somebody to carry if they don’t have access to a mobile device. Smart health cards also reinforce the concept behind some of the work being done by the IPS, or International Patient Summary, group, which is trying to drive a standard for representing a minimal set of clinical information that could be used in emergency services. And so some of those things that happened in the standards bodies were, I think, very powerful during the COVID-19 pandemic.
And I would love to see more momentum around driving those use cases forward and also expanding them. Thank you.

Rajendra Gupta:
Thanks. Firstly, another COVID shouldn’t happen. That’s first. Second, I don’t think that technology at any time failed. Actually, it proved that it was ready. Whether you looked at the fast-track development of vaccines, with researchers collaborating across the globe, or at technology to repurpose drugs using artificial intelligence, that’s how we did it. I think almost every country used a COVID app; in our country we delivered 2.2 billion vaccinations, totally digitally. So I think digital health proved that it was ready, and it is ready. Challenges will come, but technology is the one thing that saved lives. We wouldn’t be sitting in this room, trust me, if technology wasn’t around. The only thing that we should do through forums like this is to keep the momentum going. What we don’t want is to forget COVID and go back to the old ways. There were incentives given by governments, there were flexibilities offered in terms of continuing telehealth regulations, like in the United States. I think those should become permanent. That’s all we should do. Technology has already proved that it’s ready; we were just waiting for COVID to shake us into using it. So I think technology is ready, and will always be ready for anything that comes our way. Thank you.

Man Hei Connie Siu:
Yawri, would you like to provide a response?

Yawri Carr:
Yeah. I just wanted to say that I consider that in the situation of a pandemic, telemedicine and also the implementation of robots, as in the case that I mentioned previously, are of huge importance and could be very useful, taking into consideration that it is very dangerous for humans to attend to or take care of people because of the risks of contagion. So I think that in these specific scenarios the application of telemedicine and robots is particularly useful. Of course, taking into consideration that in an emergency the robots should not be working alone; they should also be guided by humans. But at least they are also protecting workers such as nurses, a workforce that is commonly not so valued in different societies, because the tasks that nurses do, for example, are often considered dirty or not of great importance. So I think these technologies can protect not just the health of the patients infected by COVID or another pandemic, but also the work of medical professionals such as nurses, who are normally very exposed. On the other side, I also remember the Open Science initiative that my country, Costa Rica, proposed to the World Health Organization, so that the initiatives, the projects, and the research done in the context of a pandemic are opened up and kept available for every interested person, and the data can be accessed without having to pay and without it being patented. I consider this also of extreme importance, because in the case of an emergency we just don’t have time for that, and we should really try to cooperate with each other and to respond to the emergency in a holistic and collaborative way. Thank you.

Man Hei Connie Siu:
Thank you very much to the panel for your responses. Are there any other on-site questions? If not, then I’ll take the question from the chat: what are some emerging trends and future directions in digital health literacy, and what do you suggest individuals do to stay informed and up-to-date in this rapidly evolving field, ensuring they have accurate guidance rather than outdated information?

Rajendra Gupta:
I’ll take that, because of a couple of initiatives we are running. One is on the technical community side: within the Health Parliament that I run with my team, we have created CoLabs. We are creating developers for health, working with companies like Google and others, because I think what we need to do is create developers to solve problems. So that’s one initiative, for people who are enthusiastic about being part of the technical contributors to the digital transformation of health. The other thing: in the next three months, we’ll be starting courses for class eight students on robotics and artificial intelligence, an elementary course. We want to educate them very early on, so that they can choose what they want to do and be aware of what the opportunities are. In the same way, we are doing courses at a very elementary level for people to understand, rather than deep-diving into tech. And to everyone who is in health, I would strongly recommend: if you don’t know digital health, you will hit a zone of professional irrelevance. Please update yourself; whatever you do, whether it’s a one-week course or a two-week course, just make sure that you know digital health from an ecosystem perspective. Thank you.

Man Hei Connie Siu:
Would any other speaker like to take the question?

Geralyn Miller:
Yeah, just a few comments on that. I think it’s always a challenge, at the pace of innovation that we’re seeing today, to keep current. So I want to call out our panel here today and the people who put the panel together and gave us this opportunity. This is one way that the dialogue starts and that information is shared. And so more opportunities for people of similar interests to come together will, I think, always help advance the state of understanding. So opportunities like this, and training as well. And it’s not just training from tech providers; it’s training infused into the academic system as well. And so I would agree with what Dr. Gupta said there. But again, a call-out to the folks who put together this panel, because I think this is one way that that starts. Thank you.

Man Hei Connie Siu:
Thank you very much, Ms. Geralyn. So we have about five minutes left, so maybe we could go to the closing remarks from each of the speakers, starting with Ms. Debbie.

Debbie Rogers:
I guess my closing remark would be that technology is a great enabler. It can actually be used to decrease the inequity that we see in health, but also in digital literacy. I am actually very positive about the future that we see with digital health. And I think Dr. Gupta is right: the technology is ready. We’ve seen many case studies where things have been done at a really large scale. This is no longer a fledgling area; this is now a mature and really large-scale area of practice. And so I’m really excited to see what happens from this point. And I’m excited to see that we have youth involved in this panel, because, yes, absolutely, youth will be the people building the next evolution in this space. So I’m really excited to see how that works and how things evolve from here.

Rajendra Gupta:
I would say that in this age, patients are more informed than almost anyone about health conditions and treatment options. It is high time doctors learn these things before patients start telling them: “You don’t know about it? Let me tell you. I saw this.” So one point is that digital health is something that everyone who is in health care, whether a clinician or a paramedic, needs to learn. Second, if you’re talking about digital health, scalability comes first. So continuously upskill and cross-skill yourself. And lastly, I must say thanks, Connie, for putting up this wonderful panel discussion.

Man Hei Connie Siu:
Ms. Geralyn?

Geralyn Miller:
Yeah, first off, I want to start by expressing my gratitude for being included in this; it was a wonderful opportunity. I want to echo the sentiment that youth play a huge role in this going forward, and I’m very appreciative that you brought everybody together under this umbrella. From a tech perspective, I agree with the panelists that digital health is here now. The one part I would add is that when we’re thinking about new, evolving technologies like generative AI, let’s do this in a responsible way and open the dialogue around policy. Discussion is always healthy. And let’s make sure that this technology that we’re bringing to light with good intent benefits everyone. Thanks.

Yawri Carr:
Well, in my case, in conclusion: let us strive to be digital health leaders equipped not only with technical skills, but also with a profound commitment to equity. I consider that valuing the work of nurses is very important; even as technology evolves, human professionals will of course remain very necessary, and it is a fact that technology can help us protect them, and the patients, in situations of emergency. Let us also value the work of ethicists: when they have something to say, they should not be undervalued but taken into consideration, and also when there are conflicts with, for example, profit, ethicists should have an opinion on that and be able to contribute to the mission of responsible AI, so that they are not just there as decoration but are actually taken into account. And of course, the role of youth is fundamental, as we see in all the youth-led initiatives that could strengthen the mission of digital health literacy, now and in the future, so that it develops in a very good environment that is inclusive of marginalized communities and the whole population. So I consider that health care and digital health care should no longer be a privilege, but a right. And I’m very thankful for the opportunity to be here, to express my opinions and to talk about youth as well. Thank you very much.

Man Hei Connie Siu:
Thank you very much once again to the panel for your insightful responses; the workshop is now closed. Thank you very much for coming, and together we hope we can create a future where digital health resources are accessible and equitable and can empower individuals to navigate their health journey confidently online. Thank you so much. Bye-bye.

Audience

Speech speed

152 words per minute

Speech length

76 words

Speech time

30 secs

Debbie Rogers

Speech speed

166 words per minute

Speech length

2880 words

Speech time

1043 secs

Geralyn Miller

Speech speed

175 words per minute

Speech length

2721 words

Speech time

933 secs

Man Hei Connie Siu

Speech speed

174 words per minute

Speech length

1802 words

Speech time

620 secs

Rajendra Gupta

Speech speed

204 words per minute

Speech length

3459 words

Speech time

1016 secs

Yawri Carr

Speech speed

141 words per minute

Speech length

3176 words

Speech time

1354 secs

Elections and the Internet: free, fair and open? | IGF 2023 Town Hall #39



Full session report

Felicia Anthonio

Internet shutdowns have become a widespread problem globally, with detrimental effects on lives and democratic processes. The Keep It On campaign, which aims to combat internet shutdowns, has recorded over 1,200 incidents of shutdowns in approximately 76 countries since 2016. These shutdowns are typically carried out by state actors during critical moments such as elections, protests, and conflicts.

One of the main concerns regarding internet shutdowns is their impact on democratic processes, particularly during elections. The internet plays a crucial role in enabling active participation and promoting transparency and fairness in electoral proceedings. However, when shutdowns occur, it becomes challenging to effectively monitor and ensure the integrity of electoral processes.

Governments often justify these shutdowns as a necessary national security measure to prevent the spread of misinformation. However, in practice, the opposite tends to occur. Shutdowns tend to benefit incumbent governments, as they can control the flow of information and stifle opposition voices. This, in turn, often sparks public outrage and protests. Incidents in countries like Uganda, Belarus, and the Republic of Congo serve as examples of how shutdowns have been used for political gains and to suppress dissent.

Addressing this issue requires the collaboration of various stakeholders, including businesses, big tech companies, and governments. The fight against internet shutdowns necessitates a multi-stakeholder approach, emphasizing the importance of secure, open, free, and inclusive internet access during critical moments such as elections.

Furthermore, it is crucial to highlight that internet shutdowns do not contribute to resolving crises. On the contrary, they tend to exacerbate the situations at hand. Shutdowns provide an opportunity for governments and perpetrators to commit crimes with impunity. Moreover, in conflict situations, shutting down the internet in response to flagged dangerous content ultimately escalates the crisis.

The Keep It On Coalition, a prominent advocate against shutdowns, strongly condemns all forms of internet shutdowns. In addition, they call upon big tech companies to exercise responsibility in promptly removing violent content to ensure people’s safety.

In conclusion, internet shutdowns are an escalating issue that negatively affects lives and democratic processes. The Keep It On campaign’s documentation of a significant number of shutdown incidents highlights the magnitude of the problem. The justifications used by governments for shutdowns often raise concerns about political motivations and human rights violations. Tackling this issue necessitates collaborative efforts between various stakeholders, and it is essential to prioritize secure, open, and inclusive internet access during critical moments. Additionally, internet shutdowns have been observed to worsen crises rather than resolve them, underlining the need for alternative approaches. The condemnation of shutdowns by organizations like the Keep It On Coalition further emphasizes the importance of combating this issue and ensuring the responsible conduct of big tech companies in safeguarding online spaces.

Audience

The speakers discussed several key aspects related to free and fair elections and the issue of internet shutdowns. They emphasised the importance of communication and the role it plays in ensuring fair elections. They highlighted the significance of the internet, GSM networks, and blockchain networks as essential tools for facilitating communication during election processes. Additionally, they emphasised the need for independent observers, journalists, and international organisations to monitor elections and ensure their fairness. These independent entities play a crucial role in preventing election fraud and promoting transparency.

Another critical aspect discussed was the use of blockchain technology in elections. The speakers highlighted the immutability of election results that can be achieved by leveraging blockchain technology. They stressed that this feature is essential in guaranteeing the credibility of election outcomes. Furthermore, they emphasised the role of cryptographic protection in ensuring the security and safety of the election process. Robust cryptographic measures can prevent tampering or manipulation of sensitive election data.
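The immutability property the speakers described can be illustrated with a minimal sketch: a toy hash chain in Python (an illustration of the general technique only, not any specific election system discussed in the session). Each record commits to the hash of its predecessor, so altering any earlier entry invalidates every hash that follows it, making tampering detectable.

```python
import hashlib
import json


def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with its predecessor's hash (SHA-256)."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


def build_chain(records):
    """Build a hash chain; each entry stores its own hash and the previous one."""
    chain, prev = [], "0" * 64  # genesis hash
    for rec in records:
        h = record_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain


def verify_chain(chain) -> bool:
    """Recompute every hash; tampering with any earlier record breaks the links."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev:
            return False
        if record_hash(entry["record"], prev) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True


votes = [{"ballot": i, "choice": c} for i, c in enumerate(["A", "B", "A"])]
chain = build_chain(votes)
assert verify_chain(chain)

chain[0]["record"]["choice"] = "B"  # tamper with an early ballot
assert not verify_chain(chain)      # every later link is now invalid
```

Real blockchain systems add distributed consensus and digital signatures on top of this chaining; the sketch shows only why retroactive modification of recorded results is detectable.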

Scalability was identified as another crucial component of free and fair elections. The speakers pointed out that a scalable network is necessary to efficiently manage a large number of voters, such as a population of 300 million. This ensures that the election process can accommodate a significant number of participants without any disruptions or technical limitations.

While the discussion mainly focused on the positive aspects of communication, independent observers, blockchain technology, and scalability, there were also concerns raised regarding the resorting to internet shutdowns by governments. The speakers highlighted that governments sometimes lack alternative tools to address legitimate concerns and therefore turn to internet shutdowns as a means of control. This practice was seen as problematic as it limits citizens’ access to information and disrupts the democratic process.

The potential economic impact of internet shutdowns was also discussed. Lack of reliable connectivity was identified as a significant factor that creates a difficult investment climate. Internet shutdowns and restrictions on data flows were acknowledged as factors that negatively affect a country’s economy.

The Internet Society’s efforts in developing a tool called Pulse to track and provide information on internet shutdowns and data flows were applauded. This tool aims to support activists and democracy by providing digestible information that can help address concerns related to internet shutdowns.

The concerns about potential misuse and the legitimisation of internet shutdowns for specific cases were also raised. It was acknowledged that the legitimisation of internet shutdowns during religious ceremonies or events that might incite violence could encourage misuse of this strategy by other governments. This highlighted the need to explore solutions to address structural issues within governments that may lead to internet shutdowns.

Furthermore, the speakers identified the spread of disinformation as a significant challenge during elections. Disinformation was acknowledged as damaging to the image of political leaders and the democratic process as a whole. It was proposed that internet service providers should be held responsible for controlling the spread of disinformation, and artificial intelligence could be used as a tool to achieve this.

Lastly, the role of digital technology in promoting government accountability and responsiveness was emphasised. It was suggested that the use of digital technology can enhance the accountability of governments, making them more responsive to the needs and concerns of citizens.

Overall, the discussions highlighted the multifaceted nature of free and fair elections. It was concluded that a comprehensive approach involving governments, internet service providers, political parties, and citizens is necessary to ensure the integrity of electoral processes. The discussions also shed light on the challenges and potential solutions related to internet shutdowns, disinformation, and the use of digital technology in elections.

Ben Graham Jones

The discussion revolves around the detrimental effects of internet shutdowns and the importance of safeguarding online rights. The primary argument is that the rights people enjoy offline should not be diminished when they are online. This argument is supported by the agreement at the UN General Assembly that there should be equality between online and offline rights. It is emphasised that internet shutdowns have a negative impact on communication, as they silence the entire population by cutting off their access to the internet.

Another argument put forward is that internet shutdowns exacerbate the problem of disinformation. This is because during shutdowns, state TV or selective channels often remain functional, thereby monopolising the sources of information available to the public. This concentration of information sources leads to a limited pool of information and increases the likelihood of disinformation spreading. The inability to access fact-based information compromises people’s right to access accurate information and undermines the integrity of elections.

The discussions also highlight the need for cross-context learning to effectively counter disinformation. It is suggested that there is considerable overlap in the types of disinformation narratives spread across different electoral contexts. To address this challenge, there is a call for organisations working in vulnerable contexts to learn from other contexts and enhance their preparedness for countering disinformation. This entails shifting efforts from response to prevention and providing fact-based information at an earlier stage.

Furthermore, risk forecasting is deemed crucial in addressing potential internet shutdowns. The discussions stress that by the time an internet shutdown takes place, it is often too late to take substantial action. Therefore, organisations need to map out potential risks and adjust their plans accordingly to minimise the impact of such shutdowns.

Additionally, the analysis reveals that election technology, including blockchain, can become targets for disinformation. While the details and evidence supporting this argument are not provided, it is suggested that election technologies may be vulnerable to misinformation campaigns, potentially undermining the credibility and integrity of elections.

Overall, there is a strong positive stance that internet shutdowns should be fought against. The primary reason cited is that these shutdowns impede the ability of fact-checkers and journalists to perform their roles effectively, thereby undermining freedom of information. The importance of preserving online rights and resisting the negative consequences of internet shutdowns is emphasised throughout the discussions.

In conclusion, the expanded summary delves into the various arguments and evidence related to the negative consequences of internet shutdowns and the imperative to protect online rights. Additionally, the need for cross-context learning, risk forecasting, and the vulnerability of election technology are addressed. The overall message conveys the importance of combating internet shutdowns and their detrimental impact on freedom of information and the integrity of elections.

Kanbar Hossein-Bor

Internet shutdowns have a significant impact on the flow of information, freedom of expression, and human rights. These shutdowns not only hinder individuals’ ability to express themselves online but also threaten the exercise of human rights. It is important to consider internet shutdowns in the context of broader issues, such as media freedom and misinformation.

Recognizing the gravity of the situation, the Freedom Online Coalition issued a joint statement focusing on internet shutdowns and elections. The UK has taken a leading role in addressing this problem by leading a Task Force on Internet Shutdowns as part of the Freedom Online Coalition. This collaborative approach involves stakeholders such as the UK’s Foreign Commonwealth and Development Office, Access Now, and the Global Network Initiative. The Task Force, chaired by Kanbar Hossein-Bor, advocates for a multi-stakeholder approach to effectively tackle internet shutdowns and disruptions.

Internet shutdowns not only impact individual rights but also pose a threat to the wider democratic process. By restricting access to the internet, these shutdowns hinder the exercise of offline rights online. Additionally, the economic costs incurred by societies affected by internet shutdowns are substantial.

Despite the challenges, there is a strong desire to support policymakers who may lack the capacity, but not the intent, to address internet shutdowns. This recognizes the need for collaborative efforts between various actors to tackle this issue effectively.

In the face of those with ulterior motives, it is crucial to stand firm and uphold principles of open internet access and the protection of human rights. The comprehensive impact of internet shutdowns has been highlighted by the Oxford statement, and the launch of the FOC statement further emphasizes the urgency of addressing this issue.

In conclusion, internet shutdowns pose a grave threat to the free flow of information, freedom of expression, and human rights. Addressing this issue requires a collaborative, multi-stakeholder approach, as advocated by Kanbar Hossein-Bor and demonstrated through the Task Force on Internet Shutdowns led by the UK. Policymakers must prioritize efforts to combat internet shutdowns, even when capacity is limited, but there is a strong intent to address the issue. It is essential to remain steadfast in the face of those seeking to restrict access to information and suppress rights.

Andrea Ngombet

The analysis highlights several key points concerning internet shutdowns and information control in Congo. During the 2021 elections, the government not only blocked the internet but also telecommunications, justifying this action as a measure against foreign interference and misinformation. However, this move has been widely criticized as an attempt by the government to control the flow of information.

Furthermore, anti-terrorism and cyber-criminality laws have been used to suppress opposition in Congo. Activists were arrested based on their social media posts during the internet shutdowns, raising concerns about the government’s use of legal mechanisms to target dissent and stifle freedom of speech.

The government of Congo is seeking assistance from the People’s Republic of China to acquire advanced tools for internet control, such as a firewall. However, this approach lacks technological sophistication, highlighting the need for aid in developing domestic technology and innovation.

One important argument made is that tech companies like META should play a role in preventing the spread of misinformation, particularly during elections. Through collaboration with META, Congo was able to establish the Congo Fact Check initiative, demonstrating the positive impact of cooperation between tech companies and local organizations.

Civil society organizations also have a crucial role in moderating hate speech and misinformation online. In Congo, META worked with civil society organizations to create a task force on elections, addressing hate speech and misinformation from both the opposition and government. The involvement of civil society organizations can serve as a middle ground, reducing the perceived need for the government to impose internet shutdowns.

Additionally, it is emphasized that big corporations should be encouraged to participate more actively in online moderation efforts. It is noted that these corporations often have a reactive approach to tackling online misinformation. By reaching out to them, local civil society organizations can facilitate their involvement in countering online misinformation and make their efforts more proactive.

In conclusion, the analysis reveals a concerning pattern of internet shutdowns and information control in Congo, which is seen as an attempt by the government to control the narrative during elections. There is a call for tech companies, civil society organizations, and big corporations to proactively work together to prevent the spread of misinformation and hate speech. By doing so, the likelihood of internet shutdowns can be reduced, ensuring the protection of freedom of speech and public access to information.

Nicole Streamlau

Internet shutdowns are increasingly seen as necessary measures to address concerns related to elections, such as interference, disinformation, and post-election violence. Research carried out in Africa has shown a growing acceptance of internet shutdowns as a means of controlling election-related issues. Historical practices like banning opinion polls and political campaigning near voting day have also contributed to this acceptance.

Governments in the global South express frustration with the perceived lack of response, engagement, and oversight from large social media companies. Internet shutdowns are viewed as a form of resistance and sovereignty against the dominance of these companies, which are often based in distant countries. This dynamic highlights the tensions between governments and technology companies in terms of information governance.

The decision to implement internet shutdowns is partly influenced by a lack of information literacy. Governments with limited experience and understanding of online content moderation may resort to internet shutdowns as a response. Oxford University has launched a training program aimed at increasing information literacy among policymakers and judges, promoting a better balance of competing rights and addressing information disorder within a human rights framework. The goal is to reduce reliance on internet shutdowns as a solution.

Policymakers in peripheral markets, such as Ethiopia and the Central African Republic, struggle to understand and engage with technology companies. This observation underscores the difficulties faced by policymakers in regions with limited presence and engagement, in contrast to countries like Germany, which have embassies in Silicon Valley. The complexities of the relationship between policymakers and technology companies contribute to the challenges of addressing issues like internet shutdowns.

In conflict-affected regions, internet shutdowns are becoming accepted by local populations as a means to combat online hate speech and incitement to violence. Research carried out in conflict-prone areas of Ethiopia shows that locals prefer internet shutdowns as a way to avoid exposure to harmful online content. The acceptance of internet shutdowns in these regions arises from a lack of effective alternatives to address widespread hate speech and incitement to violence online.

Overall, while internet shutdowns are increasingly seen as a response to election-related concerns, the lack of information literacy and strained relationships between governments and technology companies contribute to their implementation. However, efforts to enhance information literacy among policymakers and judges through training programs, such as the one initiated by Oxford University, offer a promising approach to reducing reliance on internet shutdowns. Finding effective and sustainable solutions beyond internet shutdowns requires striking a balance between addressing concerns and protecting rights within a human rights framework.

Sarah Moulton

Increased internet disruptions during elections have a detrimental impact on the work of ground observers and pose a serious threat to domestic observer networks. These networks play a crucial role in reporting on electoral processes and collecting vital data. The disruption of internet services hampers their operation, making it difficult to effectively monitor elections and gather accurate information.

Moreover, observers on the ground face higher risks, including the risk of being arrested. This underscores the urgent need to safeguard them and provide them with the necessary tools to measure and report data effectively. Without adequate protection and support, these observers may be deterred from carrying out their important work, compromising transparency and accountability in the electoral process.

The importance of political parties and policymakers engaging in the process is also highlighted. Attendees at the FIFAfrica (Forum on Internet Freedom in Africa) event in Tanzania displayed interest in the issue, emphasising the need for their active involvement. It is crucial for political parties and policymakers to recognize the significance of internet disruptions during elections and take proactive measures to address this issue.

Early collaboration is essential, with a particular focus on data collection relating to the economic and social impacts of shutdowns. The repercussions of internet shutdowns extend beyond the electoral process and can have a significant negative impact on healthcare and various economic sectors within a country. Therefore, it is essential to gather comprehensive data on these impacts to understand the full extent of the problem and develop effective strategies to mitigate them. Training programs for politicians and political parties can also be instrumental in preparing them for potential shutdowns and equipping them with the necessary skills and knowledge to respond effectively.

Accurate data that reflects the specific local context is vital in reports related to internet shutdowns. It is crucial that policy decisions are based on accurate and contextually relevant information, as the impact of internet disruptions can vary greatly between different regions and countries. The work being done through the Summit for Democracy highlights the recognition of this need and the ongoing efforts to ensure that data used for policymaking accurately portrays the local realities and challenges associated with internet shutdowns.

Collaboration between various stakeholders, including policymakers, civil society, internet service providers, technology platforms, strategic litigators, and international organizations, is paramount. Given the complex and multifaceted nature of internet disruptions during elections, a collaborative approach is necessary to address the issue effectively. All these actors must come together and share their resources, expertise, and data to build a comprehensive case and develop robust strategies for combating internet shutdowns, particularly during election times.

Furthermore, the platform created by the Internet Society is highly valued and supports the measurement of the cost of internet shutdowns. This platform plays a crucial role in helping to quantify the economic impact of internet disruptions and provides valuable insights into the true costs of such disruptions. By highlighting the financial consequences, the Internet Society facilitates a deeper understanding of the gravity of the issue and advocates for necessary actions to prevent or mitigate internet shutdowns.

In conclusion, increased internet disruptions during elections pose serious challenges for ground observers and domestic observer networks. It is imperative to protect and support these observers, provide them with effective tools, and engage political parties and policymakers in addressing this issue. Early collaboration, accurate data collection, and collaboration between various stakeholders are all crucial aspects of combating internet shutdowns during elections. The platform created by the Internet Society is instrumental in measuring the cost of internet shutdowns and emphasizes the need for action.

Session transcript

Kanbar Hossein-Bor:
Hi. Good morning, everyone. I think we’ll just give it about another couple of seconds. I can see some people are still entering the room. And then hopefully we will start. And just a reminder that this is a session on elections and the Internet: free, fair and open. I hope that’s the session you’ve come for. If you haven’t, you’re very welcome to stay. Fantastic. Well, let’s make a start. Firstly, a good morning to everyone here. And I know we’ve got a lot of colleagues online as well. A good morning from Kyoto to them, wherever they may be joining us. It’s a real privilege for me to be moderating this session today. My name is Kanbar Hossein-Bor. I’m the head of the Democratic Governance and Media Freedom Department of the UK’s Foreign, Commonwealth and Development Office. We have a wonderful panel here today with you. I’m going to ask them each to introduce themselves when I hand over to them to engage in this session. I’ll start off with making a few introductory remarks to set the scene, as it were. From the UK’s perspective, it’s a real privilege for us this year, as part of the Freedom Online Coalition, to be chairing one of the task forces of the Freedom Online Coalition, in this case the Task Force on Internet Shutdowns. And true to the multi-stakeholder spirit of the IGF, we’re delighted to be chairing that with the FOC Advisory Network members, Access Now, and the Global Network Initiative. We are chairing this task force because we passionately believe that internet shutdowns pose a significant threat to the free flow of information. They are a significant threat to the ability of everyone to express themselves online. They are a major source of censorship. And as all of you know, in a world where we are increasingly exercising our offline rights online, they are a fundamental impediment to our ability to exercise our human rights. 
In that regard, we want to use our task force chairship to highlight the increasing prevalence and use of shutdowns and internet disruptions. And we passionately believe that the multi-stakeholder approach is the right one. But we also recognize that internet shutdowns need to be seen as part of a much broader set of issues, all of which are related. For example, we have the issue of media freedom, online violence against women, development, and mis- and disinformation. All of them come together to pose a significant threat to the ability of all of us to exercise our rights and to the full realization of development. So in that regard, before I hand over to the panel, I want to briefly highlight for the benefit of all of you that there has been a joint statement on internet shutdowns and elections, which is actually going live today. So if you have a look at the screen, we have a quick snapshot of this statement, the first issued by the FOC. In that regard, I think it’s a great way to introduce the session today, a reminder of the determination of the FOC to take up the challenge that this issue poses. For all of you in the room, and I hope for all of you online, you can see the statement now. We will share a copy of that later. I’m very happy to discuss that as well during the Q&A. So insofar as today’s session is concerned, we’ve got, I think, five speakers. I’m going to ask them each to come in, firstly with a few words of self-introduction, and then they’ll spend about three to five minutes reflecting on a particular point of this session. And then we will have, I hope, a good half an hour or so of discussion where we can answer questions or reflect on any points that you in the room or virtually are making. So without any further ado, I’m going to ask Felicia Anthonio to start off and give a reflection on the Keep It On campaign and what some of the initial recommendations for policymakers are. So over to you, Felicia. 
Hello, can you hear me okay?

Felicia Anthonio:
All right, I’m Felicia Anthonio, Keep It On campaign manager at Access Now. And for those who don’t know what the Keep It On campaign is, it’s a global campaign that unites over 300 organizations around the world, and our objective is to fight internet shutdowns. This campaign was launched in 2016 by Access Now and other stakeholders, and since then we’ve monitored, documented, and advocated against shutdowns. I’m going to give a few highlights of what we’ve seen across the globe with regards to shutdowns in general, and then I’ll narrow my submission to election-related shutdowns and their impacts. So according to our data and monitoring, that is, by Access Now and the Keep It On Coalition, internet shutdowns are spreading, they are lasting longer, and they are also impacting lives. Since 2016, we’ve documented at least 1,200 incidents of shutdowns in about 76 countries worldwide. And these incidents of shutdowns are usually perpetrated by governments, state actors, warring parties, military juntas, or third parties, and they take place during very critical moments like elections, protests, and conflict situations. In relation to shutdowns documented around elections, we have seen at least 57 election-related shutdowns globally since 2016. Africa accounts for 44 percent of these shutdowns; that is, about 25 of these shutdowns happened in Africa. We also have countries like Iran, Bangladesh, Pakistan, Iraq, Belarus, and Turkmenistan, among others, that have weaponized shutdowns during elections. We all know and believe that the Internet and digital platforms continue to enable and enhance the fundamental human rights of people to access information, to express themselves, and to enjoy their rights to freedom of assembly. In times of elections, the Internet plays a critical role in promoting a free, transparent, and fair electoral process by providing political candidates avenues 
to reach their supporters or audience, as well as allowing equal access to communication channels for both the incumbent and the opposition to debate and highlight their political manifestos and policies. And for voters, keeping the internet and essential platforms on during elections enables them to actively participate in democratic processes, scrutinize policies put forward by political candidates, and hold their governments to account. Elections, particularly in growing democracies, are a critical time of transition, and active participation in the process contributes significantly to a credible democratic outcome. Journalists, human rights defenders, election observers, and other key stakeholders also rely on the internet and digital communication tools to monitor the electoral process. And shutdowns make it extremely difficult for all these actors to effectively monitor electoral processes across the globe. Some governments have attempted to justify these shutdowns as necessary to prevent the spread of misinformation or hateful content, or as a national security measure. However, the opposite is true. When you shut down the internet during elections, it results in chaos, in the sense that it blocks alternative sources of information and verification channels and seeks to benefit only the incumbent government. Imposing shutdowns during elections is also likely to agitate people to protest, and in that regard it calls into question the national security justifications of governments trying to defend shutdowns. And according to a study that was done in 2019 by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), shutdowns remain a go-to tool for governments who want to hold on to power. 
We have examples in Uganda, Belarus, and the Republic of Congo, and most recently we saw this happen in Gabon, where the internet was shut down and the incumbent was announced as the winner of the elections, but a military coup then overthrew him; if that hadn’t happened, we would have had the incumbent in power for the next term. And then I think that although the number of election-related shutdowns around the world has reduced over the past few years, with authorities in countries like Ghana, Kenya, Nigeria, and Sierra Leone, among others, making commitments to keep it on during elections, fighting shutdowns still remains a crucial priority for all actors working to advance democracy around the world. Next year has been described as the year of elections, with 50 or so countries scheduled to go to the polls. Given the direct interference of shutdowns with electoral processes and the outcomes of elections, I think it’s important for all stakeholders, including governments and regional and international bodies like the United Nations, the African Union, the European Union, and the Freedom Online Coalition, among others, to support the Keep It On Coalition and other stakeholders to ensure that governments do not normalize shutdowns during elections, and we welcome the just-published statement by the Freedom Online Coalition denouncing election-related shutdowns. My other recommendation goes to businesses and telecom companies, as well as big tech companies, to ensure that people have access to secure, open, free, and inclusive internet access throughout electoral processes, to ensure that these platforms are safe for people to express themselves, and to avoid giving governments reasons to justify shutting down the internet. So in conclusion, I think that the fight against shutdowns requires a collaborative effort, as we’ve seen.
And so this is not just something that civil society alone is working on. We’ve seen the just-released statement by the Freedom Online Coalition, as well as statements denouncing the use of shutdowns by several governments and other institutions, which we appreciate as the Keep It On Coalition. And we look forward to working with all of you to push back against shutdowns. Thank you.

Kanbar Hossein-Bor:
Thank you very much, Felicia, for that really great overview of elections and shutdowns. I’m now very pleased to hand over to our colleague joining us on screen, Andrea Ngombet, who will reflect on the impact of shutdowns on the ground, especially as seen from the Republic of Congo. So over to Andrea. Can you hear us? Can you hear me? Yes, we can hear you. Please continue. Thank you. OK.

Andrea Ngombet:
Let me stop the video. OK, it’s OK. So thanks for having me. I’m Andrea Ngombet from the Republic of Congo, leader of the Sassoufit Collective, which is an organization based in Paris but working on democracy and human rights in the Republic of Congo. We started with just human rights and democracy, and then we extended into many topics, such as anti-kleptocracy, and we have worked with the Keep It On campaign since 2015. So what happened in the last election in Congo, in 2021, was not just informative; it follows up on what Felicia said. The narrative, first, is about safety: the safety of the public against foreign interference, against misinformation coming from the opposition, but never about the misinformation coming from the government, of course. And by using this narrative of fighting foreign influence in the electoral process, they are able to sell the internet shutdown as: oh, we are so weak, we are a weak democracy, we don’t have the tools to keep the internet on because we don’t have the necessary tools to block that misinformation. And during that election, it was surprising that this narrative was even effective in the general public opinion of the Congolese. And it went on for about one week without phone and internet, because they did not just block the internet, they also blocked telecommunications directly in the country. And with that narrative, they extended it to anti-terrorist activity. And my point here is to say that this internet shutdown is not just about the internet; it also has a direct impact on the people. Because of the new anti-terrorist and cyber-criminality law in Congo, they are able to arrest militants from the opposition because of social media posts. Even if the internet was blocked, if you posted something earlier about the election process, they can go and arrest you.
Three activists from the opposition were arrested and put in jail for about three or four months because of this internet shutdown and the information they spread. And on our side, and this is the point I was trying to make, we worked with people from Meta to say that during election time in Africa, because of the behavior of our governments, they need to step up. I don’t ask for a full and permanent presence around elections, but during election time, because of the spread of hate speech from both the opposition and the government, someone needs to be in the middle, like a referee for the competition, for the free flow of information. And we were able to secure a way to work with Meta, and they put up something called Congo Fact Check to check the information being put out during that special time. And they were able to block a very vast amount of disinformation coming from government-related Facebook accounts. And it was really shameful for the government to come up with this idea of blocking misinformation by shutting down the internet while themselves using robots and bots to spread lies during election time. So this is what was happening. And because of that, I also think that the next move of internet shutdowns in Africa is not just about shutting down the internet; it’s about control, control of the information coming into the country. Because they do not have the newest technology, they use the internet shutdown. But in the coming years, and from the perspective of Congo-Brazzaville, they are trying to acquire a set of tools from the People’s Republic of China so they can have this kind of firewall and secure themselves from any kind of information coming from the outside world into the country. So this is what we need to be focused on: not just the regular internet shutdown, but this next step they are trying to make to block any kind of information coming from outside into the country. Thank you.

Kanbar Hossein-Bor:
Well, thank you very much, Andrea, for that really powerful reflection on the ground, especially some of those future challenges. Also, thank you for staying on time, and a special thanks for joining us. I think it’s one o’clock where you are in Paris, so we’re very grateful that you’ve dialed in, much obliged. I’m now going to hand over to Ben Graham-Jones to reflect on shutdowns and freedom of expression.

Ben Graham Jones:
Thanks ever so much, Kanbar, and thank you, other colleagues. My name’s Ben Graham-Jones. I’m an elections consultant; I work on many elections every year, and I’m an advisor to the Westminster Foundation for Democracy, a UK public body. Let me start by applauding the joint statement on internet shutdowns and elections by the Freedom Online Coalition. I think it really provides a sound basis for calling out the illegitimacy of internet shutdowns, wherever they may occur. I’m going to make three brief points today. The first is that I’d like you to imagine, if you would, a situation where you have an election, and at some point during the election process, perhaps on election day, or perhaps as the results are being counted and tabulated, nearly 10,000 journalists are locked up by government authorities. How much condemnation and opprobrium this would attract from the international community and from domestic actors, and rightly so. And yet it strikes me that when the communications of an entire population are silenced for that period, there is not always the same level and degree of condemnation. And perhaps we need to think carefully about how we can equate those two events: the legitimacy of the rights that we enjoy offline is in no way diminished when they are exercised online. And so this is the first point I wish to make. That equality between the rights we have offline and online has been agreed at the UN General Assembly. It’s something which I think we need to underscore, and I applaud the work of Access Now, of NDI, and of the other partners here today, who do such an excellent job in really raising awareness of that fact. Of course, internet shutdowns are not just about the right to freedom of expression. And the second point I would like to make pertains to disinformation, the right to access information, and the right to credible elections, all of which depend on having that basis of fact-based information.
And one of the things that I see as someone who specializes in countering disinformation is that when internet shutdowns occur, they amplify disinformation. How do they amplify disinformation? Because they concentrate the sources of information that people can access. Your state TV, for example, may remain on, or it may be that the channels that are closed down or the means that are throttled are selective. What that means for those of us working in counter-disinformation is that we need to be thinking seriously about pre-bunking, about moving our response efforts to prevention efforts, to mitigation efforts, about providing fact-based information at an earlier stage of the process where there is a risk of internet shutdowns. I want to very briefly suggest four actions that can help in that regard. Number one, when we’re working in contexts which may be vulnerable to internet shutdowns, we need to learn from other contexts. If you’re sat there in an election commission in Nigeria, let’s say, you may not be thinking about the recent elections that took place in Kenya or France or Kazakhstan. Your previous points of reference are probably the previous elections that took place in Nigeria. But actually, what we need to bear in mind is that we see quite a lot of overlap in the types of disinformation narratives that are circulated across different electoral contexts. I see this; I work globally across lots of different elections each year. And so by looking at other contexts, we can bolster preparedness for countering disinformation in advance of any internet shutdown and information monopoly being imposed. The second thing is to think about narrative forecasting: our organizations, whether election management bodies, civil society organizations, or political parties, really making a plan for what types of narratives might be deployed at different points in the process, informed by that international best practice.
And then thinking about what a response might look like. Thirdly, overcoming selection bias. We know that people don’t seek out counter-disinformation. We know people don’t look to check whether or not their pre-existing opinion is correct; there are decades of psychological research on this. And so we need to find ways of bringing that fact-based information, before shutdowns occur, into the places where it needs to be, because the very people who will otherwise seek out fact-based information are precisely the people you need to reach least. And fourth, thinking about drafting that preemptive response early. If you can draft effective infographics and videos to counter some of these narratives early on, then when they do come up, it’s going to reduce your response times and cut the virality of disinformation before any shutdowns are imposed. The third point I’d like to very briefly make is on risk forecasting. When we’re thinking about internet shutdowns, by the time one takes place, it’s often too late to do a lot of the actions that can have substantive consequence, whether that’s the publication of telecommunications licensing agreements or putting on concerted pressure, the sorts of things that the Keep It On Coalition does so effectively. And so we really need to be thinking, on a sector-wide basis but also within our individual organizations, about mapping out risks. So for example, if you’re a body that sends election observation missions, you might be thinking about whether there were known risk factors for internet shutdowns present in particular contexts, and then prioritizing the deployment of your missions to those places so that you can serve as a counterweight to the monopolization of information.
Likewise, if you are a civil society organization whose communications plan depends on releasing a statement around election day, but you realize that there is a chance of an internet shutdown, then maybe you need to think a little more carefully about how to communicate your key messaging around the election if that’s not going to be possible. So three key points: remembering that the same rights we have offline apply online, thinking about farsighted disinformation response, and forecasting risk. Thanks ever so much.

Kanbar Hossein-Bor:
Thank you very much, Ben, especially for those pretty practical recommendations as well. We’re now going to go back online. We’ve got a colleague joining us, Nicole Stremlau, who will reflect on her research on government decisions around internet shutdowns, especially in Africa. Over to you, Nicole.

Nicole Stremlau:
Good morning, everyone. I hope you can hear me. We can. Please continue. Okay, thank you so much. So I just wanted to take a couple of minutes to reflect on some of the research we’ve been doing at Oxford around internet shutdowns, particularly around elections and conflicts, primarily in Africa. We’ve been conducting research on government decision-making, basically asking why governments are choosing this relatively blunt tool of internet shutdowns, as compared with other forms of control. And specifically in Ethiopia, and I just returned from Ethiopia, we’ve also been looking at the impact and the perception of shutdowns in violence-affected communities. And actually, like Andrea, we found a growing acceptance or acquiescence that this is actually an important tool. In the process of our research, we’ve also sought to come up with a different reading of internet shutdowns: to look beyond this framing, this dichotomy of digital authoritarianism, and ask whether or not it’s possible to identify alternative logics and rules, rather than the assumed motivations, of what’s actually driving shutdowns. And I also have three points, like my previous colleague. First of all, a somewhat obvious point is that we’re seeing a growing acceptance of shutdowns. They’re becoming increasingly normalized as a tool to address very legitimate concerns around election interference, concerns about disinformation, concerns about incitement to violence post-elections, and they’re seen as a useful tool or a necessary trade-off to protect the integrity of the electoral process. And by this, I’m talking about a lot of reflections from the research we’ve been doing on the ground in Africa. So I think it’s also helpful to remember that there have long been information controls around elections in different democracies.
So the banning of public opinion polls within weeks of an election, as seen in Kenya, or the prohibition of political advertising or campaign rallies close to voting day, might arise in particular contexts in accordance with historical experiences. And the challenge of social media is that it makes imposing these kinds of silences around elections increasingly difficult. So shutdowns are this blunt, very crude tool for addressing some of these concerns, in the context of having less precise tools, or not knowing what else to do, than might historically have been available for dealing with concerns around mass media, for example. And second, I think most importantly, we see shutdowns as a growing form of resistance, an expression of frustration at the overwhelming power of large social media companies that are typically based in the US or China. And we see this frustration with the failures and the inequalities of online content moderation. I think to some degree this has become well documented; people have been writing and doing research about this, particularly around the failure of online content moderation in local African languages and the lack of attention given to resource-poor communities. So we see governments in these more marginal markets in the Global South being frustrated with this inadequate response, the lack of engagement, the lack of product oversight from these large tech companies. And so shutdowns are seen by some, and it’s not always explicit, as a way of expressing sovereignty, as a way of pushing back against what are often seen to be the arbitrary responses of incredibly rich companies deciding good and bad actors from a distance, and the frustration also with rules that are being written in far-off countries according to certain logics that local authorities feel powerless to engage with or really to challenge.
And so, like Andrea, I agree there’s a lot of discussion and debate about what more these companies can do, not necessarily in Kenya, but more in the Central African Republic, for example, or regarding the failures of what’s been happening in Ethiopia. And third, I think we’ve also seen that the decision to implement shutdowns partly reflects an information literacy gap. To some degree this has been overlooked, but our research has shown that governments often resort to shutdowns because of a lack of experience in how to actually engage these large tech companies, or a lack of understanding of alternative ways of addressing the very legitimate concerns about the failures of online content moderation, particularly around elections or in cases of extreme violence, and of how to navigate the balance between competing rights, such as the responsibility to protect in cases of extreme violence, as well as freedom of expression, or the right to information, as we’ve mentioned on this panel already. And if I can make a very tiny plug: we at Oxford were just awarded a European Media and Information Fund award to launch a new program to train policymakers and judges through a new executive program on information literacy. We’re specifically going to be working on how to improve understanding among these key influencers, and how to address the very real challenges that information disorder poses, particularly in the context of generative AI, but really how to do so through a human rights framework, ahead of elections and in contexts of extreme violence, hopefully reducing the need for, or the turn towards, the blunt, crude tools of censorship that internet shutdowns are. Thank you.

Kanbar Hossein-Bor:
Thank you so much, Nicole, really helpful there. And I’m also grateful for you joining us; I know it’s a difficult time zone where you are as well. We’re now going to go to our last speaker, Sarah Moulton. Before we open it up to you, the audience, for Q&A, Sarah will be reflecting on the multi-stakeholder coordination challenge. Over to you, Sarah.

Sarah Moulton:
Thanks. My name is Sarah Moulton. I’m from the National Democratic Institute; I’m the deputy director of our democracy and technology team. NDI is a nonprofit, nonpartisan non-governmental organization based in the US, but we work in about 50 countries around the world, and we come at this from an implementation angle. NDI works on and supports democratic processes, strengthening democratic institutions, and provides a lot of on-the-ground election support for many of the elections that have been discussed already. Primarily, we do a lot of work with domestic observation groups, independent groups on the ground who are deployed in advance of an election to report back on what they’re seeing at the polling station and to report on the process and the results. Obviously for us, from a practical standpoint, it’s really important for the internet to be working so that they can transmit their findings and what they’re seeing throughout the day, allowing the observer group to then make a statement about the process and, hopefully, verify that it was indeed democratic and properly run. That’s not always the case, however. What we’re seeing often these days is that these disruptions are making it more challenging for groups on the ground to do so. There’s definitely been a lot more concern about what might be happening, and about trying to plan in advance for potential shutdowns. And so one thing that we’ve really explored is how we better utilize this network, which can often include thousands of observers, maybe up to 2,000 in some cases, deployed in all parts of a country, and how we take advantage of that distribution in order to collect better data on what we’re seeing across the country, in terms of whether there’s a shutdown, whether there’s just a disruption, or throttling, or perhaps censorship of particular sites. That can lead to better data collection in the process. So how can we feed that data to the
wider network of stakeholders that we’ve been talking about? Our topic here is multi-stakeholder collaboration: how do we share that data with those who can perhaps do more direct advocacy with individuals, maybe at an international level, but also domestically? Our concern with that particular group, obviously, is that there are higher risks to observers these days. We’ve seen a couple of recent examples, in Sierra Leone and, perhaps the more difficult one to talk about, Zimbabwe, of observers arrested simply for doing an independent analysis and verification of the process, sometimes in the middle of their work on election day, in the case of Zimbabwe. And so we have to look at how we protect these groups who can collect this data, but also enable them to do so, because there’s a lot of opportunity; there are a lot of tools out there now to take these measurements and then report them up. The other side of this angle is that NDI also works with politicians and policymakers, and I think there’s a real opportunity for collaboration here, but we really need to do so well in advance of an election. We need to get this process started now, yesterday, especially with 2024; we’ve been talking about 2024 for years now, but we really have to actually start working towards it. The statement is a great start and a great recognition of what’s coming up, and I know the Keep It On campaign puts a lot of effort into planning and tracking which elections are going to be perhaps the most significant in their potential for shutdowns. But also reflecting, having just come from the FIFAfrica event in Tanzania last week, I think there’s a lot of interest from policymakers to engage in this process, but there’s also a lack of information at times, a lack of understanding of
the environment. And sometimes the approach from civil society might potentially be aggressive, and they perceive it as us not being collaborative, as coming at them as opposed to working together with them. And frankly, sometimes there’s a challenge in trying to get policymakers to care about the issue. The prospect of freedom of expression may not really resonate, especially when, as in the reference Felicia made, national security is often the argument made. But I think where we can really make a difference is on the impact of a shutdown beyond that: looking at healthcare issues, looking at economic loss, which has a huge impact on a country, and really trying to collect that data and use the data we’re collecting to make that case, and to work earlier on with not only politicians or individuals but political parties generally, because politicians during the time of an election are really more concerned about their election than about a potential shutdown. So how can you work with the wider political party ecosystem? And I think there are things we can do in preparation for that. There is a desire for training programs, for learning about these tools, for working together through multi-stakeholder approaches, whether that’s with civil society or others. And I think if we can make better efforts to connect civil society with political parties, as well as international initiatives, we can go a long way towards mitigating the potential damage that’s coming up. So I’ll stop there.

Kanbar Hossein-Bor:
Thank you so much, Sarah, really helpful. We’re now going to open up for Q&A. I want to start with folks in the room first. If you can briefly introduce yourself and also briefly set out your comment or question, that would be great. I think the format is, I see a mic in the middle, so maybe we’ve got a colleague already there. The gentleman in the white shirt, if you could start off.

Audience:
Yes, hello. My name is Eugene Morozov, and I represent devoteusa.com, a voting solution. I want to thank this panel for bringing up two very important components of free and fair elections. One is the availability of communication. You talk about the internet, but there are, of course, other ways to communicate, like GSM networks or blockchain networks, which do not use the TCP/IP protocol at all. Then you talked about something else very important, and that is the availability of independent observers and journalists and international organizations, very important. But there are three other critical components of free and fair elections which this panel has not touched upon, so I just wanted to raise them. One of them is true immutability of election results, and that is achieved by, for example, using a blockchain, which is what we use. Then, of course, there is the issue of security and safety, and that is achieved by using cryptographic protection. And you also need scalable networks to conduct elections in a country with, let’s say, 300 million voters; you must have a scalable network to conduct those. So my question is: are there any thoughts on those other components of the free and fair election process that this panel is thinking about? And if not, of course, come talk to us. We can help. Thank you.

Kanbar Hossein-Bor:
Perfect. Thank you, Eugene. Some really powerful reflections there on the wider technological context of elections. I’m going to look to the panel in the room first to see if anyone wants to respond to Eugene. And Ben has volunteered. Ben, over to you.

Ben Graham Jones:
Sure, thanks, Eugene. I do want to keep us close to the remit on internet shutdowns, but just to say that I think there are probably two components to electoral legitimacy. There’s the actual process itself and how that pans out, and then there’s the perception of the process. And election technology is a classic target for disinformation, in part because it’s very difficult to explain your blockchain, and you can’t observe electrons as easily as paper ballots, which makes it quite tricky sometimes, even though there are big tools for building confidence, like risk-limiting audits and cryptographic methods. But I think that’s exactly why it’s so important that we fight internet shutdowns: when you’ve got that sort of disinformation that can be levied against election technologies in particular, you can’t actually fight it if the fact-checkers and the journalists don’t have the ability to do so.

Kanbar Hossein-Bor:
Great, thank you. We have another speaker now. Over to you.

Audience:
Hi everyone, Nikki Muscati. I’m from the US Department of State, Bureau of Democracy, Human Rights, and Labor, and I also serve as our focal point for the Freedom Online Coalition. Thank you so, so much for this panel. I was excitedly writing so many notes, and I have so many follow-ups that I’d love to have with all of you. I have a question that was sparked by some of Nicole’s comments, but I’ll really just open it up to the whole panel. One of the things that I’ve also found, pretty consistent with the first finding that you noted, is the apparent acceptance of this tool internationally, really due to the fact that so many governments feel they don’t have very many other tools to address what, again, are legitimate concerns. And when we’re going through the list that we often see in Access Now’s Keep It On reports of the stated reasons an internet shutdown might be happening, one of the reasons often cited, an “I can’t believe that’s a reason” one, is to prevent cheating. So I am a little bit curious, just across the board: what are some of the solutions that folks have been thinking about to address what really seem like institutional frameworks and structural issues within governments that leave them unable to address some of these, again, legitimate concerns happening within the country, so that they turn to the internet and just bluntly use a shutdown to try to address everything, creating so many additional concerns that build on top of the original legitimate ones? Thank you.

Kanbar Hossein-Bor:
Thank you, Nikki, a really important point there about practical alternatives for policymakers. If I may, as you suggested, Nikki, Nicole had touched on some of these points, so I might ask Nicole online if you want to come in on Nikki’s question. Over to you.

Nicole Stremlau:
Sure, thanks for that, Nikki. Well, I think we see it at both levels, and maybe Andrea wants to also come in with what he’s seeing in the Congo. I think we do see it with policymakers not understanding, particularly in markets that are peripheral to the large tech companies. Here I’m not speaking about Kenya; I’m speaking about the Central African Republic, and to some degree about Ethiopia. They don’t have the same channels, the same lines open; they don’t have embassies in Silicon Valley like Germany does. It’s just a very different environment and relationship with these companies, and so they’re also not sure how to engage with them. And I think it’s not only at the level of the companies; it’s also an understanding about technology, an understanding about what other tools they have, how else they can deal with it, other than shutting down the internet. And what is very concerning, and this is some of the research, as I mentioned, I just returned from Ethiopia, and we’ve been doing long-standing research in Hawassa and Shashamane, two conflict-affected regions, looking at how communities there are engaging with internet shutdowns and how they see their impact. We have seen there that there is an acceptance of these internet shutdowns, because people are so fed up with the content they’re receiving online, with the massive amounts of online hate speech, with the incitement to violence, and they’re also experiencing violence on the ground. So they’re just saying, and I’m putting it very crudely, our findings are more nuanced than this, but in the interest of ten seconds, they’re finding that there aren’t any other alternatives, so they’d rather not be exposed to what they see as inciting real-world violence; they’d rather just have it shut off.

Kanbar Hossein-Bor:
Great, thank you for that. I see some more hands up in the room; we've got two speakers. If the lady at the microphone could come in, and then afterwards the gentleman here, we'll take these two questions together. So over to you.

Audience:
Wonderful, thank you. I'm Sally Wentworth from the Internet Society, and I want to thank you for this great panel; I learned a lot, and there are a lot of things to be concerned about. We at the Internet Society are a more technical organization, and we've thought hard about what role we can play to support the work that many of you are doing to support freedom, democracy, and the free flow of information. Where we stand is that we like to look at what we can see on the Internet: is there information about shutdowns, about data flows, about cross-border connectivity, that we can make available in digestible ways that you all can use in your advocacy and promotion of democracy and free elections? Sarah, I was particularly struck by your comment about putting this in a broader context of impact, not just in the immediate term with respect to the election, but what ongoing shutdowns do to a country's economy. If we see governments saying we want to be an online economy, we want to be a digital marketplace, we want to have all these opportunities, but there's no reliability of connectivity, that makes for a very difficult investment climate. And so that's some of what we're trying to do. We have a tool called Pulse, a little bit of a shameless plug, but really what we're trying to do is create resources that are useful for activists doing this kind of important work. So I want to thank you for that and express our support and willingness to be helpful in this.

Kanbar Hossein-Bor:
Thank you so much. I'll take the two further questions in the room together, then we've got a hand up in the virtual room as well, and then we'll do one final round of reactions from the panel. But over to yourself.

Audience:
Hi, I'm Jamil. I'm a barrister, but I'm also a policy counsel for many of the tech companies in Pakistan. And one of the things I found very effective was to actually run a timer, a clock that shows how money is being lost every time, you know; it worked really well with ministers and other policy folks as well. My question really is to Nicole. I completely understand there are certain things we're also seeing in countries like Pakistan, where there are religious ceremonies or religious days where there could be very serious violence, and so handling the internet in some way becomes important. And if they don't know what to do, they will shut it down; that's what's happening every single time. My concern is that while we legitimize that, and we said that's a concern, what I'm hearing constantly in this room has been this sort of idea that, you know, there are actually good reasons. I'm concerned about certain governments, who might not be in Central Africa, for instance, but in other places, who might actually take heed from this and say: wonderful, we have people who agree with us. So I'd just like to make sure that we balance that out a little bit. Thank you so much.

Kanbar Hossein-Bor:
Thank you. A really good point there. And last but not least, over to you, sir. Thank you.

Audience:
Thank you. Let me introduce myself: I'm Ganesh Pandey, and I work for the government of Nepal in the Prime Minister's Office. While talking about free and fair elections and the internet, right now we focus only on internet shutdowns, but we should not forget that a free and fair election needs a comprehensive approach. There is the government, there are internet service providers, there are political parties, and there are also the citizens. Sometimes the government intentionally or purposefully controls the media and the internet to serve some vested interest or to hide information. In that case, how can we make the government accountable? Sometimes the public criticizes a leader or a candidate in the elections. If that is disinformation, within one hour it spreads so fast that the image or the trust of that leader goes down immediately. And we don't have access to the internet service providers to check or restrict that disinformation, or to flag that it is false information. So how can we make internet service providers responsible, through the use of AI or other tools, so that such disinformation does not spread on social media? And the second thing is: how can we make the government more accountable and responsive through the use of digital technology? This is also very important. Thank you very much.

Kanbar Hossein-Bor:
Thank you very much. Some really important reflections from a policymaker's perspective. I'm going to ask the panel to come in; I'll introduce you and ask you to come in. But first, I think we have André, one of our speakers online, who would like to come in. So André, did you want to come in on those points or another point?

Andrea Ngombet:
Yeah, thank you. So in the case of Congo, it was Sassoufi who reached out to META to have a kind of task force on the elections. And it's not that we are trying to justify the reasoning of the government; we are identifying a pain. The pain is that there is hate speech from the opposition and the government, and there is misinformation from every side. So as a civil society organization, we need to be the referee in the middle ground. And if we can, as a takeaway for the group, have more civil society organizations reaching out to those big corporations and doing this online moderation at the local level, because those companies won't do it themselves, we need to push them. And by doing so, we will erase that argument of a government that the internet means violence, because we will have local civil society working against the hate speech, working against the incitement to violence. And if more of us are doing that work, the government won't have this legitimate argument to say: oh, nobody is doing that work, so we have to shut down the internet. This is our way to address that specific problem.

Kanbar Hossein-Bor:
Thank you, Andre. I think that addresses one of the points made just now about how to engage with internet providers and content platforms. But I think we had a question from the Internet Society around data and the economy and making that argument; maybe I could ask Sarah to respond to that. And then we had a comment from our colleague based in Pakistan about the dangers of potentially giving arguments to those states who don't have, put bluntly, the best of intentions in this area, and the unwitting power we might hand over to them. Maybe I can ask Felicia to respond to that. So, Sarah and then Felicia.

Sarah Moulton:
Yeah, all I would say is thank you to the Internet Society. I know that there's been a lot of work being done lately, especially through the discussions from the Summit for Democracy, the platform that's come out of that or been strengthened through it, and also the cost of internet shutdowns tool; well, that's a different title, but that tool, I think, is really critical. And it's really about getting it into more hands, and about how we make sure that the data is accurate and reflects the local context, because that's the other situation that we face: if we're going in and speaking to a particular policymaker, they want to make sure that it reflects their situation and their context. And I think, as I said, my main point is that these conversations, this work, need to start now, particularly for the elections coming up. This is still my question: how can we have this collaboration point in advance, sitting down, figuring out what data you have and what data we have from on the ground, and making sure that we're all sharing the information we're collecting and working with the right people, whether that's policymakers, civil society, ISPs, the tech platforms, strategic litigators, or international bodies like the FOC? This is very critical for raising the alarm, and all of this data comes together to make the case. So thank you for that. I'm not sure if I'm answering this particular question, but I just want to note the importance of that platform and how much we value it.

Kanbar Hossein-Bor:
Thank you. Over to you, Felicia.

Felicia Anthonio:
Yes. Yeah, so for us at the KeepItOn coalition campaign, we haven't seen any evidence of shutdowns contributing to resolving the crises that governments tend to cite. When you shut down the internet during conflicts in response to dangerous content being flagged online, it only escalates the crisis. It endangers more people. It provides an opportunity for governments and perpetrators to commit heinous crimes against people with impunity. And so for us, we believe that what needs to be done is this: yes, there is violent content on platforms, and big tech companies need to be responsible in taking down violent content, hateful content or dangerous content in order to keep people safe. And so I just want to emphasize that the KeepItOn Coalition denounces all forms of shutdowns. We haven't seen shutdowns serve as a solution to any form of violence anywhere around the world. If anything, what shutdowns do, as I said, is provide an opportunity for governments and warring parties to perpetrate heinous crimes against people around the world. And so I just want to make this very clear on behalf of the KeepItOn Coalition. Thank you.

Kanbar Hossein-Bor:
Thank you very much. We're really reaching the end now, and I've got the unenviable task of trying to sum this up in about a minute or two. Three very quick points from me. Firstly, a big thanks to our speakers for coming in and setting out this very complicated issue for us, and also a big thank you to all of you, both online and in the room, for engaging in this. Secondly, for me this is a reminder of the importance of a principles-based perspective: namely, internet shutdowns pose a massive threat not only to the ability to exercise offline rights online, but also to the wider democratic fabric of society, and they impose significant economic costs on societies as well. Finally, the third and more positive note I want to end on is that there are a lot of good intentions across this discussion about trying to support policymakers who may not have the capacity but do have the intent to address these issues, while also recognizing that we should stand firm in the face of those who actually don't have the best of intentions here. Next year we potentially have the fate of two billion people in about 50 or so elections to consider, and the need to stand up for that on a norms basis. In that regard, I really want to remind everyone of the FOC statement that we launched today. That's a start. About two weeks ago, as part of the UNESCO International Day for Universal Access to Information, an Oxford statement was also launched, which I'd like to bring to your attention and which I hope will be on screen; it highlights the comprehensive impact of these issues taken together. And finally, we hope that through the FOC and the Taskforce on Internet Shutdowns we can, through a multi-stakeholder approach, bring all the expertise and data together to come up with some practical measures to address the significant challenges that are not only happening today but that we will also be facing collectively next year.
So thanks again to all our speakers and to all of you for joining this session.

Andrea Ngombet: speech speed 147 words per minute; speech length 932 words; speech time 381 secs

Audience: speech speed 168 words per minute; speech length 1401 words; speech time 501 secs

Ben Graham Jones: speech speed 177 words per minute; speech length 1299 words; speech time 441 secs

Felicia Anthonio: speech speed 140 words per minute; speech length 1276 words; speech time 547 secs

Kanbar Hossein-Bor: speech speed 162 words per minute; speech length 2009 words; speech time 744 secs

Nicole Streamlau: speech speed 181 words per minute; speech length 1330 words; speech time 440 secs

Sarah Moulton: speech speed 178 words per minute; speech length 1392 words; speech time 468 secs

Disrupt Harm: Accountability for a Safer Internet | IGF 2023 Open Forum #146

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator – Alexandra Robinson

Women and girls are subjected to significant levels of online harassment and gender-based violence, as highlighted by the analysis. This underscores the urgent need to prioritize their safety in digital spaces. Alexandra Robinson emphasizes the importance of addressing this issue through a combination of law, policy, and civil society movements.

There is a growing awareness in the international community regarding the prevalence of technology-facilitated gender-based violence. The Commission on the Status of Women dedicated a session to the intersection of gender and technology, and global outcomes documents now incorporate language on gender-based violence. These developments indicate an increasing focus on addressing this issue at a global level.

Several countries are taking action by implementing laws and policies to combat technology-facilitated gender-based violence, demonstrating their commitment to protecting the rights of women and girls. This is a positive step toward achieving gender equality and ensuring the safety of women online.

In addition, there is support for the progression of policy and legal systems concerning gender-based violence, highlighting the need for robust frameworks to effectively address and prevent such violence. This recognition of the importance of institutions and mechanisms to disrupt online harm experienced by women is encouraging.

In conclusion, the analysis highlights the high rates of online harassment and gender-based violence faced by women and girls. It emphasizes the significance of prioritizing their safety in digital spaces and the role of law, policy, and civil society movements in addressing this issue. The international community’s increasing awareness of technology-facilitated gender-based violence and implementation of relevant laws and policies offer hope for meaningful change. Achieving gender equality and combating gender-based violence require continued efforts and support for the progression of policy and legal systems.

Karla Velasco

The Association for Progressive Communications (APC) and its member organisations are actively involved in addressing various aspects of women’s rights, sexual rights, and feminist movements. Their work spans approximately 40 countries, mainly in the global south. A significant achievement of APC and its members is the successful recognition of online gender-based violence as a violation of human rights in 2022. This recognition is a result of their continuous efforts and advocacy.

APC aims to create a gender-inclusive internet that goes beyond providing access to the online world. They highlight the importance of understanding the challenges faced by women, as well as individuals from diverse genders and sexualities, online. Critical issues that APC addresses include online gender-based violence and technology-facilitated gender-based violence. They believe that discussions on these topics should not only raise awareness but also focus on the response and remedy for victims and survivors.

Intersectionality is another key focus for APC. They assert that a gender-inclusive internet should consider factors such as race, gender, identity, sexuality, class, and ethnicity. By highlighting these aspects, APC aims to create a comprehensive and inclusive digital space that addresses the needs and concerns of all individuals, regardless of their social backgrounds.

APC promotes a vision of transformative justice, emphasising values such as pleasure, sexuality, joy, and freedom of expression. They believe that promoting a more positive and empowering narrative around gender issues online can lead to societal transformation that respects and upholds the rights and dignity of all individuals.

One important observation is that APC urges the discussion to move beyond acknowledging and condemning online gender-based violence towards implementing measures that provide support and remedies for victims and survivors. They call for comprehensive discussions and actions on victim support and advocacy to ensure that those affected receive the necessary assistance and justice they deserve.

In conclusion, APC and its member organisations play a crucial role in advancing women’s rights, sexual rights, and feminist movements. Through their advocacy and initiatives, they have been instrumental in recognising online gender-based violence as a human rights violation. APC’s emphasis on a gender-inclusive internet, intersectionality, and transformative justice demonstrates their commitment to creating a more equitable and empowering digital world. Their call to prioritise victim support and remedies further reinforces their dedication to addressing the needs and challenges faced by individuals affected by online gender-based violence.

MARTHA LUCIA MICHER CAMARENA

The analysis highlights a concerning issue in Mexico, where digital violence is affecting women, adolescents, and girls. Startling statistics reveal that three out of ten women internet users in Mexico have become victims of cyberbullying. Furthermore, a staggering 74.1% of women victims of digital violence are between the ages of 18 and 30.

What’s more alarming is that the majority of the aggressors responsible for these acts of digital violence are known individuals, with former partners being the main culprits, accounting for 81.6% of the cases. This indicates that digital violence is not a random occurrence but often involves individuals with intimate or prior relationships with the victims.

Recognising the seriousness of this issue, the call for legislation to provide safety for women in digital spaces has been raised. One positive aspect is the existence of the Gender Equality Committee in the Mexican Senate, chaired by a prominent figure who is actively working towards this cause. The committee has successfully enacted important reforms that define digital violence and establish regulations for protection orders in cases of digital violence.

However, despite these positive steps, challenges remain in the judicial system, public ministries, and amongst judges. These institutions and individuals pose significant obstacles to achieving gender equality. Lack of adequate understanding, biases, and systemic issues still prevalent in the judicial system hinder progress in addressing gender-related issues effectively.

On a more positive note, the analysis also highlights significant progress made in the realms of gender equality and women’s rights over the past three decades. This progress is evident considering the participation in the Beijing 1995 Conference, which focused on gender inequality and highlighted various gender-related topics. Notably, these topics were once considered ‘crazy’ but are now internationally recognised areas of concern and focus.

In conclusion, the analysis sheds light on the issue of digital violence affecting women, adolescents, and girls in Mexico. Legislation is urgently needed to ensure their safety in digital spaces. Although advancements have been made in this regard, challenges in the judicial system and among public ministries persist, hindering progress towards gender equality. Despite these challenges, notable progress has been achieved in gender equality and women’s rights, with gender-related issues now receiving international attention and recognition.

Audience

This analysis explores several crucial topics related to gender-based violence, social justice, and the intersection of digital technologies. The speakers discussed the various risks and opportunities presented by the internet in combating gender-based violence and promoting social justice.

One speaker highlighted the work of the NGO Derechos Digitales, which operates at the intersection of human rights and digital technologies. They argued that the internet is a place of both risk and opportunity. On one hand, it allows for greater visibility and the potential for addressing social justice issues. On the other hand, it also exposes individuals to risks and potential harm, particularly in relation to gender-based violence.

Another speaker focused on the need for sensible legislation, enforcement, and understanding to address technology-facilitated gender-based violence. They emphasized that standardizing such legislation is currently under discussion. However, they also noted that the legislative aspect alone is not enough to combat this issue effectively.

In connection with this, the importance of legal frameworks that consider the privacy, freedom of expression, and access to information of survivors was raised. The speakers argued that it is not just the rights of offenders that should be considered, but also the rights and protection of those who have experienced gender-based violence.

Furthermore, an intersectional approach, which takes into account contextual and social differences, was advocated. The speakers acknowledged that social problems disproportionately affect individuals in vulnerable situations. Therefore, any efforts to address gender-based violence and promote social justice must consider these differences and work towards a more inclusive and equitable solution.

Lastly, the analysis included a notable call for age-based protections, particularly for adult women, within the legal system. It was highlighted that while there are existing protections for children up to the age of 18 in the speaker’s country, violence against adult women is often normalized and they are not always recognized as victims. This observation emphasizes the need for a comprehensive approach to tackling gender-based violence and ensuring justice for all individuals affected by it.

In conclusion, this analysis highlights the multifaceted nature of gender-based violence and the need for comprehensive strategies to combat it. It underscores the importance of legislation, legal frameworks, and an intersectional approach in promoting social justice and addressing the risks and opportunities presented by digital technologies. Additionally, it raises awareness about the need for age-based protections, especially for adult women. By considering these factors, society can take meaningful steps towards creating a safer and more equitable environment.

Julie Inman Grant

The analysis explores several crucial aspects of online harassment and the urgent need for effective measures to combat it. One notable observation is the disapproval of the term ‘revenge porn’, which is deemed to trivialize and victim-blame. Instead, there is an argument to adopt the term ‘image-based abuse’ to better convey the seriousness and harm caused by such actions. This emphasises the importance of using language that accurately depicts the nature and impact of online harassment.

Another significant finding is the intersectional nature of online harassment. The analysis highlights that indigenous Australians experience twice the amount of online hate compared to other groups. It also reveals the different challenges faced by urban and rural indigenous populations, as well as culturally and linguistically diverse communities. This underscores the necessity of understanding and addressing the unique vulnerabilities and perspectives of these groups to effectively tackle online harassment.

The analysis further emphasises the importance of co-designing preventive solutions with vulnerable communities. It stresses the need to consider diverse experiences and vulnerabilities when designing mechanisms to prevent online harassment. This promotes a more inclusive approach that is better equipped to address the specific challenges faced by different groups, thereby increasing the effectiveness of preventive measures.

Furthermore, the analysis highlights the successful implementation of deterrent powers in curbing online abuses. It indicates a 90% success rate in removing abusive content, which is a positive outcome. Moreover, it suggests that women who sought help had positive responses, affirming the effectiveness of these measures in providing relief and protection to victims.

Finally, an important observation from the analysis is the willingness of the eSafety Commissioner to collaborate internationally. Recognising that online harassment is a global issue, the Commissioner acknowledges the importance of a global approach to addressing it and ensuring a safer online environment for all. This demonstrates the recognition of the need for partnerships and information sharing to effectively tackle online harassment.

In conclusion, the analysis underscores the need for a comprehensive and inclusive approach to combat online harassment. It highlights the importance of using appropriate language, understanding the intersectional nature of online harassment, co-designing preventive measures with vulnerable communities, implementing effective deterrent powers, and collaborating internationally. These insights provide valuable guidance in tackling the complex issue of online harassment and ensuring a safer online environment for everyone.

Juan Carlos Lara Galvez

The internet is a space that presents both risks and opportunities. In the context of social justice and combating gender-based violence, the internet has provided a platform for giving visibility to social demands. It has allowed for the amplification of voices and the dissemination of information related to these issues. This is a positive sentiment as it signifies the potential for social change and progress.

To effectively address technology-facilitated gender-based violence, legal frameworks should take a balanced perspective. This means considering the rights of individuals, including privacy, freedom of expression, and access to information of survivors. This approach recognizes the need for sensible legislative efforts and standards that uphold these rights while addressing the issue of gender-based violence. It is a positive stance that acknowledges the importance of striking a balance between protecting survivors and ensuring their rights are respected.

In addition, an intersectional approach is necessary to address the contextual and social differences that exist within gender-based violence. This understanding recognizes that certain issues disproportionately affect people in situations of vulnerability. It highlights the need for a comprehensive and inclusive approach that takes into account factors such as race, class, and sexuality. In particular, it acknowledges that women also face such issues in the public sphere, further emphasizing the importance of an intersectional perspective. This stance is positive and highlights the significance of considering various dimensions of identity and vulnerability in addressing gender-based violence.

However, it is important to note that legislation alone cannot fully resolve complex social issues. While legal frameworks are a crucial component, enforcement and understanding throughout the system are equally important. This neutral sentiment indicates that a comprehensive solution entails not only enacting laws but also ensuring their effective implementation and creating a deeper understanding of the underlying causes and dynamics of gender-based violence. It is a reminder that a multifaceted approach is needed to address the complexity of these social issues effectively.

In conclusion, the internet has the potential to serve as a platform for social change and combating gender-based violence. Legal frameworks should take a balanced perspective, considering the rights of individuals, while addressing technology-facilitated gender-based violence. An intersectional approach is necessary to address the contextual and social differences that exist within gender-based violence and other social issues. However, it is essential to recognize that legislation alone is insufficient in resolving complex social issues and that enforcement and understanding are crucial factors in achieving meaningful change.

Eiko Narita

The analysis highlights several important points related to internet governance and the fight against harm online. One of the main arguments is the significance of multi-stakeholder conversations in this endeavor. These conversations involve various stakeholders such as governments, regulatory bodies, civil society organizations (CSOs), businesses, and rights-based organizations. By including diverse perspectives and expertise, multi-stakeholder conversations can lead to more effective strategies and solutions for combating harm on the internet.

Civil society organizations (CSOs) are specifically emphasized as crucial entities in internet governance. Their role in giving voice to ground realities is recognized, and organizations like APC and Audrey are mentioned as examples. These CSOs play a vital part in ensuring that the internet remains a safe and inclusive space for all users.

Accountability in online digital technology crimes is identified as a significant challenge. The analysis highlights that holding individuals accountable for online crimes is much more difficult than accountability for crimes against humanity such as genocide. This observation sheds light on the complexities associated with addressing online crimes and the need for robust systems and mechanisms to ensure accountability.

The importance of continuing to use platforms like the Internet Governance Forum (IGF) is emphasized. These platforms provide spaces for interaction and the amplification of important voices. By engaging in ongoing discussions and collaborations through platforms like IGF, the momentum in addressing issues related to internet governance can be sustained.

Additionally, the analysis includes the UNFPA’s efforts to end gender-based violence. It is stated that the UNFPA is actively working with governments and policymakers in this regard. Their commitment to tackling this issue aligns with the Sustainable Development Goal 5 on Gender Equality.

Another noteworthy observation is Eiko Narita’s stance on cybercrime and online harassment. Narita emphasizes that if something is not acceptable to do in person, it should not be tolerated online either. This highlights the importance of creating a safe and respectful online environment and holding individuals accountable for their actions.

Overall, the analysis underscores the importance of multi-stakeholder conversations, the crucial role of civil society organizations, the challenges in accountability for online crimes, the significance of continuing to use platforms for interaction, the efforts of the UNFPA in combating gender-based violence, and the need to address cybercrime and online harassment. These insights shed light on the complexity of internet governance and the ongoing efforts to create a safer and more inclusive internet for all.

Sherri Kraham Talabany

Women and girls in Iraq and across the Middle East face significant risks of online violence, which are exacerbated by the high internet penetration in the region and social conservatism. This online violence poses serious threats to their safety and well-being. Approximately 50% of women and girls in Iraq have either experienced or know someone who has experienced online violence, highlighting the prevalence of this issue.

The consequences of online violence in Iraq are not limited to the digital sphere but often result in real-life tragedies, such as honor killings and increased rates of suicide. This demonstrates that the impact of online violence extends beyond virtual interactions, causing physical harm and loss of lives. Urgent interventions are necessary to address this issue effectively.

To tackle this pressing concern, a nationwide task force has been established in Iraq. This task force focuses on human rights-based legislation and policy to combat Technology-Facilitated Gender-Based Violence (TFGBV). Its objectives include enhancing access to safe and confidential reporting facilities for victims and survivors, as well as promoting skilled investigations into cases of online violence. The task force also aims to train local non-governmental organizations (NGOs) to better understand and respond to these unique crimes. These efforts represent positive steps towards providing support and justice to victims of online violence.

Tech companies play a crucial role in addressing and combating online violence. They are urged to establish survivor-centered, rights-focused redress systems that take into account how online violence in the Middle East can lead to real-world harm. Understanding the manifestations of online violence across the region is essential for developing appropriate responses suitable for the unique environment. Tech companies should proactively contribute to creating a safer online space for women and girls in the Middle East.

When formulating internet governance frameworks, it is vital to consider the unique challenges faced by women in the Middle East due to online violence. These challenges should be integrated into emerging policies or regulations concerning internet governance, and the specific considerations and situations encountered by women in the Middle East must be addressed within broad governance mechanisms. By doing so, a more inclusive and supportive online environment can be created, prioritizing the rights and safety of women.

In conclusion, women and girls in Iraq and across the Middle East face significant risks of online violence, with high internet penetration and social conservatism exacerbating the issue. The establishment of a nationwide task force in Iraq dedicated to addressing TFGBV represents a positive step towards combatting online violence. The involvement and commitment of tech companies are crucial for establishing survivor-centered redress systems and developing appropriate responses to effectively tackle this issue. Furthermore, integrating the unique challenges faced by women in the Middle East into internet governance frameworks is essential to create a safer and more equitable online space.

Session transcript

Moderator – Alexandra Robinson:
This is working? It’s working? Okay, thank you. Okay, it feels quite awkward sitting, but hi everybody. Thank you for joining today. My name is Alexandra Robinson. I’m the Gender-Based Violence Technical Advisor for the United Nations Population Fund, UNFPA. And today we welcome everyone to our event on Disrupt Harm, Accountability for a Safer Internet. So ending gender-based violence and harmful practices is at the center of what UNFPA does. And increasingly, in a digital world, we realize that we can’t achieve that without ensuring that all women and girls are safe in all spaces, including online spaces and through their use of technology. So we are hosting this event today to explore those mechanisms through which law and policy and civil society movements are operating to disrupt the harm experienced by women in online spaces and technology. And we’re gonna hear from a really amazing panel. I feel really privileged to be sitting here with such phenomenal people. We will hear a range of different perspectives, drawing on their wealth of experience across their work in doing exactly this and disrupting harm. We’ll then open for a Q&A both here in the room and, since we have an online presence, with people online. And we’re a relatively small room, so please don’t be shy in taking the microphone and asking. With that, I will turn to our first panelist, who is Senator Martha Lucía Micher Camarena, known as Malú Micher. She is a staunch feminist. She is the Morena Senator for Guanajuato. She is a mother. She has been a federal representative on three occasions and is currently a legislator in the Congress of the Union representing the state of Guanajuato. And she will speak specifically around the legal measures and regulations implemented in Mexico for the prevention of and response to technology-facilitated gender-based violence. Thank you, Senator.

MARTHA LUCIA MICHER CAMARENA:
Thank you. Thank you, Alexandra. Thank you for inviting me. Nice to meet you, and thank you very much for this invitation. Good afternoon. I am Martha Lucía Micher Camarena, a Mexican Senator, and today I want to share the current situation of women, adolescents, and girls regarding information and communication technologies. I now want to address an important and troublesome issue, digital violence, which, according to the UN, three out of 10 women internet users in Mexico have been victims of cyberbullying, that is, approximately 10 million women. In addition, the National Front for Sorority and Digital Defenders, a Mexican civil society organization, an NGO, has indicated that 19 out of every 100 victims of digital violence are women, pointing out that 74.1% of women victims of digital violence are between 18 and 30 years old, 72.3% are university students, and 81.6% of the aggressors are a known person, mainly former partners. Among the main behaviors reported are dissemination of intimate content without consent, threats of dissemination, harassment and/or sexual harassment, extortion, sexual assaults not related to sexual intimacy, distribution of child pornography, production of intimate content without consent, dissemination of personal data offering sexual services without consent, and identity theft. The main formats in which digital aggressions occur are intimate photo sharing groups or websites, direct messages, creation of fake profiles, and attacks from fake profiles. Currently, I chair the Gender Equality Committee in the Mexican Senate, a legislative space that has allowed me to create, contribute, and adapt legislation to current times. Thus, we are not only concerned, but we have also dealt with legislating important reforms that provide for the safety of women in digital spaces. Well, the reform, and it was approved in unanimity. How do you say unanimity? Everyone, uh-huh, see. The reform entails the following. 
First, it defines digital violence as any malicious action carried out through the use of information and communication technologies by which images, audios, or real or simulated videos of intimate sexual content of a person are exposed, distributed, disseminated, exhibited, transmitted, marketed, offered, exchanged, or shared without their consent, without their approval, or without their authorization, causing psychological and emotional harm, as well as damage to any area of the person’s private life or own image. It also includes those malicious acts that cause damage to the intimacy, privacy, and/or dignity of women which are committed through information and communication technologies. Second, it regulates protection orders for digital violence cases, in which the public prosecutor’s office or judge will immediately order the necessary protection measures, instructing, electronically or in writing, the companies of digital platforms, the media, social networks, website spaces, pages, individuals, or companies to interrupt, block, destroy, or delete images, audios, or videos related to the investigation. And third, it adds the crime of violation of sexual intimacy, punishable by a penalty of three to six years in prison, for anyone who discloses, shares, distributes, or publishes images, videos, or audios of intimate sexual content of a person of legal age without the person’s consent, approval, or authorization, as well as anyone who videotapes, audiotapes, photographs, prints, or develops images, audios, or videos with intimate sexual content of a person without their consent, without their approval, or without their authorization. Well, I am convinced that one of the best ways to achieve women’s, adolescents’, and girls’ safety is to provide an applicable legal framework to face situations that cause serious harm to their lives. Never take one step back on women’s rights. Thank you very much.

Moderator – Alexandra Robinson:
Thank you. Thank you so much. I think that set the stage for the entire event very well. I will now introduce the other panelists who’ll be speaking with us today. We have Sherri Kraham Talabany, who is sitting right next to the senator and is the executive director of the SEED Foundation. Sherri is a human rights lawyer and has over 20 years of experience as a policymaker, program manager, and advocate for gender and human rights and social justice. And today, she’ll be speaking to us specifically around contemporary legal frameworks and political discourses on technology-facilitated GBV in Iraq. And then, sitting on the other side of Sherri, we have Karla Velasco Ramos, the Policy Advocacy Coordinator at the Association for Progressive Communications. Karla has many years of experience in internet access, gender, and technology, and with APC plays a crucial role in convening CSOs, tech companies, and online platforms to address TFGBV. And then we will be speaking with the eSafety Commissioner, Julie Inman Grant, who leads one of the only intergovernmental regulatory bodies in the world committed to keeping citizens safer online. The eSafety Commissioner has extensive experience in the non-profit and government sectors and has spent two decades working in senior public policy and safety roles in the tech industry, including at Microsoft, Twitter, and Adobe. And as the Commissioner, she plays an important global role as the chair of the Child Dignity Alliance’s technical working group, a board member of the We Protect Global Alliance, and she also serves on the World Economic Forum’s Global Coalition for Digital Safety and on their Exide ecosystem governance steering committee on building and defining the metaverse. I’m not sure. 
And finally, we will conclude our panel discussion with Juan Carlos Lara, who is the Executive Director at Derechos Digitales, an organization working at the intersection of human rights and digital technologies. He is a lawyer by training with experience as a legal and policy analyst and researcher on data privacy, surveillance, freedom of expression, and access to knowledge in the digital environment. So with that, I will now turn to you, Sherri. Thank you for being with us.

Sherri Kraham Talabany:
Thank you for hosting us. I think at the conference so far, we’ve seen everything at a 10,000-foot level. You’ve been talking a lot about government structures and platforms, and I really wanted to drill down on some of that. Surprisingly, I’m not very tech-savvy myself. So what I really wanted to do is drill down on what online violence means in Iraq, but I think also across the Middle East, because it’s an area where we see very high internet penetration, but also very high rates of gender inequality and extremely conservative norms, which creates unique vulnerabilities for people who already have high vulnerabilities, and it exacerbates those. So unique vulnerabilities to TFGBV, and with real-life disastrous consequences for women and young girls. We see TFGBV endemic and increasing across the Middle East and Iraq. So Iraq is the fourth worst country in the world when it comes to women’s peace, security, access to justice, women’s rights, and their safety. And that’s according to the Women, Peace, and Security Index, and it relates to their participation in every aspect of life. We have the highest rate of intimate partner violence in the world, 45% of women face violence in their home, so women aren’t safe at home. And we have conservative norms that shape and constrain what women and girls can do and how they behave, and we see adolescents and young women with extremely limited freedoms spending a lot of their free time online. We have the largest gender gap globally. It shows up in the economic sphere, in political participation, in education and health attainments, in their very survival, and we have very limited protections in place. At the same time, Iraq is very well-connected. Seventy-five percent of the population are active on the Internet. Almost everybody has a cell phone. 
The gaps between women and men exist for sure, with the biggest gap in connectivity for rural women, and with women lagging behind in terms of digital literacy. Nonetheless, 50 percent of women and girls in Iraq say that they have experienced TFGBV or know someone who has experienced it. In this context, with these social and cultural dynamics, women and girls are extremely vulnerable to online violence, with a high likelihood that this violence shows up offline as well. So what are we worried about? Much as what the senator just described: harassment, abuse, exploitation, trafficking. We also see these phenomena lead to murder, honor killings, and increased rates of suicide. So what do we see? We see image-based abuse, just what you just described, a private photo or image or film, sometimes real and sometimes manipulated, used to exploit, to sextort, to traffic, against women and girls in every economic stratum of our society. Besides the violence that women and girls face from the perpetrator, the person that’s abusing them online, we also see them face extremely high rates of violence in their home life as a result of this threat and of this violence. So if their families find out, it could lead to honor-based violence and murder, and it has, and we have many cases of this. Harassment, threats, and defamation. It’s against women and girls generally, but it’s especially a risk for women in the public space, academics, politicians, NGO leaders, and women of every walk of life, and it’s intended to inhibit and constrain women and girls’ participation. And so we see them being harassed and intimidated online. We’ve seen a spate of murders of social media influencers for dressing in ways perceived to be provocative or for smoking, punishing them for going outside social norms. So it’s violent, and it’s scary, and it’s intended to keep women’s representation and participation low, and it’s very effective at that. 
We have other challenges with predatory practices, including of children through gaming, child pornography, and child trafficking rings, but these are less documented and well-known. And of course, the most obvious and horrific abuse is that we saw women sold like chattel by ISIS online, and that fostered the trafficking of women during the ISIS crisis, which continues even today online. So what do we need to do to address it? My organization two years ago started a nationwide task force, I think it’s the only nationwide task force, called the TFGBV Task Force, and you can find out about and connect to our task force here. We’re focused on human rights-based legislation and policy across the Middle East and Iraq. Legislation to protect against these harms is often used to decrease public expression and free media, and the response tends to be rules that inhibit and criminalize public expression. So we need to focus on the crime, but not on expression. We need increased access to safe and confidential reporting, along with investigations and protections from designated agencies with clear mandates and skilled personnel. We don’t have that in Iraq today. We do have some legislation, but we don’t have a designated agency, and we certainly don’t have investigations that are skilled or experienced. We also need skilled and experienced NGOs that understand this unique kind of crime and how it impacts women and girls across Iraq and, of course, the Middle East. And this requires serious training and capacity building, which we are undertaking. And then finally, we need to focus on the tech companies. They need to have proper redress that is both survivor-centered and rights-focused, including child-rights-focused, that understands how this type of online violence manifests into real-world violence across the Middle East in a unique way, and to develop appropriate, safe responses for the environment that we face. 
So to close, we really need a regionalized local response in whatever internet governance architecture that emerges from these forums, whether it’s the Global Digital Compact or other thread, we need to address in these broad governance mechanisms the unique violence and considerations that we face across the Middle East. Thank you.

Moderator – Alexandra Robinson:
Thank you so much, Sherry. And really to build on your work as a CSO in Iraq, I’ll now turn to the Association for Progressive Communications who have demonstrated a longstanding ability to mobilize communities and community organizations around the issue of addressing tech facilitator GBV. I wondered if you could speak to the role of APC in shaping those movements, but also perhaps talk to some of the voices that you think might be missing from those movements.

Karla Velasco:
Yes, thank you. So I am Karla, and I’m the Policy Advocacy Coordinator of the Women’s Rights Program, which is part of the Association for Progressive Communications. So today I’m going to speak on behalf of WRP and APC. The Association for Progressive Communications is a network organization, a members’ organization. We have around 70 member organizations that work in approximately 40 countries around the world, most of them in the global south or the majority world. So the work that we have done with our member organizations since APC’s inception, almost 30 years ago, has been through the Women’s Rights Program, working on women’s rights, sexual rights, and feminist movements. And back then, when we started the Women’s Rights Program, language like online gender-based violence didn’t even exist, right? So it is a celebration for us that 25 years later we get to see these issues on the agenda, and we get to see that different governments are taking these very important issues into account and that they are being mentioned right now at the Internet Governance Forum, in the Global Digital Compact, and in the different feminist foreign policies that are currently pushing this subject, right? So for us, it has been a major achievement to have this. In 2022, the term was successfully recognized as a human rights violation, and it was thanks to the work of member organizations together with APC and other organizations from the feminist movements, and it has been successful work for us to be able to find a pathway between feminist organizations and digital rights organizations, right? Because that’s also a very big struggle right now. 
So for us, it is very important to bring into the digital space the voices of women and people of diverse genders and sexualities. And something that is very important and crucial right now is that, even though there is a discussion between the terms online gender-based violence and technology-facilitated gender-based violence, we need to go beyond the discussion of the term, and we really need to discuss response and remedy for victims and survivors where they are. So for example, one of the things that I want to highlight here is that we hear in many of the discussions the phrase, yes, access and digital skills for women and girls, as a possible solution to the gender problems that we have, and my urging here is to please go beyond that, you know, because access is only part of the problem. What we really need to look at is the usage of the internet and how women and people of diverse genders and sexualities are connected, the issues that we face online, and the differentiated effects we experience when we are using the internet, right? And how that crosses intersectionality, how that crosses where we come from and where we are connecting from, and how it intersects with race, gender identity, sexuality, class, and ethnicity. So we need to take all of these things into account. So once you look beyond the gender gap, you get to see that there are a lot of complexities around it, and we really need to focus on this, and this is what the members are currently asking us to do, right? To bring the conversation beyond that, to bring technology-facilitated gender-based violence, to bring gendered disinformation into the discussion, but also to change the narrative a little bit, because we always think about the negative things and we always see the negative effects and impacts that we have. But for example, in APC, we have a vision of transformative justice. 
So really, the proposal that we have here, and that we also show in our Feminist Principles of the Internet, is that through bringing in values such as pleasure, sexuality, joy, and freedom of expression, we get to change the narrative of how we see these issues that we are currently facing as women and people of diverse genders and sexualities. So my time is up. Thank you very much.

Moderator – Alexandra Robinson:
Thank you, Karla. And with that, thinking about another pathway for achieving those safe spaces where women and girls can enjoy technology and online spaces, we have Commissioner Inman Grant. It would be lovely to hear from you about what a regulatory body looks like and how it is disrupting harm so that women can have a transformative experience online.

Julie Inman Grant:
Well, thank you. I’d also like to play off the really important discussion that has already been had and congratulate everyone for not using the term revenge porn. When I was announced as eSafety Commissioner, I was asked to set up a revenge porn portal and I said, yes, I will, but no, I will not call it revenge. Revenge for what? And porn, not for titillating purposes. We can’t be using language that trivializes or victim blames. So it’s so good to see that in many languages, in many contexts, image-based abuse is being adopted as a much more empowered terminology. I think it’s really important. The role that we have actually gives me a legislative role to coordinate all online safety activity across the Commonwealth, but also to be the educator and the regulator for online safety. Now, I think it’s really important, we’ve heard this, there is no one-size-fits-all. So when we’re talking about prevention and education, it’s really important to establish an evidence base and understand how the most vulnerable communities are being impacted and how it might be manifesting differently. So for instance, in Australia, Indigenous Australians are twice as likely to receive online hate than the broader general population. And the way Indigenous communities use technology is different. They tend to share devices, they tend to share passwords. It’s a very familial base, but that also means that there are more imposter accounts and takeovers and lateral violence. But you also can’t say there’s a one-size-fits-all for Indigenous communities. The experiences of urban Indigenous people are different from those in rural and remote communities. 
By the same token, in culturally and linguistically diverse communities, when we looked at technology-facilitated abuse, not only are they experiencing the harm and the mental and emotional distress that the everyday Australian is experiencing from technology-facilitated abuse, they often have low digital literacy, low technology literacy. The man controls the technology in the home. There are additional threats of deportation. There may be mistrust of police and government organizations, and just general disenfranchisement from the community. And then when we look at those with intellectual disabilities, these women are afraid to tell the truth. They’re afraid that they will not be believed. And it’s often their carers or their partners that control their technology and threaten to cut them off from their peers and their friends. And they may not have the capability of knowing where to report to or where to get help. So we do have the intersectional nature that we have to make sure that we understand; we need to co-design solutions for prevention with these communities. When we get to the protection side of things, to echo the senator’s comments, because we take complaints from the public around child sexual abuse material, around image-based abuse, around youth-based cyberbullying and adult cyber abuse, every single one of those forms of abuse is gendered. The average age for girls being bullied used to be 14. We’re now getting reports from girls as young as 8 or 9 years old. I’ve just issued end-user powers against a group of six 14-year-old boys who were sending rape and death threats to another 14-year-old girl. We’re helping women with Australian connections in Iran and Pakistan get their image-based abuse materials down because they’re at risk of honor killings, a terrible shame that we don’t experience in the same way in the Australian context, and so we’re now issuing some remedial directions against some of those people. 
So using these deterrent powers and naming and shaming does have an impact. We have a 90% success rate in terms of getting this content taken down, and I can tell you that for so many women who come to us, that’s what they want. They’ve been to the police and they were told, why did you take the image in the first place, why didn’t you just get off the internet. So again, we need to learn from each other so that we can develop solutions that will work in every jurisdiction and every context. My time is up, but I just want to offer that we’re willing to work with all of you to help share our learnings. Thank you.

Moderator – Alexandra Robinson:

I will turn to our last panelist now, Juan Carlos, to speak to the significance of some work that UNFPA and Derechos Digitales are doing jointly around what rights-based law reform looks like to address TFGBV, and why this is

Juan Carlos Lara Galvez:
an important piece of work. Yes, thank you very much. Thank you to UNFPA for the invitation to participate in this. I am saluting you all from Derechos Digitales. We’re an NGO working at the intersection of human rights and digital technologies in Latin America, and I speak also on behalf of my wonderful colleagues who are working in this effort to provide guidance for law reform. I’d like to begin by highlighting the fact that, as a civil society organization based in the global majority, we understand that the internet is indeed a place of risk, but it’s also a place of opportunity; that the digital realm has allowed for more spaces to give visibility to social demands, to social justice demands, and also to the demands of combating and preventing gender-based violence, especially that which is facilitated by technology. At the same time, I do wish to acknowledge the significant contributions to this panel, which are a big summary of the amount and the diversity of the violence that women, gender non-conforming people, and LGBTQIA+ people face daily on the internet. But at the same time, the work that Derechos Digitales is conducting tries to address the fact that we need sensible legislation and legislative efforts; standards are being discussed right now. However, how that applies to the internet and to the complexity of the social backgrounds of these types of violence is a very complex problem, and the legislative side of it is only one part. And we need to take it into consideration in the right way, to balance rights and of course to provide the solutions that legislation by itself is able to provide. We need to also understand that complex social issues are not going to be solved just by virtue of enacting new laws, but that we also need enforcement and a level of understanding throughout the system that should be reflected as well. 
So we need to develop legal frameworks that address technology-facilitated gender-based violence from a perspective of balance, taking into consideration that the privacy of the survivors themselves, their freedom of expression, and their access to information are relevant for them as well. It’s not just a matter of the rights of the people who are committing the offenses. So because these are social problems that disproportionately affect people in situations of vulnerability, and women also in the public sphere, we need to defend an intersectional approach that addresses contextual and social differences, and also that there are groups

Audience:
that are being taken care of in the legal system. So there’s almost full protection for children up to the age of 18, in my country at least. And then from 18, you’re a woman, and your harm is normalized, violence against you is normalized, and you’re not even considered a victim. So those are my two statements, thank you.

Moderator – Alexandra Robinson:
Thank you very much, Angela. Very quickly, on the global stage, I think we’re lucky to have Ellen here from Young Women, but last March, the entire Commission on the Status of Women was dedicated to gender and technology, and I think there was a really strong focus on technology-facilitated gender-based violence, integrated into global outcome documents and language. So I think at a global level, there is certainly movement around building international language and policy. And at a national level, we’re very much seeing movements around different countries implementing laws and policies. I will pass to the senator.

MARTHA LUCIA MICHER CAMARENA:
Sí, bueno. Let me tell you that I was in Beijing in 1990. No, it’s 95. 1995. And everyone told us that we were, que estamos locas. Locas, crazies. They told us, you are crazy. They didn’t want us to talk about violence against women or these kinds of issues about penalties. And now, I see that we have advanced a lot, if you can say so. Very, very much. Thirty years ago, this was a topic for witches. It was a forbidden topic. And now, we are very advanced. But I believe the challenge is the judges and the public prosecutors’ offices. I believe that is where we have to work. So the senator is sharing that she has seen a lot of progress in the last 30 years and that we should definitely take that into account. It has completely shifted over 30 years, so that’s something to remember, and the problem now is the judiciary system, the public prosecutors, and the judges. This is the most important problem, as she shared just now.

Moderator – Alexandra Robinson:
Thank you. I we do have to wrap up now, but I will ask Eiko Narita the Representative of our Japan UNFPA office to to close us out today I’ll stand up so you can sit here because I think there’s a

Eiko Narita:
Well, well, thank you so much. Six is a great number here, but I thought I would be the lucky seven, barging in to close this session. But really, you know, excellencies, all these leaders, wonderful colleagues, and also friends within the community, thank you for being here for this really rich conversation around Disrupt Harm, Accountability for a Safer Internet. We keep talking about the importance of multi-stakeholder conversations over the last couple of days, but what does that really mean? And I think you heard that here, right? More specifically, you know, we need to come together across governments, regulatory bodies, CSOs, businesses, and rights-based organizations to really collaborate more efficiently to be able to disrupt this harm over the internet that we see so frequently. I think we also have to acknowledge at this moment the role and the power of civil society organizations. I think it’s really important that they’re here today at IGF, especially including those led by APC and Audrey. I was talking to Alex earlier this afternoon and saying, you know, they belong here, right? They are entities that belong here to really make that voice be heard from the ground, because that’s really important. We’re not just talking in theory. So, just going over what we discussed today, we learned about the experiences of one of the only intergovernmental regulatory bodies, the eSafety Commissioner of Australia, and also from the legislative scenarios of Mexico, all these steps taken, and from feminist digital rights activists whose global work inspires really all of us, and from community leaders in Iraq, one of the toughest countries in which to handle and face this gender-based violence issue, not only online, but on the ground, in person. 
And I think what this event did was to really put a human face on topics that are often so high up in the technology world, and that's really important. And it's interesting to me, because with online digital technology, accountability is unlike other crimes, such as crimes against humanity or genocide, where there is someone to hold accountable. Who do we assign accountability to? When we have things like AI, suddenly that accountability is much more difficult to pin down. So, at UNFPA, we're working hard, as one of our transformative goals, to end gender-based violence. And as Maria Ressa mentioned earlier, if it's not okay to do it in person, then it's not okay to do it online either. So we work with governments and policymakers like ourselves, and I just want to say finally that you're all here in your own capacities, whether civil society or not. It's good for us to continue to use this platform as a way to interact and to continue the momentum of this movement, so that it becomes a place of exchange and of amplifying the voices of what is really important. So with that, I know I have to do this: I have to extend my gratitude to everyone who made this event possible. Special thanks go to our Honourable Senator Camarena, and also to Julie Inman Grant, Sherri Kraham Talabany, Juan Carlos Lara, and Karla Velasco Ramos, and to Alex, Stephanie and Eva, our team from UNFPA, and to all of you who have come here today to make this conversation so rich. Thank you so much.

Audience
Speech speed: 206 words per minute | Speech length: 67 words | Speech time: 20 secs

Eiko Narita
Speech speed: 166 words per minute | Speech length: 656 words | Speech time: 236 secs

Juan Carlos Lara Galvez
Speech speed: 170 words per minute | Speech length: 489 words | Speech time: 173 secs

Julie Inman Grant
Speech speed: 166 words per minute | Speech length: 779 words | Speech time: 281 secs

Karla Velasco
Speech speed: 167 words per minute | Speech length: 739 words | Speech time: 265 secs

Martha Lucia Micher Camarena
Speech speed: 128 words per minute | Speech length: 871 words | Speech time: 408 secs

Moderator – Alexandra Robinson
Speech speed: 163 words per minute | Speech length: 1163 words | Speech time: 428 secs

Sherri Kraham Talabany
Speech speed: 150 words per minute | Speech length: 1117 words | Speech time: 448 secs

Donor Principles for the Digital Age: Turning Principles int | IGF 2023 Open Forum #157

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Augustin Willem Van Zwoll

During the discussion, the speakers conveyed their positive sentiments towards USAID and IDRC, commending them for their high-standard multi-stakeholder processes. These processes were praised for their ability to connect otherwise unconnected topics and tie them into rights agendas. This approach was seen as a commendable effort in promoting human rights-centered digital development programming.

Another key point raised by the speakers was the need for locally driven action in human rights-centered digital development. They emphasized the importance of adapting donor principles into more concrete tools that can be effectively utilized by local communities. The aim was to empower communities by providing them with practical and actionable frameworks to address inequalities and promote inclusive growth. To achieve this, the speakers expressed their intention to collaborate with fellow members and share best practices to investigate how donor principles can be effectively applied at the local level.

Moreover, the speakers also discussed the integration of various components, including development work, digitalization, connectivity, security, and good governance. Particularly, there was a strong emphasis on integrating cybersecurity tools and good governance for the unconnected third of the world. The need for this integration was driven by the realization that connectivity and digital development can only be truly beneficial when accompanied by secure and stable environments. Combining cybersecurity measures with good governance practices aims to ensure a safe and reliable digital environment for the unconnected population.

To summarise, the speakers exhibited a positive outlook towards USAID's and IDRC's multi-stakeholder processes, highlighting their ability to connect diverse topics to rights agendas. They also emphasized the importance of locally driven action and the adaptation of donor principles into practical tools for communities. Furthermore, the integration of cybersecurity tools and good governance was recognized as crucial for supporting digital development and connectivity in the unconnected regions of the world.

Audience

The discussion centres around the challenge of integrating human rights principles into the operations of donor governments and foundations without imposing additional burdens on grantees and implementing partners. The main concern is to find ways to incorporate these principles effectively without causing excessive workload or duplication of effort. This is particularly important for donor agencies like USAID and IDRC.

Another key aspect highlighted in the discussion is the need for a broader understanding of digital security and resilience. It is argued that a more comprehensive understanding of these concepts would facilitate their integration into the work with grantees, going beyond emergency training for specific actors. This would ensure that digital security and resilience become embedded in the programmatic activities of organizations.

Within this context, the Ford Foundation is praised as a good example of a donor that takes a holistic approach to digital security and safety. Their approach includes building capacity in their grants, considering economic, social, and cultural aspects of digital security. This indicates a commitment to comprehensive and sustainable approaches to digital security.

The discussion also emphasises the need for more creativity in community outreach efforts. It is suggested that organizations should go beyond reaching out to the usual suspects and actively include communities that are commonly marginalized. By adopting a bottom-up approach and collaborating with private foundations, organizations can enhance their outreach efforts and have a greater impact.

Moreover, it is argued that the principles of donors should not only be used to guide their funding decisions but should also serve to facilitate the transfer of funds without imposing excessive bureaucratic measures. The objective is to ensure that funds are efficiently distributed to those in need, without unnecessary delays or obstacles.

Concerns are raised about the potential funding uncertainty following the potential withdrawal of support by Open Society Foundations. It is noted that Open Society Foundations have been major contributors to human rights and digital rights organizations, particularly in global majority countries. Smaller organizations in these countries may face challenges in securing alternative funding sources to sustain their important work.

Furthermore, the discussion highlights the existence of countries where strong civil societies are lacking, resulting in prevalent digital human rights violations. Ratilo from Botswana draws attention to this issue, advocating for financial and legal assistance to protect individuals from such violations. He shares his own experience as a member of parliament, expressing a willingness to take legal action against his government over such violations, despite the financial constraints involved.

In conclusion, the discussion revolves around finding effective ways to integrate human rights principles into the operations of donor governments and foundations. It emphasizes the importance of a comprehensive understanding of digital security and resilience, along with practical mechanisms and tools to align strategies with these principles. The potential withdrawal of support by Open Society Foundations and the need to support civil society and digital rights organizations are also highlighted. Notably, the discussion highlights the challenges faced by countries lacking strong civil societies in combating prevalent digital human rights violations.

Vera Zakem

The donor principles, which have received the official endorsement of 38 member governments of the Freedom Online Coalition, play a crucial role in establishing an international framework for donor accountability. These principles also align with the ethical obligations of donors to ensure that their actions do not cause harm. Additionally, the donor governments have committed themselves to implementing procedures that protect local partners and communities from the potential misuse of digital technologies and data.

However, despite these commitments, the annual Freedom on the Net report released by Freedom House paints a concerning picture. The report reveals that global internet freedom has experienced a decline for the 13th consecutive year. This decline raises concerns about the state of digital rights and the potential threats faced by individuals and communities worldwide.

Nevertheless, there is an argument put forth that it is possible to achieve digital transformation without compromising digital rights. This argument highlights the importance of prioritising safety and security in addressing these issues. Donor governments are believed to better fulfil their mandate when they place safety and security at the heart of their approach to digital transformation.

Overall, these findings emphasize the importance of safeguarding international assistance from digital repression and upholding digital rights throughout the process of digital transformation. This requires a comprehensive and ethical approach that takes into account the potential harm caused by the misuse of digital technologies and data.

Moderator – Lisa Poggiali

During the discussion, several important points were raised by the speakers. The breakout groups were organized around internal and external components, with each group focusing on a different question. This structure allowed for a comprehensive exploration of the various aspects and perspectives related to the topic at hand.

The inclusion of online groups in the discussions was widely supported, with a commitment made to involve them in the conversation. This recognition of the importance of diversity and inclusivity in decision-making processes aligns with the goal of reducing inequalities (SDG 10).

One of the participants, Lisa Poggiali, expressed appreciation for the idea of clarifying roles among stakeholders and partners. This notion of clearly defining responsibilities and actions of different actors is seen as valuable in fostering more effective collaboration and accountability in digital development. Poggiali also advocated for concrete commitments and actions by individual governments within their legal and strategic frameworks.

In moving forward, Poggiali suggested the development of toolkits as the next step in implementing the Freedom Online Coalition. These toolkits would provide specific guidance and resources for different stakeholders, including civil society, diplomats, and development actors. This approach aims to empower and equip these actors with the necessary tools to promote digital freedom and security.

Concerns were raised regarding the uncertain landscape of donor funding. The indication that Open Society Foundations may decrease their funding for various organizations has raised questions about the future financial support for initiatives and projects in the digital rights sphere. It was mentioned that statutory donors often provide larger grants, but it is more challenging to secure their support for smaller organizations.

On a positive note, the potential for partnerships between the private sector and donors in addressing digital security issues was highlighted. Private sector organisations often possess more financial resources than traditional donors, making them valuable allies in efforts to enhance digital security.

The need for greater synergy between conversations about human rights and traditional cybersecurity was emphasised. It was acknowledged that these discussions have been somewhat siloed in the past, and there is a desire to bridge this gap and integrate human rights and democratic values into cybersecurity practices. The Global Forum on Cyber Expertise (GFCE), the International Telecommunication Union (ITU), Microsoft, and the government of Sweden were mentioned as entities already working towards mainstreaming digital security with a focus on human rights and democratic values.

The discussion also shed light on the silo effect in conversations about democracy and human rights in technology. These topics have often been isolated from broader global technology discussions, limiting the potential for comprehensive and integrated approaches. The Democracy, Human Rights, and Governance Bureau at USAID and other donors have recognised this issue and are actively seeking ways to address it.

The importance of supporting civil society in countries where they lack leverage or resources to hold governments accountable for human rights violations was emphasised. In some instances, digital human rights violations occur, but there is no strong civil society to protect the interests of the community. Additionally, the cost of taking legal action against the government can be prohibitive for individual members of society. Therefore, it was argued that support should be provided to these civil society organisations to empower them to advocate for human rights and hold governments accountable.

The speakers concluded by urging donors to heed the call to support civil societies. The principles discussed throughout the conversation can serve as a foundation for addressing critical human rights issues. Collaboration and support among stakeholders and partners are crucial in achieving the goals set forth in the discussion.

Overall, the detailed discussion highlighted the need for inclusivity, clarity, and collaboration in the digital development sphere. By involving diverse voices, clarifying roles and responsibilities, and fostering partnerships, the participants aim to create a more secure and inclusive digital environment that upholds human rights and promotes sustainable development.

Shannon Green

The Donor Principles for Human Rights in the Digital Age have been developed and endorsed by 38 member governments of the Freedom Online Coalition. Shannon Green, an advocate for digital rights and freedom, applauds this development, stating that the principles serve as a crucial blueprint to protect individuals’ rights in the digital world.

Green highlights the significance of partnership between donors and various stakeholders, including government, civil society, and the private sector. She believes that donors have much to learn from their partners in different sectors and stresses the importance of collaboration in shaping the global digital ecosystem.

The principles are seen as a means to promote safer and more secure environments for partners and local communities. By putting safeguards in place, donors can ensure the equitable distribution of programs, addressing concerns of accountability and reducing inequalities.

Green also expresses enthusiasm for the Open Government Partnership’s prioritisation of digital governance. She believes that this focus will result in improved transparency of public oversight of artificial intelligence and data processing systems. Green cites remarkable progress made under the commitments of the Open Government Partnership.

In conclusion, Green perceives the Donor Principles for Human Rights in the Digital Age as a significant contribution to a digital future that respects rights, promotes democracy, and ensures equitable sharing of technology benefits. She urges other donor governments to make concrete commitments aligned with these principles. Overall, the principles are applauded for their potential to protect and uphold individual rights in our digital world while fostering collaboration and safeguarding the equitable distribution of technology benefits.

Moderator – Sidney Leclercq

During a panel discussion, speakers from various countries and organizations provided insights into the implementation of donor principles. The Netherlands, represented by Van Zalt, a Senior Policy Officer, expressed their commitment to incorporating these principles as they assume the chairship in 2024. Emphasizing the importance of localized knowledge and evidence at the Internet Governance Resource Centre (IGRC), Immaculate Kassai, the data protection commissioner from Kenya, highlighted the significance of considering diverse perspectives and contexts when implementing these principles.

Zach Lampell, a Senior Legal Advisor for the International Center for Not-for-Profit Law, outlined a comprehensive framework for implementing donor principles. He stressed the need for international, domestic, and technical approaches to effectively apply these principles and ensure their adherence across different jurisdictions and organizations.

Michael Karimian, the Director for Digital Diplomacy, Asia and the Pacific, at Microsoft, provided a private sector perspective on donor principles. He recognized the relevance and importance of these principles in promoting responsible and ethical practices within the digital realm.

Closing the panel discussion, Adrian di Giovanni, the team leader on democratic and inclusive governance at IDRC, offered remarks acknowledging the contributions of all participants and their valuable insights. The discussion emphasized the need for collaboration and cooperation among stakeholders to ensure the effective implementation of donor principles and to promote inclusive and democratic practices in Internet governance.

Overall, the panel discussion underscored the significance of implementing donor principles in different contexts. It highlighted the importance of localized knowledge, international collaboration, and private sector involvement for effectively implementing these principles.

Michael Karimian

The analysis of the various speakers’ viewpoints reveals several important points regarding the role of businesses and the need for certain practices in advancing the Sustainable Development Goals (SDGs). One key point is the importance of businesses upholding international human rights norms and laws. Michael, who works on Microsoft’s digital diplomacy team, emphasises the need for responsible behaviour in cyberspace based on international law. This suggests that businesses should align their practices with established legal frameworks to ensure ethical conduct and protect human rights.

Transparency and accountability are highlighted as crucial aspects of businesses implementing human rights policies and grievance mechanisms. It is argued that companies should have publicly available human rights policies that are implemented by accountable teams. Additionally, businesses are encouraged to be transparent in their practices and engage with stakeholders while undertaking human rights due diligence. This approach ensures that businesses are open and receptive to feedback, allowing them to continuously improve their practices and address any potential violations of human rights.

The need for direct connections between businesses and local civil society stakeholders is also emphasised. Transnational private sector companies are often criticised for having weak connections with local communities. Platforms like the Internet Governance Forum (IGF) and organisations like Access Now are identified as potential facilitators in establishing and strengthening these connections. This suggests that businesses should actively engage with local stakeholders to ensure their operations align with local contexts and address the needs and concerns of the communities they operate in.

The importance of building products that align with human rights and democratic values is highlighted. Donors are encouraged to support products that incorporate “human rights by design” processes. This includes considering salient human rights risks such as privacy, accessibility, and responsible AI when developing new products. By prioritising human rights and democratic values in product development, businesses can contribute to building a more ethical and inclusive technological landscape.

The analysis also recognises the challenge and potential of professional codes of ethics for individuals, organisations, and institutions. It is acknowledged that incorporating ethical codes into university curricula can be difficult. However, continuous training for staff and access to experts within the company are identified as important interim steps. This indicates the importance of ongoing education and professional development to ensure that individuals and organisations are aware of ethical considerations and have the necessary tools to address them.

In the context of digital development and the SDGs, mainstreaming digital security is crucial for low- and middle-income countries. As these countries undergo digital transformation, the threat landscape for cybersecurity expands. Efforts by organisations such as the Global Forum on Cyber Expertise (GFCE), the International Telecommunication Union (ITU), Microsoft, and the government of Sweden are mentioned as initiatives aimed at addressing this issue. By prioritising digital security in the realm of digital development, low- and middle-income countries can mitigate risks and create a safer digital environment.

Lastly, it is argued that cybersecurity should be considered in the post-2030 agenda. The analysis does not provide additional details regarding this point, but it implies that cybersecurity is a significant concern that should be addressed in future planning beyond the current 2030 agenda.

In conclusion, the analysis highlights the importance of businesses upholding international human rights norms and laws, being transparent and accountable in their practices, and engaging with local civil society stakeholders. It also emphasises the significance of building products that align with human rights and democratic values. The challenge and potential of professional codes of ethics are recognised, and the importance of mainstreaming digital security in digital development is underscored. Additionally, the analysis suggests that cybersecurity should be factored into the post-2030 agenda. These insights provide valuable considerations for businesses and policymakers in their efforts to achieve the SDGs while promoting ethical practices and protecting human rights.

Juan Carlos Lara Galvez

Juan Carlos Lara Galvez, a member of an organization working on digital rights in the global majority, specifically in Latin America, emphasises the importance of engaging with governments and donor governments. These entities provide vital funding for organizations like his that strive to safeguard digital rights. Juan Carlos strongly believes that interacting with governments and donor governments is crucial for the success and sustainability of their work.

Regarding donor principles, Juan Carlos stresses the significance of not only formulating principles but also ensuring their implementation through concrete steps and actions. He highlights that the true measure of success lies in how effectively these principles are translated into tangible outcomes. He acknowledges that while the formulation of donor principles is an inspiring beginning, it is essential to monitor their progress and evaluate their impact on the ground.

An important aspect that Juan Carlos advocates for is stakeholder involvement, participation, and the recognition of human rights in various contexts, including technological development. He is pleased to see that the donor principles acknowledge the need for coordination with stakeholders. Juan Carlos believes that donor governments should actively foster collaboration between different stakeholders to promote and protect human rights. By involving diverse perspectives and including all relevant parties, these issues can be addressed more effectively.

Furthermore, Juan Carlos emphasizes that the priorities of advocacy should come from the ground level. He believes that advocacy organizations themselves, along with the individuals actively engaged in the work, hold valuable knowledge and insights into what is truly needed on the ground. By acknowledging and understanding this knowledge, officials can better advocate for and protect human rights. Juan Carlos highlights the importance of interaction and collaboration between stakeholders as a means to foster the promotion of human rights.

In conclusion, Juan Carlos Lara Galvez underscores the significance of engaging with governments and donor governments, implementing donor principles through concrete steps and actions, involving stakeholders in decision-making processes, and recognizing the importance of advocacy priorities that originate from the ground level. His arguments are rooted in the belief that collaboration and recognition of diverse perspectives lead to more effective promotion and protection of human rights.

Zora Gouhary

Zora Gouhary plays a crucial role in supporting the formation and smooth running of breakout groups for discussions. This process involves the creation of five groups, comprising four in-person groups and one online group. Each group will have its own moderator, ensuring effective facilitation and guidance during the discussions.

The breakout sessions will focus on four key questions, encouraging participants to explore and share their perspectives. These discussions are expected to last approximately 15 minutes, allowing for focused and in-depth conversations within each group.

Furthermore, Zora Gouhary actively facilitates the process of grouping participants. Participants are given the freedom to choose their own groups, potentially leading to a more diverse and engaging experience. Zora’s involvement in this process ensures that the formation of groups is well-organised and efficient.

All contributions made during the breakout sessions will be diligently summarised for later use. This summarisation enables the effective capture and consolidation of key ideas and insights generated during the discussions. By preserving these contributions, valuable information can be used to advance the next steps of the donor principles, indicating that the breakout sessions play a significant role in the overall decision-making process.

In conclusion, Zora Gouhary’s support in forming, moderating, and summarising breakout groups enhances the effectiveness and productivity of the discussions. The inclusion of multiple in-person and online groups, along with Zora’s guidance, encourages diverse perspectives, ensuring that the breakout sessions contribute meaningfully to the advancement of the donor principles.

Adrian di Giovanni

The discussion centres around the significance of donor principles on human rights in the digital age, particularly in response to the rapid advancements in technology. These principles are essential guidelines in establishing a framework to safeguard and ensure accountability for investments in digital initiatives. They are also designed to align with commitments to human rights and democratic values.

Digital technologies are recognized as powerful tools that facilitate information sharing, self-expression, and organization. However, they also present challenges, especially for marginalized and vulnerable communities. In certain cases, these technologies can be used to deny or diminish individuals’ rights, and there is a correlation between technological changes and the decline of democratic processes.

For this reason, it is crucial for donors to take responsibility for ensuring that their actions and investments in digital initiatives do not contribute to the erosion of human rights protections and democratic institutions. This necessitates adopting the principle of ‘do no harm’ when it comes to these investments. By embracing this principle, donors can mitigate adverse consequences and ensure that their initiatives have a positive impact on society.

The donor principles on human rights in the digital age provide an indispensable framework for safeguarding and ensuring accountability in investments related to digital initiatives. These principles are particularly critical in the face of fast-paced technological advancements, which continuously challenge existing norms and regulations. By aligning with commitments to human rights and democratic values, donors can contribute to the preservation and advancement of these fundamental principles.

In conclusion, the discussion underscores the importance of donor principles on human rights in the digital age. As technology continues to rapidly evolve, it is imperative for donors to proactively ensure that their investments do not undermine human rights protections and democratic institutions. This necessitates adopting the principle of ‘do no harm’ and utilizing the donor principles as a framework for safeguarding and accountability. Ultimately, by promoting responsible and ethical practices, donors can harness the full potential of digital technologies while upholding human rights and democratic values.

Allison Peters

The United States government has taken on the chairmanship of the Freedom Online Coalition, an international organization focused on promoting human rights in the digital landscape, for this year. The U.S. Department of State views the Coalition as a crucial partner in safeguarding and advancing human rights in the use of digital technologies globally, and as an important platform for international collaboration and the sharing of best practices.

As part of its initiative, the Freedom Online Coalition has launched donor principles that provide guidance to donor governments in supporting human rights online. These principles aim to promote and protect human rights while guarding against the potential misuse of digital technologies. Donor governments, including the U.S., play an essential role in driving these efforts by responsibly investing in digital technologies with a focus on human rights.

Allison Peters, an advocate for digital rights, emphasizes the significance of donor governments investing in digital technologies while remaining vigilant against their potential misuse. The donor principles launched by the Coalition provide crucial guidance to ensure responsible investment and prevent any negative consequences that may arise from the misuse of these technologies. Peters highlights the importance of striking a balance between promoting accessibility and innovation in the digital sphere while also safeguarding against any destabilization and infringement of human rights.

Secretary of State Anthony Blinken echoes similar sentiments in his speech at the United Nations General Assembly. He emphasizes the need to govern digital technologies in partnership with those who share democratic values. This approach is essential to address the challenges and potential risks associated with the misuse of digital technologies. By working together and upholding democratic principles, governments can protect human rights, maintain stability, and ensure the responsible use of digital technologies.

In conclusion, the U.S. government’s chairmanship of the Freedom Online Coalition reflects its commitment to promoting and protecting human rights in the digital age. Through the donor principles and collaboration with like-minded partners, officials such as Allison Peters aim to foster responsible investment and prevent negative repercussions resulting from the misuse of digital technologies. This concerted effort aligns with Secretary Blinken’s call for governing digital technologies in partnership with those who value democratic principles. With these measures in place, the international community can work towards a digital landscape that respects and upholds human rights while promoting innovation and connectivity.

Zach Lampell

After conducting the analysis, three main arguments related to civil society organizations have been identified. The first argument emphasizes the importance of collaboration between civil society organizations and donor governments in shaping foreign assistance. It is suggested that civil societies should actively engage with donor governments to provide them with comprehensive information about the realities on the ground and the existing gaps in their country’s domestic legislation. By doing so, civil society organizations can influence the allocation of foreign assistance towards addressing these gaps and supporting initiatives that align with their objectives. The evidence supporting this argument includes the advice of Zach Lampell, who advises civil societies to utilize the Universal Periodic Review (UPR) process, ensuring that the voices and concerns of civil society are heard during the decision-making process on foreign assistance.

The second argument highlights the importance of civil societies pushing for inclusion in standard-setting bodies and integrating human rights protections into internet infrastructure. This argument acknowledges the increasing role of technology and the internet in today’s world, and the need for civil society organizations to actively participate in shaping the standards and practices that govern them. It is suggested that civil societies should seek assistance from the international community in developing their technical knowledge and expertise in this field. Furthermore, working with private companies is recommended to create systems that uphold human rights. This argument promotes the idea that civil society organizations have a crucial role to play in ensuring that technology and the internet serve as tools for peace, justice, and the protection of human rights. The evidence supporting this argument highlights the need for civil societies to leverage their partnerships and engage in collaborative efforts with relevant stakeholders to drive positive change in this area.

The third argument focuses on the significance of facilitating meaningful interaction with stakeholders in the process of drafting legislation. Civil society organizations are encouraged to work closely with donor governments and their own government to create open, public processes for the drafting of legislation. By actively engaging with stakeholders, civil society organizations can ensure that their perspectives, concerns, and expertise are taken into account during the development of legal frameworks. It is stressed that these legal frameworks should uphold international human rights standards and principles. The evidence supporting this argument underlines the importance of collaboration between civil society organizations and both donor and national governments to develop effective and inclusive legislative processes.

Overall, these three arguments analyzed in the research showcase the vital role civil society organizations can play in shaping policies and practices in various sectors. By collaborating with donor governments, pushing for inclusion in standard-setting bodies, and facilitating stakeholder engagement in legislation drafting processes, civil society organizations can contribute to the development of policies and initiatives that align with their objectives and promote peace, justice, and the protection of human rights. This analysis highlights the need for civil societies to actively utilize various platforms and opportunities to advocate for positive change and utilize their expertise to shape a better future for their respective communities and society as a whole.

Nele Leosk

Estonia has demonstrated the transformative potential of technology in various sectors. For the past 15 years, digitalisation has been a top priority for the country, allowing it to shift from being a recipient of aid to becoming a donor. This focus on digitalisation has played a crucial role in shaping Estonia’s development, economic policies, trade policies, and even its tech diplomacy efforts.

The integration of digital tools and processes has enabled Estonia to streamline its government services, making them more efficient and accessible for its citizens. Services such as e-residency, e-tax, and e-voting have facilitated a seamless and transparent democratic system. By placing digitalisation at the core of its development strategy, Estonia has successfully established a digital society that promotes democracy and empowers its citizens.

Moreover, Estonia has shown its commitment to supporting other nations in their development efforts, particularly through capacity building. A notable example is its 14-year partnership with Ukraine, where Estonia has helped them in building a democratic system. Ukraine’s progress in this area has been remarkable, surpassing that of many other countries. This highlights Estonia’s belief that development assistance should focus on enabling countries to develop their own capacities, sometimes even exceeding those of the donors.

Estonia’s approach to development cooperation is characterized by three main priorities: gender equality, collaboration with the private sector, and openness. Gender equality is consistently integrated into all policies and action plans, including tech diplomacy. The country aims to bridge the gender divide and ensure equal opportunities for all. Additionally, Estonia values the use of open-source principles in its development cooperation initiatives, ensuring control and transparency while avoiding dependencies.

Furthermore, Estonia’s development agency, which is only two years old, emphasizes partnerships with private companies and other organizations. This collaboration allows for a broader range of expertise and resources, contributing to national development goals. By engaging the private sector, Estonia harnesses innovation and leverages its potential for driving economic growth and sustainable development.

To conclude, Estonia’s success story exemplifies the positive impact of technology in building democracy, enhancing the economy, rebuilding trust, and establishing transparency and openness. Digitalisation has become a pivotal driver in Estonia’s development strategies, enabling the country to shift from an aid recipient to a donor. Estonia’s commitment to capacity building, gender equality, collaboration with the private sector, and openness further strengthens its approach to development cooperation. Overall, Estonia serves as a model for other nations, showcasing the possibilities and benefits that can be achieved by harnessing the power of innovation and digitalisation.

Immaculate Kassait

In the era of digitisation, the importance of data protection is emphasised, as highlighted by the arguments presented. Kenya has taken steps to address this issue by establishing a legal and institutional framework for data protection. The Office of Data Protection in Kenya has enforced six penalty notices related to the misuse of personal data, demonstrating their commitment to safeguarding individuals’ information. This positive sentiment towards data protection is further supported by the fact that 2,761 complaints have been received regarding data protection issues, indicating widespread recognition of the need for such measures.

However, challenges also exist in the realm of data protection. The newly established Office of Data Protection in Kenya faces operational and resource constraints, hindering their ability to carry out their responsibilities effectively. Additionally, there are concerns regarding the existing legal frameworks which may not adequately address the complexities posed by multinational companies operating in Kenya. The rapid progress of technological advancements, such as Artificial Intelligence, also presents additional challenges as the potential risks and implications on data protection need to be carefully navigated.

To overcome these challenges, collaboration and donor support are seen as crucial factors. Sharing expertise and best practices amongst stakeholders can enhance the regulation of data processing, allowing for a coordinated and effective approach to data protection. Donor support can play a vital role in aligning country-specific legal frameworks with international standards and providing the necessary resources for capacity building. This collaborative effort would enable Kenya to strengthen its data governance mechanisms and better protect individuals’ data.

In conclusion, the arguments presented highlight the significance of data protection in the digital age. While Kenya has made strides in establishing a legal framework and enforcing penalties for data misuse, challenges such as resource constraints, inadequate legal frameworks, and technological advancements remain. However, through collaboration and donor support, it is possible to address these challenges and enhance data governance practices. By doing so, Kenya can ensure the protection of personal data and align with global efforts towards sustainable development.

Session transcript

Vera Zakem:
I know it’s also early morning, but we really, really are grateful because we just really think this is such a momentous and exciting opportunity for us to roll out these principles and also what they mean for strengthening a rights-respecting digital ecosystem. So again, I’m delighted to welcome you to this event. I am pleased to announce that as of last week, the donor principles have been officially endorsed by 38 member governments of the Freedom Online Coalition, some of whom you will hear today. The donor principles establish an international framework for donor accountability and cooperation on digital issues that align with donor ethical obligations to do no harm. Earlier this month, Freedom House released the annual Freedom on the Net report, a survey and analysis of internet freedom around the world, and we see that global internet freedom has declined for the 13th consecutive year. The donor principles commit donor governments, including the United States, to reverse the trend. They call on donors to safeguard international assistance from digital repression by establishing procedures to protect local partners and communities from the potential misuse of digital technologies and data. Over the past two decades, USAID and other donors have supported many digital initiatives around the world with, dare I say, positive outcomes. We have assisted countries to digitize their public service delivery systems from healthcare to education to participatory budgeting. We’ve also supported young entrepreneurs to develop financial technology or FinTech applications that have created new economic opportunities for those who have been excluded from traditional economic systems. At the same time, we have witnessed how governments have used digital data to target and threaten journalists and activists in Central America. We have seen how FinTech companies have weaponized the personal data of poor people through predatory digital lending practices.
We have learned how consulting firms have exploited citizens’ personal data to influence their voting behavior in ways that undermine freedom of thought and expression and fundamentally weaken public trust in democratic institutions. Such examples are common and are cause for concern, but digital transformation, we know, does not have to come at the expense of digital rights. As donor governments, we can best fulfill our mandate when we put safety and security at the heart of these issues and the values of democracy, respect for human rights and accountability, really at the heart and the center of our work. Suffice to say, I’m very pleased to be here with colleagues and partners from governments, civil society and the private sectors who have demonstrated their commitment to these values. I believe, and USAID believes, it’s only through this multi-stakeholder process and multilateral collaboration that we can fulfill the promise and the intent of these principles. I certainly want to thank the Freedom Online Coalition Support Unit who’ve made this event possible and the donor principles themselves. I also thank our panelists in the room and online. Where is Joost? I don’t think it’s, right here, thank you. Joost from the Netherlands. USAID, of course, is very much looking forward to working with you as the Netherlands takes chairmanship of the Freedom Online Coalition next year. Estonia’s digital ambassador that we have here, Nele Leosk, again, congratulations to you for hosting the phenomenal Tallinn Digital Summit and Open Government Partnership summit last month in Tallinn. Kenya’s commissioner, online, okay, good. Kenya’s commissioner for data protection, Immaculate Kassait. We commend you for the work that you are doing to keep Kenyans safe and look forward to partnering with you on digital governance. As Kenya begins co-chairmanship of the OGP, Open Government Partnership Steering Committee.
And from the FOC Advisory Network, Juan Carlos Lara, the executive director of Derechos Digitales and Zach Lampell, senior legal advisor from the International Center for Not-for-Profit Law. We deeply appreciate your support in drafting the donor principles and, of course, very much look forward to working with you. And, of course, Michael Karimian from Microsoft. We really appreciate your company’s commitment to democratic values and respect for human rights. I also want to express especially deep gratitude to our Canadian colleagues from the International Development Research Center who have co-chaired the Freedom Online Coalition’s funding coordination group with us this year and co-led the donor principles drafting and negotiating process, so huge thanks to you. The donor principles reflect the U.S. and Canada’s shared commitment for digital inclusion with the support of the FOC Support Unit and the U.S. Department of State. U.S. and IDRC co-led the first ever public consultation process for the FOC deliverable which yielded inputs and insights from civil society, academia, the private sector and various stakeholders from around the world. As a result, the principles better address the needs and desires of the communities that we seek to serve. And finally, I’m so pleased that USAID at large is pleased to be here in partnership with our colleagues from the Department of State’s Bureau for Democracy, Human Rights, and Labor. It goes without saying that without your collaboration on everything with the Freedom Online Coalition, these principles would not be possible, so I am especially delighted to turn it over to the Deputy Assistant Secretary at DRL, Allison Peters, a dear friend and colleague who has been really working hand in arm with all of us to really enable these principles to come to life, over to you.

Allison Peters:
Thanks so much, Vera, and especially to Lisa for your tireless leadership, getting these principles over the finish line. It is not ever easy negotiating anything in a multilateral, multi-stakeholder process, and we really appreciate your leadership. And also to Sidney and IDRC for your strong partnership in this effort. Thanks all for joining us. We know it’s an early morning. We hope everyone is well caffeinated, but this is a really, really momentous and exciting occasion to launch these donor principles, so we’re grateful that you took the time to join us this morning. The Department of State and the U.S. government as a whole view the Freedom Online Coalition as a key, indispensable partner in our efforts to promote and protect human rights and the use of digital technologies globally. Pretty much every issue set that we have heard discussed here at IGF is a core priority of the work that we’re doing with the other governments in the Freedom Online Coalition to promote human rights online. As the chair this year of the Freedom Online Coalition, the United States made a firm commitment to work within the FOC and with our partners and allies to promote and protect fundamental freedoms, counter the rise of digital authoritarianism and the misuse of digital technologies, advance norms, safeguards, and principles for artificial intelligence based on human rights, and support ongoing initiatives to promote safe online spaces for marginalized and vulnerable groups. As we heard from our Secretary of State, Antony Blinken, at the UN General Assembly, which feels like 100 years ago now but was just a couple of weeks ago, we are delivering. These principles launching today really translate these priorities into action, giving donor governments concrete guidance to hold fast to our commitment to invest in digital technologies only when it is possible to protect against their potential misuse.
They reinforce the Freedom Online Coalition’s shared vision to enable individual dignity and economic prosperity. Technology should be harnessed in a manner that is open, sustainable, secure, and respectful of democratic values and human rights. And these donor principles will help us take one step in that direction. They also demonstrate our shared commitment to advancing the UN’s 2030 Sustainable Development Agenda as we look to harness the power of digital technologies in a rights-respecting manner to advance our shared goals from achieving gender equality to promoting inclusive and peaceful societies. As our Secretary of State stated at the UN General Assembly, we can develop the best technologies in the world. But if we haven’t determined how to govern them in partnership with those who share our values, these technologies are likely to be misused for repressive or destabilizing purposes, making our communities less peaceful, less prosperous, less secure, and unfortunately, more undermining of human rights. They’re also less likely to be leveraged for advancing societal progress around the globe. So again, I thank you all for joining us today. We have both an exciting panel with some key partners. And we’re thrilled to be joined by the government of the Netherlands, who are taking over the chairship of the FOC next year. But we’re really thrilled to also join you in the breakout sessions to hear your thoughts on these donor principles and how we can move them forward through the FOC. So thank you again. And thank you again to Lisa and Vera for your leadership.

Moderator – Lisa Poggiali:
Thank you, Allison.

Moderator – Sidney Leclercq:
Thank you very much. And thank you to the US, really, for the commitment and dedication in getting those principles. I think that was an important process and a decisive one. But you already made the transition, actually, to our first speaker in the panel from the Netherlands. And I’ll turn over to you, I guess. Yes, Van Zwoll, who is Senior Policy Officer for Human Rights and Political and Legal Affairs in the Netherlands. And as you take over the chairship in 2024, it’ll be interesting to hear from you the intention to implement the donor principles in that chairship in 2024. Over to you.

Augustin Willem Van Zwoll:
Thank you, Sidney. First of all, thank you USAID and thank you IDRC for really bringing something new to the table here at the FOC. I think it’s great that you were able to create these principles, not only tying together all these different important topics that we’ve been hearing about the last few days, really like connecting the unconnected, but tying that into the rights agendas that we have been discussing in our little side sessions the last few days. And I think it’s an important bridge, not only indeed in reaching the development goals, but it will also be an important step for us, at least from a policy side, to get where we need to be in order to have fruitful discussions reaching into the GDC and the WSIS process. So I think it’s a great important step, not only from a digitalization perspective or an aid perspective, but also really connecting it to the more human rights-related discussions that we are having as well. And also thank you for setting a really high bar that will be very difficult to reach in the form of having an open multi-stakeholder process. I mean you’ve done an excellent job in that and I really want to congratulate you. I would rather have had you do it after our chairship because it will be so challenging to work to that high standard, but it’s a great inspiration for us and we’ll really try to continue that line of work as well next year. I mean now under the guidance of USAID and IDRC, of course in partnership with the U.S. State Department, you really set up these important donor principles that encompass the basic conditions for human rights-centered digital development programming. But however, at least for us, this would only be the beginning.
I mean turning these principles into locally driven action that truly serves the target communities that we support within the context of our very diverse coalition, that is really the big task that still lies ahead of us. During our upcoming chairship, the Netherlands therefore wants to see how we can adapt these principles into even more concrete tools that can be used by our community to practice and integrate them into the activities that we support. This can only be done through cooperation between our members, in close cooperation with our local and implementing partners whose needs and challenges are central to any solution. We will therefore also ask all of our Freedom Online member states to share their best practices, either as a donor or a recipient. I mean given the multi-regional build-up of the coalition, this would be a great chance to see it from both sides. And also for the Netherlands, these principles will be key and a great way of connecting the development work and tying it to the agenda that we have on digitalization, connectivity, security and good governance. Because we see it sometimes that we have these high-level discussions at the OEWG that are very difficult, and we see that it’s a certain set of countries that are very active in that, and we need to reach out and make sure that the last third of the world that’s unconnected will be able to connect. But they also will have to have the cybersecurity tools to keep that structure secure and then of course have a good human rights set of principles to govern that structure, as Allison pointed out in much more detail. Thank you for that. I think I will leave it at this. Thank you so much.

Moderator – Lisa Poggiali:
Thank you so much, Joost, and I have no doubt that you will be able to even exceed the work that we have done this year in your FOC chairship next year. So we look forward to partnering with you. No pressure. So now I’m very pleased to introduce Nele Leosk, the Digital Ambassador at Large from Estonia. Nele, over to you.

Nele Leosk:
Thank you. Thank you so much and I’m glad to be here at this very early hour and I’m glad to see also so many other people here. Actually last month we celebrated a little birthday in our Ministry of Foreign Affairs because 25 years have passed since Estonia became a donor, moving from the recipient side to the donor side. So we have both experiences and perhaps I will just complement these principles with some practical, I would say, takeaways from our 25 years, out of which I would say 15 digital has been a priority. And I know that we have been discussing here over the past days and also today quite a bit of everything that can go wrong with technology. And in a way I believe it’s also increasingly trendy to talk about. It’s of course a very timely and very much needed conversation. But it seems to me that we are also at the same time forgetting about everything that technology can bring. And in this sense Estonia I believe is a good reminder that technology actually can be used to build democracy. Technology can be used to enhance the economy, to rebuild trust, to build openness and transparency, and Estonia has done all of this. And this has I believe also been the reason for interest in our experience. Because it’s not about digitalization. It’s not to become the world leader in digital services. It’s really about democratizing your state and the opportunities it gives. So for us digitalization and these principles that we’re also talking about here have actually been horizontally integrated in different programs. And not only, I would say, our development or economic policy or trade policies, but currently also in our tech diplomacy. So these principles that we are talking about here somehow need to be implemented. Because just talking about the principles will also not get us very far. And actually digitalization through development cooperation has been one of these very practical ways how we build a democratic state.
And there were some examples here in these principles, for example data governance and management. So it is clear that in order to introduce a data governance or management system, you need an entire ecosystem. For example, in Estonia we have this famous system called X-Road. It’s our interoperability layer that allows us to exchange data. In order for this to work, you need to create also an ecosystem and a supporting legal framework and policies. You must have an Access to Information Act. You must have open standards, and so forth. So this, in a way, creates this, I would say, democratic ecosystem. But one other aspect that we were discussing yesterday evening at the party is actually that often we forget that development is not about us. In order to really reach these principles, it is actually also about the receiving side. So we really need to put the emphasis on building the capacity of the others to our level and even beyond. And we have a very good example, a practical example, from our long cooperation with Ukraine. Over the past 14 years, we have been working closely in supporting Ukraine to build their democratic system. And we can see now that in many areas, they may also exceed all of us in this room. So it’s really about the other side and not that much about us in this journey. I believe my time is almost finished. But I wanted to bring maybe just quickly three main priorities for us that are also horizontal issues. And actually, one of them is the gender divide, which is integrated in all our policies and action plans and is also the priority for tech diplomacy and, in a way, my own work. The other is working with the private sector. Our development agency is only two years old. So it has been mainly through partnership with private companies and other organizations that we carry out our policies. And the third is actually about openness. And that also translates to technological openness.
So we support open source in our development cooperation not to get anybody hooked and have also more control and transparency over these processes. So this is maybe very shortly about how we have approached it.

Moderator – Sidney Leclercq:
Thank you very much, Ambassador. And thanks for a great reminder of the democratic potential and also the importance of open source and of building capacity. But you were starting by saying that it’s really early. I’m afraid that for our next speaker, who is based in Kenya, it’s very late. But it’s even more my pleasure to introduce and to welcome Immaculate Kassait, the data protection commissioner from Kenya. So commissioner, over to you.

Immaculate Kassait:
Thank you. I hope you can hear me. You can hear me? Perfectly. All right. It’s very early. It’s actually 3 AM in Kenya. So thank you, Ambassador, and my fellow panelists from the USA and the Netherlands for the opportunity to participate in this panel. I’ll try as much as possible to summarize. I think it’s a very exciting moment to be discussing key principles for donors in this digital era when we are discussing governance. And I liked what was spoken earlier, that if we are quickly evolving into digitization without talking about governance, this could lead to misuse and destabilize many economies. Of course, from a data protection perspective, we are often seen as the people who hold back development and interfere with innovations, because we are put there to actually ask questions as far as data protection is concerned. As an office, just a quick one, this office has been there for three years now. It was established in 2020, but the act came into force in 2019. And really, our role is to regulate the processing of personal data based on certain principles, which I would say are very common across all data protection authorities. Our task is to make sure that when we talk about the right to privacy, it’s actually not just a right we speak about. It’s a right that is actually implemented by the Kenyan government, one that secures the social justice orientation of the society. On top of that, as an institution, we have been mandated to establish a legal and institutional framework and provide for the rights of the data subject. Some of the key things that we’ve been able to achieve in this short time, of course, are guidance notes as an office. We are members of three international bodies. We will be hosting the Network of Data Protection Authorities in the coming year. We have established a register of data controllers, and we have a strategic plan.
What I’d like to just speak about is that we have had 2,761 complaints and have actually enforced six penalty notices. The recent one, which was like a week ago, was to do with people using personal photos of children, using people’s photos in social places, and also unsolicited messages. And that comes to the point that many times in the process of marketing, many controllers are not paying attention to the fact that this is personal information and they must be held to account. Of course, as an office, there are challenges and I’m happy we have this conversation. We are finding ourselves in a situation where we don’t have adequate laws in some cases, where in the context of when we developed the Data Protection Act, we did not anticipate we’d have multinationals that have not registered in Kenya. Of course, being a new office, resources are never adequate, and of course, with the advancement in technology, we are seeing AI as one of the issues. But coming now to highlighting the issues around the donor principles for human rights, what does this mean for us when we say we need to commit to doing no harm in the digital age while enhancing technology and also ensuring that we’re increasing donors’ accountability? I see several areas of collaboration. When we say donor support and country should be aligned in terms of their legal framework, I see the need for support as far as reviewing of current legal frameworks is concerned, and for those countries that don’t have an existing data protection framework, the need to actually help them so that we’re not leaving other countries behind as far as data governance is concerned. Sharing expertise: some countries are ahead, and I think it would be important to collaborate and come up with some guidance notes as far as this is concerned. We also need to liberate the government agenda on technology.
In our case, as a country, we are digitalizing over 5,000 government services, and there's a need for leveraging what others have done. Sharing best practices, of course; in terms of collaboration with the private sector, we see an opportunity to facilitate partnerships between the private sector and recipient countries to encourage rights-based respect, and I would see this also as more of data protection by default and by design. Capacity building is another area for collaboration, along with technical support and supporting training programs. When it comes to fostering coordination, I see joint advocacy efforts as one of the things that we can do. On supporting the growth of rights-respecting technology as a principle, I see areas of collaboration in facilitating training initiatives, advocating for professional codes of ethics, and facilitating the exchange of information. When it comes to prioritizing digital security, there is the need to provide resources and, of course, capacity building. I won't take too much time. I want to thank you once again for the opportunity, and I really welcome the conversation around the principles. Their being launched here is a really big milestone for donor countries and for partners, especially in this era of technology, where we are now being held to account and holding other people to account, so that it's not just development, not just technology for the sake of it, but technology that adheres to human rights. Thank you.

Moderator – Lisa Poggiali:
Thank you so much, Commissioner Kassait, for those remarks. I think you provided a really nice bridge for us to start thinking about implementation by offering some concrete ideas of how we could partner with other countries around the world, not only donor countries, but all countries around the world. So I really appreciate that, and appreciate your remarks and the work that you do. So I wanted to now turn it over to Juan Carlos Lara, who is the Executive Director of Derechos Digitales and who has played an instrumental role in the drafting process for these principles. Juan Carlos.

Juan Carlos Lara Galvez:
Thank you, Lisa. Good morning, everyone. And good morning, evening, afternoon to people attending online. I wish to first introduce myself. I am a member of an organization that works on digital rights in the global majority, specifically in Latin America. And for us, it's very important to interact with governments, and with donor governments especially, considering the role that they have in funding much of the work that organizations like mine do in the global majority, which depends on the support that we can obtain from different funders. In that regard, it's also heartening to hear so much about having countries be accountable, or having principles that will lead to action, and other language that represents an intention to turn all the good intentions that countries often present into concrete steps, into concrete things. The donor principles in that regard are the product of an interaction, an exchange of ideas and views, that in many ways represented what our priorities are as civil society in the global majority, understanding as well that we need the support not just to conduct work that we like, but also to create change, to promote social justice, and to generate conditions for responsible development that is respectful of human rights and centered around people. Before I close my remarks, I wish to recognize those efforts and at the same time recognize that whether we see this as a fruitful step is going to be shown by the implementation process. As much as we would like to recognize this as the beginning of something very inspiring, we also need to see how it translates into action.
And to the question about the opportunities this presents for advocacy organizations like mine: it's also very positive to see that the principles recognize the need for coordination with stakeholders, the need to admit participation of different people and different stakeholders, and the recognition of human rights in issues such as technological development. So I think one of the most important things we can see here is that, when we put the priorities of states into action, what advocacy organizations need is for those priorities to come from the advocacy organizations themselves and from the ground, from the people who are doing this work. Donor governments and donor institutions need to recognize that that's where the knowledge comes from: from what is needed on the ground. And the positions of officials are better informed when they have that type of interaction and when they can foster collaboration between different stakeholders in order to promote human rights. So thank you.

Moderator – Sidney Leclercq:
Thanks very much, Juan Carlos, and we cannot agree more on the importance of localized knowledge and evidence at IDRC, for sure. And I'll turn to Zach Lampell, Senior Legal Advisor for the International Center for Not-for-Profit Law, who is online, and I hope, yes?

Zach Lampell:
Yes. Thank you, Sydney. Can you hear me okay? Perfectly. Great. Well, thank you all so very much. My apologies that I could not be with you all in person in Kyoto, but I know and trust you are all having a great time. Before I begin my very brief remarks, I wanted to quickly introduce myself. I'm Zach Lampell, Senior Legal Advisor with the International Center for Not-for-Profit Law, where I lead our global digital rights programming, and where we work in over 100 countries to ensure that the legal framework supports civil society and promotes and protects the freedoms of expression, association, and assembly, and the right to privacy. I want to also thank the whole Freedom Online Coalition, the support unit, and the member states, and especially the U.S. government, USAID, and the U.S. State Department for their leadership in developing these principles, as well as Sydney and his team with IDRC, the co-authors and co-leaders of the principles. I'd also like to thank the Funding Coordination Group, the rest of the drafting committee, and finally, everyone who provided feedback, comments, and suggestions, especially all of those from civil society organizations in the global majority. I'd like to now briefly present three ways in which civil society can use the donor principles for advocacy. First, internationally. I would encourage all civil society organizations to collaborate with donor governments as those governments develop their strategic priorities and institutionalize the processes that shape their foreign assistance. Like Juan Carlos was saying, let them know what you're seeing on the ground. Let these donor governments know what has worked, what concerns you have, and most importantly, articulate what gaps there are in domestic legislation. And finally, utilize existing processes like the UPR to obtain firm commitments from your governments to improve the legal framework. So that's internationally.
Domestically, work with donor governments to encourage and facilitate real, meaningful, multi-stakeholder, open, public processes for drafting legislation. Be sure to reference all of the international legal obligations and frameworks on which these principles are based, and work with both your governments and the donor community to ensure that these principles and international human rights standards are being upheld in the legal framework. Finally, technically, and this is one of the principles: work to push for inclusion in standard-setting bodies. If you, your organization, or your partners do not have the knowledge base to effectively engage with these standard-setting bodies, reach out to the international community, donor governments, and international NGOs, so you can develop and build your knowledge base and thereby impact the work of these technical bodies. Work to ensure that human rights protections are built into the infrastructure of the internet. Work with private companies to help create products, services, and design systems that place human rights at the forefront. So again, internationally, domestically, and technically, there are ways for civil society to use these principles to advocate for an improved legal framework, improved products and services, and an improved internet infrastructure, all of which we believe will lead to the change and the support, promotion, and protection of democratic principles that we all seek. Thank you again so much, and I look forward to rolling out these principles and working with all of you. Thank you so much.

Moderator – Sidney Leclercq:
Thanks so much, Zach. And you've probably given us the structure for implementing: internationally, domestically, and technically. So thanks so much. Let me turn to Michael Karimian, Director for Digital Diplomacy, Asia and the Pacific, from Microsoft, to also provide a private sector perspective on the donor principles.

Michael Karimian:
Thank you very much, Sydney, and indeed a private sector perspective, though not necessarily that of the whole private sector. Thank you to the FOC, USAID, and IDRC for the opportunity to join today's discussion. It's very nice to follow on from Zach; Zach and I did some work together a few years ago, and I have a lot of respect for him and his organization. I work on Microsoft's digital diplomacy team, which seeks to advance responsible state and non-state behavior in cyberspace, grounded in international law and norms, including the international human rights regime. I previously worked on Microsoft's human rights team, which seeks to uphold Microsoft's corporate responsibility to respect human rights, grounded in the United Nations Guiding Principles on Business and Human Rights, and it's great to see the UNGPs accurately integrated throughout the principles here. Indeed, as Sydney mentioned, I'll offer a quick reflection on the current application of the principles and some ways to move forward where there are perhaps some gaps in application. Looking particularly at principle three, there is a reference that donor governments should also emphasize the need for industry to remain accountable and to address critical feedback from civil society and human rights defenders. I think, firstly, that requires that donors are very specific in either encouraging or even mandating that companies uphold the second pillar of the UN Guiding Principles on Business and Human Rights, namely by having a human rights policy in place, signed off at the most senior level, publicly available, and implemented by accountable teams, with the right degree of transparency. And that, of course, should include a commitment to respect the work of human rights defenders. Additionally, it also requires both states and companies to uphold the third pillar of the United Nations Guiding Principles, which is access to remedy, through grievance mechanisms, both judicial and non-judicial.
So that's a mix of mechanisms coming from the state, from law enforcement, and from regulatory bodies, as well as the more informal non-judicial grievance mechanisms, which can be implemented by companies, civil society, or other actors. And again, companies should be expected to respect and participate in such processes and not to hinder them. There is an important recognition in the principles of the fact that transnational private sector companies often have weak direct connections to local civil society stakeholders. This is a huge challenge. This is where platforms such as the IGF come into play, as well as regional and local IGFs. I would also call out organizations which have tremendous civil society networks around the world, such as Access Now, and I'm pleased to see Brett Solomon is in the room. Access Now is an incredible organization with a tremendous network, which has certainly helped Microsoft to be better at having those direct connections with civil society organizations in global majority countries. Additionally, in the principles, there's a reference that donors can and should hold private sector partners accountable. This absolutely goes back to the fact that donors, I think, should have a high expectation that companies are undertaking human rights due diligence, so that actual inclusive, sustainable, and rights-respecting business investments are being made. And human rights due diligence requires that companies undertake ongoing, transparent practices; they must include stakeholders, including civil society, to assess and address actual and potential human rights impacts. Turning quickly to principle seven, support the growth of a rights-respecting technology workforce: within it, there's a reference that donors should encourage products to be built in alignment with respect for human rights and democratic values, or, I should say, supporting inclusive human-rights-by-design processes.
I would actually take that a step further and make sure that there's a focus on so-called salient human rights: the human rights that are most at risk from business activities. That's generally understood to mean the human rights risks with the highest degree of scale, scope, and remediability challenges posed by those business practices. And for most technology companies, that means privacy by design, accessibility by design, and increasingly, responsible AI by design. And that requires having policies in place, accountable teams in place, and, again, the right degree of transparency. Lastly, there's a mention in principle seven of a professional code of ethics for individuals, organizations, and institutions. This is a challenge; many have looked at this before. So, for example, can you have software engineers taught codes of conduct in university courses? The challenge there is that in those university degrees, especially at the highest-level universities, students have very little scope for optional courses; the mandatory courses are already very full, so it's hard to add anything to the curriculum. But actually, you don't need to let the perfect be the enemy of the good. There are lots of interim steps. Donors should make sure that companies have the right standards of business conduct in place and the right degree of training for staff throughout the company, so that staff understand what their responsibilities are and know the structures that are in place to seek additional guidance if they need it. They should also have access to additional training if they want it, and most importantly, they should know where to go within the company for additional expertise on these subject matters. I'll stop there and very much look forward to the breakout sessions.

Moderator – Lisa Poggiali:
Thank you so much, Michael. And what a rich set of remarks for us to think about when we start the implementation conversation in a minute. Thanks so much for that. So before we move into the second portion of our event, we will hear from, last but not least, Shannon Green, who is the Assistant to the Administrator for the Bureau for Democracy, Human Rights, and Governance at USAID. And she will be joining us, as you can see, via video. Thank you.

Shannon Green:
Hello. I am delighted to join you to celebrate the launch of the Donor Principles for Human Rights in the Digital Age. And I commend the 38 member governments of the Freedom Online Coalition who have endorsed these principles and supported their development. These principles provide an important blueprint to protect and uphold the rights of individuals in our digital world. They commit donor governments, including my own agency, to hold ourselves accountable for the role we play in shaping the global digital ecosystem. The principles encourage donors to examine our own internal structures and processes and introduce safeguards for all programs. These safeguards will help ensure that our programs are equitably distributed. They will also promote safer and more secure environments for partners and local communities. Donors have much to learn from our partners around the world in government, civil society, and the private sector. You heard earlier from Commissioner Kassait, who has been leading Kenya's Office of Data Protection. These authorities are the safeguards that protect us from the darker aspects of the digital age. It is more important than ever that donors partner with them in their critical mission to better protect the public and increase transparency. USAID is also energized by the Open Government Partnership's, or OGP's, recent announcement of digital governance as a priority issue. This will strengthen the transparency and public oversight of artificial intelligence and data processing systems. We have seen remarkable progress under OGP commitments, and in this spirit, on behalf of USAID, I am pleased to issue a call to action for other donor governments to join USAID in making concrete commitments aligned with the donor principles. Internally, donors can make commitments to integrate human rights impact assessments into their program design and evaluation processes.
They can also allocate dedicated funding to support partners' and local communities' digital security. Externally, donors can better support partner countries to develop and implement strong legal and regulatory frameworks, or equip oversight bodies to better protect the public and hold powerful actors accountable. Civil society and tech companies, large and small, should consider how they can most effectively use the principles to encourage responsible donor behavior. For more information, please visit the Freedom Online Coalition's website. We look forward to hearing what concrete actions donors commit to at the Third Summit for Democracy in the Republic of Korea, where the United States government plans to launch its own efforts. The Donor Principles for Human Rights in the Digital Age help contribute to a digital future that respects rights, promotes democracy, and ensures that the benefits of technology are shared by all. Let us act with determination and vision to fulfill their promise.

Moderator – Lisa Poggiali:
Thank you, Shannon. And with that, we will conclude the official launch of the Donor Principles for Human Rights in the Digital Age, and we will now move into breakout groups. So, I'm going to invite Zora, who is over there in the corner, to facilitate the process of getting all of you into breakout groups. There won't be too much movement. And then maybe I'll also just say, if you have not signed in via the sign-in sheet that is going around, we will send it around again, and then we'll leave it on the table right next to the entrance and exit so that we can continue to keep in touch around implementation of the donor principles after this event. Zora.

Zora Gouhary:
Hello. Can you hear me? Thank you so much, everyone, for joining us. As Lisa said, we will be going ahead with our breakout groups. We're going to be breaking out into five groups: four groups here physically, and everyone who's joining us online will have their own breakout group and their own moderator. So I would like to ask everyone who's in the room to move to the four different corners of the room. You can choose your own group; I'm not going to be separating you, so just direct yourself to one of the corners. I will be going around, and we have about four questions, which you can see now on the screen. I'll hand over to Lisa in a bit to explain the questions. One final thing from me: we'll have about 15 minutes for the breakout groups, after which we'll come back into plenary to quickly discuss what was covered in the breakout groups. We have facilitators who will be taking your contributions, which we will then summarize and use towards the next steps following the launch of the donor principles. And I think that's it for me. Thanks.

Moderator – Lisa Poggiali:
Hey, thanks, Zora. So just to provide a little bit of structure: as you heard many of our panelists note, there is an internal component to the donor principles and an external component. On the one hand, we're thinking about what donor governments can do internally, in terms of their own processes and structures, to uphold the donor principles; and on the other, what donors can support externally and programmatically in order to uphold them. So we've structured each of the questions around that internal and external component. We're going to run this kind of like a speed-dating situation: each group will have a few minutes to focus on each question, and then the group will remain the same and will just move to focus on a different question every few minutes. We'll announce a loud buzz or something to indicate the change. So you'll get to have a cohesive conversation across the entire breakout period: you can stay with your group and pick up on conversations you had as the questions move along. And I think that is it. Anything? OK. And to our online group, we will do our very best to incorporate you in the discussion afterwards, so don't think we're forgetting about you; we value that you are there as well. So let's break out into groups, and if everyone can migrate to the corner you're closest to, we'd appreciate that.

Audience:
Congratulations. Thank you. Hello, everyone. If I can just ask you to move to the next question. Thank you. I'm just wondering if you can speak to the outcome of any funding that's going to be given to government as a result of the civil society input, rather than necessarily a portion of the funding that's going to be given to government; is that going to be effective in the long term? Thank you so much.
I think the accountability potential matters, and we need to make sure that the structures that are needed in the country are there to make that happen, but also the work that's happening across the whole system and the structures in place in order to do that. Hello again. I'm just asking everyone to move to the next question, if they haven't already. Thank you. May I ask everyone just to move to the last question? We have the last four minutes. Thank you very much. I will just ask everyone to go back to their seats so we can come back to plenary. Everyone can stay in the same chairs if you're in the room and just turn them, or you can get up and move back, but we do need to move back to plenary at this point. We won't just be reporting out; we'll continue the conversation.

Moderator – Lisa Poggiali:
Okay, so we're going to have a continued discussion, so we won't be reporting out necessarily from groups, but we'll invite any of you to raise your hands, either in the room or online, if you want to make a comment. We'll just start with one of the questions around implementation internally in donor governments. Eleni from GNI, who, I don't know where, oh, I think she went to the bathroom, asked a question about what it would look like for donor governments like USAID or IDRC to implement these processes. And somebody else, whose name I'm forgetting, but feel free to chime in, asked a question around not burdening those who are receiving funding, such as implementing partners and grantees, with having to do extra work themselves in order to implement these principles. So I invite anybody to give thoughts on that. I'm sure this has come up in multiple groups, so we'll just turn the floor over to anyone who has ideas around implementing the principles internally without burdening grantees and implementing partners with additional labor. We can start with IDRC, maybe Sydney, or, Rahia, if you want to repeat what you said in the session.

Audience:
I mean, first of all, I can't speak for all of the programming that happens at IDRC, but I think for those of us who work in technology, we already take these things into consideration a lot. What I would want to try to do is socialize this across my colleagues and begin to talk to them about, for instance, providing for digital security and digital resilience as a portion of a budget, and working with grantees on questions like, if it's a health application, what the data governance practices are. Because not everyone's on the same page with these issues, right? They're thinking of different human rights outcomes, around access to health or access to clean water. So how can we begin that conversation within IDRC?

Moderator – Lisa Poggiali:
Anyone want to add to that or have a follow-on question? Quinn. And please just introduce yourself when you come on mic.

Audience:
Sure. Is this on? Yes, thanks. I'm Quinn McHugh, the executive director for Article 19. We work on implementing a freedom-of-expression and human-rights-based approach to bridge technology, policy, and human rights actors. I just wanted to echo what she was saying. One of the things that we see quite frequently when we are submitting grant proposals for Article 19, and in negotiations with donor governments, is that we will put in a line for safety and security, and it is one of the most frequently questioned lines in our proposals. People ask: what is this for? Can this only be used to provide safety and security for specific actors in this program, and not for the organizations themselves to build robust digital security and resilience practices, which are about keeping our partners safe as well? So that's just something to echo a little. I think it would be really useful, in terms of implementation, if there were a broader understanding of the importance of these kinds of lines in the proposals we're submitting. And maybe this is something that can be echoed from yourselves down to your colleagues: a broader understanding of digital security and resilience and how that programming should be incorporated into work with grantees, so it's not just, again, specific to someone being given an emergency training or something like that. That would be very helpful.

Moderator – Lisa Poggiali:
That's really useful. Thank you. Are there specific actors (this can be directed to you or anybody else) that we should bring to the table, or existing networks that we can leverage or bring in as partners, in order to socialize these very issues to others across all of our respective development agencies who may not know what digital security might look like in a solicitation process, and who should actually be involved and who should be protected?

Audience:
We work on this. I mean, Access Now and pretty much every civil society organization that's here could provide something. But in terms of donors themselves, the Ford Foundation is actually really good at building capacity building into the grants that they give as well. I'm sure there are other funders here, but that's just one I'm very familiar with. They have a very open, dialogue-based approach, and they are more expansive in looking at issues of security, not just the technical things but also the economic, social, and cultural elements of digital security and safety, taking a more holistic approach to it. So if you're looking for another donor to speak to about their practices, they've been very good.

Moderator – Lisa Poggiali:
Thank you. What about… Go ahead, Daniela.

Audience:
Yeah, Daniela from GPD. I just wanted to echo that this came up in our group: being more creative in reaching more groups, going beyond the usual suspects, and reaching communities that are usually marginalized. That goes back to the very clear point made earlier about the bottom-up approach. We also discussed how these principles can be leveraged not just with donor governments, but also by increasing collaboration with private foundations. So that came up as well. So yeah, just echoing and supporting that point.

Moderator – Lisa Poggiali:
Thank you. Are there specific fora where maybe these ideas are not socialized as much? Thinking about other major development conferences, or even the G20 process, or other spaces where we might want to work on socializing these ideas so that our colleagues who work on digital health or the digital economy can start to learn more about how they can facilitate more digital security? Yeah. Mm-hmm. Go ahead.

Audience:
Thank you. Apologies for my voice. Silvia Cadena, APNIC Foundation. I just wanted to say that there are so many events and principles and processes that small, medium, and large organizations are supposed to figure out by themselves, that when you talk about mechanisms and tools for implementation, it would be very useful to have really practical things that allow organizations to see: okay, where do I align? Where do these align with my strategy? Otherwise it feels a lot like chasing the strategy of others, instead of seeing how this helps the strategy of each organization to actually deliver. Maybe in our case we can support, I don't know, or do proper follow-ups on three or four of these principles, but not necessarily all. Same with the ROAMx indicators: you start looking and it's like, okay, which one do I choose? What do I do? And all the time you feel you're doing wrong, because you're not following everything. So figuring this out matters. I really liked the fact that you mentioned the principles of digital development, a tiny little thing at the end. Having things like that, saying "for this principle, these other things are important", makes you start feeling like you are connected and contributing. And encouraging people, from a bottom-up approach, to be able to participate in this process would be really good.
I'm David Sullivan with the Digital Trust and Safety Partnership. One thing that occurs to me: principles are invaluable for building consensus, but in that process of building consensus you wind up with a fair amount of passive voice, and then the concern, of course, becomes that in that passive voice, responsibilities get driven down to implementers and their partners. I was thinking that you could almost have an accompanying tool for donor agencies to take the principles and then just add specifics in terms of who is responsible for each of these things, going from the actors to the events and opportunities and items or whatnot. And that could be particular to each government. Then you could ensure that responsibilities for human rights due diligence are not simply added on top of other things the implementer has to do and pushed down to local partners in the field, but are built in at the strategy level within the agency, with the right people involved. So just a thought in terms of how this could be operationalized, in a way that takes you from that somewhat vague consensus to clarity about who does what.

Moderator – Lisa Poggiali:
Thanks. That’s very helpful. And I will say the idea for the call to action of donor governments is to allow individual governments to think about, within their own internal legal structures and processes and strategies, what commitments they might be able to make that are concrete — that bring the principles down a level to concrete commitments and actions. And one of the things that we talked about in the drafting process was the potential for building out toolkits as part of next year’s implementation under the Freedom Online Coalition. So I’m curious if anyone talked about that or has ideas around what kind of toolkit might be helpful. There was a suggestion for different pieces of guidance, more concrete, speaking to specific tools for different stakeholders — maybe civil society for advocacy, or diplomats, or development actors who are doing the work out in the field. So, any ideas from anyone who had those kinds of conversations? Online as well, feel free. Zora, is anyone from online wanting to participate? Okay. Go ahead, Brett. It’s not working.

Audience:
Hello. Hi. Brett Solomon from Access Now. Thanks a lot for the principles, and for the donors who have worked on them, and for civil society as well. I just wanted to… your point, and I think also David’s as well: if these principles serve as a tool to focus donors’ minds on how to get more money out the door and into the hands of the beneficiaries, then I think that’s a real plus. If what actually happens is that they become a bureaucratic roadblock to the delivery of money, then that’s a backfire. And I think in terms of the toolkits and the processes and the briefings and all of that, the starting point should be — and I’m speaking from my perspective as a civil society member — that civil society is currently so under-resourced, so under attack and so on the front line, particularly organisations in the global majority. So whatever we can do to leverage these principles to facilitate the transfer of funds from those who have it to those who need it, the better. And I would think that should be the starting point of any of the briefings or the processes for implementation.

Moderator – Lisa Poggiali:
It’s very helpful, thank you. Anyone else want to speak to that? Go ahead, Quinn.

Audience:
I’m sorry, I’m speaking too much, but taking off from what Brett just said, there’s something that all of us in civil society, particularly those working on digital rights and these issues, are acutely aware of, which is the big question hanging over all of us, particularly in global majority countries: what is going to happen with Open Society Foundations. There are very strong indications they will be pulling away from funding a large number of the organisations they have supported in the past. And so the question is, if the donor community thinks it’s very important to have these organisations at the local, national level in global majority countries be strong, what is going to be the response from, as Brett was saying, those who have lots of funding? I mean, statutory donors typically provide larger grants, but it’s often harder to get them to make smaller ones. And while these donor principles don’t necessarily talk about that issue, I do think, because this is a forum for donors here, it was useful maybe to reflect that there is a huge amount of uncertainty in the community, because Open Society has funded so many organizations at the human rights level, broadly and at a small scale, which was very useful for sustaining and securing them. And with that question, there is, as Brett was saying, a huge amount of uncertainty in the field about how we are going to sustain the momentum that we’ve had. So in these donor conversations, it’d be very useful to think about how we sustain and build the networks that are there when the funding environment is so uncertain at present. That’s all.

Moderator – Lisa Poggiali:
Yeah, that’s a really good point — a changing landscape, for sure. So I wanted to bring it back to the question that Daniela raised about the private sector. Let’s see, are there any private sector partners who could comment on how private sector organizations, which oftentimes have even more money than donors do, could potentially partner with donors on digital security or any of the other issues raised in the other principles? I invite those online or in the room. Michael, I don’t want to put you on the spot, but if there are no other private sector partners who want to speak, I will, because I know you are one.

Michael Karimian:
Thank you, not appreciated. So I think your question speaks to a broader challenge, frankly, which is that in low- and middle-income countries, as they undergo digital transformation, the cybersecurity threat landscape expands. And so there absolutely needs to be more effort, as some are already making. For example, the GFCE and the ITU are looking at this, among others; Microsoft too, and the government of Sweden. How do we mainstream digital security, cybersecurity, into the digital development arena? And as we start to look at the post-2030 agenda, we need to be much more acutely aware of that than when the 2030 agenda was created in the first place, when digital transformation was undervalued as a means for achieving the SDGs. It’s a conversation happening now, which is a bit too late. And so how we think about cybersecurity in the post-2030 agenda is absolutely a critical component of that conversation, which is starting now. The GDC process must be part of that, and whatever happens with the New Agenda for Peace as well. But yeah, absolutely. I mean, it’s much bigger than just what we’re looking at in these principles today, I think. Thank you.

Moderator – Lisa Poggiali:
Well, and that raises a good point about some of the other fora through which these conversations, and particularly the human rights and democracy affirming kind of perspective, could join forces with some of the more traditional cybersecurity conversations that have been occurring in the ITU and GFCE, et cetera. And so we’d love to hear if anyone is engaged in those processes currently, if there are any concrete recommendations for next steps for trying to engage in those spaces and networks that have thus far not been connected that well, at least from the space where I sit in the DRG, Democracy, Human Rights, and Governance Bureau at USAID. And I know from talking to other donors as well that the democracy and human rights issues on the technology side have been siloed oftentimes from many of these other technology conversations that are happening at the global level. So any insights from anyone in the room, or Michael, feel free to also respond, or anyone online as well.

Audience:
May I? Yes, please. Yes, I don’t know where to start. Thank you for letting me be one of the participants in this launch. My name is Honorable Ratilo from Botswana; it’s around 3:05 in the morning here. All that I want to say is that when you are talking about civil society, indeed civil society can play a critical role, but at the same time we have to try to understand a few things, because in most of the countries you will realize that there’s no strong civil society in place, but the digital human rights violations are in place. So how are we going to protect those people who are living in those countries? We can try to protect the interests of the ordinary people or the community, but the donors cannot reach them, because they have not registered a civil society organization in their respective country. At the same time, because I’m a member of parliament, I keep on telling them: no, once a violation of human rights takes place on the digital issue, I will take the government to court. But I don’t have enough financial muscle to protect the interests of the ordinary people before the court of law, simply because of the financial muscle. Now I want to pose a question: how are we going to assist those types of countries that are not really vibrant in the line of civil society? Thank you.

Moderator – Lisa Poggiali:
So I think, if I’m understanding right, the question was: in spaces where civil society doesn’t have that kind of leverage with the government, or doesn’t have the resources, how can we support them in holding governments accountable when human rights are being violated? If you wanted to put something in the chat — we couldn’t hear; some of the audio was breaking up. I think that’s an excellent question, and I think that’s something donors can heed the call on, to support civil society. And these principles certainly provide a foundation for doing that on these critical human rights issues in particular. So thank you for that. I will now turn it over to Sidney to close out the session, and he will introduce the last speaker.

Moderator – Sidney Leclercq:
Yes, time flies when we’re having fun. And so we’re a bit late, but I’ll introduce Adrian di Giovanni, our team leader on democratic and inclusive governance at IDRC. He’ll be providing some closing remarks, and he’s online from Ottawa. Adrian, over to you.

Adrian di Giovanni:
Hi, good morning, everyone. Can you hear me okay? Perfect. All right, so I’ll just dive in, and it’s really just to say a few words of thank you. It’s bedtime here, so I managed to join in for the plenary discussion right now, and I have a flavor of the richness of your discussion. So really, to our distinguished guests and panelists, ladies and gentlemen, it’s an immense pleasure for me to join you from Ottawa, Canada. We’re on the unceded, unsurrendered territories of the Algonquin and Anishinaabe people. We just passed our third annual National Truth and Reconciliation Day in Canada, so we always recognize the traditional custodians of the territories we’re on. And it’s a wonderful event, the launching of the donor principles on human rights in the digital age. We’re really delighted to have been part of this effort, and the principles couldn’t arrive at a more critical time. I don’t have to talk to a group of experts like yourselves about the fast and ever-accelerating pace of change with technology and how it can be a double-edged sword. We always grapple in our work: do we talk about things as an opportunity or as a challenge? And we see both, especially for democratic values and human rights for the most marginalized and vulnerable communities in the majority world. Digital technologies are, yes, powerful tools for information sharing, self-expression and organization, but they can also be used to deny or diminish people’s rights. And again, I think within the room it’s probably come up quite a bit — a lot of the threats. We’ve seen how digital technologies can play a key role in the decline or backsliding of democratic processes. And Vera, from what I understand (I read her opening remarks), mentioned how most often, where you see stresses online in the digital space, it reflects a broader decline in human rights and freedoms across the world. And we see that in our work at the Democratic and Inclusive Governance team at IDRC.
We see both the online stresses and those on the ground, and actually how they may feed one another — something we actively try to think about and understand. That’s why at the International Development Research Centre here in Canada, we’re a funder and a champion of research for sustainable, inclusive development, and we’ve been supporting work to improve evidence and understanding of all these critical phenomena, like information disorder, technology-facilitated gender-based violence, and the online shrinking of civic space. For more on that, Steve Urquia in the room there is definitely a resident expert. And really, for us at IDRC, we focus on the experiences of populations and communities across the global majority. We have also aimed at strengthening the capacity of research institutions and civil society organizations to build global self-knowledge networks and to better enable cross-learning and scaling of policy solutions. A couple of examples are the Feminist Internet Research Network and the Data-Free Development Network. And so many of the discussions just now definitely ring true about trying to reach local organizations and actors, flowing our funding directly. We’re nimble enough that we often get to do it, and that’s really where colleagues like Sidney and Rahia find great joy in the work. We also see the power, and for us this is part of our contribution to a localization agenda. And on technology, we definitely see the gaps and opportunities, especially in terms of ensuring that strategies are tailored to context, including non-European languages, where from what I understand most of the action can be when it comes to some of the distortions that weaken democratic governance. So, just to say, collectively as donors we have a responsibility to ensure that the actions and investments made in digital initiatives do not contribute to an erosion of human rights protections and democratic institutions, processes and norms.
So, in other words, to echo the introductory remarks, donors must do no harm. Because we’re a research funder, that’s something we take seriously across every single project we fund, and, to echo a comment earlier, it’s not an impediment to funding — it’s actually something we take very seriously, and it’s becoming harder to understand how to ensure we do no harm, with the many threats out there to democracy around the world. This is why the donor principles are such an important step: they provide both a safeguarding and accountability framework to ensure alignment between investments in digital and innovative initiatives and commitments to human rights and democratic values. I’ll also emphasize the importance of inputs from government, civil society and the private sector throughout the consultation and drafting process of these principles. At IDRC, we’re kind of a public institution, we’re close to civil society, we engage with a variety of actors, and so we really see these kinds of multi-stakeholder settings as key. I want to take this opportunity to thank all colleagues who have taken the time to provide feedback, to really improve the principles, and to arrive at the version that you see now. And of course, the adoption of the principles is just the start, and that’s why, together with U.S. colleagues, we wanted this launch to be not just about presenting and discussing the principles, but already to begin to dive into the critical question of “so what,” or “now what,” and “what next,” especially through the breakout groups you’ve had — and, you know, I’ve had the pleasure to really hear your debriefing now. This idea, again, reflects our mix of what we think is needed for effective change going forward.
So, as you’ve all just done in this session, you’ve started to address the issues around what the principles might actually mean in practice, what kind of internal and external change is required, how to go about implementation, who we need to engage with, and how we can measure progress once it is made. This is vital to translating the principles into action and impact. And I have to say, the large majority of the work that we support on human rights is about the implementation gap. You can have many great principles and frameworks and constitutions around the world; it’s really then about ensuring that they get implemented in the spirit of human dignity, as was mentioned in the opening remarks. So, if you do have further input to provide, we really encourage you to share any comments or suggestions you have after this launch, including through the dedicated email address colleagues from the FOC have created. I imagine someone in the room can point you to it, but it’s donorprinciples at freedomonlinecoalition.com. And so, let me just conclude by thanking again all of the panelists and the presenters who came before; I believe that they have already been thanked. And also, really, to end on a note of gratitude to our U.S. colleagues, who have shown incredible dedication and commitment throughout the development, consultation and negotiation of the donor principles. It’s with a debt of gratitude that I’ll end. Blame Sidney if I’ve gone over time.

Moderator – Sidney Leclercq:
Thank you so much, Adrian. And thank you to everyone for the launch. Thank you very much, Lisa. Thank you.

Adrian di Giovanni

Speech speed

183 words per minute

Speech length

1319 words

Speech time

433 secs

Allison Peters

Speech speed

177 words per minute

Speech length

651 words

Speech time

221 secs

Audience

Speech speed

178 words per minute

Speech length

2727 words

Speech time

920 secs

Augustin Willem Van Zwoll

Speech speed

169 words per minute

Speech length

673 words

Speech time

239 secs

Immaculate Kassait

Speech speed

164 words per minute

Speech length

1024 words

Speech time

376 secs

Juan Carlos Lara Galvez

Speech speed

165 words per minute

Speech length

517 words

Speech time

188 secs

Michael Karimian

Speech speed

214 words per minute

Speech length

1245 words

Speech time

349 secs

Moderator – Lisa Poggiali

Speech speed

164 words per minute

Speech length

1823 words

Speech time

665 secs

Moderator – Sidney Leclercq

Speech speed

164 words per minute

Speech length

398 words

Speech time

145 secs

Nele Leosk

Speech speed

145 words per minute

Speech length

790 words

Speech time

327 secs

Shannon Green

Speech speed

162 words per minute

Speech length

498 words

Speech time

184 secs

Vera Zakem

Speech speed

146 words per minute

Speech length

1010 words

Speech time

415 secs

Zach Lampell

Speech speed

157 words per minute

Speech length

620 words

Speech time

237 secs

Zora Gouhary

Speech speed

176 words per minute

Speech length

244 words

Speech time

83 secs

Digital democracy and future realities | IGF 2023 WS #476


Full session report

Audience

The analysis explores various aspects of public interest internet and its societal impact. It highlights the need to understand the funding mechanisms for public interest internet, particularly in relation to the Wikimedia Foundation. Ziske, who represents the Wikimedia Foundation, has requested information on funding in this area, indicating a growing interest in understanding the financial aspects of public interest internet.

Another perspective is sought from Bill, who has a background in research and development (R&D). This aims to gain insights into public interest internet from someone with expertise in innovation and infrastructure. Including Bill’s viewpoint enhances the analysis and provides a more comprehensive understanding of the topic.

The analysis also discusses the role of Facebook in providing internet access, especially in many global majority countries. It is noted that Facebook often offers free internet services, positioning itself as the primary gateway to the internet in these regions. However, concerns are raised about the monopoly Facebook has over internet access, which may result in limited choices and potential inequalities in accessing the internet.

Furthermore, the analysis examines the global impact of the internet, highlighting its positive and negative aspects. While the internet has facilitated globalization and connected people worldwide, it has also centralized control and decision-making processes. This centralization undermines the democratic nature of the internet.

A significant issue identified in the analysis is the digital divide, particularly affecting young men and women in grassroots communities. Limited access to necessary infrastructure and content creates a substantial barrier to internet usage for these individuals. Additionally, language and content act as obstacles in bridging this divide.

The analysis also delves into how internet usage challenges social norms, particularly for young women. In many societies, using the internet is stigmatized as it is seen as a threat to established norms. This negative perception hinders women’s empowerment and their participation in the digital space.

Acknowledging the importance of digital literacy, the analysis emphasizes the need to increase digital skills among young people and women. It includes not only basic technological skills but also the ability to generate content and engage in internet activism. Promoting digital literacy can contribute to reducing inequalities and fostering greater gender equality.

Lastly, the argument is made for democratizing access to the internet. The presence of the digital divide within societies and the centralization of control over the internet necessitate equal opportunities for participation and engagement. Democratizing access ensures a more inclusive and equitable digital society.

In conclusion, this analysis sheds light on various issues surrounding public interest internet. It emphasizes the importance of understanding funding mechanisms, gaining diverse perspectives, and addressing inequalities such as the digital divide. Furthermore, it underscores the significance of digital literacy and the need to democratize access to ensure equal opportunities for all.

Rachel Judistari

The analysis sheds light on the crucial role that public interest platforms, such as Wikipedia, play in the digital world. It argues that the current digital landscape is primarily dominated by private and for-profit platforms, which in turn exacerbate existing wealth and knowledge gaps, compromise privacy, and facilitate the spread of misinformation.

However, the analysis also highlights the positive aspects of platforms like Wikipedia. It underscores that Wikipedia is a not-for-profit public interest platform that undertakes consistent technological innovation and actively addresses knowledge gaps. It emphasizes that Wikipedia is a community-led platform, with decentralized community-based content moderation, making it a unique and valuable resource.

The analysis suggests that regulations implemented in the digital space often focus on big tech companies and overlook the diversity of internet services. It argues that policymakers should ensure that regulations uphold protections for human rights and safeguard user privacy, while also fostering meaningful community participation in internet governance. The supporting facts provided highlight that Wikipedia opposes overly broad restrictions with highly punitive consequences and actively encourages meaningful community participation in internet governance.

Furthermore, the analysis points out that Wikipedia is actively involved in training large language models essential for generative AI, thereby contributing to reducing knowledge inequalities. It further showcases Wikipedia’s commitment to knowledge equity by highlighting their launch of knowledge equity funds to create more content and uphold diversity.

The analysis expresses concerns regarding the unintended consequences of public interest technologies. It highlights the potential risks of endangering indigenous languages and criminalizing dissenting voices, urging stakeholders to carefully consider and mitigate such risks.

Addressing the digital divide is seen as a major priority. The analysis points out that in the global south, where many individuals lack access to the internet, public interest platforms like Wikipedia should actively contribute to discussions aiming to bridge this divide.

Content moderation also features as a significant concern. The analysis notes that while Wikipedia puts effort into content moderation, regulations primarily designed for large corporations can complicate this process. The work being done by UNESCO to assist with content moderation is highlighted.

Furthermore, the analysis acknowledges that internet regulations can be new and complex in certain regions. It points out that some regions in Asia consider internet regulation a new concept, and emphasises the presence of diverse approaches to content moderation.

The analysis also advocates for using alternative platforms to achieve better content moderation, mentioning the social media platform Mastodon as an example of a better alternative. It further highlights the importance of exceptions being made for public interest platforms, citing Rachel as an advocate for such exceptions.

Engaging young people in digital literacy is identified as a priority. It highlights that Wikimedia is actively working with communities of editors to provide training and focuses on initiatives, like in Cambodia, that involve indigenous young people in creating content and videos to preserve their culture.

Successful engagement with young people, the analysis suggests, can be achieved through collaboration with other organizations. It points out that Wikimedia has collaborated with the Minister of IT in Indonesia and expresses a desire to have more collaborations with youth-led organizations.

The analysis advocates for the promotion of the internet of commons to serve public interest and suggests that exceptions should be made for public interest platforms. However, no specific evidence or supporting facts are provided in this regard.

Diversity within public interest platforms’ community contributions is another important aspect emphasized in the analysis, without any further details or evidence being given.

Finally, the analysis advises policymakers to be mindful of the diversity of the internet ecosystem. It suggests that policymakers should take into account the various perspectives and interests within the ecosystem while formulating regulations. It concludes by highlighting the importance of promoting the internet of commons for public interest and creating an inclusive environment for all stakeholders.

Overall, the analysis provides a comprehensive examination of the role and impact of public interest platforms like Wikipedia in the digital world. It highlights the need to address wealth and knowledge gaps, privacy concerns, and misinformation, while also recognizing the positive contributions of public interest platforms in addressing those issues. It argues for regulations that protect human rights, encourage user participation, and support diversity. The analysis also raises concerns about unintended consequences and identifies priorities such as bridging the digital divide and engaging young people in digital literacy. The insights gained from the analysis shed light on the complex challenges and opportunities in creating a more equitable and inclusive digital ecosystem.

Mallory Knodel

The internet is widely seen as a public good that offers numerous benefits. It empowers communities and provides valuable tools for communication, information sharing, and access to resources. Examples of public goods on the internet include Indymedia, a platform for citizen journalism and protest news, and Wikipedia. These platforms serve as valuable sources of information and rely on the contributions of individuals to create and share knowledge.

However, there is a concern that corporations monopolise user experiences on the internet and engage in anti-competitive practices. While community-driven innovation still thrives alongside corporate platforms, it can be challenging to compete with large corporations that prioritise their own interests. Communities continue to build their own tools and generate content, but they face difficulties in gaining a strong foothold against corporate dominance.

Furthermore, efforts to create a public good internet are often not inclusive. The individuals involved in the hacking culture, which contributes to developing a public good internet, tend to be those with free time or jobs that align with this pursuit. This exclusion of people who lack the time or access to technology creates a barrier to participation and limits the diversity of voices and perspectives in shaping the internet.

To sustain a public good internet, substantial investment is necessary. Public good internet initiatives, being not-for-profit, struggle to maintain themselves without financial support. These initiatives often rely on “bootstrapping” and grow gradually once established. Without sufficient investment, the potential of the public good internet to thrive in many areas is limited.

On a positive note, communities that build public good internet technology tend to be self-perpetuating. By fostering strong community involvement, these initiatives can continue to expand and grow, gaining support and participation from individuals who understand and appreciate the importance of a public good internet.

However, the existence of public good internet is not guaranteed without strong nearby communities. Building a public good internet requires the dedication and collaboration of individuals in a specific locality. Without this local support, it is difficult to establish and sustain a public good internet that truly benefits the communities in the area.

Public interest work on the internet does not necessarily have to be for-profit to be sustainable. There are alternative ways of generating revenue, such as contextual advertising, that can be profitable and less invasive. The focus should be on creating sustainable models that prioritise the public interest.

In contrast, big tech companies are often criticised for prioritising monetisation over innovation. These corporations, with their established platforms and significant influence, can create barriers for competing services and limit the choices available to users. Targeted advertising, a common strategy used by big tech, is seen as invasive and contrary to the public interest. It violates user privacy, and there are concerns about the ethical implications of such practices.

The regulations designed for big tech platforms may inadvertently hinder public interest platforms. While efforts should be made to improve big corporate platforms, it is important to devote attention to public interest platforms, such as Wikipedia, that serve the public good. Current regulations may not fully consider the practices and needs of these platforms, which can impede their ability to operate effectively.

To promote competition and user preference, it is important to have more choices in platforms. The ability to migrate to different platforms encourages healthy competition and provides users with options that align with their values and preferences. Currently, big multinational corporate tech platforms dominate many regions, leaving limited alternatives.

Public platforms, like Wikipedia, should be considered in discussions on content moderation. These platforms have established practices and guidelines for content moderation that can serve as examples for other platforms. It is crucial to learn from these successful models and incorporate their insights into broader content moderation discussions.

In conclusion, building and sustaining a public good internet requires effort, investment, and support. While corporations dominate the landscape, efforts to create a public good internet are still underway. However, inclusivity remains a challenge, and investment is crucial for the success and expansion of public good initiatives. It is important to ensure that public interest work is sustainable and that it prioritises the public interest over monetisation. While big tech companies have their shortcomings, the existence of more platform choices and proper regulations can foster healthy competition and better serve the needs and preferences of users.

Bill Thompson

The analysis explores various arguments concerning the current state of the internet and its ability to fulfil public service outcomes. One viewpoint asserts that the existing internet standards are inadequate, primarily due to their domination by commercial interests. It is argued that this has hindered the delivery of public service outcomes. Efforts for intervention and regulation are advocated to address this issue effectively.

Another argument suggests that internet governance needs to be inclusive and representative of a wider variety of communities. Traditionally excluded groups should have a voice in shaping the internet to create a fair digital public sphere. Inclusion and active participation from these communities are considered crucial for better internet governance.

The analysis further highlights the need to reevaluate and reimagine the internet to enhance democracy and protect individuals from surveillance. The current internet structure is questioned as potentially unsuitable for these purposes. A network that safeguards individuals’ privacy from surveillance is deemed necessary.

The limitations of existing protocols are seen as a hindrance to innovation in the design of modern social networks. The emergence of similar platforms that lack innovation and the perceived restrictions of current protocols provide evidence to support this argument. However, the introduction of alternative protocols such as ActivityPub offers the potential for innovation in online social spaces and presents a different lens for constructing such spaces.
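The "different lens" that ActivityPub offers can be made concrete: instead of a platform-private data model, a post is a standardized JSON activity that any federating server can deliver and interpret. The sketch below is a minimal illustration rather than a full implementation, and the actor URL is a hypothetical example; it shows the shape of a "Create" activity wrapping a Note, following the W3C ActivityStreams vocabulary.

```python
import json

# Minimal sketch of an ActivityPub "Create" activity wrapping a Note,
# following the W3C ActivityStreams 2.0 vocabulary. The actor URL is a
# hypothetical example, not a real server.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

# Federated servers exchange such activities as JSON posted to each
# actor's inbox; because the format is an open standard, any compliant
# server can participate, decoupling the social graph from any single
# platform.
payload = json.dumps(activity)
```

Because the activity format, rather than any one service, defines the social space, new platforms can experiment with different interfaces and moderation models while remaining interoperable, which is the design freedom the analysis points to.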

Responsibility for delivering various aspects of the public interest internet is viewed as falling on all stakeholders. It is emphasised that these stakeholders should contribute to the public service internet in accordance with its overall interests. This collective approach is crucial to ensure the internet effectively serves the public interest.

Funding of public infrastructure, including the internet, is another debated topic. The argument is made that society should bear the cost of public infrastructure rather than relying on private entities or philanthropy. State funding is considered an acceptable option if it avoids exerting control over content. However, concerns are raised regarding the risk of state-controlled media associated with government funding.

The analysis also calls for a different approach to the internet model. The current model, based on decisions made by a select group of individuals predominantly from North America and Europe, is criticised for its failure to address current challenges effectively. The importance of co-creation and community engagement is emphasised as a means to reshape the internet model and build a more sustainable digital ecosystem.

In conclusion, the analysis presents a range of arguments that highlight the inadequacies of the current internet model in delivering public service outcomes. The influence of commercial interests, limitations of existing protocols, and the need for inclusivity, democracy, and community engagement are all key factors that require attention. Ultimately, a collective effort is necessary to create an internet that effectively serves the public interest.

Anna Christina

The analysis reveals various important aspects concerning internet governance and cultural diversity. One of the key points highlighted is the pressing need for diverse cultural content on the internet, with a specific focus on meeting the needs of indigenous communities. It is pointed out that a significant portion of current internet content does not relate to indigenous communities. This is particularly relevant in Mexico, which ranks 11th among countries with the most multicultural communities. Efforts should be made to ensure that indigenous cultures and perspectives are represented and celebrated through diverse online content, particularly as it relates to sustainable cities and communities.

Additionally, the analysis underscores the importance of establishing a governance system that fosters balanced and inclusive participation of all stakeholders. This includes promoting transparency, accountability, and stakeholder inclusion in decision-making processes related to internet governance. To this end, UNESCO has been running a consultation since September 2022 to develop guidelines for regulating digital platforms. These guidelines aim to ensure that governance systems are transparent, accountable, and promote diverse cultural content. This is important for achieving peace, justice, and strong institutions.

Furthermore, the analysis highlights the need for active youth participation in internet governance discussions. It is noted that children aged 13 to 18 expressed their desire to participate in governance discussions during the consultations. Since the youth are the most important users of the internet, their active involvement is required to reduce inequalities and promote peace, justice, and strong institutions.

In terms of implementation and evaluation processes of internet regulation, the analysis emphasizes the importance of involving internet stakeholders. It is observed that civil society participates in advocacy but does not often participate in implementation and evaluation processes. Evaluation is crucial for judging the effectiveness of the governance system. Promoting stakeholder involvement is vital for achieving peace, justice, and strong institutions.

Moreover, the analysis highlights the positive role that community networks in Mexico, Central America, and Latin America play in promoting indigenous expression and cultural content online. These networks were created in partnership with UNESCO and serve as an example of promoting indigenous expression and cultural diversity. This is related to industry, innovation, infrastructure and peace, justice, and strong institutions.

The analysis also addresses the issue of funding public interest technology. It emphasizes that responsibility for funding public interest technology lies with all stakeholders, including governments, the private sector, and users. This collaborative effort is necessary for achieving partnerships for the goals.

Another important aspect brought up in the analysis is the need for a balance of responsibilities and contributions from all involved parties to achieve sustainability. This involves governments, the private sector, and users working together to achieve common goals. This is essential for achieving partnerships for the goals.

The analysis also emphasizes the importance of the consultation process for guidelines and regulations. It notes that building, maintaining, and resisting during this process is crucial. This indicates the significance of active engagement and continuous involvement in shaping internet governance policies. This is closely tied to achieving peace, justice, and strong institutions, as well as partnerships for the goals.

Additionally, the analysis underscores the importance of identifying the roles of different stakeholders in the regulatory process. It is highlighted that this aspect received the least response during the consultation. Stakeholder involvement remains necessary even after regulation is enacted. This is tied to achieving peace, justice, and strong institutions, as well as partnerships for the goals.

Furthermore, the analysis notes that while good laws and standards are essential, they can be misused in authoritarian regimes, a risk that is especially relevant for achieving peace, justice, and strong institutions.

In conclusion, the analysis provides valuable insights into the need for diverse cultural content on the internet, the establishment of inclusive governance systems, the importance of youth participation, stakeholder involvement in implementation and evaluation processes, the role of community networks in promoting cultural diversity, the responsibility for funding public interest technology, the balance of responsibilities for sustainability, the significance of the consultation process, and the role of civil society in fighting against misuse of laws. These findings shed light on the complex nature of internet governance and the importance of fostering cultural diversity in the online world. These aspects are tied to achieving quality education, reduced inequalities, sustainable cities and communities, peace, justice, and strong institutions, industry, innovation, and infrastructure, and partnerships for the goals.

Widia Listiawulan

Traveloka, a publicly traded private sector company, prioritizes innovation and technology to enhance tourism while emphasizing sustainable and inclusive growth. They collaborate with communities, governments, and stakeholders, operating in six ASEAN countries with over 45 million active users monthly. Even during the COVID-19 pandemic, Traveloka’s contribution to Indonesia’s GDP in the tourism sector reached 2.7%. They actively partake in policy-making processes and ensure compliance with local regulations, promoting customer safety.

Traveloka’s commitment to sustainability involves working with women and environmental groups, supporting local communities. Their focus on youth involvement and digital literacy empowers young people to contribute to community-building and develop new tourism destinations. Traveloka promotes tourism through local perspectives, valuing the preferences and aspirations of local communities.

They also engage in collaboration, partnering with institutions nationally and internationally to provide digital literacy training and foster inclusivity. Moreover, Traveloka advocates for collaboration and public-private partnerships to address technology regulation concerns effectively. They emphasize responsible technology use, focusing on customer needs and societal benefits. Traveloka’s multifaceted approach showcases their understanding of the relationship between technology, community engagement, and responsible business practices in driving positive change in the tourism sector.

Nima Iyer

Nima Iyer, the founder of Policy, a feminist civic tech organisation based in Kampala, Uganda, expressed concern over the commercialisation and politicisation of online spaces. She noticed a shift in how internet spaces evolved over time, from being free and accessible to becoming controlled by commercial interests and divisive politics. Nima believes that this trend has eroded the idea of a free, open, and publicly-owned internet. She argues that the internet should be a space that is not restricted or controlled by commercial or political interests.

Nima advocates for the creation and governance of public internet spaces that are inclusive and free for everyone to use. She is concerned about the diminishing open internet, which was initially intended to be a space that everyone could use freely. Despite the challenges, Nima believes that there is still an opportunity to create public, inclusive, and free digital spaces.

In addition to her concerns about the commercialisation of online spaces, Nima also observes a divide in conversations between for-profit and non-profit tech communities. She maintains separate Twitter accounts for both communities and notes that they discuss vastly different topics, with the for-profit community heavily focused on revenue generation and customer retention. Nima also explores the influence of profit-driven motivation in the innovation space, using the example of Couchsurfing and Airbnb. She believes that profit-driven corporations can have a negative impact on innovation.

Furthermore, Nima questions how to maintain public interest when innovation is dominated by profit-oriented motivations. She notes that the concept of public interest appears to be overshadowed by the quest for profits in the innovation space. Nima also highlights the importance of differentiating the rules for big tech companies and small start-up companies when creating data protection laws. She points out that it is unfair for small companies in their early stages to have to follow the same dense regulatory protocols as larger, technologically advanced companies.

Bill Thompson, another prominent voice in the analysis, suggests that commercial engagement should be allowed in the public service internet, but on public service terms. He believes that the public service internet should support democracy online and a digital public sphere without traditional commercial capture or monetisation. Thompson criticises the current model of a global timeline used by platforms like Facebook and Twitter, arguing that it is not reflective of real life and is not good for civil society. He suggests the need for a different way of thinking and building internet systems, abandoning certain core assumptions of existing models.

In terms of universal internet access, Nima expresses some sadness about the idea of previously disconnected indigenous communities being connected to the global internet. She questions whether constant access to global information is always beneficial. Nima also calls for deliberate design of public spaces, goods, and platforms, highlighting the need to encourage people to use them rather than defaulting to existing ones due to convenience. She advocates for conversation between government officials and civil society for effective legislation.

Throughout the analysis, there are several other noteworthy observations and insights. The importance of encouraging volunteerism and contribution to open-source software and knowledge bases is discussed. The challenge of public infrastructure funding is reflected upon, with a comparison to essential services like sanitation and water. Finally, there is a call for action on the discussed matters and a focus on the next steps to address the issues raised.

In conclusion, the analysis highlights the concerns and arguments put forward by Nima Iyer and Bill Thompson regarding the commercialisation, politicisation, and profit-driven nature of online spaces and innovation. They advocate for the creation of public, inclusive, and free digital spaces and the differentiation of rules for big tech and small start-up companies. They also emphasise the importance of deliberate design, conversation between government officials and civil society, and addressing the challenges of universal internet access and public infrastructure funding. Overall, their insights contribute to the ongoing discussions and efforts aimed at creating a more accessible, inclusive, and socially responsible digital world.

Session transcript

Nima Iyer:
and future realities. Thank you so much for coming early this morning and for joining us for what I believe will be a very interesting and exciting conversation. So I’ll just briefly talk about why we’re having this conversation and how it came about. First, let me just quickly introduce myself. My name is Nima Iyer and I am the founder of Policy. And Policy is a feminist civic tech organization based in Kampala, Uganda. And when I first founded Policy about six years ago, there was a lot of buzz around civic tech. And I feel even like using the word civic tech feels a bit dated. Like it feels very, you know, 2016, 2017, but it’s the same topics with just different names and the similar ideas. And why this is really interesting to me because I vividly remember the first time I used the internet back in the early 90s and just how much joy it had and how it felt like you could create anything and it felt, you know, it felt free and accessible. And then slowly over time, things changed and platforms became very gated and then you had to be in these closed spaces and sort of the dreams that we had for, you know, this open internet that we could all use was slowly diminishing in some ways. So now we have a lot of platforms that are fueled by commercial interests and, you know, fueled by advertisement or they’re fueled by divisive politics online. And so the question that we are asking here today is what happened to the spaces that would have been publicly owned and publicly governed? What happened to those spaces? Do we still have an opportunity to create those kinds of spaces? Who should be having these conversations about making these spaces? And yeah, that’s kind of why we’re all gathered here and also to get different perspectives of who can be in the room, who’s not in the room, who should be in the room. 
And yeah, also generally to talk about how this term of public good has changed over time but how it’s still very much the same concept and still very important and very relevant. So I hope you will have a great conversation with us and the format we’ll have is that we’ll talk together for about 40 minutes on the panel, 40, 50 minutes, and then we would love to have time to open it up to hear your perspectives and also to get your questions. So I know there’s often, you know, this is not a question but if you do have interesting comments to add that would definitely be welcome. So I would love to start the panel and I will first start off with Mallory and I’ll give a quick introduction to Mallory Knodel. Mallory is CDT’s, that is Center for Democracy and Technology’s Chief Technology Officer. She’s a member of the Internet Architecture Board and the co-chair of the Human Rights Protocol Considerations Research Group. She takes a human rights people-centered approach to technology implementation with a focus on encryption, censorship, and cybersecurity. Mallory, thank you so much for joining us this morning. The first question I wanted to ask you is generally about what do you think about the general concept of public internet infrastructure or public goods? How would you explain it to the people in the room, first of all, and what good is it providing to us or even what’s the potential of what it could provide to us?

Mallory Knodel:
Yeah, thanks so much for inviting me and for having me here to talk about this topic. What I really like about your framing of this panel is it answers the question, internet for what? Because I feel that we often just assume that the internet is inherently a good thing and that’s actually not a bad assumption. I think we all arrive at the same conclusion but I don’t think we introspect or remind ourselves enough what for and what does it provide. I think that where governments and corporations have made the case for why we need to move online and digitize, often those are austerity measures. Often those are ways of replacing infrastructure with digital infrastructure. And I think what we’re talking about in this panel is the opposite. Why do we have the internet? Why do we believe in it so much? Why is it so important? And I can tell you from a while back, I’ve been in this space for a terribly long time, it turns out. And I remember when we didn’t have social media or we couldn’t take for granted that one could simply go on the internet and build oneself a platform or share information. It started for me when I was an activist with Indymedia where we were going around mostly just filming protests or sharing information about protests. And then that wound up online because the Indymedia websites were ostensibly somewhat open. They were kind of the proto web 2.0. You could upload an event or share event details with people, you could then post a blog or we just called it news. We could post news from a protest on the Indymedia website. And then those got published. And so that sort of citizen media became a real precursor to what we see now pervasively in social media where a lot of that content now is on corporate owned private space platforms. Indymedia still exists. Those things are still around. Other things that are in that spirit are like Wikipedia, things where we’re co-generating with one another in aggregate content. 
And I think the other thing that back in that time when I’m really stretching my mind backwards where we were really insistent upon owning the technology and not just owning it, in terms of having a bare metal server somewhere in a co-location center that you could visit it and check in on it, see how it’s doing, install the software you want on it, make sure you have the encryption keys and no one else, et cetera, et cetera, we were also really invested in figuring out how to do it too. So it wasn’t just about the having, it was also about the doing and the making. And I feel like that in and of itself was quite an empowering sort of action because we were actually building cool tools. Like I mentioned, Indymedia sort of invented social media. We were, and we were hacking on it, we were figuring out. And so I think there’s some spirit of that that still happens. I see it everywhere. It’s sort of a yes and, right? It hasn’t been that corporates have sort of replaced this, it’s just that we now have to compete with the corporates that of course act in anti-competitive ways, they are interested in capturing users, they have all kinds of other incentives. And so while some of those traditions are still around, they’re just not as present, they’re not as well used, they’re not as well remembered. And so I think, yes, the internet is itself a public good, but I think all of the things that sort of come out of it when the exercise is itself the end goal, really I think is what communities end up coming up with as what are public goods for them. So I think I’ll stop there and let you introduce the rest of the panel.

Nima Iyer:
Yeah, I just wanted to add on to that in terms of what you said. For example, what groups do you think benefit from these public goods and who is excluded as well? Just as you said, the corporations tend to own them now, so if you could just expand on that.

Mallory Knodel:
Yeah, I think your question about exclusion is a good one and I’m sorry, I didn’t mention it before. I do think that while we like to valorize this sort of hacking and the making and the doing, it is not that inclusive, it does require a lot of time. And so much of the people in this space still today are folks who have free time or that have jobs that align with this sort of work. And so it does, by virtue of that, simply exclude people who maybe don’t have a lot of time to just try to figure out the technology or they don’t have access to those things. So I think we shouldn’t be too overly enamored with this idea that we can just build it and make it. It actually does take a lot of time, it does take a lot of investment. And so I think without a concerted effort to build up the public good internet without real investment in money, again, because we’re not doing this for profit, there is no business model, that it won’t thrive and in a lot of places won’t even exist at all. And because these communities are very much bootstrapping communities, meaning that once they exist, they start to grow.

Nima Iyer:
There’s a request for you to speak a little bit slower.

Mallory Knodel:
Oh, certainly, yeah. Absolutely, yes. So I was just finishing up, but what I was saying is that a lot of the communities that make public good internet technology tend to be self-perpetuating. So that needs to be grounded in existence. And the opposite then is also true. If there is not a strong community of building a public good internet nearby, it’s really difficult to expect one to just happen or expect the local communities there to benefit from a global public good internet when it’s not in their local language, it’s not necessarily serving their needs. So again, I’ll just reiterate the main point here is that it takes effort and investment, support, money, et cetera, to make it happen.

Nima Iyer:
Thanks, Mallory. I think I still have more questions on that topic, but let me get on to some of the other speakers as well, because I definitely am curious in terms of, when you say the investment and the money for a public good that will not make money as well. And yeah, I’m curious, we’ll discuss it later, like where might this money come from and how would it be sustained? But we’ll come back to that. I would like to bring on our next speaker, who is Bill Thompson from the BBC. Yeah, I think he’ll-

Bill Thompson:
Hello.

Nima Iyer:
Hi.

Bill Thompson:
Can I be seen or even heard?

Nima Iyer:
We’re just waiting for your image to come up on the screen, just one. I’ll introduce you in the meantime.

Bill Thompson:
It’s not worth waiting for.

Nima Iyer:
Okay. Oh, whoa. I’ll introduce you in the meantime as that happens. So Bill will be joining us remotely. Bill leads the public value research in BBC research and development. He’s also well known as a technology journalist and advisor to arts and cultural organizations on matters related to digital technology. From January, 2001 to April, 2023, he was also a regular studio expert on the BBC World Service Technology Program, Digital Planet, which is also known as Go Digital and Click. And he still appears regularly as an independent commentator. He’s an adjunct professor at Southampton University and member of the board of the Web Science Trust. So Bill, we’re still waiting for your image to appear. Should we just go ahead?

Bill Thompson:
I would carry on. I’m better on the radio anyway. I know that.

Nima Iyer:
Oh, there you are. All right. So we can see you on the screen now. Welcome. Welcome and thanks for joining us very late your time. We really do appreciate it. So the question that I have for you today, Bill, is how can we build internet technologies that are architected, designed and deployed to meet the specific requirements of public service organizations? So in simpler words, how do we make these public goods and how do we make sure that they work within the current standards of the internet? So what’s the best way we can go around to create these digital public goods?

Bill Thompson:
Oh, the easy questions first then. I think that it’s interesting that you say that we do them in line with current internet standards because that sort of assumes that what we’ve got now is a sufficient base for public service outcomes. And I’d argue just in line with what Mallory has been eloquently saying that the history of the network over the past now 50 years is that we have a set of technology standards that have failed to deliver public service outcomes that have been subverted, that have been taken over by commercial interests intentionally in the governments have sort of given that space to commercial interests, but also the standards, the technical standards, the protocols themselves have proved unable to resist commercial pressure and have not effectively delivered good outcomes. And we see that again and again in the way that the open web has been closed in the way that sort of things we would like to happen in terms of open communications protocols haven’t happened. So part of what we’re looking at at the BBC is in fact to ask whether we need a significant intervention in the underlying technology stack as well as work on regulation and governance. So let’s not just accept the internet as it is, but let’s think about how we might build it or improve it and design it to deliver those outcomes. So bring in the sorts of communities that have been traditionally excluded from internet governance activities, bring the sorts of communities that were definitely not part of the conversation in the 1980s and 1990s when today’s network was emerging and try to have a more structured conversation. As a public service broadcaster, you see that the BBC has spent a hundred years making television and radio work. And it feels to me that as part of our mission, we should be trying to work with others to make the internet work. 
And that means trying to go back to basics, to ask ourselves what a network would look like that could allow us to effectively assert say identity that could protect people from surveillance, that could deliver those public goods. And then on top of that, we could start to build a digital public sphere in which people could feel more fulfilled, could feel happier, could feel protected from some of the bad aspects of the commercial internet if they chose it. And so I’d say the two parts of your question go together quite effectively in that we want to consider what good public service outcomes are. We sort of know what they are in the real world. We sort of know what they are in the broadcasting space that the BBC knows very well. I think we’re quite unclear about what they would be online, particularly when we have many different constituencies of interest. And so we need to have the widest possible coalition of interest, people talking about this, designing the network, but we shouldn’t assume that what we’ve got today is actually the right starting point. Perhaps the radical thing to do is to accept that if we’re to serve democracy and serve digital democracy, we should be willing to ask some very hard questions about the way today’s network runs, the technical protocols, the design standards for our applications, and the governance, and whether that’s the right way to deliver the sort of public service internet that we’re looking for.

Nima Iyer:
Thank you. Thank you so much for that, Bill. I think it’s interesting in terms of the design because a few, I want to say a few weeks ago, there was suddenly a ton of platforms that came about to replace Twitter slash X. And it just felt like over the course of two weeks, there was like 10 new online platforms, but they all looked exactly the same. There was no innovation. It was just copy paste of the same platform. And it just felt so boring. Like, isn’t there another way to design a space where we can share our very brief thoughts? But I think it’s really interesting. And like, yeah, how do we get to get together and design something that looks different from what we currently have? And yeah, it just, it felt so restrictive.

Bill Thompson:
Indeed. And of course, part of that is if you like, the network primitives, the underlying protocols that you have to work with if you want to build a modern social network are themselves quite limited. So, you know, the emergence of ActivityPub was brilliant because it was a different way of thinking about how you might construct an online social space. And it allows you to have different design criteria to work in a different way, to build security into it in a different way. And I think it’s that novelty that is going to be absolutely important to the next generation of the internet, that what we’ve got now doesn’t feel to me like it’s a good starting point. So let’s have the sort of radical conversations that we could have in this room and see where they take us.

Nima Iyer:
Lovely. Thank you so much. All right. I would love to move to our next speaker, Anna Christina Aruelas from UNESCO. Thank you so much for joining us this morning. Anna is a Senior Program Specialist at UNESCO Communications and Information Sectors, Section for Freedom of Expression and Protection of Journalists. She has dedicated her work to the promotion and defense of human rights, freedom of expression and the right to information. Previously, Anna Christina was the Director of Article 19’s Regional Office for Mexico and Central America. Once again, thank you for joining us. The question that I have for you builds upon what Mallory started, talking about who’s included and who’s excluded and the kind of resources that are needed. So I’d love to ask you, how do we ensure that various stakeholders are heard and have the appropriate input so that we can develop these online governance structures that serve everyone?

Anna Christina:
Great, thank you very much. It’s a great conversation. I was just thinking about what Bill was saying about how we think about the internet and how we include different voices within it. That reminded me that my first job at UNESCO was related to making indigenous communities’ content available on the internet, and to the possibility of creating indigenous content, acknowledging that most of the content right now does not relate to most of these communities. I’m Mexican, and Mexico is the 11th country with the most multicultural communities. So I was thinking about how we can actually make sure that diverse cultural content and cultural expressions are well represented on the internet, and that when we navigate the internet we relate to the communities that live in our own countries as much as we relate to communities from other countries. Because, as I say, in my country we sometimes don’t even know about indigenous communities, even though they live right next door. I mention this because it relates to what UNESCO is doing right now and what we intend to promote in this process of defining what the governance of digital platforms should look like, as we face different processes and regulatory arrangements in different parts of the world. In September 2022, UNESCO started a process of consultation on guidelines for the regulation of digital platforms. In the beginning we thought about how the different discussions around regulation should take shape, and tried to create a common understanding that a human rights-based approach should come into place. And we realized that there were three elements we wanted to reinforce.
One is that, as some of you know, UNESCO unanimously endorsed a declaration, the Windhoek+30 Declaration, which says that information is a public good, and that there are three steps to actually make sure that information becomes a shared good for everyone. The first is transparency for internet platforms, the second is empowerment through media and information literacy mechanisms, and the third is media viability. So, with that in mind, we started this discussion recognizing that things were happening in silos. We wanted to maintain the freedom that we all have on the internet; we wanted indigenous communities to be able to engage and to have cultural content on the internet as we have it. But at the same time we saw that the regulation happening around the world was targeting users, and not looking at what companies could do to be more transparent and accountable, or at how to identify the phenomena that should actually be targeted, such as disinformation, hate speech, et cetera.
So through three stages of open consultation, in which we received more than 10,000 comments from many of you, we realized that what we wanted was, first, to safeguard freedom of expression and access to information. And I will say that one thing that will come in the next version is diverse cultural content, because one of the things we aim for in this process is balance. Whatever shape the regulatory arrangement takes, bearing in mind that there is always complementarity between self-regulation, co-regulation, and statutory regulation, the governance system is a group of people, and those people should be identified. This relates to your question: we need to identify the stakeholders who are interested in participating in the governance system, and the governance system has to be able to create balance in the participation of those stakeholders. We need to bear in mind that a governance system must include the voices that are most affected by the different phenomena we are seeing on the internet, which are the very issues we want to address in order to preserve freedom of expression, access to information, and diverse cultural content.
So this is one of the things the UNESCO guidelines are trying to put forward: how we can ensure that governance systems are transparent and accountable, that they promote diverse cultural content, and that they actually have checks and balances in place. Sometimes, even when we’re talking about self-regulatory measures or arrangements, there are no specific checks and balances or accountability mechanisms within them. We want governance systems to be open, inclusive, and accessible to everyone, not only the technical community or the people who know about the internet, but the people who want to engage with the internet and have the possibility of creating their own content. So I will say that we set out six elements of the multi-stakeholder approach within this governance system. The first is acknowledging and identifying the stakeholders, including the companies that should be responsible for compliance with the five principles set out in the guidelines (I can talk about those afterwards). When identifying these companies, regulators should bear in mind, one, the size, two, the market share, and three, the functionality of the platforms. On this last point I want to pause a little, because it has to do with public interest internet technologies. Here the guidelines are clear: when a governance system identifies which companies should be in scope, there should be a clear understanding of the kind of functionality, business model, and service the companies provide. I could read out the exact text, but it’s complicated.
The second element is encouraging inclusive participation, and by that we mean not only the usual suspects. One of the things we received in the various submissions from the consultation, for instance from children aged 13 to 18, was: we want to participate in these discussions, we are not in these discussions, you are always trying to protect us, but how are you enabling us to participate and engage more in the internet and in the decision-making of the governance system? How are you giving us the tools to actually engage in these processes? I think this is an important question, because I don’t see that we have been able, for instance in this forum, to bring together the people who are actually the most important users of the internet right now. The third element is creating balance, which means acknowledging that the different actors within the governance system have different levels of power, so we need to create balance and understand how it should work. Then come ensuring transparency and accountability, as I already said; collaborative decision-making, for which the guidelines put forward guidance on how decision-making should work; and coordinating implementation efforts and evaluation. That means multi-stakeholderism is not just about the moment of releasing any type of regulation or code of conduct: stakeholders need to participate in the implementation process and in the evaluation process as well.
What we’ve heard from the regulatory bodies is: civil society participates a lot in the process, advocating for or against regulation, but once the regulation passes, they leave us alone. They are not with us, and we need to work together, because we are the technical people who are going to implement regulations and who face the different questions, and we don’t have the participation of the different stakeholders in our decision-making process. So I think that’s another important thing, along with the evaluation process, which allows us to identify whether the governance system is working or not. Thank you very much.

Nima Iyer:
Thank you so much, Anna-Christina. Thank you for breaking that down; I think that was really helpful. All right, we’re going to move on to our next speaker, Rachel Judistari from the Wikimedia Foundation. Rachel is the Wikimedia Foundation’s lead public policy specialist for Asia. She has extensive experience engaging key stakeholders through lobbying and advocacy to promote knowledge sharing, innovation and governance, human rights, and youth empowerment. So, what I was thinking about while this discussion was going on is that I have two separate Twitter accounts. On one it’s the people in this room, it’s about open source or non-profit-driven public interest tech, and on my other Twitter it’s purely for-profit, and the conversations these two groups are having do not intersect at any point. The for-profit one is about how you launch a SaaS, how you get the most money from your users per month, how you raise your prices so you can have the most recurring revenue. It’s all about how to extract the most money possible from customers: identifying their pain points, keeping them locked into the platform. A big thing there is how to reduce churn, which is people dropping from your platform. Two very, very different conversations. So I wanted to ask you: when we think about public interest, what does it mean to place these public goods at the heart of innovation or regulation? I feel like the innovation space is really being taken over by, I don’t want to say corporations, but people who want to make profits. I also thought of an example a few weeks ago: Couchsurfing. When I was in college, Couchsurfing was really popular.
In case you don’t know, it was basically a platform where you could go to different countries and stay for free in someone’s house, most likely on their couch. After Airbnb came about, I feel that it killed Couchsurfing, because I actually logged into my account after like ten years, and it’s become like a cesspool. The vibe is gone, you know. And on the other side, it’s all about Airbnb, how much money they can make, and how they’ve taken all the apartments. So that’s a long-winded way of asking: how do we keep the public interest when innovation nowadays is really focused on profit? Over to you, Rachel.

Rachel Judistari:
Thank you, Nima, for giving me the longest questions. One million questions. But yeah, good morning everyone. I just want to summarize what has been shared by the previous speakers: the digital world today is made up mostly of private, for-profit platforms, as Nima also said. And in some cases, the privatization of the internet amplifies the wealth gap, prevents equitable access, and exacerbates the knowledge gap, especially for women, indigenous people, people of color, and other socially oppressed groups. It has also compromised our privacy and intensified polarization and disinformation, which is very detrimental to the protection of human rights and democratic values. So at this juncture, I also want to give you the good news that Wikipedia still exists. We are the only not-for-profit platform maintained by a community of users that is consistently ranked among the top 10 most visited websites. This year alone, about 4.5 billion unique global visitors visit Wikipedia monthly. No one owns Wikipedia, and it’s available for free, without advertising and without selling personal data, while maintaining strong user privacy protections. However, when you mention innovation, this is also something we are consistently doing. We realize that as the world’s largest free online encyclopedia, we play an essential role in training most large language models, which are essential to generative AI. I think we’ve heard the buzzword since day one of IGF, so I don’t want to bore you, but what we are trying to do is address knowledge gaps within our communities. We also understand that while we are a not-for-profit public interest platform, we are far from ideal. The majority of our editors, and we have around 300,000 editors right now, are still from the Global North, and we want to diversify our community of editors and provide tools and access for the most repressed groups.
For example, two years ago we launched the Knowledge Equity Fund, and this year we provided funds to AMAN, Aliansi Masyarakat Adat Nusantara, one of Indonesia’s largest indigenous peoples’ alliances with more than 2 million members, to create more content in Wikimedia projects and to preserve their indigenous cultures and languages. We also have projects to ensure the participation of women, people of color, and queer people through Art+Feminism. By providing more profiles of women on Wikipedia, we hope to shift the conversation around us. The second part of your question was about regulation, and the key principles we need to preserve to ensure the protection of public interest platforms. Well, this is a dicey topic right now, because in the past few years we have seen a surge of very restrictive regulations on content moderation and platforms. However, the creation of these regulations often focuses on big tech and fails to consider the diversity of internet services. Some of these policies prescribe overly broad restrictions with highly punitive consequences, which also affect our decentralized, community-based content moderation. So hopefully, when new regulations are created or current regulations are revised, policymakers can bear in mind the diversity of the internet, especially public interest platforms like Wikipedia, where we use a community-led model to maintain the website, and which have also become the antidote to disinformation. Every day, our editors are fact-checking the more than 50 million articles available on Wikipedia. And we also want to encourage regulation that caters to the internet not solely by mandating automated content detection, but by also helping create opportunities for people’s participation, to avoid creating a digital divide.
Another principle that needs to be protected within internet regulation is definitely meaningful community participation in internet governance. I think Anna-Christina mentioned its importance earlier, and I would like to echo that, because the decentralized content moderation model is one of the ways of preserving democratic values on the internet. We also see the importance of an open and free internet for a diverse and equitable digital environment. We have seen internet shutdowns, service interruptions, and website blocking used as means to hinder Wikipedia volunteers’ collaboration, and hopefully this can be addressed both technically and through regulation. Lastly, regulations should definitely safeguard user privacy and ban intrusive surveillance systems, while also upholding protections for human rights. And finally, because our next speaker will be coming from the private sector, I also want to encourage further collaboration and communication with commercial platforms, which also have a pivotal role in sharing information globally. So thank you, Nima. I hope I answered your questions.

Nima Iyer:
Yes, yes, yes, definitely. Thank you so much for that. What you said also got me thinking about how regulations are often aimed at big tech. I was doing a couple of surveys and interviews a few months ago, looking at Kenya’s data protection. On the surface it looks great that there are these data protection laws, there’s a data protection office, you need to comply with all these different rules. But then I think about small companies that are just starting out, because I used to be a small company that was just starting out, and I couldn’t imagine adding that extra layer of work when you’re a two-person company. You’d have to follow all the same rules as a company with a hundred thousand employees, with no way around it, and it feels extremely unfair that the same rules apply despite such different contexts. So that’s a really good point, thank you. All right, I would love to bring on our last speaker, who we’re very excited about: Widya Listiawulan, VP of Public Policy at Traveloka, who will be joining us virtually. Thank you so much for joining us this morning. Widya has 20 years of experience in public policy. Currently she leads the policy work of Traveloka, the largest travel and lifestyle app in Southeast Asia. Previously she managed public policy at Amazon Web Services, and also worked at the UN. So Widya, we’re just waiting for your image to appear on the screen... Hi, welcome. There you are. It’s so lovely to have you. Thank you so much for joining us this morning.
So yeah, as Rachel already prefaced, we’re very interested to hear from you about how the private sector, or e-commerce businesses, can be a part of this discussion about public interest tech. How can companies ensure that some of the principles of public interest tech continue to live on? If you can just add generally to the conversation we’ve been having, we would really love the private sector perspective. Please go ahead.

Widya Listiawulan:
Thank you, Nima. Hi, everyone. Good morning from Jakarta, Indonesia. Thank you for having us here. My name is Widya, from Traveloka. First of all, perhaps some of you are not really familiar with Traveloka, so just for a second I’d like to share what we are doing in Asia and how far we’ve come in innovating travel and providing convenient services for customers globally. Traveloka started 11 years ago, from metasearch, trying to help people travel conveniently. After 11 years of working with the whole ecosystem, we now operate in six countries in ASEAN. We have more than 45 million monthly active users and more than 2 million partners, and by partners I mean restaurants, hotels, flights, transportation, as well as the whole ecosystem of the tourism sector. And we don’t stop here; we hope to expand and provide more and better services for customers globally. Now, to add to the discussion we had this morning: as a company, we believe that technology and innovation are the key factors in boosting tourism in the world. And we remember that back in COVID times, tourism was one of the industries hit hardest, because people didn’t travel and didn’t want to go outside, and so on. However, Nima and everyone here, we are actually very proud, because this year we published our impact study showing our impact on community and society, mostly during the COVID era. During that time, we actually contributed 2.7% of GDP to the tourism sector in Indonesia, and that’s quite large. And we didn’t work alone, obviously; we worked with the government. We work with communities, Nima, we did a lot of digital literacy work throughout the years, and we aim to have 100,000 participants from the tourism sector in our digital literacy program.
We work with communities across Indonesia mostly: we work with women’s communities, with fishermen, and with environmental communities to make sure there is a sustainability component in tourism, because according to our data there are four trends in tourism after the COVID recovery. Number one is the flexibility that we provide through our innovation and technology. Number two, people tend to travel to nearby areas. Number three, people prefer to travel outdoors. And the last one, people actually prefer to travel to areas that offer sustainability practices. So we focus on making sure that sustainability is at the core of our business. Now, talking about policy: I heard Rachel say that there should be collaboration with the government, and Bill mentioned the openness of government, and we agree with that. Therefore, Traveloka is very active in associations, both locally and regionally. For example, in Indonesia we have an association for e-commerce called IDEA, where we are one of the active members and actually hold a position. We are also the coordinator of the industry task force, a task force assigned by the Ministry of ICT during the G20. In these two associations, or communities if you may say so, we provide input, practices, and lessons learned that we capture on the ground and hear from our customers. Then we provide input to the government and the regulators, with the hope that innovation, regulation, and customers can actually talk together and produce a solution that fits everybody’s needs, that provides safety for our customers, but still complies with local regulation. So I think that’s my opening, Nima. I hope I answered your question, and I’m happy to discuss further. Thanks.

Nima Iyer:
Thank you so much for that, Widya. Thank you. All right.
I want to go back to Mallory with the question I started with at the beginning, and I’m actually going to ask two questions in one because I feel they’re related. Of course we’ve heard from Widya, but I’d love to also get your view: can public interest work in a for-profit model? Yes, no, maybe; and if not, how would you otherwise fund the infrastructure and maintenance required for public interest infrastructure?

Mallory Knodel:
Fair question. I set myself up for this. I just want to correct a slight nuance that I hear a lot: I don’t think what we see in the massive corporate big tech space is innovation. It’s monetization. They’re taking things that people want, that have already existed, and figuring out ways to make a lot of money off of them. We’ve come up with loads of examples already on this panel; I don’t have to restate them. How do you make that profitable? I don’t know that that’s the question. What we’re asking is not how to profit, but how to sustain. How do you make it sustainable? So I think there are a few different ways to look at this. This is not going to be entirely coherent, because it’s not my area of expertise, but one angle, for example, is barrier to entry. Right now it’s really difficult to compete because the barrier to entry is enormously high. We’ve monetized just about everything at this point; we’re now picking up the scraps off the floor. Even the big corporates are suffering. They pretend; it depends on the day. Are they doing awesome, making loads of money for shareholders? Are they really losing a lot of money and in need of your pity? It’s hard to follow. The other issue is that a lot of what we’ve been talking about so far assumes we’re talking about platforms or social media, but there are actually tons of different services out there: email and web hosting, which people and businesses do pay for; financial services, certainly something people pay for. Lots of things could be made in the public interest without profit-seeking, but typically aren’t, because we’re often hyper-focused on what social media is doing and how to make social media profitable.
And the last thing I’ll just say is that often we are critiquing this issue of surveillance and privacy violations in service of the, quote, innovative targeted-ads-based monetization. That is really narrow, and I think it’s starting to break down already. Maybe I’m too eager to see it collapse, but I don’t think the issue is necessarily with advertising itself. There are a lot of ways to do advertising that aren’t targeting. Contextual advertising is great: if I’m already reading an article about something, it’d kind of be great to see ads related to it. There’s no need to, again, try to sell me wool socks in Washington, D.C. in the wintertime. I’m going to buy warm socks when it’s cold in the place I live; I don’t need an ad from Facebook to tell me to do that. So we’re wasting a lot of potential on this idea of targeted ads, and I’d really like to see it go. I don’t think it’s a monetization strategy that’s at all compatible with the public interest, and we don’t need to look at the figures to make that determination: it’s inherently a paradox to surveil and to serve the public interest at the same time. So when we’re coming up with monetization schemes, or sustainability schemes, there needs to be alignment with values, and that really points the way towards what’s possible. It just has to be done with principles in mind.

Nima Iyer:
Thank you so much for that. And I think you’ve made such a great point: a lot of it is definitely not innovation, it’s just monetization. I saw these angry messages from people because there was a website where you could get sheet music for guitar that had existed for like 20 years and was free, and then somebody bought it up and made it a SaaS, and now you have to pay a monthly subscription. And that was praised as a very good business. So, on the other side, interesting. Okay, I’d love to bring Bill back. Hi, Bill.

Bill Thompson:
Hello there. I’m still here.

Nima Iyer:
I wanted to ask: how do you determine responsibility and accountability for delivering the various aspects of a public interest internet? Please go ahead.

Bill Thompson:
I think, I mean, it’s a very broad question and a useful one. To some extent, it’s the responsibility of everyone who wants a public service internet to figure out what they can do to contribute to it. And then we can look at existing institutions and organizations and ask whether they are aligned with the overall interest of the public service internet. So when Widya was talking about commercial engagement: there should be no barrier to commercial engagement with a public service network, as long as it’s done on public service terms and not on commercial terms. And there should be no barrier to any person’s or organization’s engagement, as long as they accept the terms of trade: that what we’re looking for, in supporting democracy online and supporting the idea of a digital public sphere where society can come together, is something sustaining, something with positive attributes, not subject to commercial capture or monetization, as Mallory was saying. In that sense, it’s up to everyone to decide how they can contribute and how they can support it. The issue, as ever, is going to be coming up with underlying principles we can all agree on about how such a space, such a network, should be constructed and run, and then also feeling comfortable with the fact that there will be divergence in how it’s delivered to different cultures, interest groups, societies, and countries. Because one of the problems that has emerged in the last few years is the idea of the global timeline: that Facebook, and Twitter as was, want everyone to see everything, with all of us existing in the same space. That’s not how real life works; it’s not effective for us as human beings, and it’s not effective for civil society. So we need to abandon some of the core assumptions on which the existing systems have been built and look for a different way. I do not have an answer.
I have an organization, the BBC, which has been quite good in the past at figuring out how to do these things in the world of broadcasting. I believe there are enough of us, some of whom are in this room right now, who care enough about the model of an internet that is sustaining and nourishing to want to build it and to have those difficult conversations about what it might look like. Everyone brings their own concerns to the party, and we should try to be much more representative than we have been over the past 30 or 40 years of building today’s network. If we do that, my optimistic view is that we can achieve something really good and valuable: we can outline the design principles for a network that will actually serve the public interest and sustain civic society. As I say, I don’t know what it is yet, but I do think a process for getting there is beginning to emerge, and this conversation is part of that process.

Nima Iyer:
Thank you so much, Bill. I have one last question, and then I will open it up to the floor for discussion. My last question is for Rachel. This whole time we’ve been having this conversation, we’ve been using words like public interest and public good, so we’re inherently assuming that it’s good. And as Mallory said at the start: is the internet always good? I was having a conversation with somebody about how they brought the internet to some really previously disconnected indigenous communities, and I almost felt sad. I mean, access is great and everything, but sometimes it’s like: what if we just lived in a world where we didn’t have to know what was happening in American politics all the time? You know, what if? So let me close my questions by asking: what unintended consequences could public interest technologies have? What could go wrong, and how might we anticipate or mitigate it?

Rachel Judistari:
Well, that’s a very interesting question. There are some risks that can affect public interest platforms, especially in the process of knowledge creation itself. As you know, about two thirds of global majority countries consume information from the internet, but less than 15% of people from the Global South actively create knowledge online, and most of the content is in English. So one of the possible risks we might face is endangering indigenous and less-resourced languages. As I shared with you, this has been picked up as one of the foundation’s key priorities: knowledge equity is one of our main goals in achieving our 2030 vision. In doing so, we are working with our community of editors and with partners like the UN and governments on digital literacy, so that more people can contribute to the creation of knowledge. Second of all, the internet is only a reflection of what’s happening in society. It’s unrealistic to want a free and accessible internet while, in reality, civic space is shrinking. And sometimes information on the internet can be used to punish its creators for what they put online; we see some of these cases happening on public interest platforms. So regulations that criminalize dissenting voices definitely need to be addressed, while we also strengthen community resources to ensure the holistic security of contributors to the internet. And ultimately, while we may think the internet is everything, and if I don’t have internet access for five minutes I’ll definitely get an anxiety attack, it’s literally not everything. There are a lot of people who do not have internet access, and the digital divide is still one of the main issues in the Global South.
So although this is not specifically the risk that arises from the public interest platforms, but I feel that the public interest platforms should also contribute into discussions on how to addressing this access inequity. So yeah, I think I’ll stop there. And hopefully, other people can also have more questions on this. Thank you.

Nima Iyer:
Thank you so much, Rachel. I just had one funny example to share of what I consider public interest tech. My mom is from Tanzania, and a few years ago they started digitizing their government services. It's quite a centralized country; before, you'd have to go to the capital where the office is, hand over the papers, the person's gone to lunch, the person's not around, the person has been sick for two months. Then they digitized the service. But what that basically meant was that most people couldn't fill in the forms online. So people would still go to the office, but now there was a little kiosk outside where a man with a computer would fill it in for you. And I thought, yes, it's a cost-cutting measure and all these other things, but also: have we thought about whether people have the access and know-how to fill in online forms? So it's interesting to think about how you bring people into the design of these things and how you think about those issues. But yeah, I've really enjoyed this conversation, and I've hogged most of the questions, so I would love to open it to the floor if there are any questions for our five amazing panelists, or if you have stories to share.

Audience:
Hi, I'm Ziske from the Wikimedia Foundation. Thank you so much for a really wonderful and engaging discussion. I would love to hear other people's answers to one of the questions that you asked, Nima. I think it was about funding; I forget exactly what the phrasing was, so maybe you'll do me the honor of re-asking it. But I'd really like to know Bill's perspective in particular, because you're also in R&D, on how you see funding working. Thank you so much. I'll just reframe the question for you, Bill. The question was: how do you fund the infrastructure and maintenance required for a public interest internet?

Nima Iyer:
Please go ahead.

Bill Thompson:
It's a good question. Obviously, I speak from the BBC in the UK, so my obvious answer is that you make everybody pay for it by forcing them by law to give you money to cover the public infrastructure, through the television license that we have in the UK. It's a somewhat frivolous answer, but it also has some serious intent behind it, which is that you don't get good public infrastructure for free. The danger of having state-funded media is, of course, that you then have state-controlled media, and that's a very dangerous thing to have, so you want to avoid it. But it feels to me that a society that wants an internet that can deliver public value should be able to invest in it and not require it to be self-sustaining on a commercial model. So I would much rather we looked for a design and a set of functions that we wanted, that we believed was good for society (we were using the term good fairly loosely earlier, so I'll carry on using it loosely), and then found a way of paying for it that does not require compromise. And to my mind, if what you're covering is the core internet infrastructure, just moving the bits around, and you can get some guarantees from governments not to interfere too much, then a degree of state funding is acceptable, because what you're paying for is the underlying network, in the way that you pay for roads, or for water services and things like that. You're paying for the infrastructure of a society in order to allow civic society to flourish on top of it. So I'd much rather have that sort of model than rely on, say, philanthropy, or rely on private companies being able to do something commercial there while staying good, because I think that sort of thing goes wrong. So I'm reasonably firm in my own mind that paying for public infrastructure is a reasonable thing to ask a society to do.
The problem is we don't yet know what we'd want to pay for or, indeed, how much it would cost. I hope that's helped.

Nima Iyer:
Thank you so much for that, Bill. We have… Please go ahead, either or.

Audience:
Hi, I'm Ivan Sigal from Global Voices. I have a question about mobile technology. In much of the global majority, internet access is through telecoms, and as we talk about the internet, we should not neglect that question. Given that in many global majority countries Facebook is de facto the internet, given its position as an access point and often its free offerings, I'm curious how we reconcile the desire for a public interest internet in many global majority countries with the fact that most of the energy, effort, and resources are coming through telecoms, which is a different technology architecture. Thank you.

Nima Iyer:
Do you have someone you would like that question to go to in particular?

Audience:
Not really, though. Maybe UNESCO, that would be an interesting one.

Nima Iyer:
Okay, let’s go with that. Anna-Christina.

Anna Christina:
Well, it's a difficult question, but I was actually looking at a person who just stepped up, because they are a very good example of what you're mentioning. In Mexico, Central America, and Latin America, they started creating community networks with the community. They built those networks, and they actually worked with UNESCO in a process of creating public policy, the one I was referring to, to promote indigenous expression and cultural content throughout the whole process of creating community networks, and then engaging indigenous communities in broadcasting, but also in generating internet content and having the possibility to create media and information literacy processes. But what I think, and what I've learned, is that we also need to learn, as Bill just mentioned, from other experiences that have faced much the same struggle, acknowledging that we have differentiated approaches when it comes to the internet, its scale, the way it functions, et cetera. I don't have a specific answer for how sustainability would come into place, but if we are talking about multi-stakeholder, a word that comes up all the time, we also need to take into consideration that funding public interest technology is the responsibility of all the actors that participate and engage in this process. So yes, there is a part of the responsibility on governments, and I totally agree with Bill that we cannot rely on governments alone, because then it can become co-opted, but there is a part on governments, a part on the private sector, and a part on the users and the people who engage in this, and so we need to define and create balances for where this funding comes from.
I urge you to talk to Redes, because I really think they have come up with a good way of dealing with this, acknowledging that the scale might not be enormous, but the change would be very, very good. Yeah, so that would be my take.

Mallory Knodel:
Yeah, I'll add on. I might actually connect it a step back, because I think this ties the work you're doing at UNESCO to help with content moderation to Wikipedia's woes around having to meet standards that are really designed for big tech. Just connecting those dots: for Wikimedia, and maybe other public interest platforms, that element of regulation really isn't helpful. In fact, I think it can be really counterproductive, because ultimately all of this effort, even if it's multi-stakeholder, is going into making big corporate platforms better, and maybe they're just not good, and maybe we shouldn't be using them because they're not awesome, and if we had more platforms, more choice, we would eventually just migrate off of them. But why it's important to consider these larger platforms is that they will end up being the only thing in place in a lot of places that don't have a robust local economy or the ability to create alternatives. So we can't neglect really big multinational corporate tech platforms, because they are big and a lot of people use them, and a lot of places don't have the ability to completely modify the market or the landscape they're working in. So I just wanted to acknowledge that it's both, right? It's not either-or; we have to do all the things. But I do want to lift up the fact that for a lot of this regulation, I feel like there should be something called the Wikipedia test: if your regulation is making it hard for Wikipedia, your regulation is not great. If anything, we should be asking a lot of questions of you all: how do you do content moderation of disinformation at scale? We know you're doing it. Teach us how, and everyone else should be learning from it. That's not currently what's happening, and I have a lot of sympathy for that, because the two are not equal, right?
And so that nuance gets lost. And ultimately, if a platform just isn't working and there's a better one out there, thinking about social media and ActivityPub-based platforms like Mastodon and others, let's let the bad one die. Let's use the better one that has better content moderation and fosters community better. But that's a long-term solution, and it's going to be unevenly applied around the world, so.

Rachel Judistari:
Thank you so much for advocating for us. I think I just need to copy-paste what you're saying. On top of that, what we are really trying to say to policymakers is to have exceptions for public interest platforms like Wikipedia. But internally we also understand that one of the major hindrances is a lack of understanding of decentralized, community-based content moderation, especially in global majority countries. For example, in Asia, internet regulation, including privacy protection, is considerably new. So the default response is a fear-based approach to, quote-unquote, control it so that it doesn't create public chaos, or whatever the assumption may be. So I think one of our main responsibilities as a public interest platform is to educate lawmakers, together with our community, about the diversity of the internet ecosystem and about alternative content moderation tactics, because there are different models. So, yeah, hopefully we can have more allies to do that and to ensure that communities are actively participating in that effort.

Nima Iyer:
Wonderful, thank you all so much. Does that answer your question? All right, we have a question back there.

Audience:
Thank you very much. A really great panel; thanks for sharing all the information. This is Nazmul Ahsan from Bangladesh. I work with ActionAid Bangladesh, particularly with young people. I'm very interested in how young people are being engaged on the internet and in cyberspaces. The internet has not only globalized the world, it has also centralized the whole process. This is a big challenge; I think it is an anti-democratic kind of movement and process, and somehow we are all caught up in it. In our context, we see a huge digital divide, particularly between young men and young women at the grassroots. They don't have access to the infrastructure, and at the same time to the content. You already mentioned languages and other aspects of the content. And we also see stigma: sometimes using the internet is stigmatized by patriarchal attitudes in society. When young girls and women use the internet, society doesn't see it as a good thing; it's seen as going in a different direction or challenging social norms, and we have seen these kinds of things. My interest, since I work with young people, is how we can actually bring more grassroots young people, and young women, into this kind of digital literacy network, involve them in content generation, and help them become active internet activists for social good. This would be really helpful for me. I think it goes directly to Wikipedia first, but UNESCO could also respond. Thank you very much.

Rachel Judistari:
Thank you so much for your question. Engagement of young people has become one of our focuses these days because, as I shared earlier, the majority of our editors are from the global north and come from a specific age group that is not young. So what we are currently trying to do is work with communities of editors to provide training, not only on how to use Wikimedia projects, but on overall digital literacy that is contextual and culturally appropriate to the needs of different young people, because, as we all know, young people are also a diverse constituency. One example is our project in Cambodia, where we provide capacity building and tools for young indigenous people to create content, including video, for preserving their culture. In addition, we are working collaboratively with governments. For example, in Indonesia we are collaborating with the Ministry of IT on Cyber Kreasi, a national digital literacy education program for young people in schools, but also in communities with various needs. So it's definitely a work in progress, and we are hoping for more collaborations with youth-led organizations to make sure that we stay relevant. I hope that answers your question, and thank you for your questions.

Nima Iyer:
So Vidya would also like to give a response to this question. Vidya, please go ahead.

Widia Listiawulan:
Thank you, Nima. Thank you for the question. For us at Traveloka, as a travel tech company, youth is at the core of our ecosystem. We divide it into two things when we talk about young people. Number one, our talent pool: most of them are young people. We recruit the best talent in Indonesia, for the Indonesian market and for other areas as well. But on top of that, your question was how young people can work together and create an impact for their own communities. In Indonesia, if you are familiar with the geography, we have more than 500 villages all over the country in this program. Working with the Ministry of Tourism and Creative Economy, we not only provide digital literacy for young people in those areas, but we empower and encourage them to help their communities build new tourism destinations. Using our platform, we promote those destinations using their language, their analysis, and their assessment of the area. So in a way, we empower them to be proactive in identifying the potential of their tourism destination and to voice their assessment of their neighborhood. That's how we empower young people across Indonesia, and not only in Indonesia: we work with young people in Vietnam, with RMIT, the Royal Melbourne Institute of Technology in Vietnam, to provide digital literacy for young people, for the disability community, and for women-led businesses. I hope I answered your question. Thank you.

Nima Iyer:
Thank you so much, Widia. All right, I'm going to start wrapping up the panel. This has obviously been a very great conversation, but I feel like I'm leaving with more questions than I came with. Some of those questions are: how do you design public spaces or public goods? I feel like we're a bit locked in with the designs we have at present. How do we get out of that? How do we think about what platforms could look like? Who do you engage in those discussions? How do you build it, and how do you make people come? Just building it doesn't mean people will come. Mallory, I know you said we would just move, but I remember when the WhatsApp-Signal thing happened: we pretended to move, and then a lot of us didn't really move; we just went back to WhatsApp. I think we get stuck using platforms because we think, I've already used it for 15 years; I know it sucks, but, you know. But I would love to see a new form of design, and my question is how we design that. The other question I have is how we have these conversations with lawmakers. As civil society, I can see that we are annoying to governments: we first approach governments and say we want data privacy, and then we come back and say, but not like that; not for those people, but for these. I can also see from the government's point of view that it's difficult to legislate for different people. So how do we have these conversations in a constructive way? How do we encourage people to build public goods in a world that is very money- and monetization-driven? How do we get back to that culture of volunteering and maintaining open source?
Yeah, there was conversation about encouraging young people, but in general, how do we get more people to give their knowledge to Wikipedia? Why is it that group of people? It's amazing work that they do in giving that information, but what is it about that group that makes them contribute versus other groups? And then the big question: how do we fund the infrastructure? I really like Bill's point about thinking of it like a public service, like sanitation or water. We need media; we need spaces as a public service, physical and digital. And then my biggest question is where we take the conversation from here. It's nice to have this conversation, but what's next? How do we actually answer these questions? We only have about five minutes left, so I would love to hear from each speaker, in really just one minute, a parting message on what's next. I'll go in the same order that we started, so let's start with Mallory. One minute.

Mallory Knodel:
All right, challenge accepted. About leaving or moving: I don't think it's always about whether we have successfully moved off something, or whether we've killed it. It's the threat that we can leave that's really important. So while maybe we're all still using WhatsApp, now we're maybe using both, or at least it started a conversation. And it proves that users are paying attention. Who knew people were reading WhatsApp's terms of service so closely that they basically had a red line in their mind, and when the sentence changed, they were furious about it? That was a really impressive moment to me, because it demonstrated that people care, and that's just as important as people abandoning a platform and moving to something else. To that point, I think we have to stop thinking, and I've said this already once, that we're replacing anything. We're actually just moving into this incredibly complicated landscape where we're downloading apps all the time and trying out new things. At least a lot of us are, right? There's more and more, and nothing's really dying anymore. So I think what's going to be important moving forward is integration. This is not exactly interoperability, but I think it does implicate things Bill was saying about standards. If your app or your new thing integrates with all the other ones, that's actually an asset and a feature, and users are going to come to expect it. That's really good for competition, and it's really good for end users. So if you're building something new, or if you're an ossified old social media platform that's been around for too long, and you don't start building those integration features, people are not going to like you as much.

Nima Iyer:
Thank you so much. I’ll stay with the people here. So Anna-Christina, if you could go next. One minute, please.

Anna Christina:
Yeah, I was thinking of the government question, because within the consultation I have heard different views from every side. Well, governments are not all the same, civil society is not all the same, and companies are not all the same, so everyone has their own opinion and their own comment. But my takeaway from this whole process of consultation on the guidelines is that the most important part is to build, to maintain, and to make the process resilient. Because what happens, as I said, is that we're very used to thinking of regulation as the ultimate goal, for good and for bad, and we don't see, and it's difficult even for us to understand, what our role is in the process of implementation, reviewing, monitoring, evaluation, et cetera. And I have to say this: within the consultation, the question we asked about what a multi-stakeholder role looks like at all stages of the regulatory process was the least-answered question, even though multi-stakeholder, along with the GNI, is the most-used word in this forum. So I think it is very important to identify, when we're dealing with governments, what our role is also after regulation happens: in dealing with the people engaged in the regulatory process, and afterwards in the evaluation of these regulations. If not, the regulatory cycle has a gap. And, just to end: you can have the best law, you can have the best standard, but in an authoritarian regime it can be misused. And the only way to target that, to fight it, is with resilience, with capacity building, with a strong civil society that is advocating for change. So I think this is important.

Nima Iyer:
Thank you so much. Rachel, one minute.

Rachel Judistari:
I think it's really important to get back to basics, to really promote the internet commons for the public interest, and to remind policymakers about the diversity of the internet ecosystem and provide exceptions to protect public interest platforms. At the same time, public interest platforms, including Wikipedia, have to ensure the diversity of the communities that contribute to the creation of knowledge, which will be positive for our sustainability and for the diversity of the internet itself. And I'll stop because time is up. Thank you.

Nima Iyer:
Can we have two more minutes? Oh, it’s like really time is up. OK, two minutes. Bill, please go ahead. One minute.

Bill Thompson:
Very, very briefly, I think we need to accept that the model of an internet based on the technical decisions made by a bunch of overly optimistic, mostly men, in the 70s, 80s, and 90s, based in North America and Europe, has failed us. And we need a different approach. The answer is about co-creation. It’s about bringing communities of interest together to decide what’s important to them and to work on that basis to look at what we actually really need from the internet to build and sustain civic society. And so what I look forward to is actually revisiting some of those core assumptions and working together. Thank you.

Nima Iyer:
Thank you so much, Bill. And lastly, Vidya, please go ahead. One minute.

Widia Listiawulan:
All right, it will be quick. Again, like the last part of what Bill said: collaboration, public-private partnership. Regulation needs to open a discussion for the private sector to raise concerns, and for users, for society, to raise concerns. On the other hand, companies have the responsibility to focus on customer needs, not only providing a service but delivering what is important for society, and on digital literacy that not only focuses on how people use technology but also ensures that people know their rights when they use it. So it's twofold: regulation and the corporate sector working together with the ecosystem, and people knowing their rights. People need to be educated in how to use technology in a very responsible way. Thank you, Nima. Thank you.

Nima Iyer:
Thank you so much, Vidya. That’s such a good note to end on. Thank you so much to all our amazing panelists. Thank you to everyone for joining us. This has been a really great conversation. And I wish you a wonderful rest of the IGF. Thank you so much.

Bill Thompson:
Thanks for having us.

Audience:
Thank you. Thank you very much.

Bill Thompson
Speech speed: 206 words per minute | Speech length: 1992 words | Speech time: 580 secs

Anna Christina
Speech speed: 155 words per minute | Speech length: 2262 words | Speech time: 876 secs

Audience
Speech speed: 186 words per minute | Speech length: 624 words | Speech time: 201 secs

Mallory Knodel
Speech speed: 203 words per minute | Speech length: 2665 words | Speech time: 788 secs

Nima Iyer
Speech speed: 197 words per minute | Speech length: 3797 words | Speech time: 1159 secs

Rachel Judistari
Speech speed: 121 words per minute | Speech length: 1770 words | Speech time: 876 secs

Widia Listiawulan
Speech speed: 157 words per minute | Speech length: 1286 words | Speech time: 492 secs

Defence against the DarkWeb Arts: Youth Perspective | IGF 2023 WS #72

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Speaker

The dark web and internet have various purposes beyond criminal activities, making them tools rather than the enemy. Machine learning, AI, improved encryption, blockchain, and advanced data analysis can assist in combating dark web crimes. Focus should be on technology and software companies rather than user identification. Mitigating the abuse of power in the fight against crime involves forming specialized cybercrime agencies and collaborating with academia. Mandatory cybersecurity education is necessary for all involved in handling data. Whistleblowing mechanisms should be encouraged. Law enforcement and politicians often lack understanding of the internet’s working, necessitating increased awareness. Diverse hiring can aid in understanding software misuse. A software registry and due diligence are crucial in identifying and preventing software misuse. These measures contribute to creating a safer online environment.

Maria Lipińska

During the discussion, the potential positive use cases of the dark web were explored, shedding light on how it might impact the future of online privacy and security. The speakers acknowledged the dark web’s negative reputation but emphasised that there are aspects of it that can be harnessed for beneficial purposes.

One of the main points raised was that the dark web enables anonymous communication and the exchange of information. This can be advantageous for individuals living in repressive regimes or facing persecution, allowing them to freely express themselves and access uncensored content. Moreover, whistleblowers and journalists can use the dark web to protect their sources and share sensitive information securely.

Furthermore, the dark web can facilitate the sale of legal goods and services. For example, it serves as a platform for anonymous online marketplaces where individuals can purchase legal products, such as books or art, without leaving a digital trail. The anonymity provided by the dark web can also empower activists and dissidents in countries where their activities might be monitored or suppressed.

In terms of online privacy and security, the dark web can act as a catalyst for innovation. The constant battle between criminals and law enforcement agencies pushes the development of advanced encryption techniques and cybersecurity measures. As a result, lessons learned from tackling the challenges presented by the dark web can be applied to enhance overall online privacy and security.

It is worth noting that the positives discussed should not overshadow the illegal and unethical activities that are prevalent on the dark web. Criminal activity, such as drug trafficking and illegal marketplaces, makes up a significant portion of dark web traffic. However, it is essential to consider the potential positive aspects and explore how they can be used responsibly.

In conclusion, the potential positive use cases of the dark web were evaluated, highlighting its impact on online privacy and security. While acknowledging its negative reputation, the discussion shed light on the anonymity and freedom of expression it offers individuals living in repressive regimes. Additionally, the dark web’s role in facilitating legal transactions and driving innovation in cybersecurity was recognized. Nonetheless, it is crucial to address the illegal activities on the dark web and ensure that any exploration of its positive side is done responsibly and ethically.

Izaan Khan

The analysis suggests that the dark web can offer benefits to certain individuals by providing anonymisation services. This can be particularly useful for individuals who require a high level of privacy and restricted access to a tightly knit community. Anonymity on the dark web can be critical for use cases such as journalists researching or communicating under extreme conditions, as well as for organising protests. Overall, the sentiment towards the dark web is positive, emphasising its potential advantages.

Furthermore, the analysis acknowledges that law enforcement agencies have achieved successful outcomes in cases involving cybercrimes on the dark web, citing notable examples like Silk Road and AlphaBay. However, it argues that eradicating privacy-enhancing technology, such as the dark web, is not necessary to combat cybercrime effectively. Instead, alternative strategies such as open source intelligence, infiltration, and hacking techniques can be employed to counter cybercrime without compromising privacy rights. The sentiment towards this argument is neutral.

The report also highlights the importance of people’s ability to protect their online privacy using technologies like the dark web. It advocates for a principles-based approach that balances the need for anonymity against other legitimate uses of anonymising technologies. This sentiment is positive, reflecting the belief that individuals should have the right to safeguard their privacy online.

Regarding regulation, the analysis suggests that regulations should be defined within the context of cybercrime. Existing regulations, including basic criminal law, already exist. However, it is noted that enforcement often involves a constant arms race between authorities and cybercriminals. The sentiment towards regulation is neutral, emphasising the need for a careful and nuanced approach.

It is also highlighted that technological solutions alone are inadequate in combating cybercrime. The dynamic nature of cybercrime requires innovative solutions that go beyond technology. Additionally, adopting more pragmatic approaches to regulation, such as controlling information flows and data retention, is seen as potentially beneficial.

The importance of trust in institutions within the complex regulatory environment is emphasised. It is believed that trust is crucial for navigating the challenges posed by emerging technologies and evolving regulatory frameworks.

The analysis further emphasises the significance of international cooperation and capacity building in effectively combating cybercrime. It notes that a lack of understanding of technology can hinder policy outcomes and enforcement efforts. Existing international cooperation organisations, such as Europol and Interpol, are highlighted as essential in the fight against cybercrime.

Additionally, the analysis raises the concern that tension between governments and encryption services will intensify. Governments may seek to undermine encryption for backdoor access, potentially restricting the privacy and security provided by these services. This development is viewed negatively, suggesting potential conflicts between privacy protections and government surveillance.

Furthermore, the report anticipates changes in the landscape of internet usage due to technological advancements and government regulations. It suggests that the emergence of new anonymisation services and government attempts to undermine encryption could reshape the way people use the internet.

In conclusion, the analysis highlights the benefits of the dark web in providing anonymisation services to individuals who require heightened privacy. It emphasises that eradicating privacy-enhancing technology is not necessary to combat cybercrime effectively. Instead, a principles-based approach that balances anonymity and other legitimate uses of technology is advocated. The report also emphasises the need for pragmatic regulation, international cooperation, and trust in institutions to address the challenges posed by evolving technology and cybercrime.

Pedro de Perdigão Lana

The discussion revolves around various aspects of internet governance, the dark web, and intellectual property. One argument highlights the importance of intellectual property in the context of internet governance. It is stated that intellectual property was among the first and most important discussions among civil society, the private sector, and the government in relation to internet governance. However, another argument challenges the dark web’s notorious reputation for intellectual property infringement. It argues that the portrayal of the dark web as a hub for criminal activity, particularly intellectual property crimes, can be misleading. The argument suggests that the dark web and deep web are not exclusively used for illegal activities, but are also repositories for various types of files, including copyrighted content.

Furthermore, the discussion explores the negative consequences of fear-driven policies and rigid copyright systems that have emerged due to concerns about the dark web. It is argued that several legal reforms have been implemented based on the premise that piracy, including intellectual property infringement, is a widespread problem. These fear-driven policies may have inadvertently created obstacles to the very objectives they aim to promote.

The need for purposeful and careful regulation of the dark web is emphasized. While acknowledging the potential dangers associated with the dark web, the argument highlights that regulating it should take into account its positive uses, such as communication in environments where freedom of expression is restricted. It is suggested that regulation should be purposeful, avoiding undue restrictions on legitimate uses and considering the underlying reasons for regulation.

Additionally, the discussion examines the ethics and inequalities associated with academic documentation. It is noted that some academic ecosystems are unjust towards poorer countries, and publicly funded scientific publications charge high fees for access. This situation raises questions about the ethics of sharing academic documentation and the role of copyright in academia.

Furthermore, there is criticism directed towards the science publishing industry for charging exorbitant access fees despite being sustained by public funding. The argument highlights that the industry charges thousands of dollars for access to scientific publications, which creates barriers to knowledge dissemination and exacerbates economic inequalities.

In conclusion, the discussion revolves around the complexities and nuances of internet governance, the dark web, and intellectual property. It emphasizes the need for careful consideration when regulating the dark web, taking into account its positive uses. The discussion also raises important questions about the ethics and inequalities associated with academic documentation, as well as the practices of the science publishing industry. By critically examining these issues, it is hoped that a more balanced and effective approach to governance and regulation can be achieved.

Pavel Zoneff

The Tor software is a powerful tool used by millions of individuals worldwide to securely access the internet while protecting their right to privacy and information. It aids users in circumventing censorship and browsing the internet freely without facing restrictions imposed by governments or other entities.

It is important to note that only a small fraction of the traffic on the Tor network is directed to onion services, which are confined exclusively to the Tor network. This suggests that while censorship circumvention is a significant use case for Tor, it is not its sole purpose.

However, there is notable criticism levelled against privacy-preserving technologies such as Tor, Signal, and encryption platforms. Some individuals or entities misinterpret encryption as being associated with nefarious intentions, leading to unjust criticisms of these technologies. This misconception can result in policymakers lacking a comprehensive understanding of how privacy-preserving technology works.

As a consequence, governing laws are sometimes enacted that roll back international standards related to human rights, freedom of expression, and access to information. This situation is concerning, as it indicates a lack of education and awareness among policymakers about the importance of privacy and its relationship to fundamental human rights.

To counter this negative perception, it is crucial for proponents of privacy-preserving technology to engage in robust advocacy efforts. There is a need to raise awareness and educate policymakers about the benefits and importance of these technologies, as well as to dispel any misconceptions or unfounded fears surrounding their usage. By doing so, it may be possible to protect and preserve fundamental human rights in the digital age.

Overall, the Tor software plays a pivotal role in safeguarding internet users’ privacy and right to information. However, the criticism and lack of understanding around privacy-preserving technologies highlight the need for continued efforts to advocate for their importance and counter any unfounded narratives surrounding their usage.

Alina Ustinova

In this session, Alina Ustinova delves into the controversial topic of the ‘dark web’ and aims to shed light on its implications, fears, and potential benefits. Ustinova, as the president of the Centre for Global IT Cooperation and the organiser of the Russian IGF and Youth Russian IGF, is well-positioned to explore this subject and provide valuable insights.

In her exploration, Ustinova acknowledges the widespread misunderstanding around the term ‘dark web’ and its incorrect association with negative activities. She seeks to clarify the misconceptions that people have by differentiating the dark web from the deep web, emphasizing their distinct characteristics. By doing so, she hopes to dispel the misconceptions and provide a clearer understanding of the dark web.

Ustinova also emphasises that the dark web potentially holds benefits beyond its negative connotations. She aims to uncover these potential benefits and challenges the prevailing notion that the dark web is purely a hub of illicit activities. By exploring the possibilities, Ustinova opens the door to a more nuanced understanding of the dark web and its potential uses.

On a different note, research indicates that young people, mainly millennials, exhibit poor cybersecurity habits. It is observed that many youths are drawn to the dark web out of fascination with forbidden things and as a form of protest against the system. This insight highlights the complex motivations behind young people’s engagement with the dark web, pointing to a deeper societal issue that needs to be addressed.

Additionally, the rise of Generation Alpha, growing up in the digital age, has led to their inherent reliance on the internet. Ustinova highlights that Generation Alpha, exposed to internet devices at a young age, considers the internet as a beneficial tool that is essential for various aspects of life. This has significant implications for education and the development of digital literacy skills.

In conclusion, Ustinova’s exploration of the dark web sheds light on its implications, fears, and potential benefits. By clarifying misconceptions and differentiating the dark web from the deep web, she offers a more comprehensive understanding of this often-misunderstood realm of the internet. The insights gained from Ustinova’s analysis also highlight the complex motivations behind young people’s engagement with the dark web and underline the importance of digital literacy skills in the modern age.

Abraham Fiifi Selby

The dark web, a part of the internet accessed through special software, presents a range of risks and benefits for users. It contains websites that are not indexed by traditional search engines, making it a haven for illegal activities such as the sale of drugs, stolen data, and hacking tools. However, it is important to note that not all aspects of the dark web are associated with criminal activity.

One argument suggests that the dark web can be dangerous for ordinary users unless it is used properly, owing to specific risks such as exposure to malware, scams, and illegal activities. The speaker also claims that dark web tools are not encrypted and can be monitored by third parties, potentially compromising user privacy and security. Nevertheless, another viewpoint asserts that the dark web can also serve legal and meaningful purposes: it is not exclusively a venue for criminal activity and, when used wisely, can actually protect users. It is therefore essential for ordinary users to learn how to navigate the dark web safely in order to avoid these risks and realise the potential benefits it offers.

Regulating the dark web is seen as a complex task for law enforcement agencies. While there is a need to investigate and prosecute organizations that engage in criminal activities on the dark web, developing effective regulations is challenging. The dark web operates on an anonymous network that is difficult to trace, requiring specialized strategies and tools to combat illegal activities. However, governments and law enforcement agencies are taking steps towards regulating the dark web. For instance, the FBI shut down the Silk Road, one of the largest dark web marketplaces, in 2013. In 2020, the UK government announced plans to introduce new legislation aimed at giving law enforcement more powers to investigate and prosecute dark web crimes.

Education and awareness are highlighted as key elements in safely utilizing the dark web. As users are often unfamiliar with how to navigate the dark web safely, there is a need to provide education and raise awareness about the risks and best practices. Understanding the nature of the dark web is crucial in order to detect and mitigate potential threats. Creating awareness about the dark web can help users make informed decisions and protect themselves from the dangers associated with it.

Despite its association with criminal activities, the dark web can also be utilized for good purposes. People can leverage the anonymity and privacy provided by the dark web to conduct research and share information about sensitive topics without fear of censorship or surveillance. This highlights the potential for the dark web as a platform for positive contributions to society.

In conclusion, the dark web presents a complex landscape with both risks and benefits. It is important for users to understand the dangers involved and learn how to navigate it safely. Regulating the dark web is a challenging task, but necessary to combat criminal activities. Education and awareness play an important role in safely utilizing the dark web, while also recognizing its potential for positive usage. By promoting responsible usage and implementing effective regulations, society can better harness the potential benefits of the dark web while minimizing its risks.

Audience

The analysis highlights several important points raised by the speakers. One speaker discussed the challenge of identifying cyber crime and used the analogy of a thief breaking into a house to illustrate the complexity involved. The speaker’s sentiment towards this challenge was negative, indicating the difficulty of understanding cyber crime.

Another speaker emphasized the need for consistency in global internet usage and regulation. They stressed the importance of establishing a common ground for internet governance and highlighted the different approaches taken by countries like China and Russia. The speaker’s sentiment towards this topic was positive, suggesting the necessity of a consistent approach.

A concern was expressed for marginalized communities in the context of internet governance. The speaker acknowledged that these communities often lag behind in internet access and usage, potentially exacerbating existing inequalities. The sentiment expressed towards this issue was one of concern, demonstrating an understanding of the potential marginalization.

Furthermore, research findings revealed that millennials tend to have poorer cyber security habits compared to older generations. This observation underscores the need for increased awareness and education on cyber security, particularly targeting younger individuals.

Lastly, there was a discussion on the future landscape of dark web activities, focusing on the perspective of youth. Although specific supporting facts were not provided, the analysis indicates an interest in understanding the potential evolution of dark web activities among young people.

In summary, the analysis provides valuable insights into cyber security, internet regulation, and their impact on marginalized communities. It underscores the challenges in identifying cyber crime, the importance of consistent global internet governance, and the need for improved cyber security habits among younger generations. Additionally, it recognizes concerns for marginalized communities and the urgent need for inclusive and equitable internet governance. The analysis also raises questions about the future landscape of dark web activities, particularly from a youth perspective.

Miloš Jovanović

The internet is a vast space that contains a wealth of resources, some of which are not easily accessible through conventional search engines. These resources are found in the deep web, which is the part of the internet that is unindexed by search engines like Google. The deep web contains content that is not readily available to the general public, making it a mysterious and intriguing realm.
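
The "unindexed resources" idea can be made concrete: one common way a page stays out of a search index is a robots.txt rule that well-behaved crawlers honour. The snippet below is a minimal illustration using Python's standard library; the domain and paths are invented for the example:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: anything under /private/ is off-limits to
# crawlers, so a well-behaved search engine never indexes it -- one of
# the ways content ends up in the unindexed "deep web".
robots_txt = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(robots_txt)

print(rp.can_fetch("Googlebot", "https://example.org/page.html"))        # True: crawlable
print(rp.can_fetch("Googlebot", "https://example.org/private/data.pdf")) # False: stays unindexed
```

Note that this is only one slice of the deep web; login-protected pages, databases, and resources reachable only via IP address are unindexed for other reasons.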

However, there is often confusion between the deep web and the dark web. The dark web is a subset of the deep web, specifically associated with negative and illegal activities. It is a place where individuals can engage in illicit behavior, such as buying weapons or drugs. It is crucial to differentiate between the two and not solely associate them with negativity.

The deep web and the dark web share the common characteristic of housing unindexed resources on the internet. The dark web, however, is just a portion of the overall deep web. It is essential to clarify this distinction to avoid misunderstanding.

While the dark web and the deep web are often viewed as havens of illegal activities, it is crucial to note that illegal behaviors and cybercrime are not exclusive to these parts of the internet. Negative behaviors and cybercrime can occur on publicly available resources like social networks as well. Therefore, it is essential to approach discussions of online security and criminality with a broader perspective that considers the entire internet landscape.

Protecting one’s metadata is also a significant concern for individuals who value privacy and security. Techniques like using the Tor Browser or the onion protocol can help hide metadata, ensuring greater anonymity online.
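
The layered ("onion") encryption behind such tools can be sketched in a few lines. The toy XOR cipher below only illustrates the peel-one-layer-per-relay idea; real onion routing, as in Tor, uses authenticated AES circuits, and the relay names and keys here are invented for the example:

```python
import hashlib

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR the data with a SHA-256-derived keystream.
    # XOR is its own inverse, so the same call adds or removes a layer.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Three hypothetical relays; each knows only its own key.
relay_keys = [b"entry-key", b"middle-key", b"exit-key"]
message = b"meet at the usual place"

# The client wraps the message once per relay (exit layer innermost),
# so no single relay can link the sender to the plaintext.
packet = message
for key in reversed(relay_keys):
    packet = xor_layer(packet, key)

# Each relay peels exactly one layer as the packet travels the circuit.
for key in relay_keys:
    packet = xor_layer(packet, key)

print(packet == message)  # True: plaintext appears only after the last layer
```

The design point mirrored here is that the entry relay learns who is sending but not what, while the exit relay learns what is sent but not by whom.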

The responsibility for controlling internet information channels lies with national governments. Geopolitical circumstances have resulted in a fragmentation process on the internet, with different countries seeking control over internet governance. Protecting infrastructure and citizens from cybercrime necessitates traffic control and monitoring.

Investing in technological sovereignty is crucial for nations to have control over their internet space. This involves developing strong agencies and institutions to protect national interests and enacting strict laws regarding data storage and usage. By doing so, countries can ensure they have the means to safeguard their digital infrastructure and maintain control in the ever-evolving technological landscape.

Regulating the dark web or the deep web exclusively is not feasible since they are integral parts of the entire network. Instead, efforts should be focused on regulating the internet as a whole to combat illegal activities effectively.

While technology such as TOR and VPNs can provide some level of data protection, they may not guarantee absolute privacy. It is essential for users to understand the limitations of these technologies and exercise caution when sharing sensitive information online.

Accessing services that are not available in one’s country may violate local laws. It is important for individuals to be aware of and respect the legal frameworks in their respective jurisdictions to avoid engaging in illegal activities.

The fight against cybercrime requires a multi-stakeholder approach, involving collaboration between security and intelligence agencies, governments, and other relevant parties. Current alliances and systems like Europol have made significant contributions but may not be sufficient to effectively combat cybercrime. Enhancing cooperation and communication among different parties is crucial to solving and understanding the complexities of cybercrime cases.

Overall, the deep web and the dark web are intriguing aspects of the internet that warrant further investigation and understanding. While they are often associated with negative and illegal activities, it is important to approach discussions with a balanced perspective that considers the wider internet landscape. By promoting awareness, improving regulation, and fostering international collaboration, we can work towards a safer and more secure online environment.

Session transcript

Alina Ustinova:
Hello, everyone, we’re going to start like now. So I hope everyone who wants to join, join us, and we’ll have a wonderful discussion. So my name is Alina. I am the president of the Center for Global IT Cooperation, which is the organizer of the Russian IGF and Youth Russian IGF. So and today we’ll discuss a wonderful topic, dark web. I will make a remark that we will call everything we discuss the dark web. But it’s not like the term that is usually used to describe correctly what we’re going to talk about, but because it’s common knowledge, we will speak about it. So what we’re going to discuss today, and we’ll try to understand why people basically are afraid of the dark web, and why maybe the dark web is not so threatening as we think, actually. And in the end, we’ll try to answer the one question. So is it a cybercrime haven or just another layer of the web, where our society can also find benefits? So we’re going to start with the basics, because people sometimes, they kind of mess with the terminology and think that dark web is actually something that only contains bad things. And they mess it up with the thing called deep web. So our first speaker, Miloš. So can you please tell what is the difference between deep web and dark web?

Miloš Jovanović:
So thank you very much. Thanks for organizing this panel. It’s very interesting topic, because we should discuss about the dark web, deep web, and all challenges on the internet. But speaking about deep web, we should say that deep web is a part of internet, which is an index, speaking about conventional search engines like Google, like, you know, who we index and so on and so on. So if you understand how internet works, we see some resources on the internet, which is available, we can easily search. like on Google and so on and so on. But on another part, there are a lot of resources which is not available easily. So we should understand architecture of internet. We have domain name system, we have IP addresses and so on and so on. So if we see internet as a global network and I don’t want to go into fragmentation processes and so on and so on. If you look internet as a global available network in every part of power world, we should understand that there are a lot of resources which are available only via IP addresses. So there are some different aspects how we can control this, what’s behind this, what can we do accessing these resources. And this is really interesting. So speaking about dark web and deep web in our, I would say community, there are a lot of confusions and misunderstanding what is dark web, what is deep web. Many people would say that dark web and deep web are same concepts and speaking about terminology and so on and so on. But I would agree with this. So speaking about dark web, many people think that when we speak about dark web, we generally speak about some bad behaviors, buying some weapons, drugs, and so on and so on. But on another hand, we should underline that they are very similar approach when we speak about dark web and deep web that I would say that this is all about unindexed sources on the internet. 
So we can do bad things regularly when we visit some other publicly available resources, speaking about Facebook, about social networks, about all other resources which we use every day. So it’s not only when we speak about dark web that this is a bad behavior, speaking about some illegal things and so on and so on. So we should understand how internet works. And I would conclude that if we compare dark web and deep web, that it’s all about unindexed resources on the internet.

Alina Ustinova:
Okay, thank you very much for your answer. And of course, yes, the main concern of all the people is that the dark web brings only cybercrime and nothing good. So our next speaker, Fiifi, will be joining us online. Do you hear us? Yes, I can hear you. Okay, hi Fiifi. So my question to you: in terms of cybersecurity, why is the dark web considered dangerous for an ordinary user? Okay, all right.

Abraham Fiifi Selby:
Thank you for this. And as my colleague explained the dark web, it could be, we have the good side and the bad side of it, not only for the criminal aspect of it. But let me address this. Whenever we are using dark web tools, as you’re saying in terms of cybersecurity to the ordinary user, dark web can be very dangerous because we see that users are not familiar with how to use them safely. So how to use the dark webs can be also dangerous. If you are able to use it safely because people use it and they use it for criminal activities and other stuff, but there are some specific risks that it can be seen and involved when we are talking about dark web. One, it could be the malware aspect of it, that is aspect of the cybersecurity whereby there are some distributed malwares on the internet which contain the viruses that throw down around somewhere people also use. We also have scams because using the dark web, there’s a lot of scam because people use it for illegal activities. So we have people that they are being scammed like phishing and scam attacks, phishing emails, and some other phishing as that people may be targeted to the ordinary user. And also let’s see the illegal activity. As my colleague was saying that people would be using it for some sexual aspect and child sexual abuse and materials and other stuff over the internet. And when you also move forward, we also see the aspects of some people who don’t have the capacity to learn how they can use it. This is in a sense that people who are using dark webs in conjunction using the dark web, using with the criminals online at the same time. So they may not be able to see how they can protect themselves. And these are the various aspect of it that it is very cyber-concerned because the ability for you to use it very well, wisely, can also help you protect you. But also, you might also know that dark web tools are not encrypted and they are not protected unlike the normal applications as well. 
Although people use it for normal deep web applications for general purposes, but they can also use for criminal activities. So the ability to use and also ability to protect yourself. And why is it for the ordinary user is that it is not highly secure and encrypted, whereby the dark web also, like you can be monitored anytime maybe with a third party organization or for criminal offenses investigation. So these are the various concerns that we’ve been raising for the ordinary user because they are very prone to other threats on the internet when they are using dark web because they think that they wanna browse private or they want to access information private. Thank you very much.

Alina Ustinova:
Thank you for input. So does anyone want to add? Miloš want to add some stuff?

Miloš Jovanović:
I just wanna clarify, when we speak about deep web, dark web and so on, we should understand that dark web is just a part of deep web. So speaking about deep web, as I mentioned before, it’s just a part of overall network of internet and it’s majority of course, but when we speak about deep web, when we speak about deep web, all unindexed resources on the internet. So when we speak about illegal things and cyber crime and everything, which is actually trending topic today, we should understand that it’s not only exclusively on the dark web or deep web or public resources, it’s available everywhere. So when we speak about dark web, we should understand that there are many techniques which gives you availability to hide your metadata and so on and so on, because Tor browser, onions protocol and the different techniques speaking about how to hide your, I would say metadata. Yeah, that’s actually metadata. So the main question speaking about privacy, about security is how to secure your own metadata in the concept of security. So we should not make misconfusion and misunderstanding. Dark web is a just part of deep web as we can consider all the resources on the internet, which are not indexed on search engines as a part of deep web, yeah. So, yes, this is the main concern and the main confusion.

Alina Ustinova:
You’re right. So I want to ask Izan, why actually like, we know that people think that dark web only criminal activity, like only people that use databases and steal them and load them there and use it like for some kind of bad behavior. But actually, is there something good in the dark web? What benefits can it bring to the people?

Izaan Khan:
Thanks, Alina. That’s a very interesting question. I feel that the dark web, basically just a bunch of hidden services that are made available, you know, through tools like Tor and so on, can provide benefits that any other piece of technology really that has those anonymizing features or pseudo-anonymizing features, shall we say, would provide to an individual who needs them. And there are many legitimate use cases for something like the dark web to have hidden services or services that only a few people from a tightly knit community can access. And those could be potentially journalists, could be individuals who are researching or communicating in situations of extreme censorship or duress, for example. There are numerous websites, for example, The New York Times, that have mirror websites on the dark web to allow individuals to be able to access that when that content is usually going to be censored from the clear net, as we call it. You know, digital activists as well have many, many different use cases for accessing these sorts of services and communicating. Organization of protest sometimes also happens on these dark net platforms. So I think there is a lot of interesting use cases for this kind of technology. But over and above that, I also feel that in general, people should have the ability to protect their privacy online and they should be able to use whatever services are at their disposal. And this is one of them. And of course, this gives rise to legitimate concerns on the other side of the coin, which we often see by law enforcement, which is that, well, how are we going to be able to tackle cybercrime online? Is all hope lost if we have totally anonymized services? And I would say no. We don’t necessarily have to throw out the baby with the bathwater, as it’s so called, and get rid of every single privacy enhancing technology simply because it makes law enforcement difficult. 
In fact, there have been many, many successful cases of law enforcement that have taken place in dark net contexts. We saw the shutdown of the Silk Road, and the second iteration of that, and other darknet markets like AlphaBay, where there were drugs and other sort of paraphernalia that were stolen being sold online. We’ve seen other tools by law enforcement, such as open source intelligence or infiltration to get rid of CSAM material on the darknet as well and apprehend those offenders. We’ve also seen basically other hacking techniques, like if there’s a misconfigured server on the darknet, they can take full advantage of that. They can run as well, their own middle relays and exit nodes and sniff content over there as well. So I think there are many different techniques that they can use to fight cybercrime online without having to get rid of that technology in the first place. As I mentioned, it’s always an arms race. If you have a removal of this technology, there’s going to be another technology to come and replace that. What we need is a principles-based approach to how we balance these issues of anonymity and other legitimate use cases for this kind of anonymizing technology, like free expression and so on. So that’s to serve my two cents on this.

Alina Ustinova:
Yes, thank you, Izan. And actually yesterday, we had a wonderful talk with the TOR project, yes. With the actually, I hope he will join us maybe today and express the position of the TOR project because they said like interesting statistics that only in services, it’s like where actually dark web pages exist, it’s like only one to 3% of the whole TOR browser traffic, which means that people access TOR browser specifically not to like do something bad, not to do some kind of criminal activity or to access even the dark web pages, but just to use it as a VPN service, for example, because it encrypts your surfing traffic, yes, surfing the web. So, but technology develops. We see that lots of things appear now and maybe with. technology it can affect both dark web tools and not only in the dark web itself. So my question to Gabriela will be is how do you think how actual emerging technologies in the future affect the whole dark web? Yes thank you so much and I just want to ratify the importance of what

Speaker:
was said before: the dark web, and the internet in general, is a tool; it’s not the enemy. What we are fighting here is the criminal organizations, so crime on the internet and on the dark web is the problem. When it comes to emerging technologies, they can of course play a significant role in the fight against these crimes on the darknet. First of all, and very popular everywhere if you think of RegTech, there are machine learning and AI. These technologies can be employed, for example, to identify patterns and anomalies in darknet activities, assisting law enforcement agencies in tracking illegal activities and identifying potential threats. Just think of what’s happening in the banking sector right now: you have KYC and anti-money-laundering software that helps understand different money laundering techniques and patterns, and that could eventually be reused to detect criminal behavior and anomalies on the dark web as well. Then you have improved encryption and the cybersecurity aspect: developing advanced encryption techniques and cybersecurity measures that can help protect sensitive data and prevent unauthorized access to darknet platforms. Here, of course, many different hacking attacks and attempts are very common. There is a sort of race between darknet marketplaces right now: when Silk Road 2.0, Hydra and many others were shut down, a competition between the remaining darknet marketplaces took place, and it’s still ongoing. They’re trying to undermine each other, and this is something that can be thought about in the future when considering solutions in terms of improved encryption and cybersecurity.
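The pattern-and-anomaly detection just described can be sketched with a toy example. This is purely illustrative: real RegTech and AML systems use far richer features and trained models, and the data and threshold here are invented for the sketch. A robust modified z-score, based on the median and median absolute deviation, flags a transaction that deviates sharply from typical behaviour:

```python
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts far from the median using a modified z-score.

    A stand-in for the ML/AI pattern detection described above:
    score each event against 'normal' behaviour and flag outliers.
    The median/MAD pair is robust, so one huge outlier does not
    mask itself by inflating the mean and standard deviation.
    """
    median = statistics.median(amounts)
    mad = statistics.median(abs(a - median) for a in amounts)
    if mad == 0:          # all values identical: nothing to flag
        return []
    return [a for a in amounts if 0.6745 * abs(a - median) / mad > threshold]

# Mostly routine payments, with one conspicuous outlier.
payments = [40, 55, 38, 60, 47, 52, 44, 10_000]
print(flag_anomalies(payments))  # -> [10000]
```

The same idea scales up when the single amount is replaced by a feature vector (counterparties, timing, volume) and the fixed threshold by a learned model.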
Then you have blockchain and distributed ledger technology, which again is very popular; it’s not a new technology, but it can be used to create transparent and tamper-proof records, making it more challenging for criminals to conduct transactions on the darknet without leaving digital footprints. Then there is advanced data analysis, which is very popular on the commercial internet, if you will. Here we’re talking about leveraging big data analytics, which could help law enforcement agencies and other actors uncover hidden connections, track financial flows and identify individuals involved in criminal activities on the darknet. And of course, collaboration tools are the most important ones today: enhanced communication and collaboration tools can improve coordination among everyone involved in the work to combat darknet criminal networks. In Europe, we had the NIS Directive a few years ago, which revolutionized, if you will, the overall understanding of cybercrime: European countries had to open cybercrime units within their organizations, which is very important, and this is exactly what I would advocate for every single country to do. And again, I would say: do not restrict personal opinions online, because we’re talking here about civil liberties, and other speakers spoke about the importance of having the option to be private on the internet. The focus on biometric identification of users is, in my opinion, the wrong direction. I’m seeing several countries trying to implement that type of tooling, but the identification of users is, in my opinion, the wrong focus. We should maybe focus on the technology, on the software companies, on the applications, and on how they are used.
We should assess, perhaps through a technical due diligence of the software, and try to stop or modify the use of the software, rather than focus on the users. Thank you, Gabriela.
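The tamper-evidence that Gabriela attributes to distributed ledgers can be shown with a minimal hash chain. This is a toy sketch, not any deployed ledger's design: each block's hash covers the previous block's hash, so altering an early record invalidates every later link, which is what makes footprint-free transactions hard.

```python
import hashlib
import json

def chain(records):
    """Link records so that altering any one changes all later hashes."""
    blocks, prev = [], "0" * 64
    for rec in records:
        payload = prev + json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        blocks.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return blocks

def verify(blocks):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for b in blocks:
        payload = prev + json.dumps(b["record"], sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != expected:
            return False
        prev = b["hash"]
    return True

ledger = chain([{"from": "A", "to": "B", "amount": 5},
                {"from": "B", "to": "C", "amount": 3}])
print(verify(ledger))                  # True
ledger[0]["record"]["amount"] = 500    # tamper with an early record
print(verify(ledger))                  # False
```

Real blockchains add consensus and proof-of-work on top, but the digital footprint comes from exactly this chaining of hashes.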

Alina Ustinova:
I think that’s a wonderful input to our discussion. There is one topic we haven’t covered yet regarding the dark web, and that is the protection of intellectual property, because, as we discussed yesterday with the Tor Project, many people use Tor Browser for downloading, for pirating. That actually burdens the whole system, the whole network, because these are very, very big files to download. I want to ask our online speaker, Pedro, a question: how does the usage of the dark web and dark web tools affect the protection of intellectual property?

Pedro de Perdigão Lana:
Hi, everyone. First of all, I would like to greet everyone from Brazil. I hope everyone has arrived well this early morning after those amazing IGF nights. But to get back to the issue here, I would especially like to build on Izaan’s comments. I like to use intellectual property as an example for every theme I work on; especially when talking about fragmentation and sovereignty, intellectual property sits right in the middle of it. When we’re talking about internet governance, intellectual property is at the basis of it, right? It was among the first and most important discussions that we had amongst civil society, the private sector and governments. Nowadays it isn’t so much in the highlights, but it still comes up from time to time as something that reignites the debate. And on this occasion, I would like to use intellectual property infringement as a good example of how the deep web and darknets can be argumentatively weaponized, and how they are weaponized rather erroneously, to present the idea that they are something purely threatening, purely menacing, even when the argument is actually absolutely wrong. After all, when you search for intellectual property and the deep web, or more specifically what we’re here calling the dark web or darknets, you will tend to think that this is a place created for criminal intent and used only for that. You will see a lot of law firms talking about intellectual property crimes happening in these places. And of course, there are pages that facilitate the sharing of illegally copied material: you can find books and other visual content that are under heavy enforcement on the superficial levels of the internet, especially with those wonderful tools the entertainment industry has today to automatically search for and take down content, and not always illegal content; legal content also gets taken down by this sort of tool.
And also, some types of severe intellectual property infringement, such as the commercialization of trade secrets, really are especially problematic here. But the dangers and infringements are actually much the same as those we find on the surface net, and they are even less concerning, considering the sheer number of people that have access to normal websites compared with those that have access to deep web content repositories. You must remember that copyright infringement is not a problem when just a dozen people are doing it, but when a multitude of people affect, through a known market failure, the possibility of existence of a certain business, or the possibility of revenue for a creator. More than that, dark web infringements are actually presented as a paradigmatic example of the alleged dangers of online copyright piracy. People and organizations use the threat of the dark web to amplify and extend the fear around sharing content online in general, which just reinforces even more how these policies are modeled towards rigid and aggressive systems of copyright. These legal frameworks arguably became obstacles to the very objectives they promote, because the information society reforms were based on exactly that idea: that piracy was a pandemic, and that we needed to reframe a bit of the internet’s potential so it could have a positive impact while avoiding the bigger evil of intellectual property infringement. So the point I would like to raise here, and discuss a little more later, is how we need to be careful about how these ideas around the dark web and darknets are presented and used, so we don’t end up trading something that is somewhat problematic for something that is systematically and severely problematic. So, back to you.

Alina Ustinova:
Yes, thank you for your input. And actually I have wonderful news, because I just found out that someone from the Tor Project is joining us online. I think it will be great to hear the perspective of one of the most famous browsers that is usually connected with the dark web. So, can we give the word to Pavel Zoneff online so he can speak? No? Okay, I will go to the tech team, but before I go, maybe the speakers can have a discussion. Let’s discuss the dark web: can we actually regulate it? Can we actually control this kind of thing on the web? There are lots of policies, lots of laws created to govern the internet, which is why we’re all here, but can the parts of the web that we call the dark web actually be governed? I ask this question to every speaker, so whoever wants to start. Okay, Fiifi, yes.

Abraham Fiifi Selby:
Okay, all right. So, looking at regulating the dark web: it’s quite a complex task, and very challenging. But there are some ways we can regulate it. We must ensure that there is law enforcement, and law enforcement agencies must be able to investigate and prosecute organizations that use the dark web for criminal activities, because people can also use the dark web to do good things, good research, as my colleagues were saying. There is also the technology aspect: governments and companies might develop tools to disrupt criminal dark web activities, such as blocking access and other measures. And one thing we are all doing is education and awareness. Now, let me give a small scenario so we can understand. Despite the challenges, there have been some approaches over the years, and from research I have done personally, there are cases where regulation of the dark web was attempted. In 2013, the FBI shut down Silk Road, then the largest dark web marketplace. And in 2020, the UK government announced plans to introduce new legislation that would give law enforcement more powers to investigate and prosecute dark web crimes. So we can also try to provide much more education and awareness, and put technology and policies behind it, in terms of developing new encryption algorithms that can really help with regulation. It’s a collaborative effort; one entity cannot do it alone. We are all involved, and we must also be safe in our own use of online tools and resources, because the dark web, as we are saying, is not a tool only for crimes; you can also use it for good. And one thing I want to say is that in this life, you cannot fight darkness unless you’ve been in darkness before.
So it’s very important for us to learn how this dark web is used, so that we can make policy and regulation around it, as Pedro was saying. This is my take on it: there are a few regulations we can make, but it’s a collaborative effort between institutions, us as individuals, and stakeholders. Thank you very much.

Miloš Jovanović:
So when we speak about control of our information channels, traffic flows and so on, we should think about how to control our whole internet, I would say, speaking about sovereignty. If we speak about the fragmentation processes which are definitely occurring right now in these geopolitical circumstances, and we know what’s happening right now in Europe, in the Middle East, everywhere across the globe, we see different technological zones. And when we speak about sovereignty, which is a really important topic in China, in Russia, in some countries in Europe, in America as well, we should understand that controlling information channels and traffic flows, I mean fighting cybercrime and protecting your own infrastructure and your own citizens, is a job for national governments, I would say. So when we speak about the internet as a global network, we should understand that it is a global network, but control of every part is in the hands of local governments. This is what China proposed, what Russia proposed, and what other countries proposed. And it is a good example, because when we speak about data, about potential investigations, about controlling and monitoring traffic and so on, we see fragmentation processes, and it’s all about technological sovereignty. I’ll give what I think is a good example. When you visit China, you are not allowed to use some Western services. In Russia, for example, there are strict laws which require that all data of Russian citizens be stored on the territory of the Russian Federation. When you go to Europe or America, there is a huge discussion about Huawei equipment, ZTE, Chinese manufacturing and so on, speaking about the tech industry. So, moving back, I would like to make a parallel with the internet itself.
It’s all about hardware, software, and protocols. If we want to maintain and control our national internet space and our information channels, it’s really important that every country invest in its technological sovereignty at the national level. Only if we have strong, powerful institutions and forces, speaking about agencies, monitoring institutions and so on, will we be able to fight cybercrime and protect our own interests. Okay, thank you.

Alina Ustinova:
So you mean that protection of citizens is the job of the local government? Absolutely. Okay, does anyone want to add on regulation? Yes, Izaan, thank you.

Izaan Khan:
I think that’s an interesting question, primarily because we need to define what exactly we mean by regulation, because there are already regulations that exist; it’s basic criminal law: don’t do crime. If you’re talking about regulation in the sense of whether there is a technological way to control what people do online, well, as I mentioned, it’s just an arms race. The government can try as hard as it can to take down these unlawful services and activities, and individuals will try to find ways around that. That’s always going to be a cat and mouse game. But in terms of making the lives of law enforcement slightly easier, one interesting example, essentially a type of forum shopping, is that law enforcement officials in many parts of the world are not actually allowed to commit crime in the course of fighting crime. Specifically, in the case of the dark web, in order to gain the trust of individuals who are accused of, or suspected of, trading in CSAM, you cannot try to gain their trust by yourself sending CSAM to them; except, unless I’m mistaken, the last I read, in the case of Australia. So a lot of international cooperation centers around the Australian government, because Australian officials are able to go in, knowing they have to gain that trust in order to detect and identify these individuals, and they are allowed to do so because the judges and the law are set up in such a way that there is this sort of carve-out or exemption. I think we need to think about solutions like that: solutions that don’t necessarily involve cracking the technological nut of what Tor and I2P and all of these other services provide, but that enable a more pragmatic approach towards tackling cybercrime in these anonymous contexts.
So that’s, I think, my two cents on the problem. When we talk about regulation, we need to talk about what exactly we’re trying to regulate and what mode of regulation we’re using. There is, sure, the law, but there are also ways we can regulate through controlling information flows, data retention and things like that, so we need to recognize the limitations of each mode of regulation. If you say, don’t do crime, somebody could still go ahead and do crime. So you need to figure out: is there a technological way we can deal with this? If there isn’t, is there a way we can make our own lives easier in tackling cybercrime when we venture out into that space? So I feel there are different approaches and different layers to this problem of regulation that need to be considered. It’s not really a simple problem, but I have trust in our institutions to handle it in a balanced manner, basically.

Alina Ustinova:
Yeah. Thank you, Izaan. That’s actually a very important point you made.

Speaker:
So, Gabriela, you want to add as well? Yes, just a few sentences to add to what was already said. The way I see it, my profession is to talk about risks and try to mitigate different risks and not go into unwanted territories, let’s put it like this. So when it comes to the dark web and everything concerning it, I would say the risk in this situation is the abuse of power in the name of fighting crime. This is something we should be aware of, because data is the new oil today. And it’s not something that happened from yesterday to today; for a decade already we have seen this type of activity going on. So what I would suggest is really to focus on cybercrime agencies, on dark web teams that would actually work together with academia and with the different actors in the field. Europol, for example, has a dark web team right now that is focusing exactly on this type of illegal activity. Then I would say it’s also very important to report illegal activities: monkey sees, monkey reports, in a very private manner, I would say, because whistleblowers are never welcomed in any country. So this is something that should be normalized, if you will. And then, of course, awareness and, as I see it, mandatory cybersecurity education for government people, for everyone involved in data, whether it’s patient data in a hospital, an administrator in the hospital, the very first person you come and give your ID to; that person needs to go through a cybersecurity lesson and educational workshop. So these are the three points I feel really strongly about, and they are the basis of where we can actually give our input, because through an agency we can always advocate for certain things, for certain techniques; we can learn from each other, and we can help.
And of course, whoever is already part of the sector can definitely support the educational lessons. These are actually very simple lessons, because you don’t need to be, let’s put it that way, a technical genius to understand certain things. But everyone in this room knows that cybersecurity breaches are generally connected to human error; it all comes down to humans. We are trusting, like sheep, I don’t know: you click on a link, you do something, you don’t think that person has malicious intentions, it looks like a lady or a young boy, but this is phishing. There are many, many different situations that regular citizens should be aware of, because we’re living in a digital era. This is not something so special anymore; it just needs to be normalized and put into a system that makes sense for everyone. Everyone should be part of it, and this type of topic should be, again, normalized and standardized in order to tackle these issues in the future. Okay, thank you. And I was told that Pavel can now add something from the Tor Project. So please, Pavel, we want to hear your input and your view on the whole dark web theme.

Pavel Zoneff:
Thanks for giving me the space to say a few things. Ultimately, I think we can mirror a lot of the sentiment that has already been expressed today, in the sense that there are probably many more positive use cases when it comes to the use of Tor software, whether that’s accessing our network and onion sites or some of the other censorship circumvention tools that we provide. Ultimately, we’re helping millions of daily users to securely access the internet, to access their right to information and privacy, and to safeguard their human rights online, whether that is day-to-day online activity and protecting your right to say no to non-consensual tracking, or, in certain parts of the world, even being able to access news. I know onion services have been discussed, and what we always point out is the scale, and the statistic that was referenced earlier. If we look at the traffic on the Tor network, only about one percent, a little over one percent, is traffic directed solely to onion services, meaning the sites that are completely confined to the Tor network. That is an extremely small number, especially if you want to open up the conversation to potential illicit uses of the Tor network; it is such a small fraction that it is really hard to account for much nefarious activity carried out online. What we’re seeing is that our network is primarily used for censorship circumvention and for maintaining your right to privacy. And the fun fact about onion services is that the most popular onion services seem to be Facebook and news sites, so we’re really seeing that these provide a valuable service for people’s ability to partake in democratic day-to-day activities. Yes, thank you very much. I think it was important to hear that, because people usually associate that sort of browser with illicit services, and I know, because we held a session during our Youth Russian IGF.
We had a session about the dark web, and one of our speakers said that you should not consider every user who opens Tor Browser to be intending to do something wrong, to be a criminal, just for opening a browser; it’s just another browser, and that’s all. So I think we can move on. Yes, you want to add? Oh, just to add to that: I think this is a very important point that you make, because this is not just about Tor Browser. This goes back to many other technologies, such as encryption. I actually had a panel about this just before yours, but the truth is that there is a very powerful force right now that is trying to malign the use of privacy-preserving technology, whatever it is, whether it’s Tor or Signal or any other platform that utilizes encryption, to make the case that this constitutes some sort of nefarious intent. And that is a very slippery slope. This is something that we all, as a community, need to be outspoken against. I don’t remember who exactly said that regulation is needed and that they have huge trust in lawmakers, but I don’t think that people across the globe have the same trust in lawmakers, especially as we’re seeing that a lot of policymakers lack the fundamental understanding of how privacy-preserving technology actually works, to the extent that laws are now being made that roll back a lot of international standards as they pertain to human rights, access to information and freedom of expression. So we all need to be vigilant and ensure that we continue to have a right to privacy and encryption. Thank you very much.

Alina Ustinova:
Yes, it was a very important point to make. And so we move to questions, and I give the floor to our online moderator, Maria. We have some questions in the chat. Yes, I can see you. Hello. Good night. We have one question from Habib Corrida, if I read it correctly.

Maria Lipińska:
Can you share your insights on the potential positive use cases of the dark web beyond its negative reputation, and how it might impact the future of online privacy and security? That’s the first question. And we want to invite the audience and the online participants to share other questions as well, of course. Thank you.

Alina Ustinova:
Okay, so who would like to answer the question? So the question was, yes, okay, yes, Pedro.

Pedro de Perdigão Lana:
I think I can take that, because I would also like to comment on something that you were debating before. Oh yeah, you want to go back? No, it’s fine. Actually, Gabriela and Izaan said exactly what I was going to say, and after that, Pavel commented about cryptography. The thing here is, when we’re talking about regulating the dark web and darknets, whether it’s possible to regulate them or not is not the real question, not the most important question; more precisely, it is why we need to regulate and what we need to regulate, because pointing out more precisely what the problem is, and how to tackle it without affecting the rest of the technology, is really the scope we have here. On the question of how this might impact the future of online privacy and security: if we talk about regulating the deep web and the darknets without being careful about these ideas of more legislation, more regulation, stronger institutions, this may end up becoming a problem for those who care about and use these spaces for good purposes, such as communication in places where freedom of expression is restricted, and so on. Yes, thank you for your input. And going back to the question, so the

Alina Ustinova:
question was what other positive things the dark web can bring to the ordinary user. I think we actually covered a lot of that, including that it basically gives you a private connection and, most of the time, freedom of expression, because you can access web services that are not available, for example, in your country. Or, actually, going back to the conversation with the Tor Project: you want to add? If you access some services, you know,

Miloš Jovanović:
which are not available in your country, as you mentioned, you may be violating the laws of your country, so there is a circle: how do we use the technology? And here I come back to fragmentation processes. If you want to regulate something, I think it’s not possible to regulate exclusively the dark web or the deep web, because the dark web is a part of the deep web and the deep web is a part of the whole network; we should see each layer as one level of the whole network. So we need an approach for dealing with the challenges that occur in the geopolitical sphere, because something prohibited in your country may be allowed in other countries, and if you access those services, you violate the laws of your country. So it’s not an easy discussion from the regulatory side. From the technical side, you can do almost everything, speaking about Tor, about VPNs, about the different ways to protect, or so-called protect, your data and traffic. I don’t think there is a guaranteed way to protect what you are doing on the network. Speaking about the standard stuff, about encryption techniques, about TLS certificates and so on, it’s a huge and complex discussion, but I think there is no privacy on the internet.

Alina Ustinova:
Okay, that’s an interesting thought. But going back to the question, I think we actually covered lots of the positive impacts of the dark web. If anyone in the audience on site has questions, we will take one question from the on-site audience and one question from the online audience. Do you have a mic? Please introduce yourself and ask your question.

Audience:
Okay, good morning, everyone. My name is Ismaila Jawara from the Gambia. I lead a cybersecurity community in the country, and we provide training for law enforcement and university students in the areas of cybersecurity, technical research and education. I have a question, but before that I just want to give a preamble. During a training we held for one of the law enforcement agencies, one of the inspectors said: Sir, if a thief breaks into someone’s house and the case is reported to the police, we come and check how the thief broke in, maybe through the door or the window. But if someone has a million dollars in their account, and the following morning it is zero dollars, how do I know which door or window they broke? So what I’m trying to emphasize here, on the issue of the dark web and regulating the internet and all that, is that, as my colleague said, it’s important to recognize that local governments and regulators have different opinions and ways, their own particular will, regarding how they want to regulate the internet within their space. But I think the main purpose of the IGF is for us to find common ground on how we want the internet to be operated and used globally. For example, what works in the Gambia or in Africa should be something very similar to what works for the US or Ukraine or China, because if not, what we are going to run into is that countries like China and Russia, sorry to mention them, will implement certain things; but what about the already marginalized communities and nations that are already behind the current progress of the internet with regard to education, technical support, accessibility and all that?
How are those people going to fit into that discussion if everyone goes their own way, considering access to information, the right to privacy and all that? So I just want to understand how marginalized communities fit into this whole discussion when they are already behind. Thank you.

Alina Ustinova:
So, does anyone want to answer this question? I think this is a very important point. Izaan, yes. Thank you.

Izaan Khan:
So international cooperation is definitely one, and capacity building is definitely two. You need to be able to train law enforcement, because I think the point that Pavel from Tor made is a really important one: a lack of understanding of what the technology is capable of is what leads to really bad policy outcomes and enforcement outcomes. So upskilling, and definitely giving law enforcement training on cybercrime-related issues and on how the technology actually works. As Fiifi, who probably isn’t here anymore, mentioned earlier, you can’t fight the darkness unless you’ve actually been in the darkness, and it’s very similar when it comes to understanding how browsers like Tor work: you can’t regulate in the abstract; you have to actually go in there, figure out how it works, and try to put yourself in the mind of the criminal, essentially. So capacity building is definitely a big one, and on top of that there is a lot of already existing international cooperation on fighting cybercrime. As was mentioned previously, we have Europol, we have Interpol, we have a whole bunch of organizations that exist to fight cybercrime on a number of different fronts, be it geopolitical, be it on an individual level, be it organized crime, what have you. So definitely focusing on those two areas, both the diplomatic and the technical, would probably be the best approach that not only you but, as mentioned, any nation could take to fight this issue of cybercrime on the darknet and understand what is possible and what isn’t. That’s my point of view. I’m not sure if anyone else wants to add.

Alina Ustinova:
Does anyone want to add something to the, well, I think that’s it.

Miloš Jovanović:
I mean, speaking about fighting against cybercrime, I participated in some events in Serbia, where we had some incidents, and it’s a multi-stakeholder approach. When you want to respond to some incident, to research what happened in the situation and so on, it gets complicated. We had a situation where 17 security and intelligence agencies participated in just one investigation. So it’s too complex, and sometimes, if you speak about Europol, about different alliances and so on, it’s not enough. You need to go into a multi-stakeholder approach and communicate with a lot of different parties to solve some problems and to check what’s happening exactly. That’s just the nature of networks, packet transmission and traffic flows.

Maria Lipińska:
Okay, thank you, Maria. Back to you. Do we have other questions in the chat? No, really, we don’t have any for now. So maybe we’ll ask the audience on site.

Alina Ustinova:
So do we have any questions on site? Yeah, please take the mic. Good morning to everyone.

Audience:
I’m from Sri Lanka. Actually, research shows that most millennials have worse cybersecurity habits compared to Gen X and older people. So in this context, why do you think young people are drawn to the dark web, and what are the activities that they really engage in? That’s my first question. And my second question is, how do you see the landscape of dark web activities evolving in the coming years, from the youth perspective? Thank you.

Alina Ustinova:
So I guess the first question is a very interesting one, and I think I will answer it personally and then of course give the word to everyone. I think that young people like to see what is forbidden, because the forbidden fruit is the sweetest one, and they always want to try something new. When you restrict something, it becomes very interesting to find out what is behind the restriction; especially at a young age, it’s a kind of protest against the system. You want to say, I can do this because I’m young and I know what I’m doing, and the old people out there don’t actually know what they’re doing; I will find the right answer. So I think that is one of the reasons. And the other is that we actually have a new generation, Generation Alpha, that has grown up fully in the age of the internet, which was already very developed. They knew how to use a phone probably before they knew how to speak. So probably because of that, they grew up with the opinion that the internet is not a threat but actually a very good benefit they have. And if you are in Generation Alpha and don’t know how to use the internet, I think life will be hard for you in the future. So they try to use every tool that they can find on the internet. Maybe someone from our speakers can add something to that point. No one? So Izaan, yes. Yeah, just a quick one.

Izaan Khan:
So usually people would use these tools and access hidden services out of either curiosity, privacy or necessity, one of those three things. In terms of the landscape changing, I think we may see technological advancements that would further protect privacy, which would again lead to further issues down the line, potentially. As Pavel mentioned, there’s a lot of work that the Tor Project is currently doing on anonymization services, and there are other anonymization services popping up as well. So that may be one force in the landscape. And another force would be, as was previously mentioned, the fight against encryption. We will see governments retaliate by saying, we want to undermine encryption in some way or have some back doors, and if you don’t provide those back doors, then those services will not be allowed, effectively making it illegal to use these kinds of services. So we will see that tension play out, and that tension has basically always existed since the beginning of the internet. It’s just that we’re not fighting with sticks anymore, we’re fighting with nuclear weapons. That’s what it’s going to be like, I feel.

Alina Ustinova:
Yes, thank you. So I think we’re actually running out of time. I will ask every speaker who’s left, because I know that someone already left, to give a closing remark, just one minute, on what we have been discussing. And maybe you can answer this question: should we treat the dark web as a fundamentally dangerous part of the web, or is it mostly similar to the rest of the web? Just your opinion. So Gabriella, do you want to start?

Speaker:
Well, again, as I said before, I would just say that the dark web in general is a tool, and it depends on how it’s used. Criminal organizations, and whoever has bad intentions to deviate from the law in any given country, will do that regardless of whether you have those tools or any other tools. What is important here is that the crime we’re talking about on the dark web is actually the crime that happens offline; the tool just facilitates it and makes it look like something different. You get a nice clearnet website where everything is shiny, so it’s very easy to deflect from these types of situations if someone has the intention to do that. So the dark web per se is not the enemy. The enemy is the system, which should fight more often and more strongly against organized crime, because this is, again, just a tool. I’ve spoken many times at different conferences about different cybercrime topics. Recently I’m focusing on AI-powered cryptocurrency cybercrime, because it is happening in an unprecedented way, and it’s very, very quick: every second something new is happening, 30 million is laundered, just like that. So you just need to be, I would say, realistic and try to understand that law enforcement, and many times the politicians, are not aware of how the internet works. If you hear some US congressional hearings, it’s clear from the questions asked to Meta, Facebook, Twitter, X, all these names, that they don’t understand what their business model is about. And if you do not understand how they make money, it’s difficult to understand how other people can use that tool to create, I would say, a dark economy for themselves.

There is also a colleague of mine who had a very interesting idea, and I fully support it, which is to start hiring very creative people, from different backgrounds, it doesn’t matter, to try to understand how their software can be manipulated in a bad manner. So for example, you have a type of software: can it be misused? Okay, how? And this is actually a discussion that is happening among high-tech startups working on AI and machine learning tools, because we are creating right now tools that can easily be manipulated into what other people need. So the dark web, no, it’s not the enemy; we should just be more aware. And eventually, I don’t know if that’s correct or not, but I would like to open the discussion on, for example, a registry of software and its official use, and maybe have a due diligence, or a technical due diligence, to understand the different backdoors or the different misuses. Someone who is interested in child pornography, or anything like that, will use even online games. There was a case of a pony game: little ponies finding each other in a game, and in the chat the pink pony is talking to the black pony, hey, let’s meet in room number one, and then they’re discussing a new terrorist attack, or where the trade of illicit narcotics or whatever is going to happen. The creativity never stops. So the tool is not the problem; it’s the different approaches that are important to focus on.

Alina Ustinova:
Yes, thank you. So Izaan, your last remark?

Izaan Khan:
Just to keep it very brief, I’m very glad that amongst the panelists here there’s some consensus that Tor is just a tool and there are many positive and necessary use cases for this technology, and that law enforcement has other mechanisms that exist, so not all hope is lost in the fight against cybercrime. And yeah, it’s always a perpetual arms race. We’ll probably be revisiting this question in the next 20 years again with a different kind of technology. So we’ll see.

Alina Ustinova:
Yes, thank you. I also have Pedro. Do you want the last remark?

Pedro de Perdigão Lana:
Yeah, really fast. I would just like to highlight that in many cases things are not as easily defined as they seem. So, answering many of the questions that were posed before with just one example: platforms that are also used for sharing academic documentation, in an academic ecosystem that is very unjust towards poorer countries. It is a crime, or at least a civil infringement, but it may be less of an ethical problem than part of the science publishing industry, for example those publishers that are sustained with public funding and still charge thousands of dollars for access.

Alina Ustinova:
Yes, thank you. And we are left with Miloš, so your final remarks. Okay, so as a conclusion, I would give a strategic approach.

Miloลก Jovanoviฤ‡:
My perspective is that we can use all technologies on the bright side or on the dark side; it only depends on what we think is right in the end. So, speaking about the dark web, about the deep web, it’s just a service of the internet, I would say, with Tor as an application and so on. So let’s end with this: my approach is that we need to strengthen our local institutions when it comes to fighting cybercrime. This is the way we can protect the internet globally, because we need some processes; there are fragmentation processes underway, and we will see how the internet will look in the near future. So yeah, my approach is that we have to fight cybercrime with a common approach, but with authority resting with local governments. Thank you.

Alina Ustinova:
Yes, thank you very much. Thank you for joining us online and on site. I think we had a wonderful discussion. We can of course talk after the session, and if you want to speak more with us, we have a booth in the booth village, the Center for Global IT Cooperation. You can always come by and have a wonderful discussion there. Thank you very much.

Maria Lipińska

Speech speed

157 words per minute

Speech length

102 words

Speech time

39 secs

Abraham Fiifi Selby

Speech speed

168 words per minute

Speech length

1015 words

Speech time

362 secs

Alina Ustinova

Speech speed

166 words per minute

Speech length

1868 words

Speech time

674 secs

Audience

Speech speed

162 words per minute

Speech length

557 words

Speech time

206 secs

Izaan Khan

Speech speed

205 words per minute

Speech length

1855 words

Speech time

544 secs

Miloš Jovanović

Speech speed

176 words per minute

Speech length

1824 words

Speech time

622 secs

Pavel Zoneff

Speech speed

170 words per minute

Speech length

690 words

Speech time

244 secs

Pedro de Perdigão Lana

Speech speed

164 words per minute

Speech length

1079 words

Speech time

396 secs

Speaker

Speech speed

159 words per minute

Speech length

2033 words

Speech time

766 secs

DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

‘Gbenga Sesan

The analysis highlights several important points in the AI conversation. One key finding is that companies are striving for a first-mover advantage in the AI race, often neglecting to consider the ethical implications of their developments. This emphasizes the need for grounding AI conversations in ethics. It is crucial for companies to not only focus on technological advancements but also take into account the potential consequences of their AI systems on society.

Furthermore, data protection emerges as a vital element in the AI conversation. Many countries, particularly those without data protection frameworks, are now grappling with significant data projects and AI implementations. This raises concerns about the privacy and security of individuals’ data. Reports from Paradigm Initiative highlight this issue, shedding light on the absence of sufficient data protection regulations in various regions, particularly in Africa. These findings underscore the importance of developing robust frameworks to safeguard personal information and ensure the responsible use of AI technologies.

The analysis also highlights the significance of diversity in AI. A personal experience shared by ‘Gbenga underscores the potential for bias in AI systems. This serves as a powerful reminder that AI technologies should incorporate perspectives from across the world, not just from the global north. To achieve this, diverse representation in AI modeling and research is essential. By encompassing different viewpoints, AI systems can be designed to be more equitable and inclusive, reducing biases and promoting equal opportunities for all.

Another important aspect discussed in the analysis is the role of regulation in the AI landscape. It is argued that regulation should do more than simply implement control; it should create standards. The conversation about data protection regulation in many countries has often provided an opportunity for certain governments to seek control rather than establishing reliable and comprehensive standards. This highlights the importance of developing regulatory frameworks that genuinely protect individuals and their data while fostering innovation and advancement in AI technologies.

The analysis also raises the point that innovation tends to outpace regulation. It provides a case where a country banned cryptocurrency before fully understanding its potential as a foundation for new forms of money and movement. This example serves as a cautionary tale, indicating that regulators and policymakers should strive to comprehend emerging technologies before enforcing restrictive measures. By creating sandboxes where ideas can be experimented within specific frameworks, regulators can grasp the intricacies and implications of new technologies, enabling them to make informed decisions.

In conclusion, the analysis underscores the need to consider ethics, data protection, diversity, and effective regulation in the ongoing AI conversation. Companies must not solely focus on being at the forefront of the AI race but must also take into account the ethical implications of their developments. Strong data protection frameworks are necessary to ensure the responsible use of AI and safeguard individuals’ privacy. Diversity in AI modeling and research is essential for creating inclusive and unbiased systems. Regulation should aim to establish high standards rather than merely exerting control. Policymakers must strive to understand emerging technologies before enacting restrictive measures.

Amandeep Singh Gill

The Secretary General of the United Nations has proposed the creation of a multi-stakeholder high-level advisory body to govern artificial intelligence (AI) practices. The objective of this proposal is to ensure that AI governance is aligned with principles of human rights, the rule of law, and the common good. The advisory body will serve as a credible and independent entity responsible for assessing the risks associated with AI and providing recommendations to governments on global AI governance options.

To ensure its effectiveness, the advisory body will work towards implementing existing commitments made by governments under international human rights instruments in the digital domain. This emphasizes the need for AI governance that upholds these important values.

The formation of the advisory body is still ongoing, with nearly 1,800 nominations from across the world being considered. It is expected to release an interim report by the end of the year, outlining its initial findings and recommendations.

In its work, the advisory body will consult various ongoing AI initiatives to ensure comprehensive engagement and cooperation. These initiatives include the G7 Hiroshima process, the UK AI Summit, UNESCO’s work on the ethics of AI, and the efforts of the International Telecommunication Union. By incorporating knowledge and insights from these endeavors, the advisory body can harness a wide range of expertise to inform its assessments and recommendations.

One important aspect of the advisory body’s mandate is to examine both the risks and opportunities presented by AI with regard to achieving the sustainable development goals. It will conduct a thorough assessment of the potential risks associated with AI, as well as identify the opportunities and necessary enablers that can help AI contribute to the acceleration of progress in these goals.

Overall, the proposal for a multi-stakeholder high-level advisory body on AI governance reflects the growing recognition of the need for responsible and ethical AI practices. By aligning AI governance with principles of human rights, the rule of law, and the common good, the proposed advisory body seeks to guide and shape the development and deployment of AI in a way that benefits society as a whole.

Moderator – Moritz Fromageot

An open forum on AI regulation and governance at the multilateral level took place, organized by the Office of the UN Secretary-General’s Envoy on Technology. The event, attended by both in-person and online participants, began with welcome remarks from Moritz Fromageot of the Office of the UN Secretary-General’s Envoy on Technology, who outlined the agenda for the day.

Amandeep Gill, the Secretary-General’s Envoy on Technology, delivered keynote remarks on AI regulation and governance, followed by Peggy Hicks, Director at the UN Human Rights Office, who moderated the panel discussion.

The forum then transitioned into a Q&A session, with audience members asking questions and the panel members providing answers. During the session, Amandeep had to leave and Quinten stepped in to fill his role. Additionally, ‘Gbenga also had to leave, so priority was given to his contributions before his departure.

Co-facilitators from the Global Digital Compact were present in the room and encouraged to join the discussion.

For the Q&A session, on-site participants lined up behind the microphone, and the first three questions were collected. The questions focused on balancing the need for quick action with global processes, ensuring the enforcement of agreed-upon rules, and the importance of multi-stakeholder assessments in mitigating and enforcing the rules.

The panel members addressed these questions, and Gabriela took the opportunity to thank the audience and panelists for their participation. Peggy then concluded the session.

Audience

AI regulation is deemed necessary on a global scale due to the rapid advancements in technology, which are surpassing the development of regulatory frameworks. The current lack of swift global regulations means that tech companies are not being held accountable for the ethical and human rights implications associated with AI. To address this, there is a call for punitive measures or fines to be imposed on tech companies that disregard these implications. This approach is supported by the European Union’s General Data Protection Regulation (GDPR), which has implemented significant fines for non-compliance.

Ethical values play a crucial role in the development and deployment of AI. These values, such as dignity, autonomy, fairness, diversity, security, and well-being, are recognized by institutions such as UNESCO, EU regulation, and OECD. However, the challenge lies in enforcing these values in specific contexts. It is argued that concrete and measurable adherence to ethical values is essential in AI to ensure responsible and ethical development and deployment of AI technologies.

Another important aspect of AI regulation is the need for ethical assessments at both micro and global levels. These assessments involve multiple stakeholders and aim to identify, mitigate, and avoid risks associated with AI. At the company level, discussions involving customers and clients are necessary. Additionally, the intersection of bioethics and infoethics needs to be addressed. By including the perspectives of different stakeholders, these assessments can help shape the development and deployment of AI technologies in a manner that upholds ethical standards.

The governance of AI should be guided by global standards that are developed gradually and holistically. This will ensure that all aspects, including economic, social, and cultural rights, are taken into consideration. Furthermore, it is noted that the private sector has an interest in interoperable governments to facilitate seamless jurisdiction transitions. Governance of AI can also involve other policies that impact incentives, such as taxation, trade policy, and intellectual property policy.

In developing AI governance, an interdisciplinary and inclusive approach is advocated. The involvement of voices from all regions, genders, and disciplines is crucial to ensure a comprehensive understanding of the societal impacts of AI and its effects on social, economic, and cultural rights. High-level advisory bodies on artificial intelligence that incorporate diverse perspectives have been established to foster this approach.

Overall, the analysis highlights the importance of global AI regulation, the adherence to ethical values, the need for ethical assessments, the development of global standards, and the embrace of an interdisciplinary and inclusive approach to AI governance. These measures are essential to address the challenges and risks associated with AI technologies and to ensure their responsible, ethical, and inclusive development and deployment.

Moderator – Peggy Hicks

During the discussions on AI governance, participants stressed the need for a comprehensive conversation on this complex topic. They highlighted the importance of addressing issues such as privacy protection, deepfakes, and transparency.

In terms of privacy protection, the speakers noted that recommendations have already been made regarding the establishment of guardrails to protect individuals’ privacy. They emphasized the urgency of taking immediate action on issues like deepfakes and ensuring transparency in the data sets used for large language models.

The global challenge of AI governance was also discussed, with participants calling for a level playing field in the development and implementation of AI technologies. They stressed the need for increased investment to engage with the global majority and ensure inclusive AI governance.

The importance of multi-stakeholder participation in AI governance was highlighted. The participants noted the significant influence held by a small number of companies in the AI sector and called for increased commitment to effective engagement from various stakeholders. Civil society involvement was seen as particularly important in ensuring inclusive AI policy decisions.

Another important aspect discussed was the integration of a human rights framework in AI governance. Participants acknowledged the agreed-upon human rights framework across continents and called for its application in AI governance. They emphasized the need to move beyond rhetoric and make human rights actionable in policy making.

Diversity in the global conversation on AI was recognized as crucial. Participants stressed the need for greater diversity and inclusion to achieve a comprehensive understanding of AI governance issues.

The participants also emphasized the necessity of global standards and guardrails for AI. They highlighted the importance of integrating current knowledge and red lines into global standard-setting processes to ensure responsible AI development.

Transparency emerged as another key aspect of AI governance. Participants advocated for greater transparency in the global AI conversation, including dedicated forums for discussing AI governance.

The discussions also addressed the need for investment in social infrastructure and the digital divide. Participants highlighted the importance of building social infrastructure to support AI development and the role of public investment in creating necessary infrastructure for AI research. They suggested that those profiting from AI should contribute to these investments.

Lastly, participants stressed the need for a global framework to address digital technology and human rights issues. Collaboration across sectors, rights, communities, and countries was deemed essential to effectively tackle these challenges and ensure inclusion of all those affected by technological choices.

Overall, the discussions emphasized the importance of approaching AI governance from multiple perspectives, involving global engagement, multi-stakeholder participation, and a human rights framework. Participants urged immediate action on key issues, increased investment in inclusive AI governance, and the establishment of global standards to ensure responsible and equitable AI development.

Owen Larter

The analysis strongly supports global governance and standards for Artificial Intelligence (AI). The speakers believe that AI presents immense opportunities for humanity but also poses risks that require global collaboration and consensus development. AI encompasses a wide range of tools that offer significant opportunities for industries and infrastructure. However, these opportunities come with risks that transcend boundaries, making a global approach necessary to ensure the safe and responsible development of AI.

The main argument is the need for global standards to be established and adopted by national governments. The International Civil Aviation Organization (ICAO) is an example of successful global governance, involving every country in developing safety and security standards for aviation. The goal is to set global standards for AI in a representative and global way, promoting fairness and accountability.

Developing a global consensus on AI risks is also emphasized. The Intergovernmental Panel on Climate Change is cited as an example of successfully building an evidence-based consensus around climate risks. Similarly, there is a need for a collective understanding and agreement on the risks associated with AI. A global consensus would enable effective mitigation of these risks.

Investment in infrastructure is essential for a broad understanding of AI. The analysis suggests providing publicly available compute data and models, allowing researchers worldwide to better understand AI systems. Additionally, a global conversation on the social infrastructure surrounding AI, including ethical considerations and policy frameworks, is needed. This ensures that the benefits and challenges of AI are understood by stakeholders and align with global values.

The analysis consistently expresses a positive sentiment towards global collaboration, consensus development, and standard setting in AI. AI is seen as an international technology requiring international cooperation to harness its potential and address challenges. Examples such as ICAO and the Intergovernmental Panel on Climate Change are cited as successful models for consensus building and standards setting.

Furthermore, it is important to apply existing domestic laws to AI systems. Discrimination laws pertaining to loans and housing should extend to cover AI systems to prevent biases and discrimination.

Impact assessments are crucial for AI system development. Microsoft’s responsible AI program is mentioned, where impact assessments with human rights-related elements are conducted for high-risk systems. Sharing the workings and templates of these assessments can benefit the AI community in improving transparency and accountability.

In summary, the analysis strongly supports global governance, consensus development, and standards for AI. Collaboration across nations is necessary to maximize opportunities and mitigate risks. A global approach ensures that AI is developed and implemented in line with shared values, benefiting humanity as a whole.

Gabriela Ramos

Artificial intelligence (AI) has played a significant role in various sectors such as health and education. For instance, AI has contributed to our understanding of how the COVID-19 virus works, which has been crucial in vaccine development. AI has also been utilized in the distribution of benefits within the welfare, health, and education systems.

To ensure ethical advancements in AI development, UNESCO has developed frameworks and tools like the Readiness Assessment Methodology and Ethical Impact Assessment. These resources aid member states in implementing AI in an ethical manner. Currently, 40 countries are deploying this framework, with more expected to follow suit.

Legal frameworks play a vital role in the control and development of AI in the public sector. UNESCO recommends that legal regulation, rather than market forces or commercial reasoning, should guide AI development. Many countries are actively building their capacities to handle AI technologies responsibly.

Interoperability is essential in both technical and legal systems. As technologies become increasingly global, it is crucial to ensure interoperability of technical systems and data flows across countries. Additionally, the transnational nature of technologies calls for interoperability of legal systems to effectively regulate AI developments.

Harmful impacts of AI technologies are a concern, and governments need to understand potential implications and anticipate possible harm. It is essential for governments to have measures in place, such as compensation mechanisms, to address any harm caused by AI deployment.

Gabriela Ramos, an advocate for responsible AI development, emphasizes the role of governments in managing AI impacts and upholding the rule of law. Governments serve a crucial function in monitoring and regulating AI technologies to protect individual rights and maintain social order.

In conclusion, AI has been instrumental in sectors like health and education, aiding in vaccine development and benefit distribution. Ethical advancements in AI are promoted through frameworks and tools developed by UNESCO. Legal frameworks guide the responsible control and development of AI in the public sector. Interoperability, both in technical and legal systems, is crucial due to the global and transnational nature of technologies. Governments play a vital role in managing AI impacts and enforcing the rule of law.

Session transcript

Moderator – Moritz Fromageot:
Welcome to everyone here in the room, and welcome also to everybody who is participating online. We have an open forum on AI regulation and governance at the multilateral level. My name is Moritz Fromageot, and I’m part of the Office of the UN Secretary-General’s Envoy on Technology. Let me quickly walk you through the agenda of the day. We will start with some panel remarks by our esteemed guests here, and then we’ll have a big Q&A session in which we want to engage with you, the audience. We will open with keynote remarks by Amandeep Gill, who is the Secretary-General’s Envoy on Technology, and after that Peggy Hicks, Director at the UN Human Rights Office, will moderate the panel, and then we go over to the Q&A session. Without further ado, I would hand over to Amandeep to introduce the topic.

Amandeep Singh Gill:
Thank you very much, Moritz. Welcome to this event, this discussion on AI governance and the very important dimension of human rights, the role of human rights in how we approach AI governance. So to set a little bit of the context, I will talk about the Secretary-General’s proposal in his policy brief on the Global Digital Compact that he launched on June 5th this year for a multi-stakeholder high-level advisory body for artificial intelligence that, as the SG said, would meet regularly to review AI governance arrangements and offer recommendations on how they can be aligned with human rights, the rule of law, and the common good. This proposal, which he reiterated in his remarks to the first Security Council debate on artificial intelligence in July, is currently being put into practice. So this advisory body is being formed as we speak, after a process for nominations that ran along two tracks. One was member states being invited to nominate experts to the Secretariat, and the other was an open call for nominations. And all together, we got about 1,800 nominations from around the world. So different areas of expertise, backgrounds, different geographies. So it’s very satisfying to see that degree of interest and excitement about this proposal. We kind of hit the right spot with this. Now, what is the advisory body when it comes together? What is it supposed to do? The Secretary-General has tasked it to provide an interim report by the end of the year. And there is a context to this timing. The discussions on the Global Digital Compact restart early next year and move into a negotiation phase. So this interim report would help those who are putting together the GDC to consider one of the more important dimensions. There are these eight important high-level dimensions, along with the cross-cutting themes of gender and sustainability, that have surfaced through the consultation.
So it’ll bring more substance and expert-level insight into that discussion. So after that, there is time for the advisory body to consult more widely, including with ongoing initiatives. You heard the Japanese Prime Minister speak about the G7 Hiroshima process. There is the UK AI Summit. There has been work that’s been done earlier in the G7, G20 on AI principles. And there is longstanding work in the UN context. And today, I’m very happy to be joined by some of my colleagues. The work in UNESCO on the ethics of AI, a consensus recommendation adopted by all member states. The work in the International Telecommunication Union on some of the standards that underpin digital technologies, but also at the AI for Good meetings. And then, most importantly, from the perspective of the SG’s vision and today’s topic, the work being done by the Office of the High Commissioner for Human Rights on how to make sure that existing commitments that member states have taken under international human rights instruments are implemented in the digital domain. So I just want to conclude by saying that this body, which will start meeting soon, would help us pool multidisciplinary AI expertise from around the world to provide a credible and independent assessment of AI risks and make recommendations to governments on options for global AI governance in the interest of all humanity. I think those conversations that are happening today, they are very important, they are essential building blocks, but if this is an issue that concerns all humanity, then all humanity needs to be engaged on it through the universal forum that is the United Nations. The risk discussion can often be political or it can be motivated by economic interests. We want a discussion in which there is an independent, neutral assessment of that risk and a communication of that to the global community at large.
At the same time, we also need to make sure that the opportunities and the enablers that are required for AI to play a role in the acceleration of progress on the sustainable development goals, they are also assessed, they are also presented in a sober manner to the international community. So looking at the risks and the opportunities in this kind of manner allows us to put the right governance responses in place, whether they are at the international level or at the national, regional regulatory level or at the level of industry, where there may be self-regulation or co-regulation schemes to address risks, including through the kind of initiatives that the Japanese Minister shared yesterday. So I’ll stop there and hand it to Peggy for the

Moderator – Peggy HICKS:
moderating panel. Great, thank you so much. We’re so fortunate to have Amandeep with us to give us that overall perspective about where we stand on these issues now. I’m Peggy Hicks with the Office of the High Commissioner for Human Rights, and I’ll have the pleasure of moderating the panel but also giving some introductory remarks from the Human Rights Office perspective, starting out to sort of set the course for us by making four introductory remarks. One is that I think when we’re looking at the issues of AI governance, we need to be able to have a complex conversation. We tend to throw out the term AI and think that we all know what we’re talking about. We tend to talk about existential risk, near-term risk, short-term, mid-term risk, with no real definitions on the table. We need to break the conversation down. We need to be aware that there are areas where AI is already in use, being used in human rights-sensitive and critical areas like law enforcement, where we don’t have any question about what needs to be done. We just need to implement the things that we already know. Recommendations have already been made about the guardrails that should be in place, for example, on mass surveillance technologies to protect privacy and in other places. We need to move forward on that and we don’t have to wait to do that. But then we also have the issues that have really rushed to the surface around generative AI, where there is a real need to look at what are the new challenges that are presented. And even within that area, some are immediate: for example, the impact of deepfakes, the need for watermarking and provenance to be put in place as quickly as possible, transparency around data sets for large language models. So there are things that we can do urgently, even within that emerging space.
But then we have to also be able to look forward at the same time to what are the risks that are in our future that we see, and to be able to do the hard work of putting in place the governance mechanisms and approaches that will allow us to make sure that we’re tackling not just what we already know, but what we foresee for the future. The next point I want to emphasize is that that is a global challenge. And as much as we appreciate all the different efforts at the national and regional level, we need to be able to come together in a global way to address these issues. We need to be able to learn from each other, we need to recognize that the solutions won’t work if they’re only solutions that are adopted and taken in one place. And for that global engagement to work, we need to create a level playing field. And that means that there needs to be much greater investment and resources and engagement with the global majority that may have more difficulty being part of these policymaking conversations going forward. The third piece is one that of course comes up in the IGF context all the time, is around what we mean by multi-stakeholder and how that has to be part of the governance approach that we undertake in AI. And I want to emphasize that when we talk multi-stakeholderism, we are talking both in terms of the business side of things and the civil society side of things. And in fact, what we need on each of those pieces is quite different. With regards to business, there’s a tendency to really look at how we engage and to some extent mitigate the extent that a small number of companies have an enormous influence in this space. 
But at the same time, we need to create a race to the top where those companies may be the ones that are best prepared to put in place some of the guardrails that we need, but we also need to protect against the way other businesses will come into the sector and are coming in, perhaps with less incentive to put those same guardrails in place as we go forward. On the civil society side, we all know that that is an area where there’s a lot of commitment to general participation, but perhaps not as much to effective engagement. And we need a different pathway. We need to draw on the expertise. We need to make sure that civil society is present, because they’re the ones that will help us to make sure that no one’s left behind. And finally, and you won’t be surprised to hear me say this, I want to make a pitch for human rights and the human rights framework as being a crucial tool to allow us to move forward in all of these areas effectively. We’ve heard in many of the sessions I’ve been in already at the IGF how we have to build on what already exists and not create everything afresh. Well, the human rights framework is a framework that has been agreed across continents, across contexts. It’s celebrating its 75th anniversary today. My pin shows. And we need to find a way that we leverage it in this space. But that also requires support for us to be able to do that more effectively. It requires all of us to move from the talking point of, yes, we’re grounded in human rights, to making it actionable in a variety of ways in the policymaking context. So those are the introductory remarks from my side. But I’m very much looking forward to hearing from the contributors today. And I’m very pleased that we’re going to turn first, I guess, to ’Gbenga Sesan, who’s the executive director of Paradigm Initiative and a member of the IGF leadership panel. So over to you, ’Gbenga.

’Gbenga Sesan:
Thank you, Peggy. And thank you, Amandeep, for the earlier comments. I think it’s important to start with the three areas that have been identified by the Secretary-General, human rights, rule of law, and common good, as they help with the ongoing conversation. But let me start with a statement someone made. At the opening ceremony, someone who sat, I think, behind me. Yes, behind me. I shouldn’t confuse behind with beside. They leaned over after the session and said, look at the stage, there’s no diversity during the AI panel, and then we had a conversation. And the conversation we had wasn’t just about diversity, but was about many things. And Peggy, you’re right. Civil society already, I mean, AI is not new. It’s been said that AI is the unofficial theme for the 2023 IGF. I’m sure if you got a dollar for every time AI is mentioned here, you’d all be billionaires already. Also, there’s a tendency for us to assume that a conversation we’re having is understood by everyone and we’re all at the same level, but we’re not. So first of all, there are people whose level of inclusion, even before you get to conversations about AI, is lacking. We already have a divide, right? We already have a divide that is contributed to by some of the problems that we have that civil society is trying to address. And so three very quick things for me. Number one is that in all of this conversation, we’ve talked about the need for human rights, for the rule of law, and for the common good. But I think the common good will only be served if we have a conversation that is based on ethics. And I say this because if you look at the AI race, literally, that we had over the last few months, and I’m sure we’ll hear a bit more from the private sector representative on this, at some point there had to be a call to say, guys, let’s stop. And the reason for that was because it became a race literally without rules.
And everybody was trying to get to be the first to do it. Of course, there are many reasons for that. There’s the economic incentive and there are others, the first-mover advantage and all of that. But those conversations must be built on ethics. And thankfully, we already have many frameworks around human rights that can guide us in this. So we’re not creating new principles. We’re not saying that the ethics should be based on new inventions. We already have principles for that. The second is on data protection. And I say this particularly because we’ve had many conversations about the need for privacy and protection, but in many countries those are still missing. At Paradigm Initiative, we do a report every year on the state of the internet and digital rights across the African continent. And one of the major challenges that we have is that there are many countries that do not even have data protection frameworks already. And not only are they now talking about, you know, just collecting biometric data, but they’re also talking about AI, they’re talking about massive, you know, data projects, and that is important. So ethics, also data protection. And I’ll come back to the first point that I made about diversity, not just diversity in terms of conversation. It’s great to have a panel, and at times people think that with tokenism you can solve the problem, but we need to go beyond the tokenism. I think that the importance is not just in the conversations, but also in the modeling. I always give the example of my very first, you know, experience with an AI demo, you know, somewhere, you know, not too far from here. I stood in front of this machine where everyone was standing and testing it, and it was supposed to tell you where in the world you’re from and a bit more about yourself based on the data it had been fed with.
And then I faced this machine and I said, hi, and I said, hello, and I said a few words. And the machine not only said I was from the wrong continent, but also said I was very angry, and I was like, wait a second, what is going on here? And by the way, that project was already being used by a country to determine who to arrest based on prank calls. So it meant that anyone who sounds like me. I sound like this all the time because I’m Nigerian. I’m from a country of 200 million people. You need to raise your voice to be heard. So when I speak, I need to raise my voice. So if the machine thinks I’m angry and all that, it’s not because I am, it’s because I’m Nigerian and I have to raise my voice. So I think it’s absolutely important for us, not just in conversations, but in modeling and also in research. AI by nature is global, but global does not mean it happens in the global north. Global means that it has applications across the entire world, and if it has, then it means that diversity must be a fundamental factor in what we do. Otherwise, we’re going to keep having many of the problems we currently have on social media, where platforms are struggling to interpret something that is understood within one context but means something else entirely once it crosses to another context. So ethics, data protection, and diversity.

Gabriela Ramos:
Thank you very much, ’Gbenga. Words to live by there, and I’m sure we’ll go back to each of those three points. But I understand that Gabriela Ramos is now online and able to join us, so I’d like to introduce Gabriela Ramos, who is the Assistant Director-General for Social and Human Sciences at UNESCO. Over to you, Gabriela. Thank you so much, Peggy, and I’m very sorry, but I got the wrong link, and I was with a very technical expert. Very interesting session, but it was not mine. Great to be here with you, and thank you. Great to share this panel with you and with Amandeep. And I could not agree more with what the previous speaker mentioned. I think that ethics is a good guide because it’s not only about the challenges we are confronting now, but actually the challenges that might be posed to us by these very fast-moving technologies. We are now probably questioning all these issues brought by generative AI, but AI is not new. And we know for how many years AI has been used to take decisions that are substantial and relevant for all of us. We know the application of these technologies in the distribution of benefits in the welfare system, the health system, the education system. We know how much facial recognition has been used, and it is now being debated how much we can rely on it to take decisions in the public sector. But the public and the private sector have been taking decisions based on AI for many years. We tend to forget, but we know that having a vaccine to fight the COVID pandemic was actually made possible because of the analytical capacities that the technologies could put together to understand how the virus works. So it’s not new, but the questions that we ask are of course much more relevant given the pervasiveness and also the speed at which these developments are advancing. So it’s very important that we have the right frameworks.
If these major technologies are just deployed in the markets for geopolitical reasons, for commercial reasons, for profit-making reasons, it’s not going to work. And that’s why we are very pleased to be contributing to this framing of the technologies in the right manner at UNESCO, since two years ago the 193 member states adopted the UNESCO recommendation on the ethics of artificial intelligence. And I recognize that Amandeep was one of the major contributors, because he was part of the multidisciplinary group that we put together to develop the recommendation. And it was pretty straightforward, but I feel it was also in the right frame, because the question was not to go into a technological debate of how do we fix the technologies or how do we build the technologies in certain ways to deliver for what we want to have in the world. The question was actually, what are the values that we are pursuing? And then we built everything around that. It’s a societal debate, not a technological debate. And the values, we know them.
And at the end, it’s not that the governments are going to go into every single AI lab to check that we have diverse teams, that the quality of the data is there, that the training of the algorithm has the adequate checkpoints, not to be tainted by biases and prejudice. But at the end, when you have the norm and when you have the tools and the audit systems to advance these kinds of outcomes, is when you get things right. And this is where we are now in the conversation, because the member states, when they adopted the recommendation, it was not only left to the goodwill of anybody who wanted to advance in building these legal frameworks, but they also asked UNESCO to help them advance specific tools for implementation, because we also are in an heterogeneity of capacities and systems that can be put together. And therefore, we developed two tools. understand where member states are regarding the recommendation, the readiness assessment methodology, that is not only a technological discussion, again, it is about the capacities of countries to shape these technologies, to understand these technologies and to have the legal frameworks that are necessary for them to deliver. And then we also develop the ethical impact assessment. And I feel that now we are converging with many other institutions and organizations that are advancing better frameworks for developing on AI. Just last Friday, we were with the Dutch Digital Authority because this is also an institutional debate. For us, this is for governments. Governments need to upgrade their capacities and the way they handle these technologies because, as I said, I’m a policy person and the reality is that this is about shaping an economic sector. An economic sector that, yes, pervades many other sectors and is changing the way all the other sectors are working. But at the end, it’s an economic sector. 
The way that the technologies are produced can be shaped, can be determined by technical standards, but it can also be determined by the rule of law. And it’s not as difficult as it might seem, at least in terms of having these guardrails. When we say, for example, that we need to ensure human determination, well, what the recommendation established is that we cannot provide AI developments with legal personality. And I feel this is just the very basic step to ensure that whenever something goes wrong, there is going to be a person, there is going to be somebody that is in charge and that can be held legally liable. And then we also need to have redressal mechanisms and to ensure that the rule of law is really upheld online. I’m proud that this framework is now being deployed by 40 countries around the world, and there will be more. Next week we are going to be in Latin America launching the American Council for the Implementation of the Recommendation, and we’re partnering with many institutions, with the European Union, with the Development Bank in Latin America, with the Patrick McGovern Foundation, with Bilastar, to ensure that we work with member states to look at how they can build up these capacities to understand the technologies and to deliver better frameworks. We always also talk about skills: skills to understand, to frame, to advance a better deployment of the technologies. I feel that it’s also very important that we have the skills in the public sector to frame and to understand, because these are such fast-moving technologies that we need to be able to anticipate the impacts that they can have in many fields that have not been tested.
But if you ask me for the bottom line, and I think this is not the way that generative AI or ChatGPT arrived on the market, it is that you need to have an ethical impact assessment, a human rights impact assessment of major developments in artificial intelligence before they reach the market. I think this is just right due diligence, and it’s not what is happening in many of these developments as we see them. And therefore, I think it’s the moment to put the conversation in the right framework to ensure that these technologies deliver for good. And we are seeing many movements. We just saw the bill that was put together in the US Congress. We know what the European Union is doing. We know how many countries are advancing this, and we’re also doing it with the private sector. Nor can we put all of the private sector in one basket. We’re working with Microsoft and Telefónica, because this also needs to be a multi-stakeholder approach, gathering the civil society and the many, many groups that need to be represented, because the ethics of artificial intelligence concerns us all. I’m so glad that I had this minute to share with you these thoughts, and I’m looking forward to the exchanges. So thank you so much.

Moderator – Peggy HICKS:
Thank you very much, Gabriela. It’s wonderful to hear your comments based on the experience of UNESCO with the ethics of AI development, but also its application, as you said, and the work that’s being done globally to move forward on these issues. And I think the point that you make around human rights impact assessments and the need for them to be done before things reach the market is one that we’ll come back to as well. I’d like to turn to our final panelist now. We’re fortunate to have with us Owen Larter, who’s Director of Public Policy in the Office of Responsible AI at Microsoft. Over to you, Owen.

Owen Larter:
Thank you, Peggy. It’s a pleasure to be here. It’s a pleasure to be part of such an esteemed panel. So as Peggy mentioned, I’m Owen Larter at Microsoft. We are very enthusiastic about the opportunity of AI. We’re excited to see the way in which customers are already using our Microsoft co-pilots to better use our productivity tools. We talk a lot about co-pilots at Microsoft rather than auto-pilots. The vision for Microsoft around AI is very much retaining the human dignity and the human agency at the center of things. And I think more broadly, we see AI as a huge range of tools that is going to offer humanity an immense amount of opportunity, really to understand and manage complex systems better and to be able to address major challenges like climate change, like healthcare, like a lot of what is being addressed in the SDGs. So a lot of opportunity, but I think it’s clear that there is risk alongside the opportunity, and so we need to think about governance. And I think as we turn to governance of AI, we need to think about governance globally. As was said before, AI is an international technology. It is the product of collaboration across borders. We need to allow people to be able to continue to collaborate in developing and using AI across borders. It’s also quite clear that the risks that AI presents are international. They transcend boundaries. An AI system created in one part of the world can cause harm in another part of the world, either intentionally or by accident. And so I think as we think about global governance, it’s worth taking a little bit of a step back and sort of understanding where we are. And I do feel like an enormous amount more work is needed, but we’ve made a huge amount of progress in the last year. We’re coming up to quite a significant milestone, which is that we’re just a few weeks shy of the one-year anniversary of ChatGPT being launched on the 30th of November in 2022.
And I think we can see the way in which that has really changed the conversation around the world on these issues. I think it’s fantastic to see the way in which the UN has done what the UN is always very good at doing, which is really catalyzing a global and representative conversation on these issues. We’re excited about the high-level advisory body. We think that’s going to be really productive work. Really delighted to be working with UNESCO to be able to take forward their recommendation on artificial intelligence. We think that’s a really important piece of work. And really exciting to see the way in which you now have concrete safety frameworks being developed and implemented around the world. People might be familiar with the NIST AI Risk Management Framework. This is from the National Institute of Standards and Technology in the US. They published their AI Risk Management Framework at the start of this year. It really is a global best practice framework that any organization can use now to develop their own internal responsible AI program. So I think we’ve sort of moved to a place where we have the building blocks of a global governance framework in place. I think now it really behooves us to take a bit of a step back and think about how we chart a way forward. And I think there are probably a couple of things that are worth bearing in mind as we do that. The first is having a bit more of a conversation about where we actually want to get to. What do we want a global governance regime for AI actually to be able to achieve? And then secondly, what can we learn from the many attempts and the many successes around global governance in other regimes? So I’ll offer a few thoughts in closing. I think as we move forward, we ultimately want to get to a place where we are setting global standards that are being developed in a representative and global way that can then be implemented by national governments around the world.
And I think there are great lessons to draw from organizations like ICAO, the International Civil Aviation Organization, part of the UN family. It does a great job of including pretty much every country around the world in developing safety and security standards for aviation globally. So I think there’s more that we can learn from that. I think the other thing that we need a global governance regime to do is to help us develop more of a consensus on the risks of AI. That’s a really important part of thinking about how we address them. So I think of organizations like the Intergovernmental Panel on Climate Change, which has done a fantastic job of developing an evidence-based consensus around risks in relation to climate, and then a really effective job of taking that out and driving a public conversation, which can lay the groundwork for policy as well. I think the final suggestion I’ll make is that we really need to invest in infrastructure as we move forward. That’s both the technical infrastructure, so that we’re able to study these systems in a holistic and broad way. It is very intensive to develop and use these systems, so we need to provide publicly available compute, data, and models so that researchers around the world can better understand these systems and can develop the much-needed evaluations that we need going forward. I think the other bit that is just as important, if not more so, is thinking about the sort of social infrastructure. How do we have a sustained global conversation on these issues that is properly representative and brings in views from everywhere around the world, including the global south? I think conversations like this and the work that the IGF is doing are a great start on that front. I think there’s more that can be done. One small contribution that we’ve made so far, and we want to do more, is setting up a global responsible AI fellowship.
So we have a number of fellows around the world, including from countries like Nigeria and Sri Lanka and India and Kyrgyzstan, where we’re bringing together some of the best and brightest minds working on responsible AI, right across the global south to help shape more of a global conversation and inform the way that we at Microsoft are thinking about responsible AI. I think there’s much more opportunity to do this kind of thing when we’re moving forward. But I’ll pause there for now.

Moderator – Peggy HICKS:
Great, thanks so much, Owen. It’s been really helpful to hear your comments on what the global AI governance challenge looks like and what are some of the next steps we need to take. Just to pull together some of the thoughts before we turn over to the question and answer. I mean, I think we heard very similar messages to some extent from our somewhat diverse panel, not as diverse as we’d need to be probably here either, ’Gbenga, but we all recognize the need for that global diversity. How we achieve it, I think we still have a lot of work to do; we can commit to it in principle, but in practice it requires a lot more effort, a lot more resources to make it a reality, I think. We also heard the importance of really putting in place guardrails based on what we already know in the space and moving forward on them. The governance conversation with regards to best practices is there, but we also need to recognize that we do have some red lines, and those red lines ought to be part of the global standard-setting process as well moving forward. And finally, we also need to understand the need for greater transparency and greater ability for a global conversation to happen, and that means making sure that forums like this one are available to a much broader audience. I liked Owen’s comments about the social infrastructure that’s needed, and that will require investment and commitment as well to move forward. So with that, I think I will close this first segment of the panel discussion and I’m going to turn over to Moritz, who will guide us in the question and answer. Over to you.

Moderator – Moritz Fromageot:
Thank you very much, Peggy. So we will now take the time for an extensive question and answer. So you all have the possibility to ask any question you might have. Unfortunately, Amandeep had to already leave the session, but our colleague Quinten is filling in. Also, I understood that Benga has to leave in 20 minutes as well, so we might just prioritize you in the process. And I’m also seeing that we have the co-facilitators from the Global Digital Compact in the room, so do let us know if you want to participate in the discussion. For the on-site questions, you can line up behind the microphone over there. First come, first served. We collect the first three questions and then answer them from the panel. And yeah, so feel free to ask anything regarding the session topic.

Audience:
OK, that’s a nice clarification. Hello, everyone. I’m Alice Lenna from Brazil. I’m also a consultant for GRI, the Global Index on Responsible AI. And I have a question that I think has relations with everything that you’ve said so far. Because we’ve been listening in all the panels on AI that AI must be regulated through a global lens, right? It can’t be just national frameworks. And we’ve also been listening that it must happen now. It’s urgent. And these things, we know that global regulations are not the fastest regulations we have. So my question is, how do we balance these needs? Thank you. Hi, I’m an attorney at law from Sri Lanka. Last year I just did a course from CIDP in Washington, and I’ve been studying AI policy. I was just wondering, the biggest threat is that the technology is running far ahead of the law. And is there any possibility, like we were speaking of global AI regime, et cetera, is there any possibility that punitive measures, like fines or penalties, can be given to these tech companies which are going ahead without the implications, the human rights aspect, the ethics, without that being examined, if the tech companies put out the tech? I feel the only way is to penalize them somehow, like how GDPR brought huge fines. Is there any conversation on that going on, or I just want to know? Hello, my name is Yves Poulet, and I am vice chairman of IFAP UNESCO program, and my specialty is infoethics, and I am chairing a working group on infoethics at UNESCO. I think we are agreeing all together about ethical values. I think there are a certain number of ethical values which are recognized by UNESCO recommendation, by EU regulation, by OECD, and these ethical values are very well known. That’s dignity, that’s autonomy, that’s definitively fairness, that’s diversity, that’s the problem of security and well-being, and so and so. So the problem is not the ethical values. I think that Gabriela was right. 
The problem with ethics is not the problem of designing the ethical values. But the problem is to what extent these ethical values are met in a concrete situation. And that’s another problem, and that’s another difficulty. And that’s why I think we need to have, definitively, legislation imposing what we call ethical assessment. I think it’s very important to have this ethical assessment. At a micro level, it means at the company level. And this ethical assessment needs, absolutely, to have what we call a multi-stakeholders within the company, and perhaps the customer, perhaps the clients, and I don’t know exactly which must be around the table. But we need to have this multi-stakeholders and multi-disciplinary assessment to clearly enunciate the risk, to mitigate the risk, and definitively, to try to avoid the main risk. And that’s very, very important, I think. If we have this ethical assessment at the micro level, I think that’s the most important thing. At the global level, I think we need, definitively, to have the discussion, discussion about a very important issue, like the augmented human. It is quite clear that bioethics and infoethics, tomorrow, will join together. It is quite clear that, definitively, we must have a certain number of reflections about our iterative system, especially as we have the problem of manipulation of people, and all these questions. So my question is to know what’s your position about this reflection?

Moderator – Moritz Fromageot:
Yes, thank you very much. Just one suggestion, I think for the next round of questions, you could also say whom on the panel you address the question to, then we can have it a bit more targeted. So yeah, three questions. The first one on how to balance the need for quick action in the face of some of the global processes that can take a little longer. Second question’s on enforcement. How do we make sure that the rules that we agreed on are actually applied? And yeah, the third one on the need for multi-stakeholder assessments on how to mitigate and also enforce the rules. So who would like to go ahead?

Gabriela Ramos:
I can chip in if you. Perfect, Gabriela. Then we’ll start with Gabriela and then give over to Benga. Okay. Well, thank you. I think these are very relevant questions and it’s true that the technologies are global and therefore this transnational character needs to be recognized. And I feel that’s why we are always referring to the interoperability, not only of the technical systems and the data flows across countries, but we are also talking about interoperability of the legal systems, because at the end, the kind of definitions that you have in one jurisdiction is going to be determining the kind of outcomes when you go into international cooperation for law enforcement. But at the end, the very basic tenet of all this construction is to have the enforcement of the rule of law regarding these technologies at the national level. And this is the emphasis that we are putting in the implementation of the recommendation on the ethics of AI with the many different countries with whom we are working, because at the end, governments need to have the capacity first to understand the technologies, which is not as straightforward as it seems. Second, to anticipate what kind of impact that they can have on the many rights that they need to protect. And then to have commensurate measures whenever there is harm. And I think that this is another bottom line. Whenever there is harm, there should be compensation mechanisms. And these are the areas where governments need to upgrade their capacities. Then, of course, we need international cooperation, because at the end it would not work only if you have regulatory fragmentation at the national level. It’s very important that we also have these kinds of exchanges in a multi-stakeholder approach to ensure that we learn from each other and that we can also share what we know are those that are the front-running developments in terms of the legal frameworks and those that are lagging behind. 
But I feel, again, the role of governments is really important in trying to ensure that the rule of law is respected. But that’s their task and that’s what they are paid for.

Moderator – Moritz Fromageot:
Thank you, Gabriela.

Owen Larter:
Fantastic. I can jump in and give some thoughts and agree with a lot of what Gabriela said as well. I think on the sort of global piece, I think it’s exactly right to look at these issues through a global lens. The risks that are presented are global. But I don’t think that necessarily means that every single national regulation needs to look the same as each other. Exactly as Gabriela said, I think it’s all about interoperability. And I think a big part of this will be developing some global standards in relation to how you evaluate these systems, for example, that different countries can then implement in a way that is sensible for them. In terms of sort of how to apply the law and where the law might apply, I think there is a large amount of existing domestic law that should be being applied right now in relation to AI systems. I think if you’re in a country where you have a law against being able to discriminate against someone in providing a loan or access to housing, it shouldn’t matter whether you’re using AI or not, that law should apply. I don’t think it should be a defense that, oh, you know, yes, I discriminated against this person and gave loans on unfavorable terms, but I was using AI, so don’t come and penalize me. That’s not gonna hold. So I think existing law should be applied across various different jurisdictions, whilst we also put in place these other frameworks that address some of the specific issues of AI as well. And then in relation to the impact assessment process, I think it’s a great thought. We are very enthusiastic about impact assessments at Microsoft. It’s one of the many things that we’re very enthusiastic about in relation to the UNESCO framework. We actually have an impact assessment as a core part of our responsible AI program at Microsoft. So any high-risk system that is being developed, the product team has to go through an impact assessment. 
It has a number of human rights related elements to it in relation to making sure the system is performing fairly, addressing issues of bias. We think that’s a fundamental sort of structured process to be able to go through. We actually have now started publishing the templates that we use for our impact assessment, and we’ve also published the guide that we use to help our colleagues navigate the impact assessment process. We think it’s really important to share our working as we go as a company so that others can quite frankly scrutinize and build on it and improve it. So we’d welcome thoughts that people have on the impact assessment template that we’re using at Microsoft.

Bengus Hassan:
Thank you. I mean, just to build on the earlier contributions, in terms of regulation being global and the fierce urgency of now, I mean, I can understand why that is the conversation that is happening because that’s a natural reaction to some of the confusion we’ve seen in the last one year. But one is that, first of all, regulators and academia are now trying to diagnose the issues that the government has identified and they have identified instructions and they have had brief conversations with the government, but they have not had a conversation with the individuals that they have in various places in order to look into impairments and understanding and implementing new regulations. And I think it’s really important to say this, is that regulation is about creating standards and not implementing necessary control. And I say this because this is the same conversation we had about data protection regulation in many countries where it then became an opportunity for certain governments to seek legitimate control over areas where they were supposed to create standards. So the idea was to control and not to create standards that they were also, you know, going to abide by. But I think there are many existing processes that we can build on. And I can understand why global always, you know, gives the idea of being slow. Because there’s negotiation, there are countries that – I think there are some countries that may just want to be contrarian, just so, you know, because they want to, you know, take the mic and speak or something. But there are existing processes and there are things that work. I mean, I like the example that you gave of the International Civil Aviation Organization. And there are many examples that we can look at. We can look – you know, we can talk about some of the multi-stakeholder conversations we’ve had at ICANN, you know, and now at the IGF, and we can build on those processes. 
And on the second question, just very quickly, I understand the concern, and like you said, there are many, you know, existing laws that can be applied. But I’m also a bit cautious when it comes to the sort of the tension between innovation and regulation and policy. I think that innovation will always, always, always be ahead of regulation. And what is important is for regulators and policymakers to at least seek to understand before regulating. Because we’ve seen in many instances – I mean, I know a country where we’re working where cryptocurrency was banned, and we had to write a policy brief to the central bank: you can’t ban this. What you are banning is the foundation of the new forms of money and movement. So I think it’s really important to, you know, create sandboxes where people can experiment with ideas but within, you know, specific frameworks where if something goes wrong, of course, there are rules you have to abide by. But it’s absolutely important that in the name of, you know, caution and not allowing things to go haywire, we’re not stifling innovation, because we’ve also seen that happen where regulation doesn’t understand innovation and wants to jump ahead of it. Thanks and I’ll pop in as

Moderator – Peggy HICKS:
well, and then the first question I think is a really important one. And that idea that we can’t come up with a global framework, I’ve said that a million times myself: you know, making a treaty isn’t going to get us there because it will take us too long, and by the time we got it, it would already be outdated. But I think Benga’s answer, and Owen and Gabriela as well, have said some of the pieces that we have, and we need to build piece by piece. One thing that we desperately need right now, we talked about in a conversation earlier today, is an authoritative monitoring and observatory that will give us greater

Audience:
There’s a kind of paradox here: everyone’s talking about global standards, universally global standards, and everyone’s talking about fast. What I’d like to suggest in a minute is that perhaps in this case there are reasons why this could happen very quickly, including the fact that the private sector is very interested in interoperable governance so that they can move through jurisdictions easily without having, you know, different regulation in different jurisdictions. So there’s a lot of a carrot there. But I’d like to suggest that in this case slow may be fast, in a sense, because to get a global agreement to move from 20 countries, 50 countries, to 193 countries, all of those countries have to want this. And what we’ve noticed, at least on the Global Digital Compact process, is that the term human rights has often had certain connotations for certain groups of countries. And as an example of that, we had a lot of submissions to the Global Digital Compact process and, you know, from some of the political groups that were, say, from the Global North, we did a word count of how many times human rights was put in there compared to the words digital divide. It was a ratio of up to 15 times: every time digital divide was mentioned, human rights was mentioned 15 times. For some of the other groups representing the Global South, human rights may have been mentioned zero times and digital divide several times. So completely the opposite. Now what I’m going to suggest is that when we think about human rights holistically, yes, we have the individual civil and political kind of rights. We also have the social, economic, and cultural rights in articles 22 to 27 of the Universal Declaration of Human Rights. And these are also human rights. And these also need to be protected and governed for. 
And these are human rights which the whole world can get behind, including the right to work, employment, favorable pay, standard of living, education, protection of authorship. So how can the world think about this topic of governance of AI from a holistic perspective and bring along the countries who have more urgent pressing needs on the economic side, on the development side, and take a holistic approach, not just geographically to 193 countries, but also holistically from a governance perspective? So if you allow me one more kind of interpretation here, we’re talking a lot about regulation and legislation in this panel. But governance can also involve other types of policies, not just legal regulation, not even just ethical standards or technical standards. They can also involve other kinds of policies that impact incentives, from taxation, trade policy, intellectual property policy, which also, by the way, is one of the socio-economic cultural rights for authorship. So how can the conversation be shaped in a way that governance can be thought of holistically across the different parts of the UN’s work, not just what is commonly thought of as human rights, the civil and political rights, but also the economic, social, cultural rights, and the sustainable development goals? And how can all of these other countries, who, when they hear human rights, think it doesn’t matter if we don’t focus on the economic side, actually embrace a concept of governance? We hear a lot about AI accelerating the SDGs, but how is that actually going to happen? We can talk about productivity tools on Office Copilot 365, that’s great for a lot of office workers in the West, but how does that actually put bread on the table? How do we get the climate resilient agriculture that people keep talking about? 
Does that actually involve different forms of economic policy like prizes or subsidies or even incentive-creating policies, like in the COVID challenge trials where the vaccine was developed in a matter of weeks instead of normally years? How does that happen to really get material impact on the SDGs? So I would say slow is fast in this. To get a global 193 countries agreeing, they have to see an interest in it, and to see an interest in it, we have to think of human rights holistically, to include the whole Universal Declaration of Human Rights, not just a sub-part of it, and to get to that, we need a holistic approach to policy which doesn’t focus only on regulation, but also embraces other kinds. And that’s why, when the Secretary General put together his high-level advisory body on artificial intelligence, which will look at governance, there was an explicit choice to make it interdisciplinary, include voices from all regions, genders, but also from all disciplines, including digital economy, including anthropology, to look not just at the individual impacts on individuals’ human rights, but also the societal impacts on individuals’ social, economic, and cultural rights. Thank you.

Moderator – Moritz Fromageot:
Thank you very much, dear audience and dear panel. I would hand it back to Peggy for wrapping this up very quickly.

Moderator – Peggy HICKS:
Thanks. Quentin already helped me out with that assist on the human rights side. But I do think it’s a crucial point, and one that we need to think about, is that human rights aren’t only civil and political rights when we use the words human rights: the digital divide, and what it means for people who are suffering from the lack of technology, is also a human rights issue that falls in the basket of economic, social, and cultural rights, as Quentin has described. But we have to get away from a terminology debate and move forward on the issues that we’ve discussed today. I see the facilitator for the Global Digital Compact here as well. There’s a lot of work to be done in building that global framework, but it does need to be done across sectors and across rights, but also across communities, countries, and people. And that means finding the ways to bring in all of those who are going to be affected by these choices in a much more effective way. And that goes to the second part of the question that you asked, which is, how do we make sure that the resources are available to do it? I think that’s a fundamental piece here, that we need investment in this global public good. And that does mean, and I think Owen even brought up, the need for that social infrastructure to be built. And that means public compute resources that will allow the researchers to be able to do the research that we all know we need them to do. So it’s really looking at those questions and finding a way that we can make sure that those who are making the profits out of this are also helping us potentially to invest in the ways that we can make sure that this opportunity side of artificial intelligence is there for all of us. Thank you all so much for joining us. Thanks to the wonderful panel that we’ve had with us today. And I hope everybody enjoys the rest of the IGF. Thank you.

Amandeep Singh Gill

Speech speed

150 words per minute

Speech length

871 words

Speech time

348 secs

Audience

Speech speed

150 words per minute

Speech length

1561 words

Speech time

623 secs

Bengus Hassan

Speech speed

199 words per minute

Speech length

1673 words

Speech time

503 secs

Gabriela Ramos

Speech speed

166 words per minute

Speech length

2066 words

Speech time

745 secs

Moderator – Moritz Fromageot

Speech speed

157 words per minute

Speech length

484 words

Speech time

185 secs

Moderator – Peggy HICKS

Speech speed

200 words per minute

Speech length

2122 words

Speech time

636 secs

Owen Larter

Speech speed

235 words per minute

Speech length

1799 words

Speech time

460 secs

DC-CIV Evolving Regulation and its impact on Core Internet Values | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sรฉbastien Bachollet

The internet, a network of networks, is a global medium that operates on open protocols such as TCP, IP, and BGP. It is free from centralized control and promotes open and interoperable communication worldwide. This highlights the positive aspect of the internet, emphasizing its ability to connect people and facilitate the exchange of information.

However, financial challenges are impacting internet freedom. As the world economy struggles to recover, what was previously offered for free on the internet may no longer make financial sense for companies providing services. This negative aspect raises concerns about potential limitations and restrictions that may arise due to economic constraints.

In response to these challenges, governments are actively involved in drafting and implementing regulations concerning internet governance. Notable examples include the UK’s Online Safety Bill, the Australian Online Safety Act, the European Digital Services Act and Digital Markets Act, and the US Kids Online Safety Act. This neutral argument suggests that governments are taking steps to ensure the safety, security, and responsible use of the internet.

Amidst these discussions, defenders of the core values of the internet emphasize the importance of preserving certain principles. The Dynamic Coalition on Core Internet Values promotes permissionless innovation, which allows for the unrestricted development and deployment of new technologies and services. This is seen as a positive stance that supports the notion of an open and innovative internet.

Overall, the analysis illustrates the complex nature of the internet and its evolving landscape. While the internet offers open and interoperable communication, financial challenges pose a threat to internet freedom. Governments are actively intervening through regulatory measures, and defenders of internet values highlight the importance of preserving the core principles that have contributed to its success. The promotion of permissionless innovation adds another layer to the discussion, highlighting the need for ongoing innovation and development in the digital realm.

Audience

The provided summary examines various arguments and viewpoints concerning the security, reliability, and anonymity of the internet. It highlights the increasing dependence on the internet and the rising number of security breaches, emphasising the need to enhance its security and reliability.

On the other hand, the summary acknowledges the struggle with the need for identification on the internet. While identification is necessary for certain purposes, the concept of anonymity is also seen as significant. It argues that anonymity should be considered a fundamental value of the internet and advocates for the development of a standard that can combine both security and anonymity.

Furthermore, the summary supports the creation of a trusted service that promotes secure anonymity on the internet. The benefits of such a service are not explicitly stated; however, it can be inferred that it would provide a secure platform for users to maintain their privacy online.

The summary also brings attention to the concept of communications metadata security, suggesting that it may be a more accurate term than anonymity. It explains that the term “anonymity” can be misleading and proposes that the focus should be on protecting the security of communications metadata.

In addition, the summary mentions the use of Tor for accessing services like Facebook, highlighting the advantages it offers. It allows users to have control over the level of communication metadata they reveal, ensuring their privacy and security online.

Furthermore, it discusses the network layer of the internet, emphasising that identification is not automatically performed at this level. This suggests that users have the ability to choose whether or not to disclose their identity.

The summary concludes by suggesting that it might be beneficial, both in a societal and platform context, to have the option of identifying oneself at a different layer of the internet. This implies that users should have the flexibility to choose when and how they reveal their identity online.

Overall, the extended summary provides a comprehensive overview of the arguments and viewpoints regarding internet security, reliability, and anonymity. It touches on the perspectives of enhanced security, the need for anonymity, the concept of communications metadata security, and the importance of user control over identification.

Lee Rainie

The analysis highlights the issue of internet fragmentation and its impact on various aspects of society. One significant finding is that a staggering 2.6 billion people currently lack access to and use of the internet. This statistic emphasizes the importance of addressing the digital divide and ensuring equal access to the internet for all individuals.

The impact of the internet is further explored through four major revolutions: home broadband, mobile connectivity, social media, and artificial intelligence. Home broadband revolutionised the internet by making it an essential utility in people’s lives. Mobile connectivity then increased the speed of information access and communication. Social media expanded social networks, connecting people globally. Lastly, the emergence of artificial intelligence brought both promising possibilities and fears.

However, it is important to acknowledge that these internet revolutions have also led to social, cultural, and legal fragmentation. Different experiences have emerged across various segments of society, including differences based on class, gender, age, race, ethnicity, religious affiliation, awareness, optimism, and individual behaviours. These disparities highlight the need to address inequalities and ensure that the benefits of the internet are accessible to everyone.

Another significant finding suggests that individuals often perceive themselves as managing the internet better than society as a whole. This perception may stem from personal proficiency or satisfaction with their own internet usage. However, this self-perception does not necessarily align with the overall societal impact of the internet, which may still face challenges and inequalities.

In terms of technology policy, the analysis reveals a growing trend towards partisanship. Previously, there may have been a consensus on issues like anonymity, but that consensus seems to be diminishing. Signs of polarization are evident in the dynamics of populist mainstream parties in Europe. This partisan shift in tech policy raises concerns about the ability to reach effective and inclusive regulations and policies.

The analysis concludes by suggesting that the current dynamic in tech policy is fluid and unsettled. Discussions surrounding technology and its regulation suggest an environment where things are constantly evolving and difficult to settle. This observation underscores the complexity and challenges in shaping a cohesive and inclusive tech policy framework.

Overall, the analysis highlights the need to address internet fragmentation, overcome inequalities caused by the different experiences of internet revolutions, and find ways to address partisan tensions in tech policy. By tackling these challenges, policymakers and society can work towards a more equal, inclusive, and beneficial internet ecosystem for all.

Alejandro Pisanty

Regulation proposals in the context of the internet have raised concerns regarding their potential infringement on the core values of the internet. It is believed that these regulations may have a negative impact on the technical principles on which the internet was built. This concern stems from the assumption that such core internet values are primarily rooted in these technical principles. The sentiment towards these regulation proposals is generally negative, highlighting the need to carefully consider their potential consequences.

One of the main concerns regarding regulation proposals is the potential reduction in the universality of the internet’s reach. There is a risk that these regulations may limit the accessibility and availability of the internet, thereby undermining its global reach. Additionally, it is argued that these regulations may also lead to a reduction in interoperability, making it more difficult for different systems and platforms to effectively communicate with one another.

In order to enhance security, there is a suggestion that additional devices might be necessary for stronger authentication or identification. This highlights the need for ongoing technological advancements to address the evolving challenges of cybersecurity and digital identity verification.

However, it is crucial to implement regulations carefully in order to strike a balance between enforcement and the preservation of core internet values. The focus should be on finding a middle ground that allows for the regulation of the internet while ensuring that the underlying principles that shaped its development are not compromised. This approach is considered constructive, as it acknowledges the importance of regulations while also emphasizing the need to safeguard the fundamental values that the internet was built upon.

The topic of trust establishment in the internet also arises, with questions raised about the magnitude of architectural changes that may be required. There are concerns about the scalability of trust systems and whether they can effectively meet the demands of a growing global network. Alejandro Pisanty specifically highlights Estonia’s trust system as a brilliant example but potentially limited in its scalability. This insight offers valuable considerations for future developments in trust establishment within the internet infrastructure.

Furthermore, discussions around internet governance touch upon the significance of privacy and online identity. It is argued that individuals should have the choice to identify themselves online without being compelled to disclose personal identification data. This highlights the importance of striking a balance between privacy protection and the necessary security measures in place.

The case of AFRINIC, a regional internet registry, brings attention to the challenges faced by private entities registered in certain jurisdictions. AFRINIC’s position as a private entity registered in Mauritius has resulted in numerous court cases, sparking discussions about whether technical organizations that govern the internet should be accorded a status akin to that of intergovernmental organizations. This observation raises important questions about the governance structure and legal frameworks surrounding the internet.

In conclusion, regulation proposals for the internet have generated concerns about potential infringements on the core values and principles of the internet. Discussions revolve around the need to carefully implement regulations to preserve the internet’s universality, interoperability, and core values. The importance of stronger authentication and identification is highlighted, but considerations must be made for the impact on privacy and choice. Trust establishment also comes under scrutiny, with reflections on scalability and architectural changes. The legal status of technical organizations governing the internet is explored, emphasizing the need for effective governance structures in addressing the complexities of the digital age.

Iria Puyosa

The analysis considers various perspectives in the debate on content moderation in encrypted apps and the transnational flow of data. It raises concerns about ill-designed regulation that could potentially disrupt the internet. The argument is that rushed regulation may have unintended consequences and negative effects. This highlights the need for careful planning and comprehensive consideration.

Another important point raised is the focus on harmful content within encrypted message apps. While much of the public conversation revolves around managing harmful content in these apps, research shows that the majority of content in messaging apps is actually useful and positive. This challenges the notion that harmful content is pervasive and questions the urgency of regulation.

Furthermore, the analysis presents an argument against breaking encryption solely for content moderation purposes. It suggests that there are alternative ways to address harmful content without compromising encryption. Breaking encryption in messaging apps could have broader implications and potentially undermine encryption on the internet as a whole. This negative sentiment emphasizes the importance of considering long-term effects on digital security and privacy.

The analysis also emphasizes the significance of considering the transnational flow of data in policy making. Regulations implemented in one country can significantly impact other countries. The extraterritorial nature of data flow is often overlooked in policy discussions. This neutral sentiment highlights the need for a global approach and collaborative efforts to ensure coherent and harmonized regulations that do not have unintended negative consequences on cross-border data flow.

Additionally, the analysis highlights the importance of respecting human rights, the rule of law, and internet integrity. It suggests that solutions should be found that align with these principles. Balancing concerns while maintaining the core principles of the internet is crucial.

The analysis recognizes the need for technical expertise in policy discussions. It emphasizes the importance of individuals with the knowledge and skills to solve problems and implement effective solutions. This observation underscores the intersection of technology and policy and the value of diverse expertise in shaping regulations.

To prevent unintended consequences, the analysis stresses the necessity of input from civil society and a thorough understanding of human rights before implementing regulations. Involving a broad range of voices and perspectives can help avoid exacerbating existing problems or creating new ones.

In conclusion, the analysis highlights the complexities and various perspectives within the content moderation debate in encrypted apps and the transnational flow of data. It underscores the need for well-designed and thoroughly considered regulations that do not compromise internet integrity or undermine encryption. Respecting human rights, the rule of law, and involving technical expertise and civil society in policy discussions are also crucial. A balanced approach is needed to address concerns while upholding the principles and integrity of the internet.

Nii Quaynor

The African Network Information Centre (afriNIC) has faced significant challenges in Mauritius due to local legislation. These challenges have hampered afriNIC’s ability to develop effective policies and have caused issues with Regional Internet Registry (RIR) transfer policies. This legislative impact has had a negative effect on afriNIC.

Despite these challenges, afriNIC’s multi-stakeholder approach within the Policy Development Process (PDP) has remained resilient. Draft proposals aimed at hijacking resources have failed to reach consensus, demonstrating the effectiveness of the multi-stakeholder approach in preventing such attempts. Although participation in the PDP has been hindered, leading to the recall of a co-chair, the multi-stakeholder approach has overall been positive for afriNIC.

One argument put forth is that internet identifiers should be managed as public goods, rather than treated as property. Transfer policies in other regions have considered resources as property, but not necessarily for the end user. It is argued that managing internet identifiers as public goods is crucial for their equitable distribution and accessibility.

afriNIC has also faced challenges regarding non-compliance from a member. This member, who had received significant resources but refused to comply with afriNIC’s requirements, had their resources recalled as a consequence. This non-compliance has created further difficulties for afriNIC.

Another concern is the need for stronger protections and governance for afriNIC. Despite plans to become a decentralized organization, this transition remains incomplete. Additionally, afriNIC’s attempts to seek diplomatic protection have not been successful. These factors highlight the need for improved security measures and governance within afriNIC.

Commercial disputes between non-profit organizations and members have also arisen as a challenge. It has been observed that disputes can occur, raising questions about the effectiveness of the current legal system in resolving such issues.

Furthermore, disapproval has been expressed towards a member who refuses to be disciplined and has abused the legal system by generating multiple court cases. This member has violated rules and even attempted to bribe individuals, undermining the integrity of afriNIC and placing further strain on the legal system.

Lastly, concerns have been raised about business misuse and the potential hijacking of numbers by organizations lacking proper infrastructure. Some organizations have been found to be misusing resources and generating numerous court cases without the necessary business infrastructure. This raises ethical concerns and questions about the proper allocation of resources.

In conclusion, afriNIC has faced various challenges, including legislative barriers, non-compliance from members, commercial disputes, and concerns over business misuse and number hijacking. Despite these challenges, afriNIC’s multi-stakeholder approach has shown resilience in the Policy Development Process. However, there is a need for stronger protections, improved governance, and a more efficient legal system to effectively address these issues.

Vint Cerf

The analysis covers a wide range of topics related to internet security, privacy, anonymity, accountability, and the role of technology in filtering harmful internet behaviour.

One area of discussion is the side effects of internet security measures. While governments have enacted laws to protect internet users, there is concern that these laws can be used to inhibit freedom of speech. It is argued that internet security measures have unexpected consequences and may not always achieve the desired outcomes.

The importance of strong authentication is emphasised as a means of preventing unauthorised actions and impersonation. Strong authentication, such as end-to-end cryptography, is seen as a way to protect user information and maintain confidentiality.
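The flavour of strong authentication discussed above can be illustrated with a minimal challenge-response sketch. This is a simplified shared-secret scheme (not the end-to-end public-key cryptography the discussion refers to), and all function names here are invented for illustration; the point is that a party proves knowledge of a key without ever sending the key itself, which is what distinguishes strong authentication from a reusable password.

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    # The verifier sends a fresh random nonce; reusing one would allow replays.
    return secrets.token_bytes(32)

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    # The prover demonstrates knowledge of the key without transmitting it.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, response)
```

An eavesdropper who records the exchange learns only a one-time nonce and its keyed hash, neither of which lets them impersonate the key holder in a later session.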

Anonymity on the internet is also addressed, with some arguing that it can lead to harmful behaviour. Anonymity is believed to shield individuals engaging in bad behaviour and decrease the consequences for their actions, thereby encouraging harmful actions. However, others argue that mechanisms allowing for identity discovery should be tolerated, as accountability can help prevent harmful actions. The tension between anonymity and accountability is a significant consideration in this debate.

The limitations of technology, such as machine learning, in filtering harmful internet behaviour are highlighted. It is argued that technology fails to effectively filter harmful behaviour and that incorrect filtering can infringe upon individuals’ rights.

Certain situations, such as whistleblowing, are seen as necessitating anonymity. Whistleblowers rely on anonymity to protect their identity and ensure their safety, especially when exposing sensitive information.

The need for architectural changes to internet identity is also discussed. The current identifier provided by the internet, the IP address, is seen as insufficient for maintaining security and privacy. Estonia’s implementation of strong authentication for its entire population is cited as an example of the potential for significant changes to internet identity.

The importance of accountability over absolute anonymity is emphasised, while acknowledging the potential risks of identifying individuals by biometric measures. Privacy concerns are balanced against the need for accountability to prevent harmful actions.

Vint Cerf, a prominent figure in the field, argues that absolute anonymity may no longer be a core value that serves the interests of internet users. He also supports the inclusion of a multi-stakeholder perspective in policy formulation, believing it should be a normal practice for governments. The multi-stakeholder model of organisations like the Internet Governance Forum (IGF) is praised for ensuring robust policy-making regulations and engagement with governments.

The value of cryptography in data protection is highlighted, with examples of Google’s encryption practices and user-controlled data keys. However, arguments against the idea that data about citizens should be kept within national borders are presented. Keeping data within physical borders is seen as compromising reliability due to the lack of redundancy, while transborder data flows combined with encryption are seen as offering safe data storage options.
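The argument that transborder data flows combined with encryption offer safe storage can be made concrete with a toy sketch. This is deliberately simplified (a SHA-256 keystream stands in for a real authenticated cipher such as AES-GCM, which the Python standard library does not provide, and the names are illustrative, not any provider's API): when the user keeps the key, where the ciphertext is replicated matters far less than who holds the key.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: hash key||nonce||counter blocks. NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    # The nonce is public; only the key must stay with the user.
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

Replicating the opaque `blob` to data centres in several countries then adds redundancy and resilience without adding readers, which is the reliability argument made against strict data-localisation rules.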

The layering mechanism for communications metadata security is appreciated, drawing parallels with other elements of internet design such as the domain name system. The concept of user-choice in revealing identity is viewed positively and considered an important aspect of internet security.

The power of internet exchange points for connectivity is acknowledged, facilitating efficient connections between networks. However, concerns are raised about government-operated exchange points leading to unwanted surveillance if all traffic is required to go through them. It is suggested that cryptography could help secure encrypted traffic running through exchange points.

Furthermore, the challenges of maintaining exchange points and data centres in space are noted, due to the difficulties in accessing these locations and carrying out necessary maintenance.

Lastly, the critical importance of the internet in everyday life is recognised, with global surveys indicating a widespread unwillingness to give it up. The positive impact of the internet on various aspects of society is acknowledged.

In conclusion, the analysis explores complex and diverse perspectives on internet security and related issues. It highlights the need for a balance between security, privacy, anonymity, and accountability. The role of technology in filtering harmful behaviour is examined, and the importance of strong authentication and architectural changes to internet identity is emphasised. The multi-stakeholder approach in policy-making, the value of cryptography in data protection, and the challenges and benefits of internet exchange points and space-based infrastructure are also discussed. Overall, the analysis sheds light on the multifaceted nature of internet security and the ongoing discussions surrounding its various dimensions.

Deborah Allen Rogers

The extended summary discusses the effective e-governance models developed by Finland and Estonia. According to Deborah Allen Rogers, who works with the digital fluency lab Find Out Why, these solutions often go unnoticed. She suggests that promoting learning from and collaborating with Finland and Estonia on their e-governance models is important, as they have been implementing them for about 20 years and have answers to many challenges faced by Europe and the United States in e-governance.

The summary also highlights the crucial role of cryptography in protecting human rights, personal rights, and privacy. It is considered a safe and scalable method for safeguarding information.

Furthermore, the significance of scale in technology is emphasized. Deborah Allen Rogers points out that smaller societies can serve as test samples, and scaling their functional aspects has been successful. The CEO of X-Road, based in Finland, shares insights about their more conservative cultural context in scaling technology compared to Estonia. The summary also mentions that scale changes the concept of what can be done at the push of a button.

It is worth noting that Deborah Allen Rogers has previous experience with drastic transitions, having been a clothing designer during the shift of global manufacturing to China and during the AIDS pandemic, as well as being in New York during the 9/11 attacks. This experience adds credibility to her perspectives.

The functionality of societies is discussed, with Deborah pointing out the difference between highly governed and functional societies, like the Netherlands, and dysfunctional ones. The summary implies that dysfunctional societies may struggle in handling societal aspects effectively.

Finally, the summary emphasizes that the functionality of a society is more important than its size. This notion aligns with the SDGs of reducing inequalities and promoting sustainable cities and communities.

Overall, the extended summary provides a comprehensive overview of the main points, arguments, and evidence discussed in the original text. It also includes Deborah Allen Rogers’ insights and experiences, adding depth to the analysis.

Shiva

Internet exchange points (IXPs) are critical infrastructure that facilitate the exchange of internet traffic between different networks. However, there are concerns about the potential impact of IXPs operating on a commercial business model on internet neutrality. Some IXPs operate as for-profit entities, and this could potentially lead to favouritism or discriminatory practices, impacting the principle of net neutrality.
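The neutrality concern can be made concrete with a minimal conceptual model of an IXP route server (this is not real BGP, and the class and method names are invented for illustration): in a neutral design, every member's route announcements are redistributed to every other member on equal terms, and favouritism would show up precisely as a deviation from this symmetry.

```python
class RouteServer:
    """Toy model of a neutral IXP route server (not real BGP)."""

    def __init__(self) -> None:
        self.members: set[str] = set()
        self.routes: dict[str, set[str]] = {}  # prefix -> announcing members

    def join(self, member: str) -> None:
        self.members.add(member)

    def announce(self, member: str, prefix: str) -> None:
        assert member in self.members, "only members may announce routes"
        self.routes.setdefault(prefix, set()).add(member)

    def table_for(self, member: str) -> dict[str, set[str]]:
        # Neutrality: every member sees the same routes, minus its own.
        return {prefix: origins - {member}
                for prefix, origins in self.routes.items()
                if origins - {member}}
```

A commercially motivated operator could, for instance, filter `table_for` differently per member; auditing that the same table is served to everyone is one way the neutrality principle becomes testable.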

The argument against commercial IXPs is rooted in the belief that when financial interests are prioritized, the impartial exchange of internet traffic may be compromised. This sentiment is reflected in the negative sentiment associated with this argument. The supporting facts suggest that some IXPs do indeed operate on a commercial basis, which raises concerns about the potential erosion of internet neutrality.

Another concern related to IXPs is government regulation. There is a fear that governments could use their regulatory powers to manipulate or control the internet through IXPs. This negative sentiment draws attention to the potential misuse of IXPs as tools for political censorship or surveillance. The related sustainable development goal of SDG 16: Peace, Justice and Strong Institutions highlights the importance of preserving a free and open internet.

On a more neutral note, there are ongoing discussions and considerations for the design of interplanetary internet exchange points. Given the increasing interest in space exploration and the possibility of future interplanetary communication networks, the concept of interplanetary IXPs is being explored. However, limited information is provided regarding this topic, suggesting that more research and development is required.

In conclusion, concerns about the impact of commercial IXPs on internet neutrality and the potential for government control highlight the need for careful regulation and oversight in the management of IXPs. The concept of interplanetary IXPs adds an intriguing dimension to the discussion, emphasizing the evolving nature of internet infrastructure as technology and human exploration progress.

Joseph

The use of Virtual Private Networks (VPNs) is a topic that sparks controversy. VPNs can bypass internet restrictions, granting users access to content that may otherwise be blocked. This capability has both positive and negative implications. On one hand, it allows individuals to browse the internet freely, evade censorship, and access information that may be crucial in certain circumstances. On the other hand, this freedom can be misused, enabling fraudulent activity and the abuse of sensitive data.

The argument against the use of VPNs centres around the potential for misuse and harm. Those raising concerns argue that VPNs provide a cloak of anonymity that can enable cybercriminals to carry out illegal activities, such as hacking, fraud and identity theft. By masking their IP addresses and encrypting their online activities, these criminals can disguise their tracks, making it difficult for law enforcement agencies to trace and apprehend them. This creates a significant challenge for cybersecurity and poses a threat to the security of individuals and organisations.

However, it is important to note that VPNs have legitimate applications as well. Many individuals and organisations, such as journalists, activists and businesses, rely on VPNs to protect their sensitive information and maintain privacy. For these users, VPNs provide a layer of security by encrypting their data, making it difficult for hackers or prying eyes to intercept and exploit it. In this context, VPNs are seen as valuable tools for safeguarding data and ensuring the protection of individual content on the internet.
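The privacy property described above can be sketched with a toy tunnel model (illustrative names, and a trivial XOR standing in for real encryption): an on-path observer sees only that the user is talking to the VPN endpoint, while the true destination and payload travel inside the encrypted envelope and are recovered only at the endpoint.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def tunnel(inner: Packet, client_ip: str, vpn_ip: str, key: bytes) -> Packet:
    # Serialise and "encrypt" the whole inner packet (toy XOR, not a real cipher).
    blob = f"{inner.src}|{inner.dst}|".encode() + inner.payload
    enc = bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))
    # The outer header names only the VPN endpoint, hiding the real destination.
    return Packet(src=client_ip, dst=vpn_ip, payload=enc)

def detunnel(outer: Packet, key: bytes) -> Packet:
    # Only the VPN endpoint, holding the key, recovers the inner packet.
    blob = bytes(b ^ key[i % len(key)] for i, b in enumerate(outer.payload))
    src, dst, payload = blob.split(b"|", 2)
    return Packet(src.decode(), dst.decode(), payload)

def observer_view(p: Packet) -> tuple[str, str]:
    # An on-path observer learns only the outer addresses.
    return p.src, p.dst
```

This same structure explains both sides of the debate: the observer's blindness is what protects journalists and activists, and it is equally what frustrates law enforcement tracing abuse.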

The need for protective measures for individual content on the internet is a relevant concern in today’s digital age. As more and more information is stored and shared online, the risk of cyber threats and data breaches increases. This issue is closely linked to topics of internet security, cyber safety and data protection. With the rise of cybercrimes and the increasing value of personal data, it is crucial to find a balance between protecting privacy and ensuring the safety of individuals and society as a whole.

In conclusion, the use of VPNs is a contentious matter. While VPNs can provide internet users with greater freedom and privacy, their potential misuse raises legitimate concerns. The debate surrounding VPNs highlights the importance of balancing individual privacy rights with the need for cybersecurity measures. Solutions that address these concerns while preserving internet accessibility and protecting sensitive data are crucial for tackling this complex issue.

Jane R. Coffin

This extended summary provides a more detailed overview of the main points, arguments, evidence, and conclusions present in the provided text. It also includes noteworthy observations and insights gained from the analysis.

1. Importance of funding small networks in the United States:
- The text highlights the importance of funding small networks, specifically in rural and underserved areas.
- It recognises the lack of connectivity in certain areas in the US and the need for creative and innovative funding solutions.
- The argument is strongly in favour of funding small networks to bridge the digital divide and reduce inequalities in access to the internet.

2. Open connectivity and fewer regulations:
- There is a call for open connectivity and the need to reduce regulations to foster innovation.
- The text mentions the importance of keeping internet exchange points open with fewer regulations.
- The argument is positive and emphasises the benefits of promoting open connectivity for industry, innovation, and infrastructure development.

3. Concerns about the erosion of core internet values:
- The text raises concerns about the erosion of openness, interoperability, global connection, and permissionless innovation.
- Certain countries and international organisations are observed attempting to regulate internet exchange points.
- The argument expresses a negative sentiment towards the potential threat posed to the core values of the internet.

4. Advocacy for community networks and competition in connectivity:
- The importance of community networks for building networks that serve the community, with the community, and by the community is emphasised.
- The text highlights regulations that prohibit community networks and stresses the need for more network diversification and competition in connectivity.
- The argument is in favour of community networks and advocates for their importance in reducing inequalities in access to the internet.

5. Need for inclusive, multi-stakeholder policymaking and regulation:
- The text argues for inclusive and multi-stakeholder inclusion in policymaking and regulation.
- It suggests that neglecting smaller networks, internet exchange points, and other stakeholders may lead to forced centralisation.
- The sentiment is negative towards the exclusion of certain groups and emphasises the importance of diverse perspectives in regulatory decision-making processes.

6. Observations on unintended consequences in policymaking:
- The text suggests that excluding civil society, the technical community, and academia from policymaking may lead to unintended consequences and forced centralisation.
- The negative sentiment arises from the potential negative impact of excluding certain stakeholders from decision-making processes.

7. The role of the Internet Governance Forum (IGF) and the multi-stakeholder model:
- The text highlights the obligation of the IGF and the uniqueness of the multi-stakeholder model in working with governments for better policy formation.
- The argument is positive, emphasising the need for collaboration between the IGF, governments, and other stakeholders to improve policymaking and regulation.

8. Possibility of exchange points in space with Low Earth Orbiting Satellites (LEOs):
- Relevant research funded by the Internet Society Foundation explores the possibility of exchange points in space using LEOs.
- The argument remains neutral, presenting this as an area of exploration for future developments in internet infrastructure.

9. Issues surrounding control over traffic in LEO constellation networks:
- The complex nature of control over traffic in LEO constellation networks is acknowledged.
- Complications arise in negotiating cross-border connectivity issues with transmissions between countries.
- The argument takes a negative stance towards a potential concentration of control in the hands of a single entity or company.

10. Acknowledgement of different types of internet exchange points:
- The text acknowledges that some countries require traffic monitoring at exchange points.
- It recognises the existence and role of both neutral, bottom-up internet exchange points and government-managed ones.
- The sentiment is neutral, neither positive nor negative.

11. Support for encryption and potential relevance of cryptocurrencies:
- The importance of encryption in protecting the privacy of internet traffic is acknowledged.
- While the support for encryption is positive, there is no significant interest expressed in cryptocurrencies at present.
- The sentiment is positive, emphasising the importance of privacy and security in internet communications.

12. Overall sentiment towards the future of the internet:
- The analysis reveals a positive sentiment towards keeping the internet open, secure, and globally connected.
- The text recognises the need for collaboration, open connectivity, and innovative funding solutions to bridge the digital divide and reduce inequalities.
- There is a strong emphasis on the core values of the internet and the importance of multi-stakeholder involvement in policymaking and regulation.

In conclusion, the text highlights the importance of funding small networks, the need for open connectivity, and concerns about the erosion of core internet values. It advocates for community networks, competition in connectivity, and inclusive policymaking to avoid forced centralisation. The role of the Internet Governance Forum and the multi-stakeholder model is recognised, and potential developments in internet infrastructure, such as exchange points in space, are explored. Encryption and privacy also receive positive support. Overall, the sentiment emphasises the need to keep the internet open, secure, and globally connected.

Olivier Crepin-Leblond

The Dynamic Coalition, led by Olivier Crepin-Leblond, extends an invitation to individuals to join their year-round discussions. Notably, there is no requirement for a membership fee, making it inclusive and accessible to a wide range of participants.

The work of the Dynamic Coalition holds significance, as they will be creating a report based on their sessions. This report will be taken into account in the Internet Governance Forum (IGF) messages for the Kyoto meeting, emphasizing the recognition of the Coalition’s efforts and their valuable contributions.

The initiatives of the Dynamic Coalition align with two Sustainable Development Goals (SDGs): SDG 9, focusing on industry, innovation, and infrastructure, and SDG 17, emphasizing partnerships for goal achievement. This demonstrates the Coalition’s commitment to contributing to the global sustainable development agenda.

Overall, the Dynamic Coalition, under the leadership of Olivier Crepin-Leblond, provides an open platform for discussions and collaboration. Their dedication to producing a report that influences internet governance decisions highlights the importance of their work. Furthermore, by aligning their efforts with key SDGs, the Coalition showcases its commitment to contributing to global sustainable development goals.

Session transcript

Sébastien Bachollet:
Ladies and gentlemen, we’ll start in one minute, please. Thank you. OK, let’s go. My name is Sébastien Bachollet. I am in charge of taking care of this meeting, but I will not be the main speaker. Of course, you are joining the Dynamic Coalition on Core Internet Values on the topic of evolving regulation and its impact on core internet values. Core internet values comprise the technical and architectural values by which the internet is built and evolves, and the derived universal values that emerge from the way the internet works. So the internet is a global medium open to all, regardless of geography or nationality. It’s interoperable because it’s a network of networks. It doesn’t rely on a single application. It relies on open protocols such as TCP/IP and BGP. It’s free of any centralized control except for the needed coordination of unique identifiers. It’s end-to-end, so traffic goes from one end of the network to the other end of the network. It’s user-centric, and users have control over what they send and receive, and it’s robust and reliable. The Dynamic Coalition on Core Internet Values has held sessions at every previous IGF. And every year, there seems to be another challenge to one of the most basic core internet values: its unique weakness. In 2023, the world economy has not recovered from the challenges of previous years. What was free on the internet might no longer make sense financially for companies offering the service and might end up behind a paywall. What was free movement of information in the past might not be seen by governments as a good thing today. What was free connectivity might not be financially sustainable any longer. What was free might be blocked tomorrow for many reasons. On the one hand, there are calls from commercial operators such as telecom providers asking for a fair share of internet profits, which is gaining ground with some lawmakers. 
In addition to this commercial pressure, where the free mode of operation might no longer be the preferred mode of operation, recent years have seen a lot more regulation affecting the internet. Whether it is the UK’s Online Safety Bill, the Australian Online Safety Act, the European Digital Services Act and Digital Markets Act, or the US Kids Online Safety Act, regulation is being drafted and rolled out by many governments, very often for good reasons and good objectives, but it’s something we will see during this discussion. So not only is there a strong movement worldwide to implement some major structural change to the way the internet and internet services work, there is also a commercial interest from some to change the internet business model altogether. A few years ago, the Dynamic Coalition on Core Internet Values promoted permissionless innovation. These days, for many governments, this translates to the Wild West. Is this a fair assessment of the internet that we have been defending? Are the core values that gave the internet its freedom at risk? Regulation is now firmly back on the agenda. This session of the Dynamic Coalition on Core Internet Values will, again, bring world-class experts to discuss the internet we want, each bringing their unique experience to the table. I will briefly talk about our speakers, and I will leave them to present themselves; it will be shorter. Here on my right is Lee Rainie. Jane Coffin is with us. Nii Quaynor and Iria Puyosa are online. And Vint Cerf is with us. I would like to thank them very much, and give the floor, if you agree, to Lee to start the discussion. Lee, the floor is yours.

Lee Rainie:
Thank you, Sebastian. It’s wonderful to be here. I’m honored to be here. And really, my philosophy has been whenever you’re in the same room with Vint Cerf, you have to start by saying thank you. And I come to you from 24 years of doing research with the Pew Research Center about the social and political and economic impacts of the internet. I thought I was going to retire from Pew Research this past spring. And I flunked retirement. So I got a wonderful gig to continue on with a portion of the work at Elon University, which is in North Carolina in the United States. We’ve done a lot of work with them related to that. And I get the title professor in front of my name now. So my mother is smiling at me in heaven. And my children laugh at me a little bit less now. I wanted to start by saying the overlying topic here is fragmentation. So the first thing maybe to note in the sense of fragmentation is that there are 2.6 billion people who don’t have the internet and don’t use it. And so there is an enormous fragmentation at the heart of the social, political, cultural experience of the internet. So just noting that is an important scene-setter for this conversation. Over the course of my work at Pew, though, it was easy to spot four different revolutions that were occurring on our watch. And watch then the reckoning that came from those revolutions. There was a dynamic that has tightened up. There’s usually great enthusiasm at moment zero. And then the enthusiasm sometimes faded as the reality of things came out. So I want to also make sure that you understand I’m going to be talking about four social, cultural, and legal changes. These don’t really affect how people think about the underlying principles of the internet. They love it. You poll on the ideas of free, open, secure, interoperable. And you get unprecedentedly positive survey ratings about the principles that underlie what the master here built. 
What happens, though, is that once those principles collide with culture and law and people's own personalities, there are ways in which their enthusiasms begin to fade or their qualms begin to rise. So let me go through the four revolutions relatively quickly. The first one we saw, beginning in the late 1990s, was the rise of home broadband, which made people enthusiastic users of internet protocols because the internet became a utility in their life. It was not a plaything anymore. When you dialed up those modems, that was kind of a fun sound to hear. But when it became always on and at higher speed, people began to embrace it in the rhythms of their life. It changed the volume of information that was coming into their life. And you could see the incipient ways that they became enthusiastic about being content creators themselves. So it was democratizing. It was doing end runs around gatekeepers. There were ways in which new kinds of communities could be built around affinity and affiliation rather than localities and the physical proximity that people had to each other. And people just loved the idea that they could tell their stories without being shut down or without having to cajole a gatekeeper to allow them to tell their stories. And yet, right in those early days, there were early signs that while people liked that for themselves, they didn't necessarily like it for others who had different ideas. The medical community was one of the initial communities to sound alarms around mis- and disinformation. They were worried, from a gatekeeper sense, that people were doing end runs around their providers and getting second opinions and diagnosing themselves and things. But there was also concern that more and more misinformation and just bad information was getting out into the world. Dangerous actors early on began to figure out how to exploit these new tools for themselves. 
There was concern about the content that people, particularly children, were being exposed to. I came out of the world of journalism, too. So it was easy to see the warning signs of what the internet was going to be doing to mainstream journalism in the culture. So that was part of the backlash. Love at first, democratizing, but also concerns about some of the early ways in which it was playing through the culture. The second revolution was the mobile connectivity revolution, which changed the velocity of information into people's lives. All of a sudden, their phones became another body part and another lobe in their brain. And they loved that. They loved the always-on, always-available connectivity that they had with others. They liked being able to be reached by others. They liked the fact that the nature of their social networks was changing, even before social media really came to prominence. They could see more people in their lives and interact with more people. And they enjoyed that. But again, early enough in the arrival of that second revolution, they began to worry about the distractions it was bringing into people's lives, the way it was disrupting their attention flows, the way that they were always available to others. They liked it in some sense. They certainly liked it when they could do outreach to others. But they didn't necessarily like being always available to others. And it imposed new obligations on their lives. So again, there's this sort of push-me-pull-you, yin and yang dimension to the rise of this second revolution. The third revolution is social media, particularly when combined with the mobile connectivity revolution. 
It just put everything on accelerants: the relationships in their social networks, the size and scope of their social networks, their exposure to new information and people and ideas, the fact that they could share the adventures of their lives, and even the little things in their lives, very quickly with the push of a button. And they could like and affirm things that others were doing. That was incredibly exciting to people and changed the way that they reacted to media and lived their lives in a variety of ways. But then, relatively soon, began the first backlash wave about, well, what's this doing, particularly to younger children, and especially to girls, when the messaging that was coming at them was not necessarily affirming, or was showing them parts of life that they struggled to think they would ever have access to, and things like that. The business model of the companies themselves began to raise questions about, well, how much do they really know about me? And how much am I being targeted and manipulated or steered or things like that? Obviously, there were concerns about harassment and hate speech and threats and all kinds of things like that. And information warriors themselves were taking action in this space. The fourth and final revolution that I've been privileged to watch, which is unfolding in front of our very eyes and is the central topic of the day, is the artificial intelligence revolution. And clearly, people have very discriminating ideas about it. There are ways in which they think AI is doing wonders in their life. And they anticipate even more wonders in the future, for their productivity and things like that. But they're also worried about their jobs. And they're worried about bias and discrimination. They're worried about their own autonomy and ability to act. And they're worried about ethical applications. I heard a number here. I hope someone will fact check me on this if I'm wrong. 
There are at least 1,300 documented protocols of ethical AI that are now being circulated, and God knows how many more in more private channels. But there's a sense, a palpable fear, that these tools might turn bad or might be pulled in bad directions. So those are the four revolutions and the backlash. Each of them has affected people's lives. But I also wanted to talk for a minute about other ways that what I call fragmented souls are affected by these new environments, and again play through the social, cultural, and legal fragmentations that we're seeing. Everything that we've studied about those four revolutions shows that different groups have different experiences of the revolutions. The obvious ones that Pew measured every time something new happened were differences by class, differences by gender, differences by age, differences by race and ethnicity, and sometimes pretty significant differences by religious affiliation or non-affiliation. There are also differences by psychographics, the ways people's makeup affects their relationship to these new tools. First of all, especially when it comes to AI, awareness is an enormous determinant of how people think about it. The less people know, the more scared they are. And you can see how public education and other familiarization processes might ease things over time, but that's a big determinant now. But there are differences between optimists and pessimists, between those who trust and those who don't trust as their starting point with other individuals, extroverts and introverts, and a whole lot of other psychographics. Finally, just to make things confusing from a fragmentation sense, anybody that's trying to deal with this has to deal with the reality that different people act different ways in these environments. At one moment, the context is open and affirming, and I want these things in my life, and I would like them available to me. 
At another moment, I don’t want any access to me. I don’t want my data being gathered. I don’t want to be offered this transactional kind of thing. So there are ways in which you can’t even predict at the individual level at times whether people are going to like it or not like it, which makes lawmaking hard, which makes rollouts of new products and applications hard, and things like that. And the final one is the sort of big one, which is there’s an optimism gap that’s at the center of people’s thinking about the fragmentation we face. They think, each individual thinks, I’m doing OK in this environment. They like all of these revolutions for what they bring to their lives. But they also think everybody else is messed up by them. I’m OK. You’re not. So they think they’re doing fine, but the society is not doing well, and they have a split mind thinking about how to reconcile that in policy, in culture, in norms, and in technology. Thank you, Sebastian.

Sébastien Bachollet:
Thank you very much, Lee. I will give the floor to Jane now, please.

Jane R. Coffin:
Hello. For those of you that don't know me, my name is Jane Coffin. I've been rambling around the internet community and connectivity communities for about 25 years. I've been in government, industry, non-profits, and startups. My last startup was one that I didn't start up myself, but I was one of the key people working on it, helping to fund small networks, believe it or not, in the United States, because there are a lot of networks that are not being deployed in the rural, remote, urban, unconnected, un- and underserved areas. And it was specifically to take a look at how to fund those networks with creative, innovative funding, a.k.a. bringing what people call blended finance and impact investment back to the United States, where it probably should stay for a while, because there's a lack of connectivity and things have to change. And the regulations need to be loosened up a little bit in order for that to happen. During the 25 years that I've been running around, I've done a lot of work in what people call the Global South, though the Global South often doesn't call itself that. The common denominator is working in those places that are less connected and potentially had fewer regulations and policies, helping to bring some policy and regulatory sense in some areas, or helping to build up regulators, to bring in more open connectivity, which was always my goal. I was at the Internet Society for 10 years and spent a lot of time working on internet exchange points and community networks, which I'm going to focus on as some of the core-internet-values things that we need, related to something called invariants. The Internet Society put out a paper on internet invariants. And I want to read you the Wikipedia definition of invariant, if I can find it again: it's a constant, something that's not changing. 
And so some of the key internet invariants are openness, interoperability, being globally connected, and something that I think Vint coined as permissionless innovation. I'll call it innovation without permission. Those are critical things for building your internet community and building networks anywhere. But what we're seeing is some erosion of those key things: the openness, the interoperability, the globally connected part, which is that any endpoint of a network can connect to any other endpoint. That global interconnection is super important. Internet exchange points are a sign of some of these invariants because they bring networks together in a very neutral fashion to exchange traffic without a lot of rules. The rules are, of course, based in protocols that come out of the Internet Engineering Task Force and some other organizations like the IEEE, if you're doing wireless, sort of Wi-Fi connectivity at the IX. But those internet exchange points that we helped develop over time gave people a neutral grounding place to exchange traffic. They were often not regulated, and it's been quite something to work over the last 10 to 15 years to make sure that they weren't regulated and to keep them open. We've seen some erosion of that in different countries, and I'm not gonna name the countries or where some of this is coming from, even in international organizations, where they wanted to standardize the stack of equipment in IXPs, which could have created more challenges and hardened the architecture to a degree that there was less innovation when you're building the internet exchange points. The other connectivity medium that we were working with so closely, and that I've been working with in the last couple years as well on a different level, on financing them, are the community networks. You can call them municipal networks, open networks, structurally separated networks, where there are more networks riding over a baseline network that somebody else runs. 
But with community networks, you have permissionless innovation to just bring in what you'd like, from the community out. And we do see more regulation that prohibits community networks. I've been in international meetings where people said I was trying to stand up a terrorist network, and I thought, wow, okay, that's a whole new spin on what I'm trying to do. And it wasn't that, I should say; the expression we used to use was for the community, with the community, by the community. These are organic networks that are built out in places that have little to no last mile connectivity, or no competitive last mile and middle mile connectivity. So I would posit that when we keep seeing spectrum locked in, when we keep hearing people say, no, you can't have a different type of network that isn't an incumbent network, or designed a certain way, they're locking out innovation, but they're also locking out competition, and they're locking people out of connectivity at a cheaper price. So if we're talking about some of the core internet values of openness, interoperability, being globally connected, and innovation without permission, then internet exchange points, community networks, and working with brilliant technical people in a very innovative way, which is not always in a university setting, are where those values live. I've worked with a lot of people in the network operator groups, the NOGs, which I think a lot of people don't know about. The network operator groups around the world are some of the best places where you see technical expertise transferred to other people at what I call the local, local level. If you're talking about sustainability and building more internet infrastructure, it's not just people jetting in to say, you do this this way; it's more a question of, how do you work with local people to train local people for local connectivity? 
So I'm gonna stop there and just also say that, as I think Lee mentioned, there are some things that we're seeing with the DSA and with fair share, and by the way, I saw so much of this fair share issue 20 years ago. People were calling the internet "bypass" because it was bypassing the traditional telco networks. So for years and years in certain fora, people were locking out the internet. They didn't want IP-based networks in their countries because it was going around the toll booth of the old telco networks. Now I'm not anti-telco, full disclosure, I did work for a telco years ago, but there's room for everyone in this equation. And I'm gonna turn it back over to you, Sebastian.

Sébastien Bachollet:
Thank you very much, Jane. Very well articulated; I think it will be useful for the follow-up of this meeting. Now we have two people online. I would like to be sure that Nii, who will be the next speaker, and Iria are available online. And Nii, please take the floor.

Nii Quaynor:
Yes, I’m available.

Sébastien Bachollet:
Go ahead. Thank you, Nii.

Nii Quaynor:
Thanks very much for inviting me to share some views on the topic. I tend to think internet means fragments, so perhaps the fragmentation is elsewhere. I'll be speaking to how AFRINIC, Africa's regional internet registry, was affected by local legislation in Mauritius, and what impact this could have on regional internet registries. There's sufficient background information on the legal cases at the afrinic.net website, but take a look also at the assisted review. I intend to present that, though the legislative context is a factor, there were other real challenges, including RIR transfer policies, attacks on the policy development process, cyber bullying, legal denial-of-service attacks on the organization and also on individuals who dared speak. Misinformation was peddled, there was even cybersquatting of the RIR, community poisoning, and so on, and naturally that generated some internal governance challenges surrounding the resources. However, AFRINIC's core function of administering resources to operators and end users, according to community-developed policies, has so far held up very well. The good news is that the multi-stakeholder approach we practice in our PDP has been resilient, and several draft proposals to hijack resources did not reach consensus. Attempts to game participation in the PDP were also thwarted, and a co-chair was recalled for the first time. A brief history will put this in context. The proposal to establish it was made in '97, meetings in '98 in Cotonou and AFNOG 2000 endorsed the proposal, and AFRINIC itself was established around 2004 to 2005. It received endorsements and support from several governments and intergovernmental organizations; many African countries, the African Union, ICT ministers, the OIF (Francophonie), the E-Africa Commission, UNECA, the UN ICT Task Force, and many others supported it. So the need to have it established was unquestioned. 
The original idea was to establish it as an incorporated association not for gain in South Africa, but eventually the consensus was to develop a decentralized organization with headquarters in Mauritius and other operations in South Africa, Egypt, and Ghana. AFRINIC was blessed with generous financial resources from the government of South Africa and was actually incubated at the CSIR in Pretoria. And we proceeded to build a headquarters according to the consensus, with additional support from the government of Mauritius. And in Mauritius, we ended up establishing as a private company with membership bylaws. For a decade, the shared objective was clear, and it was to build the foundations of the internet in Africa. We lost this shared objective as we went along, and personal interests or self-interest began to take over. This began when AFRINIC received the last /8 of IPv4 in 2011, as per the global soft-landing policy. The pressures on the common objective started at this time, and transfer policies adopted by other regions raised the question of service versus property. These policies considered the v4 resources as property of the LIRs, but not of the end users on whose behalf the LIRs justified the resources. Given that people have voluntarily adopted the identifiers, we have responsibilities to manage them as public goods, not property. There were discussions on changing the scope of our function. Some said we are a mere bookkeeper, versus a registration service agreement to be complied with. The needs-based policy was questioned, and out-of-region use of IPs became an issue. Meanwhile, of course, the board got involved, in our case in resource allocation, which was a no-no. There was misappropriation of legacy v4 by founding staff, which has been addressed, with most resources recalled. The consensus we had had weakened, and the board got divided, resulting in community disagreements. We've had three CEOs, in 2004, '15 and '19, and none since 2022. 
In 2021, AFRINIC initiated an assisted review of resource members according to the RSA. The membership application has compliance requirements, where members shall do specific things, as well as consequences if a member is not compliant. In the review, some members accepted; some had forged documents. One member, who had received more than a /9 across four allocations in 2013, 2014, 2015 and 2016, refused to comply, saying AFRINIC is a bookkeeper and has no rights. But the member had signed the RSA; the member in question also has no ASN and no v6. AFRINIC followed the RSA and applied the consequences by recalling resources. The member did not seek arbitration, denied AFRINIC's right to assess his compliance, and started litigations. A commercial dispute had therefore erupted between the member and AFRINIC. There were 28 cases, with the member initiating 26 and AFRINIC only two. Eighteen of the cases were completed, with 12 set aside, four withdrawn by the member, and two null and void or settled by agreement. There were 11 injunctions, three stays of execution, four claims, and one contempt. The claims were to amend our register to make the person like a director, whereas he has not been elected; demanding $1.8 billion; demanding AFRINIC's unused v4 resources; garnishing the company's assets; claiming defamation; and so on. The cases seem frivolous and designed to overwhelm attention and financial resources and to stress governance. This member bullied community members with defamation suits in their countries if they dared mention a name on mailing lists. However, the substantive case on violations of the RSA by the member has not yet been heard. One of the other consequent cases damaged board quorum, and we could not appoint lawyers for court cases to defend AFRINIC, nor a CEO. A recent court order has appointed an official receiver to hold elections to restore governance at AFRINIC. In summary, someone saw a loophole and decided to harass the company and attack the weak part of the RIR system. This started with the review of compliance. 
Then we saw abuse of legislation in cumulative attacks in a capital-market economy. The member created a number of confusions, offering an alternate registry based on brokerage, and lots of social media misinformation. On the other hand, AFRINIC is well positioned on the substance; even the injunction on the transfer policy has completed as not granted. The multi-stakeholder PDP was strong enough to resist abuse of open participation. We have had support from all the RIRs, ICANN, ISOC, governments, members, and the community at large. We just had AIS 2023, organized by AFNOG and AFRINIC and hosted by ZADNA in Johannesburg, South Africa. We are organizing the community around what to do in the future, and we were privileged to receive video messages from Vint Cerf and Ambassador Amandeep Gill, the UN Secretary-General's Envoy on Technology. During the opening ceremony, the Deputy Minister of the Ministry of Communications and Digital Technologies, Philly Mapulane, did not mince words when he called the heist a neocolonial conquest. The v4, v6 and ASN resources are for internet development in Africa, and it would be difficult to change that purpose. AFRINIC did not complete the decentralized organization it planned. It could also not get the diplomatic protections it had sought. Ironically, AFRINIC went to Mauritius for business stability for a technology company, but is now going through litigation that comes from the capital market. We should not take the internet for granted, and should protect it for all. Thank you.

Sébastien Bachollet:
Thank you very much, Nii. Very interesting and useful, and I am sure that a lot of people in this room and around the world support you and the people who are trying to resolve the case of AFRINIC, because we all need AFRINIC. And now I will give the floor to, I guess it's Iria. Can you show yourself on the screen and take the floor, please? You need to open your mic because you are muted for the moment, as I can see. Yes. Go ahead, thank you.

Iria Puyosa:
Sorry about that. Thank you, thank you, Sebastian. I will go back to what Lee was saying at the beginning. We had a kind of wave of panics, of backlash, as he said, and now we are facing all those. We are at a moment where we are hearing a lot of voices saying, we need to regulate, we need to regulate fast, because something seems to be inflicting serious harm upon us on the internet. I'm concerned about these reactions and this demand for quick response, because most of the time regulations drafted under pressure are ill-designed, and they may break the internet. This is what we are concerned about at the moment. I believe that we need to do more research on the issues before us, define precisely what the problems are that we are trying to solve, not something so big that it is impossible to understand, and assess the trade-offs between different policies and whether there are suitable technical implementations for those policies. When we try to regulate too fast, maybe we lose that. In the research I conducted recently at the DFRLab, we were focusing not on the internet as a whole, but on messaging apps, in response to demands to regulate them, particularly to introduce content moderation in encrypted messaging apps. That was the call we were hearing here in the United States. People were concerned about disinformation and foreign influence operations. People were concerned about recruitment of terrorists and violent extremists, speech that may drive atrocities, and child sexual abuse material. Most of the claims had the idea that this is happening because these messages are encrypted, and so we cannot police the content and stop those harms. This is, let's say, a generalization, a simplification of the public conversation, but it's what we're hearing. 
So in our research we found that this is not the case. Most of the content we see in messaging apps has also been posted elsewhere, and much of it is useful and helpful for individuals, communities and society. But this thinking about harms is what dominates the public conversation. I will also point to the pressure we are seeing over the UK Online Safety Bill and the US Kids Online Safety Act, in which most of the pressure is: we need to find a way to moderate content in encrypted apps, because everything running there is negative for society, harmful for society. Part of the work we were doing at the DFRLab was trying to show that there was a variety of content with different purposes, most of it positive, but also that there are different ways to deal with the harmful content that does exist; we don't need to break encryption, we don't need to impose content moderation that would undermine encryption. So that is the focus of the recent research we are doing. In part, our conclusion is that one of the issues that sometimes drops out of the conversation is how these policies for the flow of data and the use of internet-based applications don't consider that this is a transnational flow of data, with extraterritorial scope that affects platform operations. So a regulation intended for one country could have profoundly negative effects in other countries in which the rule of law is not the norm. So the work we are trying to do is to find ways of addressing the problems existing on the platforms without breaking the fundamentals of their use, in this case without breaking encryption. We were focusing on messaging apps because, as we can see, if we go after encryption in messaging apps, sooner rather than later some people are going to say encryption is not needed on the internet at all. 
We need to get rid of it, they will say, because there are other harmful contents running on the internet. So this is pretty much what we are looking at at this moment. I see a peril for the internet as a whole if we let this conversation escalate into undermining, in this case, encryption; in another case, it could be another value, another core principle of the integrity of the internet as a space for communications. This shared concern about the pressure for quick regulation, regulation that is not well defined and not well designed, is part of what we are trying to bring into the conversation at the moment: trying to find solutions that ensure respect for human rights and the rule of law, within the principles of necessity and proportionality, without attacking the aspects we consider core for the functioning of internet-based communications and internet integrity.

Sébastien Bachollet:
Thank you very much, Iria. And now, last but not least, Vint Cerf, please.

Vint Cerf:
First of all, thank you for inviting me to join you in this session. I think all the preamble just tells you that many of the times when we try to fashion rules to make the system function in a way that's safe and secure, we often end up with unexpected side effects, and you've just heard some examples of them. I think what's happened over the course of the last decade or so is that the openness of the internet, which was relatively safe, was a consequence of the people who were using it. In the very early part, the people who used it were the people who were building it. And for the most part, they didn't have any interest in destroying it or abusing it. They just wanted to make it work. But as time has gone on and as it has become commercially available, more and more of the world's population have access to this. And their motivations are not exactly the same as what the original engineering teams had in mind. They're interested in using the internet for their own purposes. There's nothing necessarily wrong with that. I mean, business wants to use the internet in order to improve business, to grow their businesses. But there are people on the internet who would like to exploit their ability to amplify their voices, to amplify their messages, to deliver malware, phishing attacks, or denial-of-service attacks, whatever else is motivating them. And governments have, over the past decade or so, recognized that these hazards are beginning to arise out of whatever motivations. And so they try to enact laws that will protect people using the internet. And that's also an understandable motivation. I must admit to you that there are some countries that are more interested in protecting the regime than they are in protecting the citizens. 
Interestingly enough, the difficulty is that the same mechanisms that might be used to protect the citizens are also useful for inhibiting legitimate freedom of speech or other kinds of activities that many of us would consider reasonable. And so we now have a conundrum, which is that in our interest in protecting the safety and security and privacy of the internet, we may interfere with our ability to hold parties accountable for the bad behaviors that they exhibit on the network. And that is threading the needle, in some sense. Perhaps those of us who live in democracies will have to recognize that authoritarian governments will in fact use the tools that we would argue are needed to imbue citizens with rights to instead inhibit those rights. And I'm not sure that we have the ability to prevent that from happening. What that means is that the internet will not be the same everywhere that we look. You see this happening where internets get shut down from time to time because the regime believes that it either is necessary to protect the regime, or they may even believe that it's necessary to protect citizens from harmful misinformation and disinformation. This leads to a zeal in the legislative corridors to pass laws intended to protect people's interests. And let me just set aside the laws that are passed to protect the interests of the regimes and focus on the more democratic environments. What can happen, however, is that the intent of those laws may be laudable, no pun intended, but they may also have side effects. So one possible example is that if the law requires a 24-hour response for the removal of harmful content, it may turn out to be literally impossible. To cite one statistic that you're all familiar with: the YouTube application at Google receives somewhere between 400 and 500 hours of video per minute uploaded into the system. 
I have no idea how many hours of video are exported per minute by users who are trying to download content. It’s not possible for that content to be vetted manually. We don’t have enough people to do that. And so we rely on technical means, machine learning mechanisms, which we all know are imperfect. And so not only will they not work 100% of the time, but they won’t catch 100% of the problems. And they may catch things that aren’t problems but look like problems because the algorithms don’t know the difference. Asking a company the size of Google to do something is one thing, but asking a small and medium-sized enterprise to carry out the same kind of filtering may inhibit that small enterprise from ever existing, let alone growing. So we have these undesirable side effects of well-intended laws that may prevent us from building the internet that we all would like to have. Someone also mentioned earlier, I guess it was Jane, that there were laws passed, in the US anyway: telcos that didn’t want competition from community networks were able to get laws passed in the States to inhibit the building of community networks on the grounds that if a municipality wanted to build a network, it was the government interfering with freedom, competing with private enterprise. That ignored the fact that a typical arrangement would be that the community would actually have a contract with a private entity to go build the municipal network and operate it, but that was sort of ignored in their zeal to argue the other case. So I’m actually quite worried that these are not simple problems to solve, and that at the Internet Governance Forum, where we’ve spent years literally contemplating some of these problems, we have a kind of responsibility to try to help the legislators and the regulators come to reasonable conclusions about protecting human interests, while at the same time, recognizing that there are responsibilities associated with the use of the Internet.
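The scale claim above, that manual vetting is impossible, can be checked with simple arithmetic. A rough sketch, assuming the upper end of the cited 400-500 hours-per-minute figure and idealized reviewers watching at normal speed:

```python
# Back-of-envelope check (not an official figure): how many humans would
# real-time manual review of YouTube-scale uploads require?
upload_hours_per_minute = 500                            # upper end of the cited range
video_minutes_per_minute = upload_hours_per_minute * 60  # minutes of video per wall-clock minute

# A reviewer watching at 1x covers one minute of video per wall-clock minute,
# so this many people must be watching simultaneously, around the clock:
reviewers_watching = video_minutes_per_minute

# Covering 24 hours with 8-hour shifts roughly triples the headcount,
# before accounting for breaks, appeals, or re-review:
headcount = reviewers_watching * 3

print(reviewers_watching, headcount)  # 30000 90000
```

Even this idealized lower bound, tens of thousands of full-time reviewers for a single platform, is why the text turns to machine-learning filters and their error rates.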
In a previous session, it occurred to me to remind people about the social contract, and Rousseau’s observation that along with safety and security, which people are looking for in their social environment, they have obligations not to abuse their freedoms. My freedom to punch somebody in the nose kind of stops about one centimeter away from Sebastian’s nose, and my freedom existed up to that point, but as soon as I complete the action, I have violated his rights. So we have still some work to do, and I think especially in the IGF context, we have an obligation to help the legislators and the regulators to find a way forward that preserves as much of the utility and value of the Internet as possible, while at the same time, protecting people from harm. One particular thing which we valued over time, I think, is anonymous use of the Internet. You shouldn’t have to be known to just do a Google search, for example. However, if you are going to use the Internet for harmful purposes, eventually, I think we would generally agree we would want those parties to be identified. Well, this gets to the notion of accountability. Many of the laws that are being passed are attempts by the legislators to articulate how to hold parties accountable for their behavior, whether that’s a private sector entity or an individual or a whole country. In order to hold parties accountable, you have to be able to identify them. So now we have a tension between privacy and the ability to reveal a party in the event that we believe that party is misbehaving. There is currently, as many of you know, an attempt to draft a cybercrime treaty, and there is a considerable amount of debate deciding on what’s a cybercrime. In some cases, you could argue that every crime that already exists can also use a computer to execute the crime. Therefore, all crimes must be cybercrimes.
That’s not a good syllogism, and some of us are arguing that we should be more cautious about the treaty being focused specifically on things that you could not do without the use of a computer in the network. That’s still in debate, so we haven’t completed that yet. So my bottom line on all of this is that in our attempt to make the internet a safe and secure environment, we are going to have to accept that some of the principles that we enjoyed in the early days of the internet may no longer be fully attainable. And in particular, I would argue that accountability forces us into making parties identifiable at need. And I will offer just one very weak analogy, which some of you heard before, I suspect. When you get a license plate on the car, it’s usually just a random collection of letters and numbers, and it looks like gobbledygook to us. But there are parties who have the authority to look that license plate up and identify the owner of the car, which, by the way, may not be the driver of the car, and that’s also an important observation. But this piercing of the veil of anonymity or pseudonymity may turn out to be essential to introducing accountability into the system. Some of you have also heard my argument that agency is another element of all this. We need to provide agency to individuals, corporations, and even countries to protect their interests, which might mean, for example, the use of end-to-end cryptography in order to maintain confidentiality. And arguments are often made that end-to-end cryptography is harmful because it means it’s harder for law enforcement to detect that there is misbehavior on the network. And I sort of draw the line there in arguing that end-to-end cryptography for the protection of confidentiality is extremely important. The idea that you have a backdoor into the cryptographic system almost certainly guarantees eventually that information will be released, and then no one will have any confidentiality at all. 
Last point, people who are focused on the anonymous use of the internet may sometimes forget that strong authentication of your identity might turn out to be helpful to you, and that you should be adopting mechanisms that make it hard for other people to pretend to be you, because if it’s too easy for them to do that, they may, in fact, take actions on your behalf that you didn’t authorize. And so strong authentication might, I hope, become a norm in the system where it’s needed in order to make sure that you protect yourself against other people taking actions that you didn’t authorize. So, Mr. Chairman, I’ll stop there, but I hope this feeds a little bit of the thinking for the debate which should follow.
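Strong authentication of the kind described here is commonly implemented with time-based one-time passwords. A minimal sketch of the standard TOTP construction (RFC 6238 on top of RFC 4226 HOTP), offered as an illustration rather than anything discussed verbatim in the session:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HOTP (RFC 4226): HMAC-SHA1 over the big-endian 8-byte counter,
    # then "dynamic truncation" down to a short decimal code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the window
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # TOTP (RFC 6238): HOTP keyed on the current 30-second time window,
    # so the code changes automatically and cannot be replayed for long.
    return hotp(secret, unix_time // step, digits)
```

In practice the shared secret is provisioned once (for example via a QR code) and both sides compute the same short-lived code, which is what makes impersonation by a party without the secret hard.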

Sébastien Bachollet:
Thank you very much, Vint, Sébastien Bachollet speaking. I would just like to pick up one of your points: when you remind us that the IGF can be useful, and that the exchanges we have here and in the other rooms are not just to talk, but to exchange between the various stakeholders. That’s an important point here today as well. Now I would like to open the floor for questions. You have a mic in the middle of the room; just queue there and give your comments or questions, and if there are any online, please do the same.

Alejandro Pisanty:
Sebastien, Alejandro Pisanty here, moderator online. There’s Deborah Allen Rogers’ hand up as well.

Sébastien Bachollet:
Okay, Deborah, go ahead, please, thank you.

Alejandro Pisanty:
Oh, Deborah, you can ask your question.

Sébastien Bachollet:
If you can open your mic, and possibly your camera too, that would be great, so that we can see you. For the moment your microphone is closed, as far as I can see.

Vint Cerf:
How many engineers does it take to turn on a microphone?

Alejandro Pisanty:
Maybe only one, but the system may be so unresponsive.

Sébastien Bachollet:
Okay, maybe, Alejandro, you may be willing to start, and we will try to solve the problem with Deborah. Please, thank you.

Alejandro Pisanty:
Thank you, I’ll make a very brief comment right now. The work of the dynamic coalition on core internet values is concerned with the way different things, this year mostly regulations, may impinge on these core values, assuming of course that they are mostly the technical principles with which the internet was built. And what we see from some of the regulation proposals is that they may actually do away with, or seriously damage, things like the universality of reach of the internet. That may be achieved by reducing interoperability. I’m very concerned, for example, and this does not mean not to do it, but to find a way to do it, with what Vint has said about stronger authentication or stronger identification. We may find ourselves needing to add devices to the system, or some governments or banks or such entities may decide that you need to have an extra device, maybe also on their network, to do this authentication, and that open standards like PKI will not work. So that’s the kind of concern that we have to look into: to extract a list of these things for now, and see how they can be made to work, or researched, over the next months. These are key points that we’re looking at, but I’ll leave the floor to other participants. Deborah says it’s not allowing her to unmute, and I’m already trying to unmute her.

Deborah Allen Rogers:
Hello, hello.

Alejandro Pisanty:
There you are.

Deborah Allen Rogers:
I’m here, but I would like to be on camera, but you all see my face in the picture. So I just want to say hello to everyone and thank you very much. I will lower my hand also. And what I wanted to say was a couple of things. I’m from New York City. I live in The Hague. My name is Deborah Allen Rogers, as you see, and I have a digital fluency lab here called Find Out Why. So I wanted to direct my question. Oh, here we go. It looks like I can start my video. Okay. Hello, everyone. Okay. So hello from The Hague. I wanted to direct my question at Jane and also at Vint. Anyone else who might want to join in, but in particular, the two of you. One is the father of the internet, and the second is a woman who just gave us a lot of really good intel about NOGs, for example. Do either of you, or does anyone on the panel, spend time working directly with Finland and Estonia on e-governance? I do some work with them, and they’ve developed these models and have been putting them in place for a good 20 years for e-governance, and they have answers to many of the questions I see that we struggle with here in Europe and that we struggle with in the United States. And the last point I’ll make is that because they stay sort of under the radar screen, oftentimes their designs are sort of overlooked, I’ve noticed, in all the work that I do with various European internet forums, et cetera. So I was in D.C. this summer and we talked a lot about it at the Trans-Atlantic Partnership meetings, but I did want to raise it in this venue as well: e-governance in Estonia and in Finland, and X-Road in particular. Thank you for taking my question.

Sébastien Bachollet:
Thank you, Deborah. Questions in the room? We can start with a few questions, but it’s up to you. If you want, we can start with Deborah’s question: if you want to take the floor and give some answer, then I will ask Vint also and the other participants. The question does relate to the same thing. Then go ahead.

Audience:
Do you hear me? If I have the microphone, okay. So, Martin Bottemann, and indeed the big thing I’m struggling with is that this internet needs to be more and more secure, more and more reliable. We should be able to rely on it, and we are working on that. Now, one of the elements is indeed identification, and would you consider, for instance, anonymity as a core internet value, or is that something different? How can we get to a kind of standard where you combine security with anonymity via a kind of trusted service or something? Is that somewhere we can go? And I think it very much complements Alejandro’s concern and what the lady just said about identity as used in these governments.

Sébastien Bachollet:
Thank you. Vint, go ahead.

Vint Cerf:
I actually would like to respond to that specifically. For a long time, I had the view that anonymity was a right that we should have and that you should be able to use the internet without identifying yourself. What we discovered, at least what I believe we’ve discovered, is that anonymity creates opportunity for really severe and bad behavior. If people think that there are no consequences for their harmful behaviors on the net, then they will continue to execute those bad behaviors. And so absolute anonymity is, in my view, not necessarily, should not be a core value. I’m surprised at my change in position, but having seen too much bad behavior that’s shielded by anonymity, I now believe that accountability is more important. That doesn’t mean that you have to identify yourself to use all of the internet’s features. That’s not what I’m arguing. But I am saying that we should tolerate mechanisms that allow for discovery. And while I say that, I absolutely understand that viewing this through the lens of the democratic society versus an authoritarian one, you get very different answers from the standpoint of an authoritarian government. The ability to identify parties is harmful to that party’s interests. And yet, if we don’t allow for that kind of discovery, then all of our interests are harmed by the bad behaviors that are not accountable and therefore difficult to inhibit. You could say, well, can’t we inhibit the bad behaviors just by using technology? Can’t we use machine learning to filter all the bad stuff out? And the answer is, as far as we can tell, that doesn’t work. Either it doesn’t work because it fails to filter, or it filters the wrong thing and therefore people’s rights are harmed because of that. And so this is going to be a relatively imperfect outcome, but I am persuaded at the moment that protecting people’s interests and protecting people from harm is really important. 
We can say, though, that there are certain actions where we recognize that anonymity is important because if you’re identifiable, then there could be really harmful side effects. Whistleblowing being a good example of that. But I would argue with you that even in the whistleblowing case, the most traditional means of handling that are that a trusted party receives the blown whistle and may in fact need to know who is blowing the whistle, but is obligated to keep that party’s identity anonymous. And that’s one of the ways in which you thread the needle between anonymity and identifiability and accountability. So I’d be very interested, of course, if people have arguments against this proposition that pure anonymity should not be an absolute core value anymore.

Sébastien Bachollet:
Thank you.

Alejandro Pisanty:
Can I pick up on that, Sebastian, for a second?

Sébastien Bachollet:
Go ahead. Yeah, OK. Go ahead.

Alejandro Pisanty:
Very briefly. And to further the point that was made by Deborah as well, how big an architectural change would this be? We have assumed for many years that the only identifier that the internet gives you, that’s proper from the internet, is the IP address. And everything else comes from the edge. So how big of an architectural change would that be? And then, of course, how scalable would that be? The case of Estonia, I think, is very brilliant, but has a limitation of scale in the way you can establish trust within a small society or going further out. Sorry. I don’t want to extend this question. Thank you.

Vint Cerf:
Could I respond on the Estonian side? Because the one thing which impresses me about Estonia is that 100% of the population is registered for strong authentication, 100%. They can do that in part because it’s a million and a half people. When you get to 300 million or 600 million or 1.4 billion, it gets harder. India has introduced the Aadhaar system, which is attempting to strongly authenticate parties for their benefit. But everyone sitting in the room and those online can also recognize the potential risk factors of being able to identify people by biological metrics and things like that. You can see how that can be abused as well. So this is a peculiar tension that I think is not 100% resolvable. But as I say, I believe that accountability may turn out to be far more important than absolute anonymity.

Sébastien Bachollet:
Thank you. Jane, and then I will go to the Ghana IGF remote hub and then back to Deborah. Jane, please.

Jane R. Coffin:
I’ll be very brief. Deborah, we’ve worked with a variety of governments around the world. But if there are some really great practices that we can glean from you, that would be exciting. I wanted to pick up really quickly on a point that Vint made about the IGF having an obligation. And I think, Vint, one of the points I want to extrapolate from that is to help find a way forward with governments to have inclusive, multi-stakeholder inclusion in policymaking and regulation. If we start to exclude civil society, the technical community, and academia, it’s very much not going to lead to a better regulatory and policy regime and environment. And if we don’t include them, the law of unintended consequences may prevail here, where we may force centralization a bit more. Some governments may force centralization in their lawmaking if they aren’t including some of the smaller networks and the other instances, like internet exchange points and others, in the conversation, and lock out multi-stakeholder inclusion. So I just wanted to put that out there before we ended.

Vint Cerf:
So, Jane, since this is also supposed to be entertainment for you, now we’ll have this little debate back and forth. You’re not saying, I hope, or are you trying to argue against the point I’m making, that absolute anonymity may no longer be a core value in the interests of the people who use the internet? Your argument about governments and multi-stakeholder policymaking I don’t understand as an argument against my proposition. It is an argument for the utility of the multi-stakeholder perspective in the formulation of policy. And I hope that what I’ve been saying is not unintentionally misinterpreted as against multi-stakeholderism. I’m a complete fan of that, and believe that it should be a part of every government’s normal practices. So I see these as two very distinct things. Is that a correct interpretation of what you were saying? Okay.

Jane R. Coffin:
I think you’re helping us point out that the obligation of the IGF, and the uniqueness of the multi-stakeholder model in the IGF, is to work with governments to make sure that, whether it’s a discussion on anonymity or on interoperability and more networks being interconnected openly, more robust policymaking and regulation come through that multi-stakeholder discussion.

Vint Cerf:
So, in fact, there’s a simultaneous obligation, I think, of members of the IGF who care about these things to engage with governments. We need to help the governments appreciate why the IGF is so important to them as they try to formulate policy. Lee?

Lee Rainie:
The striking thing for so many years about tech policy stuff was that it was pre-partisan, both here and in Europe in particular. The dynamic we’re talking about now, though, has hints and allegations of being swept into partisan polarization. I don’t think there’s the kind of consensus now that there might have been five or six years ago in the parties about whether anonymity shouldn’t be a core value. You see signs of it in the populist mainstream party dynamics of Europe as well. So, this is all, again, to the theme of the day, this is all organic and moving and fluid and it’s hard to settle things in that environment.

Sébastien Bachollet:
Okay. Let’s go back to the participants, and, Ghana, please. I hope that we can hear you. I know that we can see you, at least on my computer, but go ahead, please, Ghana. And then, Deborah. And then, I will go to the room and then to the next speakers online. Thank you.

Joseph:
Thank you very much. My name is Commuter Joseph, speaking from Pentecost University, Ghana. Whilst we look at the core values of the Internet, I want to ask this question: with VPNs, virtual private networks, people use these networks to bypass restrictions on the Internet, to commit fraud and to infringe on the sensitive data of others. I want to ask, what can be done to protect individual content on the Internet? I mean, what can governments do, or what can we do, to help protect the content of individuals on the Internet? Thank you very much.

Sébastien Bachollet:
Thank you, Ghana. Yeah, go ahead, Vint.

Vint Cerf:
So, I think that I’ve reached the conclusion that cryptography is our friend in all of this. For example, there are many places that will insist that information about their citizens must be kept in the geophysical boundary of the country in the belief, or at least they make the argument, that somehow that makes it safer. In some cases, the motivation behind that is to demand access to the information from the parties who hold the information within the geopolitical boundary of that government. We hear the term data sovereignty, for example, to argue that data about citizens shouldn’t leave the country. I will make the argument that when you insist on that, you actually lose reliability. At Google, for example, we replicate data across our data centers and we also encrypt it, so that no matter where it goes, when it’s at rest, it’s encrypted. When it’s transmitted, it’s encrypted. We even have a provision for the possibility that the users hold the keys to the data, and so we don’t; no matter where we put it, it is under the control of the users. My argument here would be that transborder data flows and encryption allow you to place data anywhere on the internet and protect it, as long as you manage your keys properly. That is a huge challenge because key management is a non-trivial exercise. In fact, it’s one of the reasons that I did not push public key crypto into the internet for a while, because while it was being developed, the people who were doing the development were graduate students. They’re not the first category of people that I would rely on for high quality key management. It’s not that they’re stupid or something. It’s just that they get distracted by silly things like PhD dissertations and final exams. So today, we have an obligation to help people manage keys and cryptography to protect their interests and to help them strongly authenticate themselves.
So I’m of the view that that’s the correct way to handle data protection and not to argue that its physical location is the ideal protection mechanism, but rather cryptography.
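The idea above, that replicated ciphertext is safe wherever it lands so long as the user holds the keys, rests entirely on key management. One common pattern is to derive a distinct key per stored object from a single user-held master key; the sketch below is illustrative only (HMAC-based derivation, not a full encryption scheme and not a description of Google's actual design, and the object names are hypothetical):

```python
import hmac
import hashlib
import os

def derive_object_key(master_key: bytes, object_id: str) -> bytes:
    # Per-object key = HMAC-SHA256(master_key, object_id).
    # The storage provider can replicate ciphertext across any data center,
    # but only the holder of master_key can re-derive the object keys.
    return hmac.new(master_key, object_id.encode(), hashlib.sha256).digest()

# The master key stays with the user, never with the storage provider.
master = os.urandom(32)
k1 = derive_object_key(master, "backups/2023/report.enc")   # hypothetical object name
k2 = derive_object_key(master, "backups/2023/photos.enc")   # hypothetical object name
assert k1 != k2 and len(k1) == 32  # distinct 256-bit keys per object
```

The design point is that compromising one object key reveals nothing about the others, and losing the master key (the "non-trivial exercise" the text warns about) loses everything, which is exactly why key management, not data location, carries the weight.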

Sébastien Bachollet:
Thank you. Deborah, please. Maybe I need to do something. Wait a second. Yeah. I guess.

Deborah Allen Rogers:
There we go. Here I am. Okay. Thank you so much for that. That’s a quotable quote, excellent. Cryptography is our friend for sure. And to add to the question that was just asked about how do we protect human rights or personal rights or personal privacy, cryptography is our friend and thinking about all the different ways in which it can be scaled. This is what I wanted to say about the point you made about a million point seven users or something like that in Estonia. And the cultural sort of, I think the cultural context of that and the idea that now that we’re on this online, offline, no line world, scale is such a reference. It’s such a, it’s changing this concept of what we can do with scale at the push of a button. And so I speak also to the CEO of XRoad who is based in Finland and he talks about a different cultural reference in Finland, one that’s a lot more conservative than the one that was in Estonia 20 years building their brand new internet system. and e-governance for their banking and their voting and etc. So I just want to make this point. I was a clothing designer in the 80s and 90s when the entire world, existing through a pandemic called AIDS, moving into global manufacturing, all going to China. This is not, and I’m in New York at 9-11 of course, this is not the first time I’ve been through these sorts of drastic transitions. As you know, Vint, I mean I hear George Carlin’s voice somewhere in the background of your voice as well, talking about, and for anyone listening please look up George Carlin, you’ll see why. So thank you about the cryptography as our friend commented and please can, if you all want to speak about or at least think about this, rethinking about this idea of scale and smaller societies that are doing things. Because test samples are small and it’s scaling a functional test sample is what works. And so we have to think about these societies. 
I’m living in a very highly governed, functional society here in the Netherlands now for three years. It’s different than living in other cultures that are not highly functional at this moment. I say that in reference to something that you mentioned, Lee. So I don’t want to actually go on record as mentioning which society, but non-functional and functional looks very different and I think the functionality is the point, not the size of the scale model. Okay, thank you for listening.

Sébastien Bachollet:
Thank you, Deborah. We have 12 minutes to go. We have one question in the room and one speaker online, and Alejandro Pisanty will read some comments from online too. Therefore, let’s go to the room speaker, please.

Audience:
Roger Dingledine, Tor Project. So this word anonymity is one that I think about a lot. I actually find the word anonymity to be confusing when people are thinking about it. I usually use the phrase communications metadata security, or securing communications metadata. That doesn’t trip very well off the tongue, does it? Fair enough, but the reason why I mention this is thinking about one of the ways that we’ve managed to thread the needle to manage both of these, which is looking at it from different layers. So if you tell people Tor is an anonymity tool, then they say, oh well, I guess I can’t use Facebook. But it makes perfect sense to log into Facebook over Tor. You’re getting to choose what of your communications metadata you want to reveal. So by default, when you’re reaching them, you don’t automatically blurt out your identity. You then get to choose what you tell them. And Facebook doesn’t care where you are, they care who you are. And what they mean by that is the Facebook level, the Facebook application layer, of who you are. So you log into Facebook, and from there, at the platform level, there’s a completely separate question about anonymity versus accountability. Do you need your real name? And so on. But separating those means that at the network layer, you don’t automatically identify yourself. Yet, as you say, it might be beneficial in a societal way or a platform way or a community way to choose to identify yourself at a different layer. So that layering mechanism is one. I don’t want to say that it solves everything, but I think it helps us get closer to the answer. Of course, we don’t want anonymity for everybody all the time, no matter what. But we want to give people the choice of who they tell about them.
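The layering described here is visible in the SOCKS5 protocol that Tor clients speak: when the destination is sent to the proxy as a domain name rather than an IP address, name resolution happens at the proxy, so even DNS metadata never leaks from the client's network layer. A small sketch of that wire format (RFC 1928; the host and port below are arbitrary examples):

```python
# Client greeting: SOCKS version 5, offering one auth method (no authentication).
GREETING = b"\x05\x01\x00"

def socks5_connect_request(host: str, port: int) -> bytes:
    # RFC 1928 CONNECT request with ATYP=0x03 (domain name).
    # Because the *name* is forwarded, DNS resolution happens at the proxy
    # (e.g. a Tor exit), never on the client's local network.
    name = host.encode("idna")
    if len(name) > 255:
        raise ValueError("hostname too long for SOCKS5")
    return (b"\x05\x01\x00\x03"            # VER, CMD=CONNECT, RSV, ATYP=domain
            + bytes([len(name)]) + name    # length-prefixed destination name
            + port.to_bytes(2, "big"))     # destination port

req = socks5_connect_request("example.com", 443)
```

This is why tools distinguish `socks5` from `socks5h` proxy schemes: only the latter sends the hostname, keeping the "who are you talking to" metadata at the proxy layer that the speaker describes.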

Vint Cerf:
I think that’s a really good point and I appreciate the layering argument, which makes good sense to me. You’ll notice that other elements of the internet design, especially the domain name system, have introduced mechanisms like DoH and DoT and so on in order to protect information at certain layers in the architecture while revealing it at others. And your point about choice is very well taken.
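DoH (DNS over HTTPS, RFC 8484) illustrates the same layering: it carries ordinary DNS wire-format messages inside HTTPS, hiding the query from on-path observers while the chosen resolver still sees it. A minimal sketch of the message that would go in a DoH POST body with `Content-Type: application/dns-message` (illustrative only; no network call is made, and the query name is an arbitrary example):

```python
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    # Minimal DNS query in wire format (RFC 1035), as carried in the body
    # of an RFC 8484 DoH request. ID=0 is conventional for DoH so that
    # identical queries produce identical, cacheable HTTP bodies.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # flags: RD=1; QDCOUNT=1
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"                                              # length-prefixed labels
    return header + qname + struct.pack(">HH", qtype, 1)     # QTYPE=A, QCLASS=IN

msg = dns_query("example.com")  # A-record query, ready for a DoH POST body
```

The point of the layering: the bytes are the same DNS the network has always spoken, but the transport layer around them decides who gets to observe them.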

Sébastien Bachollet:
Thank you, Vint, and thank you for your question. Shiva, please, and then Alejandro, I will give you the floor. Okay, go ahead, Shiva.

Shiva:
Can you hear me? Yes. Okay. Jane was talking about the internet exchange points and the neutrality, the intended neutrality. As far as I know, some of the internet exchange points have a commercial business model. And how far are they away from the intended neutrality? And also, if an internet exchange point can theoretically be non-neutral, can they also become tools in the hands of governments, good and bad governments, to indirectly regulate the internet or to control the internet in a certain way? And the positive question on internet exchange point is, is there any design to think of an internet exchange point for interplanetary networks, probably with a peculiar bridge to give a one-way connectivity to the global internet? Thank you.

Jane R. Coffin:
So, Shiva, I’m going to start with your last point about internet exchange points and interplanetary networks, and I can feel Vint right next to me. Because I think, Vint, are you still the chair of the interplanetary working group?

Vint Cerf:
No, I’m not the chairman. I’m a member of the board, but I participate with them, yes.

Jane R. Coffin:
So, you should check out a session on Thursday that Joanna Kulisa will be running with respect to, I think, data governance. In any event, there’s a paper that Joanna Kulisa and Berna Gur have written, funded by the Internet Society Foundation. This isn’t an advertisement for the foundation, even though I worked at ISOC before. But the paper they put together, and another paper put together by the Internet Society by Dan York, who will also be on the panel on Thursday, talked about the potential for exchange points in space with LEOs, Low Earth Orbiting satellites. It could be a very interesting thing, and then the question is, who can participate? Who is running the network as far as the LEO constellation itself? Is there neutrality if only one entity, one company, can control all the traffic exchange, or is it only their traffic? Cross-border connectivity is wildly complicated right now: if you have a transmission going down into one country that beams up to another satellite, which then beams down into another country, the whole concept of negotiating cross-border connectivity issues is complicated. But I’ll stop there for a minute, Shiva, and turn to Vint on the interplanetary side.

Vint Cerf:
Well, let me, just setting aside the spatial notion for a moment. Internet exchange points on the ground are really powerful tools because they allow for connectivity, efficient kinds of connectivity among networks. But here’s a scary thought. Suppose that you’re in a regime where the government runs the exchange points, and it is required that all traffic between networks go through government-operated exchange points, which might lead to surveillance of the kind that you didn’t want. That takes us back to cryptography being your friend, and once again you can imagine regimes that don’t want, you know, encrypted traffic to be running through the exchange points. With regard to putting exchange points or data centers in space, one of the observations I would make is that those typically require maintenance, and so we may have some difficulty getting people up there to do maintenance. I’m sure everybody in this room does understand and appreciate that the Internet doesn’t run itself. There are millions of people who, as a daily job, help keep the Internet functioning. Otherwise it would break pretty quickly, and I wish that were not the case. I wish that our designs had been even more robust, but to be quite frank, they require a lot of attention.

Jane R. Coffin:
And Shiva, to quickly answer your question, I was referring to the IXs that are neutral and bottom-up, you know, not managed by governments. But to Vint’s point, there are exchanges where traffic is monitored; that’s just required by those countries, and so that’s something that does happen. And I’m with Vint on the encryption, the crypto side. Not cryptocurrency, but encryption. I don’t really care about cryptocurrencies right now; I probably should in the future. But as far as the commercial IXs go, that’s a different instantiation of exchange point, and they serve a certain purpose, but they’re not the bottom-up neutral exchanges I meant, so I wanted to be more clear about that.

Sébastien Bachollet:
Okay, thank you very much. Now I give the floor to Alessandro. He will give us the last feedback online, and then I will give one minute to each of the five speakers to conclude, because we will be late in any case. Go ahead, Alessandro, please. Thank you.

Alejandro Pisanty:
Thank you, Sebastian. I’m not going to speak for myself right now. I’m going to read two comments. One comes from Iria on the chat. She says, choosing to identify is different from being forced to reveal your personal identification data in order to access the Internet or an app, and I side totally with that statement. And the other one comes from the Abuja IGF remote hub in Abuja, Nigeria, I think. Nii mentioned that AFRINIC is registered as a private entity in Mauritius. Hasn’t this status contributed to the barrage of court cases the regional RIR now faces? While a good number of technical organizations are registered as non-profits, shouldn’t regional and global technical organizations that govern the Internet be accorded intergovernmental organization status? Those are the two points from online.

Sébastien Bachollet:
Thank you very much. We need to run to another meeting. Therefore, if Nii is still online, I want to give him one minute at the microphone.

Nii Quaynor:
On the question of whether the nature of the registration, as a private company under the law, matters: I think the answer is no, because it really has no bearing. A commercial dispute can occur between a non-profit and its members, so I don’t see that as the right framing. This is a case of some member who is violating rules and is refusing to be disciplined, and is beginning to abuse the legal system by generating a barrage of court cases, at the same time trying to break into people’s accounts by offering them money, and so on. So it’s just a bad case that needs to be dealt with as such, because it tried to invade the policy process, it failed, it tried to force a co-chair, and the co-chair got recalled. If you look at all these things, one organization, why generate 20-something cases in a year? If you really are doing proper business, why would you have so many IPv4 addresses, and no network number, no ASN, no v6? So it’s obvious what the game is. It’s about the interest of hijacking numbers out somewhere else to use, and that, I don’t think Africa or the world would want to see.

Sébastien Bachollet:
Yeah. Thank you, Nii. We have less than one minute per person. Iria, please, two words of conclusion. Sorry for that.

Iria Puyosa:
Yeah. Basically, I think our consensus should be that we need technical expertise in every discussion about policy. So we need to have people who know how to solve the problems and implement the solutions, and we also need the input from civil society, understanding the human rights, before moving up to regulation. Otherwise, we may end up with bigger problems, or a different set of problems than the ones we are trying to solve. Thank you. Nii, please.

Vint Cerf:
Just to set the right tone for the ending of this: given all the problems that we are now talking about, we have asked in our global surveys how hard it would be, and how willing you would be, to give up the Internet. And the answer is almost universal: under no circumstances would I give it up. So, by consumer behavior and consumer sentiment, we’ve done a pretty good job.

Jane R. Coffin:
Don’t discount your voice in helping keep the Internet open, globally connected, secure, and trustworthy. Make sure the multi-stakeholder model and the IGF continue. Thank you very much.

Sébastien Bachollet:
And I want to give the last word to Olivier Crepin-Leblond. If I am here, it’s because he’s not; he would have been much better than me at running this meeting. But Olivier, go ahead.

Olivier Crepin-Leblond:
The host has unmuted me. Thank you very much, Sebastien. Thank you to everyone who has participated as a panelist and also as a participant in this discussion. The Dynamic Coalition has discussions throughout the year. The work is ongoing. If you’re interested in joining the Dynamic Coalition, you can go onto the Internet Governance Forum website, go into intersessional work where the Dynamic Coalitions are all listed, click on the one on Core Internet Values, and you can join the mailing list. There’s no membership fee or anything like that, but we do take our work very seriously. It’s extremely important. We will make a report out of today’s session, and of course, it will be taken into account in the IGF messages for Kyoto. So thanks very much, and thanks, of course, to all those people that have helped with organizing this session.

Sébastien Bachollet:
Thank you very much, Olivier, Alessandro, and all the speakers. The meeting is closed now. Bye-bye. Bye, and thanks, everybody. I’m going to ambush you now. I couldn’t come up with a good way to say this on the mic, but I was really curious. I think Sue’s work as well, and Jane, which she was mentioning as well, in terms of, like, your generational model, your fourth generation model, and Jane, you were talking about the work that you were doing out in rural areas, but one of the things that we talk about in terms of the value of the interoperability, and then another challenge that has come up over the past, like, three years, at least in the U.S. and Canadian, that we get everything…

Alejandro Pisanty: speech speed 156 words per minute; speech length 579 words; speech time 222 secs
Audience: speech speed 156 words per minute; speech length 475 words; speech time 182 secs
Deborah Allen Rogers: speech speed 197 words per minute; speech length 828 words; speech time 253 secs
Iria Puyosa: speech speed 151 words per minute; speech length 1039 words; speech time 412 secs
Jane R. Coffin: speech speed 192 words per minute; speech length 1992 words; speech time 624 secs
Joseph: speech speed 137 words per minute; speech length 110 words; speech time 48 secs
Lee Rainie: speech speed 185 words per minute; speech length 2170 words; speech time 704 secs
Nii Quaynor: speech speed 152 words per minute; speech length 1608 words; speech time 635 secs
Olivier Crepin-Leblond: speech speed 172 words per minute; speech length 157 words; speech time 55 secs
Shiva: speech speed 135 words per minute; speech length 132 words; speech time 59 secs
Sébastien Bachollet: speech speed 126 words per minute; speech length 1648 words; speech time 785 secs
Vint Cerf: speech speed 167 words per minute; speech length 3723 words; speech time 1337 secs

DC-Gender: Disability, Gender, and Digital Self-Determination | IGF 2023


Full session report

Gunela Astbrink

The panel discussion focused on the topics of digital self-determination and accessibility, highlighting the importance of empowering individuals and communities to have control over their digital data. Digital self-determination was described as the need to reconsider how individuals and communities can have autonomy over their digital selves. The panel acknowledged that society is still trying to understand the relationship between our lives and the technologies we use.

The discussion emphasized the need to address the digital divide, particularly for marginalized groups such as women, queer, trans persons, and those with disabilities. The panel aimed to make digital self-determination a reality for these groups by shedding light on their unique challenges and perspectives. Feminist perspectives played a central role in the discussion, with a specific focus on women, queer, and trans persons with disabilities.

One key argument made during the panel was that digital tools should be designed with accessibility in mind. It was stated that as a disability community, their motto is “nothing about us without us,” which means that persons with disabilities should be included in the development processes and community discussions. The panel stressed the need for all digital tools to be accessible and usable for all individuals, regardless of their disabilities.

Additionally, the importance of education and empowerment for people with disabilities in the digital sphere was emphasized. The panel shared a story of a determined young woman from Malawi who, despite having a disability and coming from a poor family, managed to study IT. Her education not only empowered her but also enabled her to tutor other students and utilize digital tools, even with her physical limitations. This example demonstrated the transformative power of education in enabling individuals with disabilities to actively participate online.

The panel also raised concerns about privacy and security, particularly for people with disabilities. They acknowledged the potential privacy and security issues that individuals with disabilities, especially those with visual impairments, might face. The need to ensure the privacy and security of these individuals was underscored, emphasizing the importance of safeguarding their personal information and digital presence.

In conclusion, the panel discussion on digital self-determination and accessibility provided valuable insights into the challenges faced by marginalized groups, particularly women, queer, trans persons, and individuals with disabilities. It stressed the importance of designing digital tools with accessibility in mind and promoting education and empowerment to enable active online participation for people with disabilities. Moreover, the panel emphasized the need to ensure privacy and security for individuals with disabilities, recognizing the unique risks they may encounter. Ultimately, the panel highlighted the significance of integrating inclusivity and accessibility into all aspects of the digital realm.

Judy Okite

The analysis emphasises the significance of accessibility for individuals with disabilities, both in physical and online spaces. It reveals that the evaluation of government websites for accessibility showed that 20% of the content remains inaccessible, indicating a pressing need for improvement. This highlights the lack of inclusivity and the barriers faced by persons with disabilities when accessing online information and services.

Furthermore, the analysis argues that individuals with disabilities must be actively involved in the process of creating accessible spaces and developing inclusive technology. It references Judy Okite’s experience in Dar es Salaam, where insufficient provisions for accessibility were observed. This illustrates the importance of including the perspectives and needs of persons with disabilities in the planning and design of physical environments to ensure that all individuals have equal access and opportunities.

In addition to physical spaces, the analysis also stresses the need for awareness and empowerment about rights among individuals with disabilities. Judy Okite’s assertion of her rights for accessible facilities during her stay in Dar es Salaam highlights the importance of advocating for and asserting these rights. The analysis further states that persons with disabilities should have a say in determining what works for them or not, enhancing their autonomy and agency in decision-making processes.

Overall, the analysis stresses the need for greater attention to accessibility in both physical and online spaces. The evaluation of government websites and Judy Okite’s experiences serve as evidence of the existing barriers and the urgent need for improvement. It argues that involving individuals with disabilities in the design and development of accessible spaces and technology, as well as promoting awareness and empowerment about their rights, can lead to a more inclusive society.

Audience

The implementation of certain features, specifically Zoom’s automatic captions, has had negative consequences for individuals with disabilities, particularly those who are deaf or hard-of-hearing. These automatic captions, intended to enhance accessibility, have instead led to confusion and disempowerment. This is due to the overlapping of captions in multiple languages, which obstructs the reliance on captions and lip-reading that these individuals heavily depend upon.

In order to avoid such detrimental effects, it is argued that technology companies should collaborate closely with individuals with disabilities and conduct comprehensive user research prior to implementing new features. By involving the very users who will be utilizing these features, technology companies can gain valuable insights that will result in more inclusive technology. This call for collaboration and user research is further supported by the incident involving Zoom, which serves as an example of the negative consequences that can arise from a lack of proper user research.

Furthermore, the importance of inclusive technology development is emphasized as a means to reduce inequalities and enhance accessibility. It is asserted that by working closely with intended users, technology companies can create technology that caters to the diverse needs of individuals with disabilities. This collaborative approach ultimately leads to more inclusive technology that empowers individuals rather than inhibiting their capabilities.

To conclude, the implementation of certain features, such as Zoom’s automatic captions, has had unintended negative consequences for individuals with disabilities. To address and prevent such issues, it is crucial for technology companies to engage in comprehensive user research and collaborate closely with individuals with disabilities throughout the development process. By doing so, technology companies can create technology that is truly inclusive and empowers individuals with disabilities.

Nirmita Narasimhan

The analysis highlights the importance of policies in ensuring compliance with accessibility standards. Countries with clear policies are more likely to effectively implement accessibility measures, as policies provide guidelines on what needs to be done, how to do it, and where it should be implemented. This is seen as a positive factor in promoting accessibility. The analysis also advocates for the creation and implementation of policies in countries where they do not exist, as well as the strengthening of existing policies to promote equal access to rights and opportunities for all individuals, including those with disabilities.

While many countries have incorporated the Convention on the Rights of Persons with Disabilities (CRPD) into their legislation, the analysis suggests the need for the development of domain-specific policies to address specific accessibility issues in various domains. Different strategies for advocacy are required in different situations, as evidenced in the context of India. Active involvement of persons with disabilities in advocacy and policy-making processes is emphasized, as their perspectives should be adequately represented.

The analysis also stresses the need for mainstream products to be universally designed, taking into consideration varying user needs and abilities. A user-centric approach in product design and enhancement is deemed essential to improve accessibility. Overall, the analysis underscores the significance of policies, the involvement of persons with disabilities, and the user-centric approach in achieving accessibility goals.

Debarati Das

During the analysis, several significant points were raised by the speakers. A central topic of discussion was the concept of digital self-determination, which highlights the need to understand who we are as digital beings as our digital footprints continue to grow. This evolving concept addresses critical questions surrounding the ownership and control of our data in cyberspace, affirming that a person’s data is an extension of themselves. It emphasises the importance of considering the rights and autonomy of individuals in the digital realm.

One key insight that emerged from the analysis was the significance of examining the experiences of individuals with disabilities in relation to digital self-determination. It was observed that digital spaces and decisions driven by data can greatly impact the autonomy and agency of individuals with disabilities. Therefore, there is an urgent need to explore how individuals can exercise control over their digital identities and have autonomy over their digital selves. By unpacking digital self-determination through the lens of the experiences of persons with disabilities, efforts can be made to reduce inequalities and promote inclusivity in the digital world.

Another important point discussed was the value of Design Beku and its principles of Design Justice in the field of design. Design Beku, a design and digital collective founded by Padmini Ray Murray, advocates for designing with communities, as opposed to designing for them. This approach aligns with the principles of design justice, which include ethics of care, feminist values, participation, and co-creation. By involving communities in the design process, Design Beku strives to create more inclusive and equitable solutions that address the diverse needs of different groups. This approach contributes to achieving the Sustainable Development Goals related to industry, innovation, infrastructure, reduced inequalities, and gender equality.

In conclusion, the analysis underscored the importance of digital self-determination, specifically in understanding our digital identities and asserting control over our data. It emphasized the significance of considering the experiences of individuals with disabilities to promote autonomy and agency in digital spaces. Additionally, the value of Design Beku and its Design Justice principles in advocating for inclusive and community-centered design practices was highlighted. These discussions provide valuable insights for addressing the challenges and opportunities associated with industry, innovation, infrastructure, reduced inequalities, and gender equality in the digital age.

Manique Gunaratne

Technology plays a crucial role in enabling individuals with disabilities to participate equally in society. Assistive devices and technologies act as a bridge between people with disabilities and their environment, allowing them to perform tasks that they might otherwise find challenging or impossible. This can include devices such as mobility aids, hearing aids, and communication tools. With advancements in technology, artificial intelligence (AI) has emerged as a powerful tool in improving the lives of people with disabilities. AI has the potential to make life easier for individuals with disabilities by developing solutions that cater to their specific needs and requirements.

However, cost proves to be a complex barrier to accessing technology for individuals with disabilities. While emerging technologies, such as AI and smart glasses, hold promise in enhancing the lives of people with disabilities, they often come with a hefty price tag. This poses a significant challenge, as many individuals with disabilities may struggle to afford these expensive technologies. The high cost of such innovations acts as a deterrent, limiting the accessibility of these technologies to a privileged few. Therefore, there is a need for collaborative efforts between technology developers, policymakers, and advocacy groups to address this issue and ensure that cost does not impede access to life-changing technology for individuals with disabilities.

Moreover, entertainment and emotional recognition technologies can greatly benefit certain disabilities, such as autism and intellectual disabilities. Emotional recognition technologies can assist individuals with these disabilities in understanding and interpreting emotions, which can contribute to enhancing their social interactions and overall well-being. Accessible platforms and games are also vital for providing entertainment to people with disabilities. These platforms cater to their unique accessibility needs and ensure inclusive participation in entertainment activities.

In conclusion, technology holds immense potential in empowering individuals with disabilities and enabling their full participation in society. Assistive devices and technologies act as enablers that bridge the gap between people with disabilities and their environment. AI, in particular, has revolutionized the landscape by offering tailored solutions to the needs of individuals with disabilities. However, the high cost of emerging technologies presents a challenge to their widespread accessibility. It is crucial for stakeholders to address this issue and work towards ensuring that cost does not impede access to these life-changing technologies. Furthermore, the development of entertainment and emotional recognition technologies specifically tailored for individuals with disabilities can greatly contribute to their well-being and quality of life. By embracing and advancing technology, we can create a more inclusive and accessible society for all.

Vidhya Y

The use of digital platforms has brought both positive and negative implications for individuals with visual impairments. On the positive side, these platforms have opened up new opportunities for communication and independence. Email, for instance, has revolutionised written communication, which was not previously possible without the advancement of technology. Digital tools, such as apps designed to identify colours and currency, have also empowered visually impaired individuals by providing them with greater independence and autonomy.

Furthermore, assistance tools like ‘Be My Eyes’ have proven to be invaluable resources for visually impaired individuals. These tools connect visually impaired individuals with sighted volunteers who can assist them in various online tasks, such as reading CAPTCHAs. This collaboration demonstrates the power of digital technology in providing inclusive and supportive environments for visually impaired individuals. Moreover, these tools can be used creatively for tasks like matching clothing colours, further enhancing the independence and quality of life for those with visual impairments.

However, there are also negative aspects that must be addressed. Accessibility remains a significant challenge for visually impaired individuals in the digital space. Many websites are primarily image-based and lack proper labelling, rendering them impossible to navigate using assistive technologies. This accessibility barrier hinders visually impaired individuals’ ability to access information and participate fully in the online world. Additionally, understanding and keeping up with new features and technologies can be daunting for visually impaired individuals, as design choices are often not optimized for their needs.

Moreover, women with disabilities face additional challenges in digital spaces. Privacy and vulnerability concerns are particularly prominent, as crowded environments or the use of screen readers may compromise their privacy when using digital platforms. This puts them at a disadvantage, highlighting the need for further measures to ensure the digital space is inclusive for all individuals, regardless of gender or disability.

In conclusion, the digital space presents both empowering and challenging aspects for individuals with visual impairments. While digital platforms have provided newfound opportunities for communication and independence, there are still accessibility issues that need to be addressed to ensure inclusivity. Furthermore, women with disabilities face unique challenges, emphasising the importance of considering diverse perspectives and needs in the development of digital tools and platforms. By addressing these challenges, we can create a more inclusive digital environment that truly benefits all individuals.

Padmini Ray Murray

The implications of surveillance capitalism and device use are particularly burdensome for disabled individuals. These individuals face additional challenges and risks due to the compromised nature of the devices they rely on. Unfortunately, most technology designs targeted at disabled users fail to consider these implications, exacerbating the difficulties they already face.

To address this issue, it is crucial to establish effective communication channels with disabled users in order to fully understand their specific needs and requirements. By engaging in conversations with designers and technologists, disabled individuals can provide valuable insights that can inform the development of more accessible and inclusive technologies. This collaboration can lead to better solutions that truly meet the needs of disabled users, going beyond basic accessibility requirements.

Furthermore, marginalized populations, including disabled individuals, are particularly vulnerable to privacy and surveillance issues. These groups often have limited opportunities for recourse when their privacy is compromised. It is imperative to pay special attention to the impact of surveillance on disabled users and their ability to exercise self-determination. Ensuring their privacy and autonomy is essential for promoting inclusivity and reducing inequalities.

One of the challenges in technology design is the tendency to create products at scale, which hinders the ability to provide more nuanced and individualized user experiences. Technology development often prioritises mass production and standardisation, which leaves little room for customisation. However, creating customised products requires a paradigm shift in thinking, moving away from one-size-fits-all approaches. Artificial Intelligence (AI) can play a crucial role in achieving this shift by enabling more personalised and tailored solutions for disabled users.

In conclusion, there is a pressing need to create more individualised and user-tailored experiences in technology design. This entails actively involving disabled individuals in the design process, fostering collaboration between designers, technologists, and users. Additionally, advocating for their rights and addressing the unique privacy concerns they face is crucial in building a more inclusive and equitable technological landscape. By embracing a paradigm of customisation and leveraging the potential of AI, we can empower disabled users and ensure their needs are met in a more meaningful and comprehensive manner.

Session transcript

Gunela Astbrink:
I think we can start. We can start. Yeah. Okay. I would like to say good morning, good day, good evening to everyone here in the room and also online. This session is entitled Disability, Gender and Digital Self-Determination. And this is the session of the Dynamic Coalition on Gender. So we are delighted to have a number of excellent speakers from different parts of the world who will be joining us online, and also there’s a speaker here on site. And so I will pass on to the organizer of this particular session, Debarati Das of Point of View, who is also the person responsible for the Dynamic Coalition on Gender. I wish Debarati were here in person, but welcome, Debarati, online. And Debarati will give us a context for this particular session and introduce a little bit more about the topic. So thank you, Debarati. All right. While we’re waiting for some of the online panelists to join us, I will continue. Well, there’s a concept of digital self-determination, and that’s what the framing of this particular session is all about. And it relates to our digital footprints, and we know how much they are growing, and society is grappling with new concepts, experiences, and understandings of the relationships between our lives and the technologies that we use. And who are we as digital beings? Are we able to determine ourselves in a data-driven society? How do we locate ourselves as empowered data subjects in the digital age? How do we reimagine human autonomy, agency, and sovereignty in the age of datafication? Self-determination has been a foundational concept related to human existence, with distinct yet overlapping cultural, social, psychological, and philosophical understandings built over time. Similarly, digital self-determination, DSD for short, is a complex notion reshaping what we understand as self-determination itself. 
DSD fundamentally affirms that a person’s data is an extension of themselves in cyberspace and we need to consider how individuals and communities can have autonomy over our digital selves. So this panel session will center on the intersectional feminist perspectives with women, queer, and trans persons with disabilities and experts working in the intersections of digital rights, gender, accessibility, and technology. We will explore the idea of DSD through the lens of gender and lived experiences of persons with disability. So this is drawing from a first-of-its-kind series of DSD studios organized by Point of View. Point of View is the organization in India headed by, well, the project is involved with this particular topic and headed by Debarati Das. It’s been done in four cities in India and the panel will focus on the theme of digital divides and inclusion and also delve into the ability of women, gender, and sexual minorities living with disabilities to digitally self-determine themselves using current emerging digital technologies based on lived realities of individuals from different geographies and contexts. And secondly, it will deepen understandings of the need and potential to work with persons with disabilities in developing new and emerging technologies. Thirdly, it explores the collaborative and learning opportunities to make DSD actionable and a reality for women, queer, and trans persons living with disabilities. So, we are going to look through the lens of gender, sexuality, and disability and explore a bridge between access points and so-called pain points and think of inclusive ways of determining the self in new digital life spaces going beyond accessibility and also thinking about personhood, agency, choice, autonomy, rights, and freedoms in digital spaces for persons with disabilities. 
We will draw from our experience of DSD studios and its outcomes, articulate an exploration of a root concept of DSD and its key components through the lens of disabilities and gender. We’ll think about how we can co-create DSD through theory, practice, lived experiences, and concrete examples. And finally, operationalize DSD via a set of core principles and policy recommendations centering the intersections of gender and disability. So, we are still waiting for the online speakers. So, I will pass on now to Vidhya, who is here with me, and ask Vidhya a little bit about her experiences of being a digital person online and any barriers and enabling factors around this thing about accessibility, autonomy, choice, and potentially what are the implications for a woman with a disability. But before I do that, I will introduce Vidhya, who is from an organization in India called Vision Empower, and Vidhya is a co-founder. Vision Empower is a non-profit enterprise incubated at IIITB in Bangalore to bring education in science and mathematics to students with visual impairment. She is a research fellow at Microsoft Research India and has authored several papers on issues concerning people with vision impairment, such as improving programming environment accessibility for visually impaired developers. Vidhya has received numerous awards and scholarships such as the Thai Aspire Young Achievers Awards, the Reebok Fit to Fight Award, and the Dhirubhai Ambani Scholarship for Academic Excellence, and many more. So, please, I’ll pass now over to Vidhya; I look forward to hearing your particular experiences. Please go ahead, I’ll turn it on. Yes,

Vidhya Y:
Thank you so much to the organizers for having me here, and also to Gunela. Yes, we make STEM education accessible for children with visual impairments, and I was born blind, so I have the experience of growing up in India, which is one of the developing countries, and also experiences of going into the digital space as a blind person. I am also a woman with a disability, so I have that experience as well. So today I’ll be talking more from a lived experience perspective, and I’ll also be sharing some of the observations that I’ve had with children, as well as women with disabilities from my friend circles, and things that people talk about, generally, online. So, firstly, the digital space, when we talk about it, is really huge, because technology is the only way, as a blind person, I can communicate with the world and be more efficient; it has opened up so many opportunities like never before. I always mention that, growing up in a village, I didn’t have access to technology in my growing-up years, and I missed out quite a lot. But as soon as I got onto online platforms, there was so much that I could do. I didn’t have to ask somebody to read out the news, and to see the time, you don’t have to go looking for a Braille watch. Even when you take something so obvious, like the written communication that everyone has on a daily basis, it was never possible for me till I learned to use email, because till then, if I had to communicate with somebody who can see, it was verbal, or someone had to write it for me, or I had to write it in Braille, which the majority of people don’t understand. Now, this actually compromises so much of what you have to say, because if I were to send a message and ask somebody to type it for me, that means I don’t have privacy; what I want to say, I cannot say.
But digital platforms have opened up so many opportunities, and definitely have given a lot of privacy to individuals with disabilities, which we mostly don’t have, because someone or the other is always there, and the more severe a disability you have, from what I have observed, the less privacy you have. Digital platforms are really good, as we all know. They have enabled so much that was not possible before. But there are definitely many challenges, in general, for persons with disabilities. Firstly, the accessibility issues that we all generally talk about: websites are not designed in a way that people can access, there are a lot of images, and a lot of things that are so obvious for other people, and I’m talking from a visually impaired person’s point of view, are simply inaccessible, because they’re not labeled, they’re image-based. But when you talk about women with a disability, the barriers are many, too many. From what I have observed, it’s an irony, actually: digital platforms, as I mentioned, have given a lot of privacy, and at the same time, you have to be so careful. When I started using a computer, for example, I was not using a lot of video calls; it was not necessary for me. But when COVID happened and people were trying to get onto online platforms, then video calls were a must. So, at first, I assumed that on the computer, the camera would cover the whole of the monitor; that was my assumption, because I did not know. And then I would tilt my screen down a bit, thinking that if I don’t want myself to be visible, I can put it down so that people are not able to see me. But once my sister took a look at it, she said that the camera is just on top of the monitor, it’s just finger-sized, and if you put the screen down, people can actually see you much more clearly.
So, from then on, it’s really difficult to do anything digitally without taking a second opinion, because you really don’t understand. You really don’t know. I feel that I have too much vulnerability, and I’m missing out on a lot of things which the world outside knows. So, I feel like taking a second opinion on everything. But once you learn the basics, once you learn how you are visible, then it will definitely empower you. But at the same time, something new will have come up, and there’ll be something that you’re missing out on compared to someone who can see, for example. So, these are some of the constraints that I face on a daily basis. There’s also another issue when you’re using a screen reader and typing something in places that are crowded. You might not hear clearly whatever it is reading for you; for example, when it reads out B and D, you might not make out the difference, and you tend to send a different word from the one you meant. Or with voice communication that you’ll have to use, sometimes it’s really confusing because, again, there’s no privacy when you’re in a public place. Suppose I’m in a conference, and I’m not able to type everything because it’s a touchscreen; when I’m trying to use voice-based communication, there’s no privacy. So, all of these are there, and one of the main barriers that I have found is whenever I have to join online meetings. Everyone finds it, whether you have a disability or not, but with some other disabilities, maybe typing may be easier if, say, you have a hearing

impairment or things like that. But if you have a visual impairment, typing is a huge issue, especially on phones. And you cannot send out voice messages on some of the WhatsApp groups, for example, the ones for visually impaired people, because of the fear that someone will reach out to you and message you and things like that. It has happened so many times in the past. So, though it is empowering, it’s still restricting, and it’s not empowering in the true sense, actually. These are contradicting points, but this is the reality. This is what happens with most people.

Gunela Astbrink:
Thank you very much, Vidya. There are so many different experiences that you have explained to us, and it’s so important to understand what a person with a disability goes through in becoming more and more active online. I’d like to tell a story about a young woman in Malawi in Africa. She was, just as Vidya, supposed to be here, but unfortunately there were visa issues and so forth, and through the Dynamic Coalition on Accessibility and Disability, we have provided travel support for persons with disability to participate here at the IGF. And I’d like to tell you about Grace Salange from Malawi. She is a wheelchair user, she has a speech impairment, and she has limited use of one hand. She comes from a poor family in a village, but she was determined to study IT, so she went through school and vocational college, got through very well, and now she sometimes tutors other students. And the way she uses a smartphone or a laptop is with her knuckles. That’s the way she can communicate with her digital tools. And what is important? When a person with a disability is online, who knows? There’s no ‘oh, they’re different’ or anything like that. We are together, digital beings. And that is important, that we then feel we are on the same level as anybody.
We communicate in the same way superficially, even though there might be tools that are needed. But the recipient of an email or a text wouldn’t know that. And I think that’s very important. But obviously those tools need to be there. They need to be workable, and they need to be designed with accessibility in mind. So we’re talking about tools in a general sense; we’re talking about websites based on the international guidelines, the Web Content Accessibility Guidelines through W3C. We’re talking about making sure that apps are accessible. And it’s so important, when any tool, any learning platform, anything is developed, that it’s done together with persons with disability. So there is that saying in the disability community, nothing about us without us. That is really part of digital self-determination: that we as persons with disability are able to be part of a development or part of the community as such, and we are respected for that. So, I just wanted to pass back to Vidya to talk a little bit about some of the privacy and security issues. Because we can imagine that as a person with a vision impairment, there are additional concerns about privacy. We all have concerns about privacy and security, but there might be some additional factors that Vidya can explain to us. Thank you.

Vidhya Y:
Yes. Actually, digital tools enable you to do a lot of things by yourself which were not possible before. For example, these days there are a lot of color recognizers. If you have currency, there are apps which can tell you what the currency is. Then there are apps like Be My Eyes. Be My Eyes is an online app which visually impaired people can install on their phones, and sighted people can sign up as volunteers. So, if you want any help, suppose you’re not sure if the light is on or not. And one of the huge constraints that we have is solving CAPTCHAs. CAPTCHAs are designed not to be readable by machines, so that security on the internet is not compromised. But these can be huge barriers for persons with visual impairment, especially when you don’t have an audio CAPTCHA. It can be very frustrating because though you know how to use a computer, you cannot use it; though you can navigate the website, you actually cannot without taking help. And there will not always be somebody around you. But if you use tools like these, you can get help any part of the day; of course, there’s the constraint that if you know English, then at any part of the day you will get somebody to assist you. Even if you don’t know English, you can set up your local language; whoever is volunteering can set their language as the primary local language and can assist in that language. But, for example, in India we have a language called Kannada. If I want to get help in Kannada, then I will not get a lot of volunteers at Indian night time, because obviously for the Kannada-speaking population, morning is Indian morning time. But if you know English, 24 hours, there will be someone to assist you. Really, I sometimes use these tools because you cannot expect someone to always be around you, and you need quick help. One of the things is that people also may not always be willing to help you, or even if they’re willing to help you, they may not have the time.
So, these tools are very good, actually, because you can call and you can ask them. In fact, I have conducted a lot of digital literacy trainings, as I’m working with school teachers. I actually guided them on installing these apps and taking advantage of them. We found really good uses, apart from the CAPTCHA example that I told you about. How it works is you can call, and the volunteer who picks up the phone will tell you to take the phone and point it at the computer. Now, if you’re blind, you may not know whether the CAPTCHA is visible or not. So they’ll tell you, move right or move down; now I can see it better, now I can tell you. But when I conducted these trainings, the teachers, actually the women teachers, found innovative uses of those technologies. In fact, somebody was using it to match their dress, what we call a sari in India, with bangles, to see whether the colors match. These are some of the innovative uses, but they were very much needed for the teachers whom we are working with, and they started finding these tools very helpful. But now, talking about the privacy concern: you don’t want to depend on somebody too much, because they’re not there or they may not have the time, but at certain times you’re forced to depend on them. And at the same time, you’re very concerned about where you’re pointing the camera, whether it is safe; you don’t know what’s happening or who is picking it up. You just know the voice, but you don’t know what data is being collected. Take, for example, a banking transaction. If you have to enter a CAPTCHA at the end of the transaction, it means you have to enter all the details at the beginning, before pointing your camera towards the computer screen, which means the person at the other end can figure out what you have typed. So, that is a huge compromise, actually.
I mean, people are well-intentioned, but at the same time, it’s a huge compromise. You’re not very sure. And if you were to enter the CAPTCHA at the beginning and then type your data, it would time out; the CAPTCHA will give you only 40 seconds or a minute, and by then you have to enter and submit. So that kind of privacy concern is there, and there are privacy concerns about how much of you should be visible to the other person, where you are pointing your camera, whether it’s safe; you’re very unsure, actually. Apart from the voice, you’re not sure of what’s happening. So, these issues are there specifically for women with a disability. And even on simple platforms which everyone uses on a daily basis, like Facebook and Instagram, we talk about accessibility issues, and those are definitely there. But, for example, if I were to upload all the photos that I have taken during this conference, if I had to make a blog, or if I had to put all of these on Facebook, what I do is I generally ask somebody. My cousin has come with me; he’s going to give me the photos with captions. But that’s all the information I have. Now, I don’t know whether I want those photos to be there or not, because you’re not seeing them in the true sense, right? You’re just depending on the caption. And sometimes you might miss something; there may be four or five pictures, and there may be one caption. There won’t always be somebody to give you those captions. So it is always risky, because sometimes people have told me only half of your face is visible, or this photo shouldn’t have been there. And everyone relies so much on visuals that sometimes you’re forced to take screenshots and share, and then you really have no idea of what you’re sending. So, these concerns are there. The tools are very empowering; at the same time, all of these concerns are there. You just need a second opinion most of the time.

Gunela Astbrink:
Thank you very much, Vidya. There were a lot of very good examples there of particular privacy and security concerns. We did have some technical issues with the Zoom link, and I’m very pleased to say that our online speakers are nearly all there. So, we are switching back to the introduction by Debarati Das, who will explain a little bit about the project in India when it comes to this particular topic. So, over to you, Debarati.

Debarati Das:
Hello, everybody. Sorry, there were some technical issues and some confusion with the link. Thank you all very much for joining. I’m Debarati from Point of View, and we are a feminist nonprofit in India working primarily in the intersections of gender, sexuality, disability, and technology. So, to set some more context for this session today: as our digital footprints grow every day, we are really grappling with new concepts, new experiences, and new understandings of the relationships between our lives and the technologies that we use. And it’s become really important to understand who we are as digital beings, what the self means in data-driven digital spaces, and how we imagine things like autonomy, agency, and choice in today’s age of datafication. So, digital self-determination is an evolving concept to consider some of these critical questions, and it fundamentally affirms the fact that a person’s data is an extension of themselves in cyberspace, and we need to consider how individuals can have autonomy over our digital selves. So, today we’ll unpack some of these very critical questions through the lens of the experiences of persons with disabilities from different countries and regions. And I’m very pleased to introduce our moderators. Sorry about the delay because of the technical issues. Our moderator on site is Gunela Astbrink. Gunela has been very active in disability policy programs and research for 30 years, chairs the Internet Society’s Accessibility Standing Group, has also served on the IGF’s Multistakeholder Advisory Group, and is the vice chair of ICANN’s Asia-Pacific Regional At-Large Organization. Our moderator online and our partner in this is Padmini Ray Murray, who is the founder of Design Beku, a design and digital collective based in Bangalore, India, that works to shift how we can think about design and tech as processes of co-creation and participation, centered around feminist values, design justice principles, and an ethics of care that advocates for designing with communities and not for communities.

With this, I hand it over to Padmini to maybe share in brief a bit more context on how today’s topic relates to disability rights and justice, and then over to you both, Padmini and Gunela, to take the conversation.

Padmini Ray Murray:
So, yeah, good. Can you hear me? Yeah, great. Thanks for the introduction, Debarati. It’s nice to be here, albeit virtually. So I think actually Vidya, the first speaker, has already set the scene quite well, because as they mentioned, digital self-determination, of course, is something that we are all currently positioned to think about quite deeply because of the implications of surveillance capitalism. Every single device we use is compromised by some form of surveillance. And it is very difficult for even non-disabled people to wrap their heads around the implications of being online, using these devices, and thinking about how to keep themselves and their privacy safe. And obviously, this burden is doubled for people with disabilities. There are two reasons for this. The first is that most devices or apps, even if they are made for disabled users, might not be taking these concerns into consideration when they’re being designed. So some of our work over the last few months with Point of View has actually been speaking to designers and technologists and putting them in conversation with people with disabilities so that they can understand their needs better. Because something that we all come across when designing technologies is that while there are accessibility guidelines, for example those set forward by the W3C, those are often just a baseline, and there are much more nuanced requirements of disabled users that need to be taken into account. I think the second issue is that in any case around privacy and surveillance, it is always the marginalized who are the most vulnerable, and there are often the fewest opportunities and options for recourse for them. And so it becomes even more important that we look specifically at disabled users and how they might be able to pursue self-determination as a use case. So I’ll just stop there and hand back to Gunela.

Gunela Astbrink:
Thank you very much, Padmini. And just for those participants and speakers online, we did start over half an hour ago, so it means we have about 25 minutes left. So we will move on to talk a little bit about imagining digital tech that works for everyone. I’m keen to hear examples and stories of digital tech that provides accessible, safe, joyful user experiences. So if Manique Gunaratne is online, I’d like to pass the floor over to her. Manique Gunaratne is the manager of the specialized training and disability resource centre of the Employers’ Federation of Ceylon. She has promoted inclusive economic development centering on persons with disability. She also acts as vice chairperson of the South Asian Disability Forum, is a founding member of the South Asian Women with Disabilities Network, and is a member of Asia-Pacific Women with Disabilities United. So if Manique is there online, please go ahead and talk about how digital tech can be the best we would like it to be. Thank you.

Manique Gunaratne:
Thank you, Gunela. Yes, technology is very important for people with disabilities, because that’s how we survive in society. We as people with disabilities have a disability, which we have to admit. So through these assistive devices and technology, it’s easy for us to work as capably as people without disabilities. And if we imagine a world of technology where, through the movements of persons with disabilities, technology can inform the caregivers what the requirements of the person with disabilities are, then the lifestyle would be very easy for us. And especially now with AI, artificial intelligence, there are so many technologies available, but the problem is the cost factor. For example, for hearing impaired persons: if someone rings the bell, hearing impaired persons cannot hear. Or if a dog barks, a hearing impaired person cannot hear. But when the technology is there, through a smartphone or any device, a picture of a dog barking can be provided when a dog is barking, or an indication can be given when the doorbell is ringing. So it will be very easy for hearing impaired persons; it makes life easy. And also for vision impaired persons, the smart glasses, right? If, through eye gestures, when we walk with the smart glasses, we can identify what is around us and get a description, it will be very easy. And also for people with physical disabilities, the people who have mobility difficulties: through apps and technology, they can find out which places they can access. It may be a restaurant, it may be a movie theatre. So those things are important. And there are also people with disabilities whose movements are limited.
So through hand gestures and facial expressions, if they can operate the computer, they also can be as capable as people without disabilities, so that they can be employed and economically active. And if there may be technology through brain functions and ways of thinking, so that they can operate devices, those are very important. And entertainment is not only for people without disabilities. We as people with disabilities also need entertainment, maybe playing games through smartphones and computers. So any accessible games and technology are very important. And if technology is there to give emotion recognition, people with autism and also people with intellectual disabilities would be very grateful for that. And platforms which are accessible, so that all of us can use them equally. And imagine a world where, with a smartphone, if you want to cook something, you put in all the ingredients and press the buttons and say, I want fried chips or cooked rice or whatever, and the end product is there. So for people with disabilities, the phone can be very smart. And we as people with disabilities use a lot of devices. I’m a vision impaired person; if you can just imagine a world full of darkness around you, that is my world. So we do work as capably as people without disabilities through various apps, smartphones, and laptops, and we use the Be My Eyes app, which gives assistance to us, and currency identifiers, color recognizers; a lot of apps are available through the smartphone and other devices. So a world full of technology, especially for women with disabilities, is very useful for us. Thank you.

Gunela Astbrink:
Thank you so much, Manique. Wouldn’t it be wonderful if all of those technologies were available so that persons with disabilities could live seamlessly and independently? And that’s what we’re all aiming for. I would like now to ask Judy Okite, if she is online, to speak a little bit to this particular imagining topic, but also to talk generally about her experience of accessibility and potential barriers. Judy Okite is from KICTANet in Kenya and is the founder of the Association for Accessibility and Equality. She has been advocating for many years for better access for persons with disability, both in regard to physical infrastructure and online content. So I’ll hand over to Judy, please, if she is there.

Judy Okite:
Hello, Gunela. Thank you. Good to see you. It’s an interesting topic, that we talk about accessibility for persons with disability. And yes, I would be excited to see all-inclusive technology, or inclusive physical spaces, because that’s the one that really affects me the most. I know for a long time we’ve been advocating for physical accessibility even within the IGF; I hope that this year it’s much better. And there are the little things that we don’t get to think about; we don’t get to look into what really creates the barriers and puts people into the position of having to request assistance every now and then. So one of the things that I’ll just mention that we have been able to do with KICTANet this year: we evaluated government websites. We did that on 46 websites, just to be able to see how accessible the information is for persons with disability. Unfortunately, the highest got 80%. Of course, we were using the POUR principles. And the feedback was interesting. The feedback from government was interesting because people felt, if you are at 80%, then you are in a good space. But no, if you’re at 80%, that means 20% of your content is not accessible, meaning that your content is still not accessible for persons with disability. Another thing that we found from the research was that more emphasis is placed on persons who are blind when it comes to digital content. But you will find that a person with a cognitive disability is actually more disadvantaged. If the content is not understandable, if the content is not perceivable, then you’ve lost this person. They’re not going to be able to interact with your information as much as you would want all of them to. And looking at it from the Kenyan perspective, it was only a few years, maybe two years ago, that cognitive disability was actually recognized as a disability.
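An automated audit of the kind described above, scoring pages against the POUR principles, typically starts with simple machine-checkable tests, such as flagging images that ship without a text alternative (WCAG success criterion 1.1.1, under the Perceivable principle). Below is a minimal sketch using only Python's standard library; the sample HTML fragment and the percentage scoring are illustrative assumptions, not the methodology KICTANet actually used:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collects <img> tags that lack an alt attribute — a basic
    check under WCAG's Perceivable principle (criterion 1.1.1)."""
    def __init__(self):
        super().__init__()
        self.total_images = 0
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        self.total_images += 1
        attrs = dict(attrs)
        # alt="" is legitimate for purely decorative images, so only
        # flag images where the attribute is absent altogether.
        if "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "<no src>"))

def audit(html: str):
    """Return (score, flagged): the percentage of images carrying an
    alt attribute, plus the sources of images that fail the check.
    The scoring formula here is illustrative only."""
    parser = AltTextAudit()
    parser.feed(html)
    flagged = parser.missing_alt
    if parser.total_images == 0:
        return 100.0, flagged
    passed = parser.total_images - len(flagged)
    return 100.0 * passed / parser.total_images, flagged

# Hypothetical page fragment: two images, one missing its alt text.
sample = '<img src="logo.png" alt="Ministry logo"><img src="chart.png">'
score, flagged = audit(sample)
print(f"{score:.0f}% of images have alt text; flagged: {flagged}")
```

A full evaluation combines many such checks across all four principles (Perceivable, Operable, Understandable, Robust), which is exactly why an aggregate 80% score can still hide whole categories of users, as the speaker points out.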
Then you can see how far we still are on inclusion, on ensuring that everyone is included. So I would really like to see if there are these little things that we can ensure that the persons with disability are part of our change. Yes, we want to make change, but we need to include them. Not because they want to, but because they have to be part of the process. If I can just quickly give an example. Most recently I was in Dar es Salaam in Tanzania. We were having the Forum for Freedom in Dar es Salaam. That is an annual event. We’ve worked with them before, so they know my very specific needs when it comes to the physical platform. So when I got there, they had the ramp, yes, but there is the big pavement before you get into the ramp. So my question was, how does this make sense? So yes, there is the ramp, but I will still need to be lifted up to get to the ramp. So that’s not the access that we are talking about. They had a really beautiful, accessible room, but they have this very small cubicle for the washroom. So I decided that this time around I’m not going to say much about it. I’m just going to demonstrate. So I had to call the guys from the reception, and I was like, could you please come upstairs with the wheelchair? Is there a wheelchair? So they were like, okay, yes. So they came to the room with a wheelchair, and I requested them, could you please push the wheelchair into the bathroom? And the guy is asking me, how do we do that? I’m like, that’s an excellent question. How would you expect me to use it if you cannot push it in there? It’s not that the persons with disability want to be part of the process. They have to be part of the process. We need to empower the persons with disability to really be able to know their rights. I mean, I have the right to say this is not working for me. It’s not for you to tell me, no, this is the accessible room. People use it. No. I tell you, if it is not accessible, then it is not. 
And I just kept telling them, if you had included a person with disability as part of this process, the ramp would not have been this bad, and the washroom would not have been this bad. It’s not about having a wide, beautiful room; it’s about having it accessible. So I would really love to see if we can do that and be deliberate about it. It’s not something that we are requesting; it’s a right. We need to be part of that. We need to be part of the move, of the change. It’s not about ‘they are going to disturb us’ or ‘we know what it is that they need.’ It’s about ensuring that they are part of that process, that they are there, that they have a yes or a no, and that we are able and ready to listen to the yes and the no and make the necessary changes. Thank you very much, Gunela.

Gunela Astbrink:
Thank you very much, Judy. And I think it just shows that we have this beautiful imagining of what accessibility is and what technology can do, but then we come down to earth and realise some really fundamental things still need to be fixed. And I think Judy also made suggestions there about nothing about us without us. We need to be involved in those decisions on how something is built, whether it’s in the built environment or in the online environment. So I now just wanted to ask the audience if there were any particular questions or comments. And first of all, Padmini, are there any online questions or comments before we go to any in the room?

Padmini Ray Murray:
So actually, Gunela, since we just managed to get Nirmita in the room, it would be really nice if we could also include her in the conversation. So I think you might have a brief biography for her, but Nirmita is a widely respected and known specialist in disability rights and policy from India. So, Nirmita, maybe, since we’re running a little short of time, we could skip to the question, which is: how do you feel policy and regulatory processes can ensure the inclusion of disabled people

in the creation or the making of technologies, just like Judy was suggesting?

Nirmita Narasimhan:
Yeah. So first of all, apologies for coming late. I was facing some technical issues. So let me get to the question. I think it’s important to have policies, because that ensures that people are aware that there is a need; it is mandated, it is recognized by law, and there are standards to comply with. Otherwise, it is just a personal request from somebody to somebody, right? And the fact that there is a legal and a social requirement and a responsibility to comply with standards, I think that is very important to ensure that accessibility is there. If you look at the DARE Index survey, it shows that countries which have policies are more likely to have accessibility implemented. So starting from the policy, I would like to say that now, either we need to have policy or, where we have policy, we need to focus on implementing it. And that gives us guidelines on what to do, how to do it, and where to do it. So I think that answers your question in brief.

Padmini Ray Murray:
Can I just very quickly add a follow-up question, which is: how would you advocate disabled people lobby for this kind of policy? Because it’s quite labyrinthine, right, getting these questions to a policy level. So if you can just maybe share an example, or maybe advice as to how that might be done.

Nirmita Narasimhan:
Sure. So by and large, a lot of countries have ratified the CRPD and are implementing it in their legislation. But clearly, domain-specific policies have to come from within, and persons with disabilities have to drive that. It also depends on different strategies and different situations. For example, in India, when we had to lobby for copyright law at the global and the national level, we did a whole lot of research on the legal models available everywhere, and we ran campaigns: meetings, signature campaigns, a whole campaign effort. On the other hand, for electronic accessibility, we had meetings with officials of the electronics and IT department, and that's how we worked with them to develop a policy. At another level, to implement the procurement standard in India, we worked again with the ministry and with an agency, and there were nationwide consultations with experts, academic groups and industry on what the standard should be and how it should be implemented. But the one thing that holds everywhere is that we need to be involved, we need to be motivated, and we need to get other people to take responsibility for this too. It's not something which is only applicable to us; it's something we want the country as a whole to implement. It depends on the situation and on who the people are that we are in touch with, but whatever it is, we need to be proactive and ready to do more than we think is our job to do.

Gunela Astbrink:
Thank you very much, Nirmita. I'm so pleased that you got online in time to make your comments on policy; I think they are so essential. I will now ask Lydia Best for a question or a comment, please.

Audience:
Thank you very much for the opportunity to add my voice. As we speak about "nothing about us without us", I would like to disclose that I am deaf and I use a cochlear implant. When we talk about technology and how it empowers us, it does, but it also disempowers. For example, during the pandemic, when everybody went online, Google Meet was an excellent tool where we could very easily connect with each other and, while not perfect, we were able to communicate, mostly one-to-one. Text messages also help. For deaf people who are sign language users, we have WhatsApp and video calls, so we can use sign language. Great. But when we meet on Zoom, it is usually a multinational meeting, because I represent the European Federation of Hard of Hearing People and I work globally as well. When it comes to being involved, automatic captioning unfortunately fails us, and we often find it difficult to participate because we cannot follow what the discussion is about. Another issue is users switching off their videos: because the auto-captioning, when it is used, is not accurate enough, we need to support ourselves with lip-reading, and that causes a problem. We have to disclose that we need everyone to have their face shown properly so we can follow. But the latest invention from Zoom is causing the biggest consternation. Zoom has now rolled out automated captions in quite a few languages. Great. Any user participating in a Zoom call can click the language they want. But do you know what happens? Say someone is using English and someone else wants to follow in Spanish: suddenly both of us see both languages showing up in the captioning. It creates massive confusion, and lately we have been forced back into using only human captioning in international meetings, because we cannot rely on the technology, which actually disempowers us, unless everybody uses just one language, usually English. So there are a lot of issues. To me, this latest thing with Zoom demonstrates that Zoom did not work with persons with disabilities or with disability experts, and did not do enough user research before putting this new feature out. And that is something which is really distressing.

Gunela Astbrink:
Thank you. Thank you very much, Lydia, for those important comments. Are there any last-minute comments or questions from anyone else, please? And, Padmini, are there any comments or questions online?

Nirmita Narasimhan:
Gunela, this is Nirmita here. Do we have a minute? I just wanted to add some more thoughts on the previous discussions. When we talk about "nothing about us without us" and about accessibility, I wanted to quickly mention that increasingly we feel the need for mainstream products to be more universally designed, even the simple technologies around us. And what we need to understand is that just because something is accessible, it is not usable by everybody. There are different levels of users: somebody who is an expert in technology may be able to use something, while another person using the same screen reader, the same captioning or the same technology cannot. We need to have that user-centric approach when we are talking about accessibility as well. So, with that, I'll conclude.

Gunela Astbrink:
Thank you, Nirmita. I think that is a very good point to end on. I wish to thank all our speakers, online and in the room. Unfortunately we didn't have our online speakers with us from the beginning because of some technical issues with the Zoom links, but all the information has been captured, and I'm sure that everyone who has participated in this session will take home some very useful information on digital self-determination for people with disability, and especially on the gender focus of this topic. Thank you very much.

Debarati Das:
Thank you so much, Gunela. It would be great if, Padmini, you could share your concluding thoughts and comments.

Padmini Ray Murray:
Great. Thanks so much. So, yes, as somebody who identifies as both a designer and a technologist, I think the biggest challenge we struggle with is that when we design and develop technology, we always tend to do it at scale. This means that much more nuanced and individualised use is much harder to provide, and so it requires a paradigmatic shift in the way we think about creating a customised product. Something like AI might actually be the way forward, but we need to layer user interaction in such a way that individual users can toggle between different ways of using and experiencing technology, rather than foisting the same technology on everybody, because that is not a tenable solution. So I would urge those of you working in the field, and of course people with disabilities who are affected by this, to start those conversations and advocate for more individualised and customised experiences, rather than one size fits all, because we know very well it doesn't.

Gunela Astbrink:
Thank you. Thank you very much. And, Debarati, I think we've finished then. So thank you very much for this session; I think we'll conclude there. Okay. Thank you.

Speaker statistics (speech speed / speech length / speech time)

Audience: 151 words per minute / 498 words / 198 secs
Debarati Das: 140 words per minute / 359 words / 154 secs
Gunela Astbrink: 132 words per minute / 2545 words / 1157 secs
Judy Okite: 149 words per minute / 1067 words / 429 secs
Manique Gunaratne: 138 words per minute / 752 words / 327 secs
Nirmita Narasimhan: 171 words per minute / 681 words / 239 secs
Padmini Ray Murray: 170 words per minute / 701 words / 248 secs
Vidhya Y: 176 words per minute / 2461 words / 840 secs