African AI: Digital Public Goods for Inclusive Development | IGF 2023 WS #317

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis covers several important topics related to the development of AI and its impact on various aspects of society. One of the key points discussed is the significance of data infrastructure and access to compute for the democratization of AI. It is noted that the lack of proper data infrastructure can hinder the development and use of AI, especially in contexts like Africa or the global South.

Another point raised is the need to address these challenges around data infrastructure and compute access. The question was posed without supporting detail, but it implies open issues that must be resolved before AI can be used effectively and its benefits made accessible to all.

The analysis also touches upon AI policy and legislation in Kenya. The question raised is whether Kenya has a specific AI policy in place and corresponding legislation to operationalise it; the question was put to the panel rather than answered within the contribution itself.

Lastly, the analysis considers human-robot interaction, specifically how human workers should perceive and interact with robots working alongside them: as tools or as colleagues. The question was raised without further elaboration.

In conclusion, the audience contributions raise important open questions about data infrastructure, access to compute, AI policy and legislation, and human-robot interaction. Because they were posed as questions rather than arguments, they frame the discussion rather than settle it.

Yilmaz Akkoyun

AI has the potential to significantly impact inclusive development and help achieve the Sustainable Development Goals (SDGs). It can play a crucial role in improving access to medical services and increasing efficiency in agriculture, which can contribute to the goals of good health and well-being (SDG 3) and zero hunger (SDG 2). AI applications can facilitate medical service delivery by assisting in diagnostics, monitoring patients’ health, and providing personalized treatment. In agriculture, AI can enhance productivity, optimize resource usage, and improve food security.

However, there are challenges around access to AI and its negative effects, which disproportionately affect developing countries (SDG 10). Only a fraction of the global population currently has access to AI applications tailored to their specific needs. This digital divide reinforces existing inequalities and limits the potential benefits of AI for those who need them most. Moreover, negative impacts of AI, such as job displacement and bias in decision-making algorithms, can exacerbate existing inequalities in developing countries.

Ethical considerations and the regulation of AI are also critical. Risks associated with AI range from the high greenhouse gas emissions of large language models to digital disinformation and threats to civil and democratic rights (SDG 16). To ensure the responsible and fair development and use of AI, it is essential to promote ethical principles and practices. This includes addressing issues such as algorithmic bias, ensuring transparency and accountability, and safeguarding privacy and human rights.

In order to reduce inequalities and ensure diverse representation, it is important to have AI expertise and perspectives from various regions, including African countries (SDG 10). Africa has seen the emergence of various AI initiatives, and it is crucial to involve these initiatives in shaping the global conversation around AI. This will help ensure more equitable development and minimize the risk of marginalization.

The German Federal Ministry for Economic Cooperation and Development (BMZ) is committed to supporting the realization of AI’s potential through local innovation in partner countries (SDGs 8 and 9). The BMZ believes that digital public goods, such as open AI training datasets and research, are important enablers of economic and political participation. These measures can enhance economic growth and create opportunities for communities to harness AI for their specific needs.

Access to open AI training data and research, as well as open-source AI models, is considered foundational for local innovation (SDG 9). By sharing relevant data, AI models, and methods openly as digital public goods, a global exchange of AI innovations can be fostered, benefiting various regions and promoting cross-cultural collaboration.
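
To make this concrete, here is a minimal sketch of what consuming such a digital public good can look like in practice. It is an illustration under stated assumptions: it uses the Hugging Face datasets library and the openly licensed (CC0) Mozilla Common Voice Kiswahili subset as one real example of an open speech corpus, not necessarily a dataset referenced in this session, and access may require accepting the dataset's terms on the Hugging Face Hub.

```python
# Sketch: streaming a few records from an open Kiswahili speech corpus.
# Assumes `pip install datasets` and that the dataset's terms have been
# accepted (Common Voice is gated on the Hugging Face Hub).
from datasets import load_dataset

swahili_speech = load_dataset(
    "mozilla-foundation/common_voice_11_0",  # CC0-licensed Common Voice
    "sw",                                    # Kiswahili subset
    split="train",
    streaming=True,  # stream instead of downloading the full corpus
)

for record in swahili_speech.take(3):
    # Each record pairs an audio clip with its transcription.
    print(record["sentence"])
```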

In conclusion, AI holds tremendous potential for inclusive development and the achievement of SDGs. However, challenges of access, negative effects, and ethical concerns must be addressed. It is essential to ensure diverse representation, particularly from regions such as Africa, and promote ethical AI practices. Open access to AI training data and research is crucial for fostering local innovation and accelerating progress towards the SDGs. The African AI initiatives are inspiring and underscore the need for continued dialogue and learning about AI’s impact on development.

Zulfa Bobina

AI technologies as digital public goods remain an ideal rather than a reality; they are described as a future aspiration rather than something currently achieved. However, there is optimism about the future growth of AI technologies and collaborations. More work is being done in the advocacy space, which is expected to lead to wider adoption of AI technologies.

Civil society is seen as playing a vital role in addressing ethical considerations related to AI. It is believed that civil society can step in to address these concerns and ensure that AI technologies are developed and deployed ethically and responsibly. Efforts are being made to address these ethical concerns through research and advocacy.

There is a need for comprehensible communication regarding AI technologies. It is argued that explaining technologically complex concepts in simple language can help the general population understand and incorporate these technologies into their lives. The goal is to avoid elitism in technology comprehension and ensure that everyone has access to and understands AI technologies.

The often overlooked human workforce behind automated technologies is being highlighted and advocated for. It is recognized that automation and AI technologies can have a significant impact on the workforce. Therefore, efforts are being made to support and advocate for the rights of these workers to ensure fair treatment and protection in the face of technological advancements.

Harmonizing collective and individual rights is emphasized, particularly when it comes to data rights. It is argued that adopting western blueprints of data rights that focus solely on individual rights may not be suitable for African societies. There is a need for more balanced regulations that take into account both collective and individual rights.

Discussions around AI technologies as a public good are considered important, and considerable discussions are taking place, especially at events like this summit in Kyoto. Furthermore, public interest in data and AI technologies is growing, highlighting the need for ongoing dialogue as the technologies progress.

Overall, there is excitement about the various activities happening across the continent in the field of AI and technological developments. These advancements are seen as opportunities for growth and progress. While there are challenges and ethical considerations to address, there is an optimistic outlook for the future of AI technologies in Africa.

Darlington Akogo

Mino Health AI Labs, a leading healthcare technology company, has developed an AI system that interprets medical images and delivers results within seconds. The system has been approved by Ghana's Food and Drugs Authority (FDA Ghana) and has attracted users from approximately 50 countries. By providing fast and accurate medical image interpretation, it has the potential to significantly accelerate and streamline healthcare processes.
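
To ground the "results within seconds" claim, the sketch below times a single forward pass of a generic, publicly available image classifier. It is purely illustrative: Mino Health's actual model and pipeline are not public, so a torchvision ResNet and a hypothetical input file stand in for them.

```python
# Sketch: timing one inference pass with a stand-in classifier.
# This is NOT Mino Health's model; it only illustrates why a trained
# network can return a reading in seconds.
import time

import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("chest_xray.png").convert("RGB")  # hypothetical file
batch = preprocess(image).unsqueeze(0)

start = time.perf_counter()
with torch.no_grad():
    logits = model(batch)
print(f"Inference took {time.perf_counter() - start:.2f}s")
```

On commodity hardware a pass like this completes in well under a second; the weeks-long turnaround described in the session comes from the scarcity of radiologists, not from computation.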

Although the benefits of AI applications in healthcare are evident, it is crucial to subject these systems to rigorous evaluation. Approval of AI systems by health regulators can be challenging and requires extensive testing of their effectiveness, reliability, and safety. It is essential to distinguish between AI research or prototypes and real-world implementations, as the latter demand meticulous scrutiny and validation.

Considering the perspective of users is another important aspect of AI implementation. Users should actively participate in determining the features and operations of AI systems, particularly in healthcare. This ensures that these systems seamlessly integrate into users’ workflow and effectively meet their specific needs. Their input provides valuable insights on optimizing the functionality and usability of AI solutions, ultimately enhancing their impact in healthcare.

Moreover, the concept of businesses being built around solving problems connected to the Sustainable Development Goals (SDGs) has gained prominence. Companies such as Mino Health align their business strategies with addressing issues related to healthcare access and food security, demonstrating a positive approach towards achieving the SDGs. By focusing on solving socially significant problems, businesses can contribute to broader societal goals and make a tangible difference in people’s lives.

To guide businesses in achieving a balance between profit and impact, the concept of an internal constitution has emerged. This moral code acts as a set of guidelines for the company’s operations and ensures that its decisions and actions align with its core values. In certain cases, even the CEO can be voted out if they deviate from the principles outlined in the internal constitution. This mechanism promotes a sense of ethical responsibility within the business and encourages a long-term view that prioritizes societal welfare alongside financial success.

Furthermore, businesses can be registered for public good, which implies an obligation to prioritize the public interest over the interests of shareholders and investors. This designation reinforces the idea that businesses should focus on the common good, aiming to create positive social impact rather than solely maximizing profits. By doing so, businesses can align their objectives with the well-being of communities and contribute to the achievement of the SDGs.

Artificial intelligence (AI) has tremendous potential in aiding the attainment of the SDGs. The ability of AI to process vast amounts of data and derive actionable insights can be instrumental in addressing complex societal challenges. Investing in AI can be a strategic approach to tackling the problems identified within the SDGs, as it enables the development of innovative solutions and the efficient allocation of resources.

However, while harnessing the power of AI is essential, it is equally important to exercise responsibility and adhere to ethical frameworks. The transformative nature of AI technology calls for careful consideration of its potential risks and impacts. Leveraging AI responsibly means addressing issues such as bias, accountability, and privacy, among others. Operating within ethical boundaries is crucial to prevent new problems arising from the unchecked deployment of AI systems.

In summary, Mino Health AI Labs has made significant advancements in healthcare through the development of its AI system for medical image interpretation. However, the successful implementation of AI in healthcare requires rigorous evaluation, active user involvement, and a focus on aligning business strategies with the SDGs. The concept of an internal constitution and the registration of businesses for public good provide mechanisms to guide companies towards balancing profit and societal impact. AI, if invested in responsibly, holds the potential to address the challenges identified in the SDGs. At this pivotal juncture in history, there is a need to harness AI technology while ensuring its ethical and responsible use to avoid unforeseen consequences.

Meena Lysko

During the discussion on industry, innovation, infrastructure, and data privacy in South Africa, several important topics were addressed. One of the key points highlighted was the implementation of the Protection of Personal Information Act (POPI Act) and the Cyber Crimes Act. These acts were considered crucial for prioritising the safeguarding of personal information and for providing a legal framework to address various digital offences.

It was acknowledged that challenges arise in striking the balance between innovation and compliance in digital privacy. However, the speakers emphasised that the POPI Act and the Cyber Crimes Act play a vital role in ensuring responsible handling of data by organisations in South Africa.

Collaboration between businesses, individuals, and law enforcement agencies was emphasised as imperative in moving forward with the implementation of these acts. This collaboration is seen as a key factor in promoting the responsible use of personal information and in effectively addressing digital offences. The need for joint efforts in creating a secure and ethical digital environment was highlighted.

Another significant point discussed was the incorporation of ethics in the AI systems lifecycle. It was emphasised that ethics should be included from conception to production of AI systems. This includes the integration of a module on AI ethics and bias in training programmes. Ethical competence, which includes knowledge of laws and policies, was deemed necessary for individuals involved in AI development. Additionally, the need for an ethically tuned organisational environment was highlighted to ensure the responsible and ethical use of AI systems.

The importance of industry interaction in AI and data science training was also emphasised. The inclusion of industry experts in training sessions was seen as a means of facilitating knowledge sharing and promoting morally sound solutions. This collaboration between the training programmes and industry experts was found to be beneficial in keeping up with the latest trends and developments in the field.

The positive impact of training programmes on participants was highlighted with the assertion that these programmes support quality education, industry innovation, infrastructure development, zero hunger initiatives, and responsible consumption. The post-training feedback from previous programmes indicated that the training positively influenced the participants.

Lastly, the use of open AI systems was advocated as a means of contributing to sustainable digital development. It was noted that proprietary AI systems are generally used to make money, ensure security, empower technology, and simplify tasks. However, open AI systems were proposed as a more sustainable alternative for digital development.

In conclusion, the discussion highlighted the significance of the POPI Act and the Cyber Crimes Act in South Africa for ensuring personal data protection and addressing digital offences. Collaboration between businesses, individuals, and law enforcement agencies was deemed essential in moving forward with these acts. Ethics in AI systems development and the incorporation of industry interaction in training programmes were emphasised. The positive impact of training programmes on participants and the advocacy for the use of open AI systems in sustainable digital development were also discussed as important aspects of the conversation.

Susan Waweru

The Kenyan government has demonstrated a strong commitment to implementing and adhering to policies related to artificial intelligence (AI) and digital transformation. The Constitution of Kenya plays a significant role in guiding the development and use of AI. It includes provisions that emphasise transparency, accountability, and the protection of privacy rights. This indicates that the government recognises the fundamental importance of privacy in AI systems.

Moving beyond theoretical frameworks to actual implementation is a crucial step in the development of AI. The government understands the significance of leadership commitment in successfully executing plans. Without strong leadership support and commitment, the implementation and execution of policies become challenging.

The Kenyan government is actively pursuing digitisation and aims to develop an intelligent government. Key efforts in this direction include onboarding all government services onto eCitizen, a platform that provides online access to government services. The President himself is overseeing the Digital Transformation Agenda, highlighting the government’s high level of interest in digitisation. Currently, the government’s focus is on infrastructure development to support these digital initiatives.

Privacy and accessibility are two important principles emphasised in the development of digital public goods and AI technology. The government recognises that video surveillance mechanisms should respect privacy and not infringe on people’s freedoms. The Data Protection Act in Kenya primarily affects data controllers and processors, ensuring that personal data is handled with care and that individual privacy is protected.

To further support AI development, the Kenyan government is working towards separate legislation and strategies specifically for AI. This demonstrates a commitment to creating a comprehensive and focused approach to AI policy. The government is actively drafting AI legislation and has established a central working group to review and update tech-related legislations, policies, and strategies.

In line with their commitment to effective governance, the Kenyan government is developing an AI chatbot. This chatbot, using natural language processing with large datasets, is aimed at enhancing compliance and bringing government services closer to the people. It will be available 24/7, providing services in both English and Swahili.
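
The sketch below illustrates the bilingual pattern described here. The eCitizen chatbot's actual design is not public, so the service, keywords, and language check are hypothetical stand-ins; a production system would use trained language-identification and intent models over large datasets rather than keyword lists.

```python
# Sketch: a toy bilingual (English/Swahili) FAQ responder.
# All keywords and answers are illustrative, not from the real system.
RESPONSES = {
    ("passport", "pasipoti"): {
        "en": "You can apply for a passport on eCitizen under Immigration services.",
        "sw": "Unaweza kuomba pasipoti kwenye eCitizen chini ya huduma za Uhamiaji.",
    },
}

SWAHILI_MARKERS = {"habari", "huduma", "nataka", "kuomba", "pasipoti"}

def detect_language(text: str) -> str:
    # Crude heuristic for the sketch; real systems use trained models.
    return "sw" if set(text.lower().split()) & SWAHILI_MARKERS else "en"

def reply(text: str) -> str:
    lang = detect_language(text)
    lowered = text.lower()
    for keywords, answers in RESPONSES.items():
        if any(keyword in lowered for keyword in keywords):
            return answers[lang]
    return {"en": "Sorry, I did not understand. Please rephrase.",
            "sw": "Samahani, sijaelewa. Tafadhali uliza tena."}[lang]

print(reply("I want to apply for a passport"))  # English answer
print(reply("Nataka kuomba pasipoti"))          # Swahili answer
```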

Demystifying AI and promoting human-centred design are also important aspects. The government recognises that creating awareness and understanding among the public can increase adoption of AI and reduce fear of it. In addition, a focus on human-centred design ensures that AI development prioritises the needs of citizens over the interests of organisations.

Finally, the benefits of AI, especially in public service delivery, are highlighted. The government acknowledges that AI has the potential to provide significant benefits to its citizens. The aim is to ensure that the advantages of AI technology outweigh any potential risks.

In conclusion, the Kenyan government has taken substantial steps towards implementing and adhering to AI and digital transformation policies. With a strong commitment to privacy, accessibility, and human-centred design, as well as efforts to develop separate AI legislation and strategies, the government is actively working to create a more inclusive and technologically advanced society. Through initiatives such as the AI chatbot and the digitisation agenda, the government aims to provide efficient and accessible services to its citizens.

Moderator – Mark Irura

During the discussion, several important topics related to healthcare and the implementation of digital solutions were discussed. Mark Irura emphasised the need for risk assessment and harm prevention when incorporating digital solutions. He highlighted the importance of evaluating potential risks and taking necessary precautions to protect individuals from physical, emotional, and psychological harm. Irura also stressed the importance of implementing data protection protocols to safeguard sensitive information and maintain citizens’ privacy.

The discussion also acknowledged the challenge of balancing business interests with Sustainable Development Goals (SDGs) and the integration of artificial intelligence (AI). It was recognised that business requirements and regulations may take precedence at times, making it difficult to align them with the objectives of sustainable development and the use of AI technologies. The speakers agreed that finding a harmonious balance between these different aspects is crucial to ensure the successful implementation of digital solutions that contribute positively to both business interests and the achievement of SDGs.

Mark Irura further emphasised the need for developing strategies that can effectively align business objectives, SDGs, and AI technologies. He inquired about the approach used to align these elements in addressing various challenges. This highlights the importance of creating a comprehensive framework and implementing strategies that consider all three components, providing a cohesive and integrated approach to problem-solving.

Overall, the speakers strongly emphasised the need for rigorous certification processes, active user involvement in decision-making processes, and robust data protection measures. These measures are crucial to mitigate risks and ensure the well-being of individuals when implementing digital solutions. The discussion conveyed the wider implications of the implementation process and the importance of responsible use of AI technologies in healthcare and other sectors.

Session transcript

Moderator – Mark Irura:
I want to check also if the colleagues online have been able to join. Bobina Zulfa, Dr. Mina and Darlington, are you online with us?

Meena Lysko:
This is Mina. Yes, I am online. Thank you.

Moderator – Mark Irura:
Bobina? Perfect.

Zulfa Bobina:
Hello. I’m here as well.

Moderator – Mark Irura:
All right, thank you. So we are missing Darlington, but we’ll start with the session. I will start with introductions. My name is Mark Irura. And today we are here to talk about AI and its use, particularly for sustainable development as far as digital public goods are concerned. It’s some work that we have been doing in Africa, and we kind of will do a little bit of a deep dive looking at some of the things that we have done as a program within GIZ, but also explore what are some of the risks that are coming out in the discussions that we have. With us today, we have Yilmaz from the Federal Ministry of Economic Cooperation and Development, BMZ. We have Susan Waweru seated beside me. She’s the head of legal at the Office of the Data Protection Commissioner. I have Dr. Meena Lysko. She brings on board her experience having worked with government, academia, and the private sector. And she’s currently a director at Move Beyond Consulting based in South Africa. And we have Bobina Zulfa. Bobina is an AI and data rights researcher at Pollicy based in Uganda. Pollicy is a feminist collective of technologists, data scientists, creatives, and academics working at the intersection of data, design, and technology to see how government can improve on service delivery. We’ll start with a keynote from Yilmaz to talk to us a little bit about, from a high overview, what they are doing before we delve into the conversation. So over to you. Thank you.

Yilmaz Akkoyun:
Dear Mark, distinguished guests and colleagues, dear ladies and gentlemen, dear IGF friends, it’s a great honor on behalf of the German BMZ and pleasure to share a few opening remarks today highlighting the potentials of AI, especially African AI, for inclusive development. What is the potential of AI for inclusive development? I think we already heard a lot on day zero and today. In my view, it can be instrumental in achieving the SDGs. They can facilitate medical service delivery, increase efficiency in agriculture, and improve food security, challenges of our time. Yet, only a fraction of the population worldwide has access to AI applications that are tailored to their needs. And we want to change this. This is why we are here. And on top of that, the negative effects of AI disproportionately affect developing countries, especially in the global south. However, we also need to be aware of the risks related to AI. These risks range from high greenhouse gas emissions of large language models to digital disinformation and risks to civil and democratic rights. The international community is becoming increasingly aware of these issues, and we see it here at the IGF. Accordingly, in my view, the promotion of ethical, fair, and trustworthy AI, as well as the regulation of its risks, are beginning to be addressed at the global level, as we heard this morning in the G7 context of the AI Hiroshima process. AI has been addressed in the UN, G7, G20, and international organizations such as UNESCO and the OECD have published principles and clear recommendations that aim to protect human rights with AI being on the rise worldwide. And the EU is on the forefront of regulating AI with the EU AI Act. Secretary General Guterres is convening a multi-stakeholder high-level advisory board for AI that will include emerging and developing countries. I think these conversations between countries from the global north and the global south are essential so we can make sure that AI benefits all. And when talking about AI, we mostly hear about models and applications developed in Silicon Valley, California of the US, or in Europe, but there’s so much more. And we discuss large language models that represent and benefit only a fraction of the world population. That is why I’m especially excited to hear about AI use cases today that were developed and deployed in African countries and that truly represent African AI, and that were designed specifically to benefit the public in African countries. As the German Federal Ministry of Economic Cooperation and Development, we want to enhance economic, political participation of all people in our partner countries. And we are very eager to support our global partners to realize the potential of AI through local innovation in these countries that we are talking about here in this session. We are very committed to the idea that digital public goods are an important enabler. For example, to be more concrete, the access to open African language datasets is supporting local governments and the private sector in building AI-empowered services for citizens. For instance, our initiative Fair Forward contributes to the development of open AI training datasets in different languages, Kiswahili, Kinyarwanda, and Luganda, languages spoken by more than 150 million people collectively. And some of the examples we’ll get to know in this session are built on these language datasets. I’m looking very much forward to this. 
And to give you an outlook, we see open access to AI training data and research, as well as open source AI models as the foundation for local innovation. Therefore, relevant data, AI models, and methods should be shared openly as digital public goods. To realize the potential of AI for inclusive and sustainable development, we need to make sure at the same time that AI systems are treated as digital public goods. Open, transparent, and inclusive at the same time. In this way, a global exchange on AI innovations can emerge. This IGF with AI being mentioned in so many sessions is one starting point for the global exchange. And now, I’m looking very much forward to the use cases. And thank you so much for being part of this wonderful session.

Moderator – Mark Irura:
Thank you so much. So, before we dive in and building upon that, we are kind of taking a critical approach to try and see how are we beginning to define what AI means to us in the continent, in the African continent. And today, we specifically have this idea that we can actually build solutions and systems and not just look at it from a policy and a framework perspective, so to speak. And I will start with Susan, because she’s in the room and in the hot seat. And I will ask, I will start you to the framework, right? And what the Office of the Data Protection Commissioner is doing in Kenya as far as thinking about AI. And then, also explore if you have any ideas and context about what is happening in the rest of the continent.

Susan Waweru:
Thank you, Mark, for your question. And good evening to all. As you’ve heard, my name is Susan Waweru from the Office of the Data Protection Commissioner in Kenya. As we may be aware, in the AI context, privacy is of fundamental importance to ensure that AI works for the benefit of the people and not to their harm. Discussing on the frameworks, in Kenya, the top echelon of frameworks is the Constitution of Kenya. Within that constitution, we have several provisions that sort of guide AI in Kenya now. One of them is the values and principles of governance. From a government perspective, we are bound to be transparent, to be accountable, to be ethical in everything that we do. This includes in the deployment of AI for public service, in the service delivery in all forms. Secondly, we have values and principles of public service. These are the values and principles that govern us as public servants in how we carry out our public duties. So, that is what, at a constitutional level, we will be guided by in the deployment of AI in delivery of service. Of most importance is the Bill of Rights and Fundamental Freedoms. Within that Bill of Rights and Fundamental Freedoms, which is also in our constitution, we have the right to privacy. The right to privacy is what backs data protection laws, data protection policies, and data protection frameworks in Kenya. So, having the top organ, the Grundnorm in Kenya, giving the guardrails in which AI, privacy, data protection will be guided by, gives a good background and a firm foundation from which any other strategy or policy in tech or in AI can then spring. It forms the global guardrail for everything that should be done. So, it may not specifically touch on AI, but these values and principles, the Bill of Rights and Fundamental Freedoms, give you the constitutional guardrail of what you can and cannot do in the AI space. So, it is from that, Mark, that the frameworks then follow. All other frameworks, including the AI strategy and the digital master plan, all must adhere to it.

Moderator – Mark Irura:
Thanks. Thanks, Susan. And building upon that and the increasing knowledge that we have these things anchored in constitution or in principles that you want to develop, and then we have digital public goods. And digital public goods, then, are ways in which government is offering shared services. So, if you have a registry, for example, a civil birth and death registry, how can it be leveraged across government, rather than the social security and the hospital insurance fund and the tax administration all have their own registers, but kind of duplicating that effort. And so, from that knowledge, we are also beginning to think about how, because we have an opportunity and we have seen how multiple registers affect how government delivers services. Then we see the need for adopting this in government, because it can be cheaper. It can be cheaper in terms of how government procures these services. And I’m giving this background, because I do not know if Darlington is online. Is Darlington there? All right. So, you can introduce yourself and then move on to the question and share with us what you’re doing in West Africa. And then you can also talk a little bit about the lessons you’re learning from the work that you’re doing. And are you using any digital public goods approaches in your work? So, over to you, Darlington. Thank you.

Darlington Akogo:
Thank you for having me. My name is Darlington Akogo, founder and CEO of Mino Health AI Labs and KaraAgro AI. So, I’m in a moving vehicle. So, apologies for any sounds. But, yeah. So, what we do at Mino Health AI Labs is build artificial intelligence solutions for healthcare. And so, connected to the question, we do have one AI system that is focused on medical image interpretation. And, you know, we got the health regulators in Ghana, FDA Ghana, to certify and approve this AI system. And, yeah. So, we rolled it out. We have users from all over. We’ve had sign-ups from about 50 countries around the world. In Ghana, we have it being used in, you know, some of the top, I mean, the capital cities, but also some small towns. And, we are expanding access to even really rural areas. The benefits of, you know, this AI system to the communities, for example, is that by default, if you go take a medical image, an x-ray, for example, it will take several weeks before you get the results. Because, you know, there are very few radiologists. So, in Ghana, for example, there are about 40 radiologists. If you take an African country like Liberia, they only have less than five radiologists. So, what this AI system does is, you know, help speed up that process by using AI to interpret that medical image. So, the AI system, if you – we have it online at platform.minohealth.ai. And, this AI system is able to generate results in just a few seconds. About five, ten seconds, you get the results. So, what used to take weeks can now just take a few seconds. It makes all the difference in healthcare. Because, you want to know exactly what is wrong with people quick enough that you can respond to it. The lessons we’ve learned are quite a lot. One key one is within the space of AI, there’s a huge difference between, you know, doing AI for research or, you know, doing some sort of demo proof of concept. And, building real world AI that is meant to work with real humans. There’s a whole lot of difference. The key thing is the rigorous evaluations you need to do. And, this is super applicable in healthcare. So, getting the AI system to be certified by FDA or a health regulator is a very, very major step. And, what it takes to get health regulators to certify, you know, an AI system for some use cases is quite a lot. But then, you learn a lot of lessons. So, one of the key things we learned is just double down on rigorous evaluation. The other bit is, you don’t want to build the AI system and just hand it over to, you know, the users. Let them decide what kind of features they want, how they want the AI system to fit into their workflow. That is very important.

Moderator – Mark Irura:
Thank you so much, Darlington. Thank you. And, moving from what you’ve just said, like, I will turn to you, Mina, and ask about, Darlington just said, like, it’s very different when you’re doing a research project, but when you’re actually implementing a solution. Like, there are a lot of things. There are a lot of risks. And, from the wide goal, I think one of the approaches from, you know, a digital public goods perspective, or a DPI perspective, a digital public infrastructure perspective, is can we cause harm? What are the risks? How can we expose data we shouldn’t expose, and how do we protect code bases we shouldn’t, so that there are no harms that ultimately translate to the citizens? So, I will invite you to reflect on that. Thank you.

Meena Lysko:
Thank you. Thank you very much, Mark. And then, perhaps, firstly, thank you for this platform and giving me the opportunity to actually e-visit Japan. I do wish I could have been there in person. I could not, and I apologize for that. So, thank you for this opportunity to e-visit Japan and actually the captivating and quite unique city, Kyoto. I know it is the birth city of Nintendo, and it actually hosts a phenomenal number of the UNESCO World Sites. So, Mark and team, I am very envious of you. So, maybe I will come back to, I was intending to also share with you some of the work we have done or that we are doing. So, I will come back to it. In terms of your question, so looking at AI ethics and governance standards, probably first in South Africa, right, we have the ever-evolving digital landscape, and we have the Protection of Personal Information Act, or POPI Act, and we also have the Cyber Crimes Act, which stand as significant legal frameworks shaping the realm of data privacy, security, and digital crime prevention. So, the POPI Act, which is endorsed in South Africa, prioritizes the safeguarding of individuals’ personal information. It encourages responsible data handling by organizations. The POPI Act’s emphasis is on individual privacy, and it is reshaping the way organizations collect and manage personal data, prompting them to adopt stringent data protection measures. Perhaps I can give an example. So, I frequently get these sort of annoying calls, and then I ask, how did you get my number? And then I go on to say, you do know that I have not shared my number with you willingly, so this is against the POPI Act. And very often the phone goes down immediately. So, people in South Africa are very aware of the POPI Act, and people feel safeguarded through the POPI Act. However, challenges do emerge in balancing innovation and compliance, especially in the age of digital privacy. In parallel to the POPI Act, we have the Cyber Crimes Act. This addresses the escalating threat of cyber crime by providing a legal structure to tackle various digital offenses, thereby fortifying the defenses against cyber threats. So moving forward, I think it becomes quite imperative for businesses, individuals, and law enforcement agencies to actually collaborate in the implementation of these acts. Thank you, Mark.

Moderator – Mark Irura:
Thank you, Mina. And I turn to you, Bobina. So we’ve talked about digital public goods. We’ve talked about how we protect citizens. And Susan gave a very good introduction on what is being done as far as the frameworks that we have are concerned on data rights. SDG number 16 talks about partnerships. As civil society, in this particular field on digital public goods, do you have any collaborations with other stakeholders, whether in private or public sector? And do you think, and that’s loaded, do you think that there’s alignment from what you can see in the landscape, right, with sustainable development as far as AI is concerned right now? Over to you, Bobina.

Zulfa Bobina:
Okay, thank you, Mark. Please allow me to turn off my video because my internet is unstable. Can you hear me?

Moderator – Mark Irura:
Yes, yes, we can hear you.

Zulfa Bobina:
Yes?

Moderator – Mark Irura:
Yes, go for it.

Zulfa Bobina:
Good afternoon.

Moderator – Mark Irura:
We lost you now. We can’t hear you.

Zulfa Bobina:
Okay, can you hear me now?

Moderator – Mark Irura:
Yes, yes, go for it.

Zulfa Bobina:
Okay, great. I’ll quickly get to your question, Mark. Very interesting discussion from the rest of the panelists. Good to hear about the number of things you’ve been working on. I just want to say, just from the get-go, on the question itself: I’ve been thinking that the idea of AI technologies, even on the continent or globally as a whole right now, being digital public goods is still very much an ideal that we are, in a sense, working towards, because that’s not really a reality at the moment. A lot of what you would describe as being a digital public good, that it’s not excludable, that it’s more inclusive in a sense, is not really what’s happening at the moment. And so, in the sense of trying to relate that to the SDGs as the technology is being adopted here on the continent, and how that’s happening along the lines of intersection of different partners and how they’re possibly working towards the realisation of different SDGs: I do see a number of examples, for example, here in Uganda where I’m based in Kampala, or just across the African region or Africa as a whole. I see partnerships between academia and practice, especially. I’ll give one example, the Lacuna Fund with the Makerere AI Lab. For example, here in Kampala, Uganda, I see a lot happening at the Makerere AI Lab, and a lot of that is in partnership with, maybe, development partners or the tech sector themselves. So, for example, the Lacuna Fund, which is building natural language text and speech datasets, working, I think, in collaboration with Google. So I think a lot of what I see is within the academia space over to the private sector, and then civil society coming in to do more advocacy, speaking both to issues around these datasets and to ethical considerations in the adoption of these technologies. So I think it’s something that is springing up in a sense. It’s not happening on a very grand scale, but it’s something that we see coming up. And I guess we can hope that, especially with more work being done in the advocacy space, we will see more of that happening over the coming months and the coming years.

Moderator – Mark Irura:
All right. Thank you so much. Thank you so much, Bobina. And I come back to you, Susan. I’ll act like a journalist and say I’m sure the people in the room are wondering. Probably people in Kenya will say the Kenyans are asking as if it’s one person. How do we move from these frameworks, right, to the actual implementation? And what are some of the things that you’re doing in this regard so that they don’t remain on paper?

Susan Waweru:
Mark, that’s a good question. And it’s one I’m passionate about. I’m known as the get-it-done girl. So my reputation is to move things from paper to actual implementation and execution. Death in the drawer is a concept we learn in policy and in business administration where you can have the best policies, you can have the best strategies, frameworks, legislations, all documented. And that is one thing you see in Kenya. We have some of the best documentations even borrowed by the West. But implementation becomes one of the biggest challenges, not only in AI but in the tech space. So how you get it done, from my perspective, is one. Leadership matters. If you don’t have leadership commitment to getting what is on paper out to be physically seen, would be a challenge. So what we do is, as the technocrats in government, we seek to influence leadership. And we have some of our parliamentarians here with us. We seek to influence them on the importance of what has been documented. Because if the policy is done at the strategy level and just benched, then it becomes a challenge. But as technocrats, influencing the leadership on the importance of the documents that have been prepared is key. Once you get the leadership buy-in, then it trickles down to the user and the citizenry buy-in. Because those using the frameworks, for example, the Data Protection Act is an act passed by parliament to be implemented majorly. It affects majorly data controllers and data processors. Who are largely entities. So if we don’t get entities on board through awareness creation, through advocacy, then we don’t have that document done. And one way to get user buy-in, and we’ll talk about this later, is to have a free flow of information. To be transparent in what you do. To be very simple and clear on what the compliance journey is for data protection and for privacy. So leadership buy-in. Leadership matters, citizenry buy-in. Another thing is collaboration. Partnerships with organizations and entities who have executed that which is in our documentation. Once we collaborate, for example, with other bodies, other government agencies, for example, who have implemented their AI applications successfully in the Kenyan government, then we collaborate with them on how to do that. Currently in Kenya, I can say this get-it-done attitude is at high gear. In the tech space, the government has what it calls the Digital Transformation Agenda. It’s spearheaded by the presidency, with the president himself overseeing and calling out most of the projects. Currently that Digital Transformation Agenda is at infrastructure development stage and onboarding all government services onto one platform which we call eCitizen. And he gives specific timelines on when he wants all of that done and checks them himself. That’s the leadership. That’s the level at which the government of Kenya is interested in digitization towards moving to an intelligent government where we don’t react to public sector needs. We preempt them and provide them even before they happen. Those are the three ways, Mark, I would say, how we get documentation to the ground.

Moderator – Mark Irura:
Thanks, Susan. Of course, today we’ll wait to see what you’ve been doing as well with AI itself. I hope we can get a chance to see that if we don’t run out of time. I come to you, Dr. Mina, and I ask about training and capacity building. What does that mean to you for different stakeholders, whether they’re in policy, whether it’s the level of graduates that we have? We know a lot of them have to go overseas or outside the continent to get their training to be able to come back and be part of an ecosystem. So what does that look for you right now, and especially now with the risks and the potential harms of AI being apparent? Thank you.

Meena Lysko:
Thank you, Mark. I’m really looking forward to share some of the programs we’re busy with currently, but I’ll hold back and maybe address this particular key question. So the emphasis should be in including ethics in AI systems lifecycle. So that should be from conception all the way through production, and it’s a cycle. It means that it should go continuously at a sustained sort of initiative throughout the working stages of a particular system. Within some of the programs, and for example, the one I’m currently on, we’ve incorporated a module on AI ethics and bias. Now, albeit that we are looking at very hands-on development, we looked at the soft skill, if I can call it that, where we need for our participants, our trainees, to emphasize that the adopting of ethics in AI is more than just knowing ethical frameworks and the AI systems lifecycle. So you require awareness of ethics from the perspective of knowledge, skills, and attitudes. And that means knowledge of laws, policies, standards, principles, and practices. And then we also need to integrate with that professional bodies and activists. And we have a number within South Africa itself. For example, we have an AI overarching representative within South Africa as a body. We have, I think it’s called DepHub in South Africa, which focuses on AI policies and data recommendations. And then we must also look at application of ethical competence. So we need an ethically tuned organizational environment. And in tune with that as well, we have to look at ethical judgment. So we’ve been emphasizing that our participants in our training program is fully aware of these aspects. So they need to be, their projects, their developments require to be guided by their ethical principles and philosophies. So they need to be imbibed with that. In the projects that they are in, they have to apply ethics throughout the design and development process. And we’ve also, to ensure that we are training people in AI and data science as an example, but we’ve also incorporated to invite industry experts into our sessions for engaging with the participants so that there is an encouragement of healthy knowledge sharing. But also in the opposite direction, there is youthful perspectives that is shared on promoting morally sound solutions. So they’re not yet contaminated with what is going on in a market for the purposes of just for profit. And that’s where we’ve seen it happening within our training programs as a very successful sort of sharing mechanism conjured. Thanks, Mark.

Moderator – Mark Irura:
Thanks, Mina. And I come to you, Darlington, and I ask this question that touches a little bit on what Dr. Mina said, working with industry experts. So you have a bucket where we have the SDGs, we have the AI and the problem that needs to be solved. In your case, you talked about radiology and being able to read and interpret what those images mean. And then we have the complexities of running a business. So talk to us a little bit about strategies. If any exist to align this in the work that you’re doing. If you’re still there. Darlington is not there. Okay, then I would move to you, Bobina. Are you online?

Zulfa Bobina:
Yes, I am.

Moderator – Mark Irura:
All right. I will ask a question that is related to ethical deployment of AI. What are you doing as civil society in this regard to make sure, for example, that people will not be left behind by digitization and digitization topics, that children will learn at an earlier age about the risks of these technologies and even as they begin to use them? Over to you, Bobina.

Zulfa Bobina:
Okay, thank you, Mark. I hope you can hear me. That’s a really profound question because I think it goes over a lot of the things we’re trying to unpack throughout this conversation. But also, like Dr. Meena was going over a number of the ethical concerns and how these are being navigated. But I’ll say, well, for instance, with the work we do at Pollicy, a lot of what we do is research, which is sociotechnical, in a sense. So we’re not developers, but we look at the landscape and look at how these technologies are being adopted and deployed in different communities. So in our role, in a sense, a lot of what we’re doing is, on one hand, knowledge production with our research, and then advocacy on the other hand. So with the knowledge production, I would say, very largely, the things we’re looking at very critically right now in terms of addressing the ethical concerns are bringing communities to understand the workings of these technologies, because we think it’s very elitist to… I think we always compare this to the health conversation, where there is a disease outbreak and then governments find a language to communicate it to the general population, even when there is all this scientific language about it. So I think we’ve been trying to think around this: how do we come up with language so that the populations, the ordinary person within the country or anywhere else across the continent, will be able to understand these technologies and how they impact them on a day-to-day basis, how they can incorporate them in their lives, and how this could be something to just benefit their lives. On a broader scale, also just given the time I have, just going over those two points of knowledge production and advocacy, I think we’re looking very critically at issues of, one, invisibilised workers. So this is really a conversation about automation and the invisible workforce behind a lot of these technologies that we’re being told are sort of working frictionlessly. So trying to get the understanding, the buy-in, of government, which is supposed to regulate these technologies that are being adopted, in a sense, for the people doing this work, this invisible work. So that’s one of the things we’re looking at. The other being the harmonization of collective and individual rights. So a lot of the frameworks, I think, that are being developed, and I think this is a trickle from the West where we’re getting sort of a blueprint from the GDPR, et cetera, we see that a lot of our frameworks are driven towards individual rights. I think that’s problematic, especially as society is datafied more and more; there is a need for us to move towards a place where we harmonize both collective and individual rights, and that would bring in a participatory. Thank you.

Moderator – Mark Irura:
Hi, Bobina. Can you hear me?

Zulfa Bobina:
Yes, I can hear you. I don’t know if you can hear me.

Moderator – Mark Irura:
Yeah, I lost you temporarily. Just repeat the last sentence as you wind up. [Audible gasp] Okay. No, we’re here with you. Please, please. Yeah, yeah. Ah, I think we lost Bobina. Okay. Darlington, I see you’re settled now. Okay, perfect. Cool. I had a question for you, which I think you did not hear, but I will take two questions from the audience. You can prepare them if anyone has a question and probably one from the room and one online. My question to you, Darlington, before we lost you in cyberspace, was we have the SDGs, the Sustainable Development Goals, we have you as a business, and then we have AI, and not all the time these interests are aligned. Sometimes it’s the business which has to take precedence. Sometimes, like in the problem that you gave us, you’re trying to solve a really impactful problem, and at times it’s just, you know, you have to comply with some certain regulations. What are some of the strategies that you have to align all of this in solving the problems that you want to do and also aligning with the SDGs?

Darlington Akogo:
Yeah, I mean, that’s a very, very important question. The initial solution or the initial strategy is make sure your business is built around solving a problem in itself, so a problem connected to the SDG. Then fundamentally, there’s no conflict to begin with. So if you are profiting off of something that is damaging the environment or destroying the health of people, then alignment becomes a really, really big problem. But if fundamentally you have a social enterprise, you have a business built around solving a problem, like in our case, the whole business is built around being able to provide healthcare and make it accessible to everyone for Mino Health, and for KaraAgro, we’re making sure we solve food security. Outside of that, you know, there are definitely instances where maybe if you took one route, you’d make a lot of profit, but the impact might not be so much. And then there’s another route where, you know, it might not be the case. So I can give you a real-world example. So we work on drug discovery with AI. There’s a scenario we’ve looked at where you could take certain conditions, work on new drugs for them, and it’ll be very expensive. You know, there are certain medications where, you know, a few tablets are tens of thousands of dollars, hundreds of thousands and millions, and you could sell it to a few people and make a lot of money. But then the question is, are you actually building any equitable access to healthcare by doing that? And so when it comes to those scenarios, you need to have guiding principles. What you can do is have an internal constitution that says this is our moral backbone and we need to live by it. And the board is basically obliged to make decisions off of it. So even if the CEO veers off by not following that internal code of conduct, that constitution, they could be voted out. And depending on how serious you are about this, you can solidify this within the company’s constitution, and then it will be fully followed. Some people go a step further, even the way you register the business. So there’s a category in some countries where you can register as a social enterprise or, you know, a for-profit, but for public good, I think the term is. And when you do that, it means that your primary obligation is not to shareholders and investors, it’s to the public. So those are legal ways of making it binding to make sure that you are actually focused on addressing the SDGs and not just, you know, maximizing profits.

Moderator – Mark Irura:
Thank you, thank you. And I guess what I’m hearing from you as well is the ability to also consider self-regulation, especially in this space, as you innovate, as you solve these problems, even where there might be a lacuna in the law or in the frameworks that exist. I don’t know if there are any questions coming in from the room to begin with or from online. Yes. Hi, Leah.

Audience:
Hi, everyone. I’m Leah from the Digital Public Goods Alliance. And I think we should also quickly talk about infrastructure. I mean, apparently we had some troubles here, which is a good bridge to talk about data infrastructure and access to compute. Obviously you need both of them in order to democratize the use of, the development of, and also the benefits of AI in an African or global South context. So how do you deal with these challenges in your project, in your country context? Thank you.

Moderator – Mark Irura:
So I will open up that question to anyone to take it up. Yeah.

Susan Waweru:
Thank you for the question. In Kenya, one of the things I mentioned under the digital transformation agenda is that the first building block is infrastructure. I know that for the next five years the government has the last-mile connectivity project, which seeks to bring fiber connectivity to every market, every bus station, and every public entity. That will give free wifi, and that gives people access to digital public goods. So that was adopted as one of the first things to be done, because you can’t develop digital public goods without accessibility. Equal accessibility is very important, and I think it is one of the bedrocks for making AI, other tech, and DPGs successful. So that’s what I know is happening from the Kenyan experience.

Moderator – Mark Irura:
Anyone else? Dr. Meena, would you like to come in?

Meena Lysko:
Yes, sure, Mark. I was looking for the Zoom raise-hand button, but thank you for asking me as well. From a South African perspective, let’s see if I can turn the video on. So from a South African perspective, you may have read in the news or be aware that we have this thing called load shedding. It’s a term coined within South Africa for a structured approach to managing power consumption within the country when there are constraints on the national grid. This adds, to the challenges we already have globally with infrastructure to ensure connectivity, the question of redundancy. But with redundancy, I guess we also need to ensure that it is affordable, and it must be affordable at every latitude and longitude, to the decimal of latitude and longitude, so that it reaches every sphere of life within the world, and, in our context, within South Africa. In running our bootcamp, for example, a program we are currently doing, it has been a challenge to run a hybrid program where people cannot stay online for the full duration of the training because of this matter of connectivity. Fortunately, we record sessions so they can follow up after the session; so we have solutions around it. So the one aspect is infrastructure, but it is also about redundancy. And then there is also this question: in education and training, and now in our 4IR, we are relying heavily on infrastructure. What happens if someday, for whatever reason, and we have seen this through natural disasters in various parts of the world, infrastructure is affected? How do we then manage to come back online as expediently as possible? Because in this 4IR, in this AI- and data-evolved world, our reliance is fully on infrastructure to keep global economies going, so the risk is quite high. I think that is a call for action to look into this. Going to the opposite extreme, there is the question of the impact infrastructure has on the environment; energy consumption, for example, is massive, right? So these are the sorts of things that I think we have to be very mindful of and look into responsibly; we talk about responsibility, so we have to be responsible about that as well. Thank you, Mark.

Moderator – Mark Irura:
Thank you, thank you. I don’t know if Sumaya, you can find one question for us online to read out.

Audience:
Thank you, Mark. We have a few questions online. Okay. The first one is: do we have an AI policy in Kenya? If yes, is there legislation to operationalize the policy? Question number two: how should human workers perceive and interact with robots working alongside them? Are these robots supposed to be treated as tools or as colleagues by the humans working with them? So those are the questions. Thank you.

Moderator – Mark Irura:
Susan, I’ll direct the one for Kenya to you.

Susan Waweru:
Kenya has what is called the Digital Master Plan, and in it are some aspects about AI. Recently, about two weeks ago, the government, led by the president, instructed that AI legislation be drafted, so that is ongoing work. Further, there’s a central working group that’s looking at all tech-related legislation, policies, and strategies, and one of the things that will be considered is having an AI policy in place. So the answer is yes: within the Digital Master Plan we have aspects of an AI policy. However, there are efforts, I think within this year, to have legislation, policies, and strategies that will guide that.

Moderator – Mark Irura:
Thank you. And then the second question is existential, right? Should humans interact with robots? We already do, right? We already do, to some extent. If there are questions online and in the room, we will take them and continue to answer them, because I want to move on and ask our panelists and everyone who has shared here to take a minute to wrap up before we leave the room. We wanted you at least to hear what’s happening. We wanted to show you what’s happening. And we wanted you to appreciate that we are not just talking in Africa; we are doing something. I don’t know, shall I begin with you, Yilmaz? If it’s okay, just a minute, yeah.

Yilmaz Akkoyun:
Yes, thank you so much. It’s a real privilege to listen to these different examples and use cases, because this was really inspiring for me: to hear more about creating African AI and how it works, and also the challenges and how you deal with them. This was super helpful. I would like to stay in close touch with you to continue this conversation; I think it’s just the starting point. And I’m so happy to see the success stories already. I can only congratulate this panel, you also for the amazing moderation, and the different panelists who participated remotely. Let’s please continue this conversation. For now, I can just say I absorbed so much, because I hadn’t heard too much about it in advance. And yeah, this is why we are also here: to get in touch and join the IGF. And I think it’s just the beginning.

Moderator – Mark Irura:
Thank you. Sumaya, are we going to put it up? Are we going to put it up? Okay, good. As you prepare, I’ll go online and ask Bobina to share her closing remarks.

Zulfa Bobina:
Sure, thank you very much again for having me be a part of this conversation. I think, as someone mentioned earlier, there have been a lot of conversations happening here in Kyoto around AI technologies as a whole. So to be talking about the direction of how we get to realize this as a digital public good, one that is indeed of benefit to everyone, is a step forward. We are coming from the initial conversations around digitization, and now, as the public as a whole engages more and more with data, how do we let the conversation evolve as the technologies are evolving as well? So for me, I’m very excited to hear about some of the things that are happening here and there across the continent, very excited to see more of that, and very happy to keep in touch with you all to let this conversation keep going. Thank you.

Moderator – Mark Irura:
Thank you so much. And then I move to you, Dr. Meena.

Meena Lysko:
Thank you, Mark. In the context of the Sustainable Development Goals, our training has aimed to support quality education, industry innovation and infrastructure, zero hunger, and responsible consumption and production. I take with me today what you have said, and I think that could be a nice global call: self-regulate as you innovate. Our post-training feedback from previous programs, as well as feedback from participants in our current program, is giving a glimpse of how paying it forward is being achieved. And I want to sum up by saying this: proprietary AI systems are generally used to make money, enable security, empower technology, and simplify tasks that would otherwise be mundane and rudimentary. But if AI ecosystems could be designed to take advantage of openly available software systems, publicly accessible datasets, generally openly available AI models and standards, and open content, digital public goods would make generally free works available for Africa, and hence contribute to sustainable continental and international digital development. Thank you, Mark.

Moderator – Mark Irura:
Thank you. And then I move to you, Darlington.

Darlington Akogo:
Yeah, so I think we are in one of the best moments in human history, where we are building technology that finally digitizes what makes us special as a species, and potentially even surpasses it. The potential is beyond anything we can think of. We are, what, seven years away from the deadline of the SDGs, and there’s a lot of realization that we are not close to meeting the targets. I strongly believe that if we can double down on properly using AI ambitiously, whether in Africa, Asia or anywhere in the world, if we can seriously double down and invest properly in it, we can address just about everything on the SDGs. There’s no limit to how far AI can go, especially in the context of foundation models now and how general they are. So I would say let’s double down on it, but let’s do this in a very responsible and ethical way, so that as we are solving the SDGs, we don’t create a new batch of problems that we then have to address. So let’s leverage AI and solve the SDGs.

Moderator – Mark Irura:
Thank you so much. And before Susan closes for us: they have been working on an interesting project, and maybe she can talk to us about it and then give us her closing remarks. It’s being projected before you. It’s a tool that can help citizens learn about the Act, and it can communicate in Sheng, a mixture of Swahili and English. So over to you, Susan.

Susan Waweru:
Thank you, Mark. So, just to quickly run through: one of the things we are developing is an AI chatbot to provide the services that the ODPC provides. This chatbot uses natural language processing, trained on large datasets of the questions the citizenry may have for the ODPC. It speaks both English and Swahili, the two official languages in Kenya. So Sumaya, if you may just ask it: what is a data controller? This is an awareness tool. It’s a tool to enhance compliance. It’s a tool to bring services closer to people. And it overcomes challenges such as the eight-to-five working day: as a data controller or a data processor seeking to register or make a complaint, you’re not limited to working hours; that can be done at any time. It gives information, it gives processes, and it’s all free of charge, giving it accessibility. So, to end the session, my clarion call is that AI is inevitable. We’re already using it. It’s already on our phones; it’s already in our public services. So it’s inevitable. The main thing I would say is to have it human-centered. Even when we were developing the chatbot, we put ourselves in the shoes of the citizen more than thinking of the benefit of the organization. So if we can enhance human-centered AI and maybe bring up the benefits more than the risks, that would be best. The way to do this is to demystify AI, and a panel such as this is one of the ways we do it. You demystify it, because currently it’s seen as a scary big monster, which is not what it truly is; it’s what it could be, but it has many more benefits, especially for public service delivery. And with that, Mark, I just want to say thank you to you and the organizers, Sumaya and Bonifaz, and largely to the IGF.
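As an aside for readers curious how such a service might be assembled, here is a minimal, hypothetical sketch of a retrieval-style FAQ assistant in Python. The transcript does not describe the ODPC chatbot’s actual implementation, so everything below is an assumption for illustration: the question-answer pairs, the Swahili phrasing, and the similarity threshold are all invented.

```python
# A toy retrieval-based FAQ assistant, loosely inspired by the bilingual
# chatbot described above. NOT the real ODPC system; data and threshold
# are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical bilingual knowledge base: stored question -> vetted answer.
FAQ = {
    "What is a data controller?":
        "A data controller determines the purpose and means of processing personal data.",
    "Msimamizi wa data ni nini?":
        "Msimamizi wa data huamua madhumuni na njia za kuchakata data za kibinafsi.",
    "How do I register as a data processor?":
        "Registration can be done online at any time, free of charge.",
}

questions = list(FAQ)
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_query: str) -> str:
    """Return the answer whose stored question best matches the query."""
    scores = cosine_similarity(vectorizer.transform([user_query]), question_vectors)[0]
    best = scores.argmax()
    if scores[best] < 0.2:  # arbitrary cut-off for "no good match"
        return "Sorry, I don't have an answer for that yet. / Samahani, sina jibu kwa hilo bado."
    return FAQ[questions[best]]

print(answer("what is a data controller"))
```

A production system along the lines Susan describes would use far richer language models and curated legal content; the point here is only the overall shape: match the citizen’s question to known questions, then return the vetted answer.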

Moderator – Mark Irura:
Thank you so much. Let’s give a round of applause to everyone who’s contributed to this conversation. I hope the session has been valuable to you. I hope you learned something. And I hope we can connect. I hope we can talk more about the topic. Thank you so much. And thank you online as well for joining us. Thank you so much. Bye.

Speaker statistics (speech speed; speech length; speech time)

Audience: 183 words per minute; 189 words; 62 secs
Darlington Akogo: 185 words per minute; 1254 words; 406 secs
Meena Lysko: 134 words per minute; 1635 words; 731 secs
Moderator – Mark Irura: 148 words per minute; 2163 words; 876 secs
Susan Waweru: 161 words per minute; 1629 words; 606 secs
Yilmaz Akkoyun: 146 words per minute; 919 words; 377 secs
Zulfa Bobina: 182 words per minute; 1291 words; 427 secs
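The three figures are linked by simple arithmetic: speech speed is just speech length divided by speech time expressed in minutes. A quick check in Python, with the values copied from the table above (the published figures appear to be rounded, so small off-by-one differences are possible):

```python
# Recompute words-per-minute from speech length and speech time.
# Values are copied from the speaker statistics above.
stats = {
    "Audience": (189, 62),
    "Darlington Akogo": (1254, 406),
    "Meena Lysko": (1635, 731),
    "Moderator - Mark Irura": (2163, 876),
    "Susan Waweru": (1629, 606),
    "Yilmaz Akkoyun": (919, 377),
    "Zulfa Bobina": (1291, 427),
}

for speaker, (words, secs) in stats.items():
    wpm = words / (secs / 60)  # words per minute
    print(f"{speaker}: {wpm:.0f} words per minute")
```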

IGF’s knowledge unlocked: AI-driven insights for our digital future | IGF 2023 side event

Full session report

Markus Kummer

The initial stages of the Internet Governance Forum (IGF) primarily focused on connectivity and internet access, with no consideration given to artificial intelligence (AI). During that time, the main concerns revolved around ensuring that people had access to the internet and were able to connect. However, as time went on, the landscape changed significantly with the advent of apps, video streaming, smartphones, and other technological advancements facilitated by AI. These developments highlight the growing importance of AI in shaping the digital world.

Despite the progress made in connecting people to the internet, challenges still exist in bringing the last billion individuals online. The assumption was that the industry would take the lead in connecting this population, but it has proven to be a difficult task. One of the major hurdles in this endeavor is language and cultural diversity. The remaining individuals who are not yet connected to the internet predominantly come from non-English speaking countries. Overcoming these linguistic and cultural barriers is essential to ensure universal access to the internet.

The Tunis agenda, a significant document related to Internet governance, outlined a broader definition of the concept beyond just the management of the Domain Name System (DNS) and internet protocol resources. It acknowledged that Internet governance encompassed a range of issues concerning the use and abuse of the internet. This expanded understanding remains relevant and continues to guide discussions and decision-making in the field.

The IGF has accumulated an immense amount of data over the years. It has been suggested that this data should be mined for valuable insights. In 2011, Vint Cerf, one of the founding fathers of the internet, highlighted the importance of data mining during the Nairobi IGF. Data mining involves extracting meaningful information and patterns from extensive datasets. Given the rich and diverse dataset available within the IGF, there is the potential to uncover valuable insights that can inform future policies and strategies around internet governance.

AI applications can play a crucial role in mining and categorizing the vast amount of data accumulated through the IGF. Markus Kummer, a prominent figure in internet governance, has mentioned the publication of a book summarizing the knowledge generated through the IGF. This highlights the challenge of effectively mining and utilizing the wealth of information available. By leveraging AI tools, the process of data mining and categorization can be significantly enhanced, allowing for more efficient and accurate analysis of the vast dataset.

In conclusion, while AI was not initially considered during the early stages of the IGF, its importance has become increasingly significant with the evolution of the digital landscape. Challenges persist in connecting the last billion individuals to the internet, particularly in dealing with language and cultural diversity. The broader definition of internet governance outlined in the Tunis agenda remains valid and continues to shape discussions within the field. The immense data accumulated through the IGF presents an opportunity for valuable insights when mined and analyzed effectively, with AI applications serving as useful tools in this process.

Jovan Kurbalija

The importance of preserving the knowledge generated during Internet Governance Forum (IGF) sessions was emphasised. This knowledge has the potential to assist and benefit communities affected by digitalisation issues. The Diplo Foundation, in collaboration with Markus Kummer, has been documenting IGF sessions since 2006. To facilitate this process, AI technology is employed, enabling the creation of summaries, reports, and daily digests. The AI system has the capability to codify and translate the arguments presented during sessions, resulting in the development of a comprehensive knowledge graph.

The knowledge database generated from IGF discussions is considered a public good that belongs to all stakeholders. However, it was noted that this valuable resource is currently underutilised. Therefore, there is a collective call for the initiation and promotion of the IGF knowledge database, aiming to fully harness its potential benefits.

While there are extensive discussions about the impact of Artificial Intelligence (AI) on humanity, the need to explore AI as a practical tool and gain a comprehensive understanding of its functionalities was recognised. It was suggested that the Internet Governance (IG) community should focus on delving into the practical aspects of AI, rather than mere speculation about its potential impacts.

To enhance knowledge sharing and coherence, it was proposed that an AI tool be developed to connect and compare discussions across various IGF sessions. This tool would help identify commonalities, link related topics, and facilitate a more comprehensive understanding of the subject matter.

The use of AI for the session report system was viewed positively, as it allows experts to collaborate with AI technology to generate interactive reports. These reports include detailed breakdowns per speaker, narrative summaries, and discussion points, as well as information regarding speech length and speed. The AI system continuously learns and improves through the integration of corrective feedback.

The IGF has evolved into a knowledge base that holds significant influence over Internet-related organizations. It serves as a platform for learning, capacity building, and the provision of global resources. Notably, the IGF’s culture of respect and engagement, which fosters a listening culture and promotes the acceptance of diverse opinions, was highly appreciated. There was a suggestion to utilize AI and human expertise to propagate this culture among younger generations, strengthening the overall impact and sustainability of the IGF’s mission.

In conclusion, the extended summary highlights the importance of preserving knowledge generated during IGF sessions and emphasizes the collaborative efforts between the Diplo Foundation and AI technology in documenting and summarizing these sessions. It underlines the call for the initiation and utilization of the IGF knowledge database, as well as the need to explore the practical aspects of AI. The potential benefits of an AI tool to link and compare discussions across various sessions are recognized. The positive perspective towards utilizing AI for the session report system is noted, along with the IGF’s influence as a knowledge base and its culture of respect and engagement.

Sorina Teleanu

The Internet Governance Forum (IGF) held discussions on the role of Artificial Intelligence (AI) in society, focusing on its benefits rather than its potential to replace humans. The sentiment expressed during the discussions was positive.

Speakers at the IGF emphasized the need to approach AI in a practical manner and avoid cliches. They encouraged participants to explore how AI actually works, rather than focusing solely on its ‘magic’. This proactive stance aims to deepen understanding and harness the full potential of AI.

There was a consensus among the speakers that AI is not detrimental to jobs, but rather a tool to assist humans. They dismissed the idea of AI taking over human jobs in the near future and highlighted the importance of AI supporting and enhancing human capabilities.

One significant concern raised at the IGF was the underutilization of the valuable information produced. While the forum generates a wealth of knowledge, it was acknowledged that much of it remains unused or unexplored. This raises questions about the effectiveness of disseminating and utilizing the knowledge generated by the IGF.

The speakers also stressed the potential of technology in maximizing the knowledge acquired by the IGF over the years. They emphasized the need to leverage technology to track the evolution of discussions and enhance understanding of topics such as the digital divide. By harnessing technology, the wealth of knowledge accumulated by the IGF can be effectively utilized to contribute to the achievement of the Sustainable Development Goals.

Additionally, there was an emphasis on the need to move the discussions forward and avoid repetition. The speakers highlighted technology as a means to facilitate progress, avoid cliches, and promote innovation in governance and societal debates. Using technology as a starting point for discussions can provide an overview of previous debates and lay the groundwork for more in-depth and constructive conversations.

In conclusion, the discussions at the IGF established that AI will bring about benefits without replacing humans. The importance of approaching AI in a practical manner, avoiding cliches, and harnessing technology to maximize the utilization of knowledge were key takeaways. Moving forward, the IGF aims to leverage technology to advance governance and effectively address societal challenges.

Wim Degezelle

During discussions about Internet Governance Forum (IGF) activities, it was identified that there is a need to improve the codification and collection of knowledge. The participants emphasised the importance of moving beyond mere discussions and working towards tangible outputs. This indicates a desire to generate concrete reports and outcomes from IGF discussions.

Another point raised was the need for better coordination and consolidation of similar discussions that take place at different workshops within the IGF. It was observed that multiple sessions on internet fragmentation often resulted in repeated messages about collaborative work, albeit using different phrasing. The crowded schedule of IGF sessions was identified as a challenge, making it difficult to establish links to previous discussions from past years or sessions. Therefore, participants suggested that better coordination and consolidation of similar discussions would improve efficiency and reduce redundancy within the IGF.

Participants also acknowledged the potential role of AI and other technologies in enhancing knowledge management. It was noted that during meetings, a specific tool was able to break down participants’ words into distinct arguments and label key topics. Additionally, the tool was capable of associating relevant Sustainable Development Goals (SDGs) with the discussions. This demonstrates how AI and technology can help categorise and link discussions, facilitating better knowledge management within the IGF.

Moreover, there was a shared positive sentiment towards the potential of the tool to compare and link discussions from different sessions. Participants expressed a desire for the tool to identify common themes across multiple sessions and suggest comparative analysis. This highlights the potential for AI and technology to further enhance knowledge management within IGF by providing a comprehensive and comparative understanding of discussions.

In conclusion, the discussions surrounding knowledge codification and collection within IGF activities stressed the need for tangible outputs and better coordination of similar discussions. Furthermore, the value of AI and other technologies in categorising, linking, and enhancing knowledge was recognised. The potential for these technologies to compare and link discussions from various sessions was also highlighted. Overall, this analysis provides insights into improving knowledge management within the context of IGF.

Audience

The Internet Governance Forum (IGF) has become a vital platform, enabling stakeholders to participate and contribute to policy discussions related to the internet. This inclusive forum allows dialogue and collaboration among governments, non-governmental organizations, businesses, academic institutions, and individuals interested in shaping the internet’s future.

One key aspect that sets the IGF apart is its ability to influence internet-related organizations. Stakeholders have found the IGF to be an important channel for contributing to policy development and decision-making processes. This influence has been significant, shaping the strategies and actions of internet governance entities.

The IGF’s positive impact is reinforced by its evolution and longevity, surpassing initial expectations. It was originally anticipated that the IGF would only last for a limited period, but its resilience and continued success prove its value. The IGF is now regarded as a model worth emulating, leading to the establishment of similar forums worldwide and the contribution of resources from various regions, strengthening global internet governance.

Another significant aspect of the IGF is its role in promoting global collaboration and discussion. The forum provides a platform for stakeholders to engage in fruitful dialogue, allowing for agreement and disagreement. Through open exchanges and constructive debates, the IGF facilitates consensus building, shaping policies that impact internet governance. Additionally, the IGF’s influence extends beyond its immediate activities and impacts other internet governance organizations operating in related domains.

In conclusion, the Internet Governance Forum (IGF) has become a valuable knowledge base and a platform for global collaboration and discussion. Its importance lies in bringing together diverse stakeholders, providing opportunities for active participation, and influencing internet-related organizations worldwide. The continued success and growth of the IGF over the past two decades highlight the need for its continuation and evolution in the future.

Anja Gengo

The Internet Governance Forum (IGF) is an extensive database that contains a vast collection of reports, records, and documents on digital inclusion. For the past 18 years, the IGF has been actively producing various types of reports and documents, which serve as significant indicators of the current state of affairs and future directions in the field. This highlights the IGF’s commitment to remaining up-to-date and providing valuable insights into the digital inclusion landscape.

One argument presented is that artificial intelligence (AI) can be a valuable tool in managing the IGF’s massive database, provided that it is a trusted system. AI has the ability to process data quickly and yield accurate results, thereby enhancing the IGF’s data processing capabilities and achieving a higher level of inclusion in its processes.

Furthermore, there is a strong emphasis on the importance of identifying and including underrepresented and marginalized groups in the IGF processes. The IGF Secretariat acknowledges the lack of participation from certain countries, disciplines, and target groups and is making efforts to map these missing entities and onboard them. This commitment underlines the IGF’s dedication to promoting inclusivity and reducing inequalities in the digital space.

Anja Gengo, from the IGF Secretariat, is impressed by the examination of speech length and speed in the discussions. This analysis provides insights into communication dynamics and has the potential to improve the effectiveness of discussions during IGF events. Additionally, Gengo is excited about a mini competition among speakers, the outcome of which is eagerly anticipated.

Overall, the analysis of the IGF’s database and its efforts towards inclusion are deemed highly valuable for the IGF’s long-term utility. It not only enhances decision-making but also supports the IGF in effectively addressing the challenges and opportunities within the digital inclusion landscape.

Session transcript

Jovan Kurbalija:
Okay, I guess you can hear me now. Good, great to see you. You’re in a unique position, and it’s a pity for all those, 5,000 people, who are going to miss this session, because this is a special session. It is special because it speaks about something which is very concrete and also very powerful: the knowledge that has been developed over the last 18-plus years in the IGF community. Think just about all of the sessions and discussions at this IGF, what was said, the questions that were asked, and the knowledge each of us gathered from it. Well, I write books; I have a few books, and here and there I publish them, and some people get interested in them, some not. That is one way of preserving this knowledge. But generally speaking, this knowledge is not codified and made useful for our discussion, and not only for us here. That’s important, but also for people outside the IGF community who are impacted by what is discussed here, or who may need to know more about digitalization and related issues. Now Diplo, together with Markus Kummer, who is with us today and who is, for those of you who are not aware, one of the real fathers of the Internet Governance Forum (there are so many fathers, you know, success has many fathers), but he created the first IGF, and we started in 2006 with the first reporting from the IGF, and remote participation in the first reporting. Therefore we now have 18 years of reporting from the IGF, which is a very powerful knowledge base. From this IGF we are reporting as well. So for almost any session, including this one, you can get a few things. You can get a summary report written by experts, you can get a report drafted by artificial intelligence, and you can also have the IGF daily. You know how it is: the first day you try to follow the sessions and you’re enthusiastic that you will grasp what’s going on. At least in my experience, after the first day you realize that it’s impossible, and you start navigating the lunch areas and the bar areas and connecting, which is great; I think this is a great purpose of the IGF. But what we do every day, based on this reporting, is create the IGF daily. Here is the IGF daily from yesterday, which has a summary of discussions and the top picks of the day. Therefore, with the help of the AI system and our experts, we create a summary of what was discussed the previous day. Now, we were very critical today, because there are so many repetitions, you know: technology will give opportunities but also create risks; but there are also some new insights and ideas. So there is that interplay between repeating, repeating, repeating, and having some new insights. Now, what you can also consult here, and you can see on this website (I’m introducing this functionally, because this is the way to understand what we are basically discussing when it comes to AI), here is, for example, one interesting session, I’m sorry, I’ll find it here, it was on climate, I think, one of the critical ones, where you have the summary of the session and you also have an indication of what was said and how the discussion space was framed. So you can have it, and this is done by artificial intelligence. As we discussed, this session is also codified and translated by artificial intelligence, and you can see the main points from the discussion. You can see, for example, at least one session that I was in, which is bottom-up AI.
You can see that there is a report from the session, and there is a knowledge graph: how the arguments which Sorina, who is here, and I made relate to each other, around topics, around issues. Therefore you can finish this meeting as one big knowledge graph, where you can see how the discussion in this session relates to some other session. This is a huge, powerful knowledge database which is completely unused, and it is a public good. It belongs to all of us. And this session aims to initiate this discussion, together with our panelists, with Markus and colleagues from the IGF Secretariat, Anja and, of course, Sorina. And Markus, when you started the IGF, did you plan to make this big AI system or not? Just a few suggestions and reflections from your side, and then we’ll move to Anja.

Markus Kummer:
Well, AI was not a hot issue then. There were other issues. But let’s not forget, 20 years ago the internet was not the same as it is now. I do remember when we celebrated the first billion internet users, the first billion online; I think it was 2005 or so. Now we have around six billion internet users, so the sheer number alone is a huge difference. But in 2005 we didn’t have video streaming, we didn’t have Skype, there was no Netflix, there was no YouTube. All these things have been added; the apps didn’t exist, there were no smartphones, so it was a totally different environment. But what was already clear was that people, the internet users, cared very much about the internet, and obviously access to the internet was still the number one priority, and connectivity remains an important issue. I do remember, I think it was 2008 or so, when we started thinking about bringing more people online. Well, access was always a big issue, but in 2008, at the meeting in Hyderabad, somebody said that actually the biggest challenge will be not the next billion people but the last billion, to bring the last billion online, because the next billions will come almost automatically; industry will do it. And it has happened that way indeed: we now have six billion people online. But the last billion, that will be a challenge. And obviously, as was also mentioned in today’s session on the way towards the GDC, there are digital issues but there are also analog issues, and I think languages remain an analog issue. To be really inclusive, I think the internet must become more multilingual. It’s obvious that the remaining people who are not online yet don’t come from the English-speaking world; they come from countries with different languages, and changes will happen. The more people that come online, the more they will come from different cultures, bring different languages, different cultural values, and that will also have an impact on the internet. But back to your question: no, we didn’t think about AI, and we didn’t really know what to expect. We just realized there was a hunger for having these discussions, and that manifested itself before, during the Working Group on Internet Governance, when we held regular consultations: there was a clear appetite to have these discussions on issues surrounding the internet. And then we had Tunis, and the Tunis agenda remains very valid. There were those who thought internet governance was just about naming and addressing, but the Tunis agenda clearly spells out that internet governance is more than naming and addressing, more than the DNS and the allocation of internet protocol resources; it says internet governance is also about issues relating to the use and abuse of the internet. That is a definition that is very broad indeed, and it also obviously includes AI.

Jovan Kurbalija:
Markus, one thing you mentioned, which I think is also critical for the future of AI, is that there are so-called unintended consequences. You just start moving, you don’t know where you will land, and you end up with a great event. And if you don’t mind, that could be a nice segue to what you mentioned about different cultural contexts: recently we did an analysis of, for example, Ubuntu philosophy, African philosophy, which is not codified to a large extent, but which should and can influence AI developments. I don’t know if you had something else to conclude, and then we pass to Anja, Sorina and then to basically…

Markus Kummer:
Pass on, yes.

Jovan Kurbalija:
Good. Anja, you were in the Secretariat, sort of making sure that everything works, and it’s great, great work behind the scenes, very often not noticeable. But how do you see the knowledge dimension of this huge pool, which we at Diplo are trying to activate somehow? How does it look from the perspective of the Secretariat?

Anja Gengo:
Thank you. Thank you very much, Jovan, and also to Sorina, thank you for organizing this session and continuously supporting the IGF. First of all, thank you for your kind words. I will not repeat everything I said; I hope you could hear me, but if needed, I will. In any case, just a big thank you, of course, to the organizers and for the kind words. I hope that you share the feedback Jovan gave so far, that you’re enjoying the IGF and that, program-wise and technically, it fits your requirements so you feel comfortable navigating this very robust agenda. I fully agree with what both Markus and Jovan said: the IGF is just one big database of everything and everyone, to put it in a very blunt manner. If you look at the past 18 years of the IGF, and we internally, of course, have access to all its archives, it is a lot of terabytes of data: different kinds of reports and documents that have been produced so far, and many records of the participation of the world through the IGF and its multi-stakeholder model. For us, those are precious resources, because they are very important indicators of the status quo, but they’re also excellent navigators of where we want to go in the future, given the fact that digital inclusion is at the core of the IGF. Numbers, for example, are important. If you look at the reports on statistics of participation by country, by different profiles, they give you a very nice picture of who’s participating, but most importantly, who we are leaving behind and where we need to concentrate our capacity development efforts, for example, to ensure that everyone is onboarded with us. All those analyses are done, to a good extent, manually by a very small team at the IGF Secretariat. It’s very good that we are now living in a phase of rapid AI development, where AI, at least certain segments of it, if maneuvered well, if in good hands, can be a trusted tool to deal with this big database, to ensure that the data are processed more quickly and to give you the accurate results that you want to achieve. So we certainly welcome the involvement of these systems, as long as they are trusted systems, in the IGF, as we see them as a great help to improve the process, first of all, but especially to reach the inclusion level that we have been aiming at for years. And unfortunately, it’s still very challenging. Regardless of the fact that a big portion of the world is unconnected, a good portion of the world is connected, and that meaningfully connected world is still not an active participant in the IGF processes. The Secretariat is aware of that, and we work on it. We map, basically through a multilayered dimension of the stakeholder community, who is missing. We look at particular countries that are missing, certain disciplines that are missing, target groups, for example; we look at who the marginalized groups are across communities. And you can imagine the complexity there: not every country or community shares the same challenges, resources, and capacity. So that’s the complexity, and a small team of four or five persons working at the Secretariat in Geneva certainly can’t manage that quickly. So we do welcome these types of support in the IGF system, and I think it would also make the participation of regular participants in the IGF’s intersessional work and the annual meeting much quicker, more comfortable, and more meaningful for everyone.

Jovan Kurbalija:
Thank you, Anja. One point which came from your reflection: if you count, I think we counted something like 30 sessions discussing AI. And there is a hell of a lot of excitement; everybody likes to become an expert on AI. And what we are noticing is a high level of clichés, from the cliché that AI is endangering humanity and will kill us all in a few years, to all the other clichés. But one point, and what always motivates us, at least at Diplo, is that we have to walk the talk: not only talk about AI, but also use AI as a practical tool. And, well, I expected a fuller room, but it seems people like magic, like to talk about the magic of AI, but not necessarily to see how it works and how it operates. What you’re doing in the Secretariat, with very limited resources, is trying to walk the talk. And I think there is a need in the IG community to walk the talk more: to look under the bonnet and see what’s going on, what neural networks are, how TCP/IP functions, how you do that. It would make for a much more serious discussion. Here is our next speaker, Sorina, who is, as you know, a person who walks the talk on so many issues. And she’s probably the person with the lowest tolerance for any sort of cliché. Although I’m very careful about clichés, sometimes I write something and Sorina just calls me from the other office: what do you mean? It’s another cliché. She’s a bit tolerant; here and there I may use a cliché. Sorina, how can we walk the talk, as Anja started, and…

Sorina Teleanu:
Well, maybe asking how do we avoid going too much into that. But beyond that, I don’t think there is a way to stop people using cliches at the IG or any other digital policy discussion. I have a challenge with the mics. Apologies. Technology is not helping us. I think the idea is to use technology for what it’s best at. Helping us, not replacing us. As Jovan was saying, there’s a lot of talk these days about how AI is going to destroy everything, take our jobs. Well, we’ve had a bit of fun over the past few days with our reporting. And I think I can say after two days that AI is not going to take my job anytime soon. But beyond that, look at the IGF. So we’re talking about how to make use of technology to show the wealth of knowledge that the IGF has acquired over the years. This is the 18th annual meeting. How many of you have read the… What’s the most recent annual report? Messages, let’s call it like that. How many of you have read IGF messages for the past, let’s say, three years?

Jovan Kurbalija:
But be frank.

Sorina Teleanu:
But be frank. Wow. We should give you an award or something. Excellent. And beyond those three years, have you read all IGF messages? Okay. Well done. The point is, there’s so much produced every year. We have recordings from every single session. We have session reports. We have the messages. We have the annual report. We have policy network reports. We have best practice forum reports, outcome documents of the parliamentary track, youth dialogue reports. There’s so much happening, but we produce them every year and then we kind of leave them there. Can we try to unpack a bit all this knowledge and see how the discussion on, for instance, the digital divide evolved from 18 years ago, when we started the IGF, to now? How can we actually take advantage of everything that’s being discussed here at the IGF to move the debate forward, instead of kind of repeating the same things all over again? We think technology can help here; it can give us a starting point. Okay, I want to have another session (I’m speaking too fast) on the digital divide: this is what has been discussed about the digital divide at the IGF in the previous 18 years; let’s see how we take this forward and stop saying the same things all over again. I’m trying to respond to Jovan’s question about how to avoid clichés. Maybe, yes, we can use technology for that: be a bit more innovative, be a bit more forward-looking in how we’re debating these things, starting from looking at what has been said before and, again, taking it forward instead of repeating the same points. So our hope is that technology is going to help us a bit in that direction. And I think it’s also very timely in the current debates about, you know, a possible digital cooperation forum or not, and whether we need something new or not: again, showing the wealth of knowledge that the IGF has acquired over the years and how we can make the most of it. Thank you.

Jovan Kurbalija:
Thank you. Thank you. Sorina, we are working, with Sorina’s help, on an AI cliché detector, which will immediately detect clichés in any speech; that would be interesting. We have to keep it a bit discreet, because people could be annoyed: oh, I’m telling clichés. Like myself: when Sorina detects clichés in my writing, you feel uneasy. We will conclude this intro with Wim. Wim, you have been involved with, let’s say, the knowledge aspect of AI as an expert consultant and a participant in the MAG, wearing different hats. What’s your take on this huge knowledge base, which was described by all our discussants, and the possibility of tapping it, the need to tap it, and how to do it?

Wim Degezelle:
Well, thank you. Just to clarify: I have been involved in a number of intersessional activities, and I think they go back to the initiative by Markus, really, to take a first step, an important step, I think, from having discussions on topics at the IGF to having discussions that start in the months before the IGF and try to come up with a tangible output, a tangible report. I think that’s already an important step in the whole context of trying to codify and bring knowledge together. But now, in my experience, we are a step further. The discussions are much more focused, but they’re still going on in different, I hesitate to use the word, silos, which has a whole bunch of different meanings in the context of the IGF. For example, this morning we had a policy network on internet fragmentation, and one of the messages we give is that it’s important that stakeholders work together and discuss this together, because there are different views. But at the same moment, I’m aware, and I looked at the agenda too, that there were 10 other workshops talking about the same topic. Some workshops were held the days before, and they are actually saying exactly the same thing, but with different words. They come up with categories, they come up with this message that we have to work together, we have to discuss together, but they just formulate it differently. And it would be nice to combine these. And then, coming back to the use of AI and technologies: if we look at the schedule, it’s impossible to do that, even afterwards, or even to make the links to last year. I was just checking the tool that analyzed what I said this morning, and I must say, I hadn’t read the text, but the fact that this tool took the five or ten minutes that I was talking, divided that up into three or four different arguments, and automatically labeled them with key topics; and then I see it also adds which SDGs could be linked, or are linked, to what I have just said. I think that’s already something wonderful. What I think is missing, or what would be great (and I think that was the graph you showed earlier), is if this would also take the next step and help with comparing and linking what is being said in other sessions, so that at the end of the week you can say: well, we have had five sessions that were maybe talking about the same thing. I don’t know if the tool would be able to do that, or if the technology can bring that fine-tuning, but at least to say they were talking about the same thing, so go and check whether the nuance is actually just nuance or whether they are talking about something different. So I think there are huge opportunities there.
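The cross-session linking Wim asks for can be approximated with standard text-similarity techniques. The sketch below is a hypothetical illustration, not the DiploGPT pipeline: each session’s extracted text is turned into a TF-IDF vector (a production system would more likely use sentence embeddings), and pairs of sessions whose vectors are close are flagged as probably saying the same thing in different words. Session names, texts, and the threshold are invented.

```python
# Flag pairs of sessions whose texts look similar, as a rough stand-in
# for the cross-session comparison discussed above.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sessions = {
    "PN on internet fragmentation": "Stakeholders must work together to avoid fragmentation of the internet.",
    "WS #123": "Collaborative multistakeholder work is needed on internet fragmentation.",
    "WS #456": "AI chatbots can improve public service delivery and compliance.",
}

names = list(sessions)
matrix = TfidfVectorizer(stop_words="english").fit_transform(sessions.values())
sim = cosine_similarity(matrix)

# The 0.3 threshold is arbitrary for this toy example.
for i, j in combinations(range(len(names)), 2):
    if sim[i, j] > 0.3:
        print(f"Possible overlap: {names[i]!r} <-> {names[j]!r} (score {sim[i, j]:.2f})")
```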

Jovan Kurbalija:
Thank you. Thank you, Wim. Well, as a matter of fact, it exists; we are fine-tuning it. As you said, this is an approximation. And the beauty of this reporting system, which our colleagues may show again and which Wim was referring to, is that you’re always fine-tuning with the experts. As Sorina said, sometimes we are underwhelmed by the quality, but when you correct it, the AI system learns how to do it in the next iteration. What Wim was referring to is basically this type of report; if you can just display this quickly, yes, this report from the session, where you have the main points from the discussion, provided by AI and fine-tuned by experts. Then you have the knowledge graph, which I mentioned, where the blue points are topics and the white points are speakers. And that’s probably the way: if you put in 10 sessions about fragmentation-related issues, it can cross-reference and say, hey, this was the discussion in the session which Wim moderated, and this the next session; it can help even visually. And then you have, obviously, the narrative report. What was also interesting, and I just invite you to look into this: at the bottom you have, for each session, what was said and the speed of speaking. We’ll have the fastest speaker at the IGF, which I’m becoming, since time is running out. Length of speech: we’ll have the shortest and the longest speech at the IGF, and speech time. And you have a report per speaker. Therefore, you can see whether what was said is basically useful for the discussion.
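As a rough illustration of the graph structure Jovan describes (topic nodes, speaker nodes, and edges where a speaker addressed a topic), here is a minimal sketch using the networkx library; the speaker-topic pairs are invented examples, not actual DiploGPT output.

```python
# Build a tiny speaker-topic knowledge graph and find topics that
# connect more than one speaker (candidates for cross-referencing).
import networkx as nx

arguments = [
    ("Wim Degezelle", "internet fragmentation"),
    ("Wim Degezelle", "knowledge management"),
    ("Sorina Teleanu", "knowledge management"),
    ("Markus Kummer", "multilingual internet"),
]

graph = nx.Graph()
for speaker, topic in arguments:
    graph.add_node(speaker, kind="speaker")  # the "white points" in the demo
    graph.add_node(topic, kind="topic")      # the "blue points" in the demo
    graph.add_edge(speaker, topic)

shared = [n for n, d in graph.nodes(data=True)
          if d["kind"] == "topic" and graph.degree(n) > 1]
print(shared)  # ['knowledge management']
```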

Wim Degezelle:
No. And what I referred to, if you click for more, is exactly the point I was making: you have the different arguments split out and automatically linked to topics. And I think that is a way you can compare with what is being said in other sessions.

Jovan Kurbalija:
Thank you. Well, I guess this is all from us, unless my fellow panelists want to say more. Anja, your body language?

Anja Gengo:
I am so impressed by this, looking into the speech length, speech speed and so on. And I’m very excited to see who wins this mini competition here. I don’t think you have me there, but I think I’ll be among the first five for sure. That’s speed, yes, the speed, but very interesting. And I think this is very useful for the IGF long term.

Jovan Kurbalija:
We may even have an award for the fastest speaker at the IGF, and the slowest.

Markus Kummer:
If I may add a word: I mean, we did not talk about AI then, but we were aware, of course, of all the knowledge that was there. And we published it in book form in the first years, you know, a summary of all that was said. But who reads a book and a summary? And here, this is an amazing tool. I do remember, back in 2011 at the Nairobi IGF, I was on a panel, the main session, and Vint Cerf pointed out that this immense data had accumulated, and he said there is a need for data mining. We are now a little bit late, but this is precisely that. And it is very impressive indeed, a fantastic tool. Thanks.

Jovan Kurbalija:
I remember that session, when the transcriber or automatic system was putting “windsurf” instead of “Vint Cerf”. So we have to be careful, because AI can misspell. But I can recall that point. Thank you. Any comments? I think preference will be given to the person who read the last three reports. A good friend and colleague, and it’s so great to see you, a bit of a legendary member of the IGF community.

Audience:
Thank you, Jovan, and thank you for the wonderful panel. I would just like to commend the fact that the IGF became a knowledge base for us. Really, the beauty of it was that it was an independent platform that allowed all stakeholders, on an equal footing, to participate, contribute, and talk about policy, and also to take part in capacity building and learning. The fact that it is a non-outcome event gave it more soft power to influence all internet-related organizations, all stakeholders. We disagreed, we agreed, we reached consensus, and that consensus flowed to other organizations. With time, it became a knowledge base. And I think it’s not only a knowledge base, it’s also a soft power, a soft force, that influenced all related internet governance organizations. And we are blessed with resources from all over the world that we wouldn’t have had the chance to know if we hadn’t participated in the IGF, like the wonderful panel that we see here from all over. So these are all opportunities that have been given to us by the IGF, which some at the time thought would not even continue for more than five years. And now we are at 20 years, and we’re looking for 20 years more, hopefully. And actually, it became a model that has been copied in other dimensions. So that was the beauty of the IGF. Another idea, since you talked about AI and clichés: maybe you can use the narrative AI to see how the IGF has emerged and evolved over the 20 years and how it can move into the next 20 years.

Jovan Kurbalija:
Well, we won’t ask AI, we will ask you to write this article, because you are the living legend of AI and of the IGF. I think if I could buy it, it would be better. Okay, okay. Thank you. Well, those of you who are on ChatGPT may ask ChatGPT how it would answer El-Ghuzain’s questions on this issue. Well, if you don’t have any other comments or questions, it was a short and sweet session; we didn’t take too much of your time. We heard many interesting ideas: we heard the history, Anja’s Secretariat perspective, Sorina’s no-cliché perspective, and Wim’s perspective, giving a concrete example of the reporting as it is happening. And, well, that’s a concluding statement on the question of rich knowledge, codified not only in the sessions but also in the way the IGF has been developing and performing, building some sort of tacit culture and understanding among thousands of people getting together and generating new knowledge, sometimes new ethics, new respect, new understanding. And we shouldn’t forget it. We don’t live, unfortunately, in a worldwide society which cherishes respect for different views; the predominant view is that there are two views, my view and the wrong view, and that’s how the world is, unfortunately, developing. But the IGF has been fostering a listening culture, engagement, and respect for others’ opinions. For me personally, that has probably been the IGF’s first achievement. And we take with us this idea to use AI and human expertise, maybe to do another book on AI and share it with the younger generations who should take it forward for the next 20 years. Thank you very much.

Speaker statistics (speech speed; speech length; speech time)

Anja Gengo: 177 words per minute; 742 words; 251 secs
Audience: 157 words per minute; 301 words; 115 secs
Jovan Kurbalija: 156 words per minute; 2240 words; 861 secs
Markus Kummer: 150 words per minute; 676 words; 270 secs
Sorina Teleanu: 192 words per minute; 604 words; 189 secs
Wim Degezelle: 169 words per minute; 612 words; 217 secs
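The “mini competition” Anja mentions can be settled directly from these statistics. A small sketch, with the speeds copied from the table for this session:

```python
# Find the fastest and slowest speakers by words per minute.
speeds = {
    "Anja Gengo": 177,
    "Audience": 157,
    "Jovan Kurbalija": 156,
    "Markus Kummer": 150,
    "Sorina Teleanu": 192,
    "Wim Degezelle": 169,
}

fastest = max(speeds, key=speeds.get)
slowest = min(speeds, key=speeds.get)
print(f"Fastest: {fastest} ({speeds[fastest]} wpm)")
print(f"Slowest: {slowest} ({speeds[slowest]} wpm)")
```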

Launch of Fellowship for Refugees on Border Surveillance | IGF 2023

Full session report

Audience

This comprehensive analysis covers a wide range of topics related to education, generative AI, risk management, information literacy, multi-stakeholder engagement, the actions of the European private sector in oppressive regimes, the impact of misinformation and disinformation, and the coexistence of privacy and safety in technology design.

One of the discussions revolves around educating people about generative AI and the need to mitigate its risks. The audience seeks advice on how to educate individuals about this technology, indicating recognition of its potential risks. However, the sentiment is neutral, suggesting a need for more information and guidance in this area.

Another argument highlights the importance of promoting critical thinking and curiosity among children in the face of the age of disinformation and rapid technological change. The supporting facts include a quote from Jacinda Ardern, who emphasises the shift from relying on facts obtained from traditional library resources to the current digital age with multifaceted sources. She urges individuals to seek knowledge about the process and origin of the information presented. This positive argument underscores the need to equip children with the necessary skills to navigate and critically evaluate information in the digital era.

The analysis also addresses the need for a multi-stakeholder approach to problem-solving and the challenges faced by civil society, particularly from the Global South, in effectively participating in solution-finding dialogues. These challenges include disparities in accessibility and effectiveness compared to governments and corporate organisations. This observation points towards the importance of inclusivity and equal representation in decision-making processes.

Another notable point relates to monitoring the actions of the European private sector, particularly within countries with oppressive regimes. The argument raises questions about how to effectively monitor the activities of companies operating in these contexts, such as China, Vietnam, and Myanmar. This highlights concerns about the impact of the private sector on human rights and the need for oversight and accountability.

The analysis also delves into the impact of misinformation and disinformation, noting that individuals who distrust institutions are more susceptible to these phenomena. This observation emphasises the importance of building trust in structures and institutions to combat the spread of false information.

Furthermore, the debate on designing technology that balances privacy and safety in the online world is also addressed. The argument suggests that current technology and design choices might limit the coexistence of privacy and safety, forcing the prioritisation of one over the other. This highlights the ongoing challenge of developing technology that can effectively address both concerns.

In conclusion, this analysis highlights the need to educate about generative AI, mitigate its risks, foster critical thinking and curiosity among children, ensure inclusivity in problem-solving dialogues, monitor the actions of the European private sector, build trust in institutions to combat misinformation, and address the challenge of designing technology that balances privacy and safety. These observations reflect the complexity and interdisciplinary nature of the issues discussed, as well as the importance of considering diverse perspectives to inform effective strategies and solutions.

Karoline Edtstadler

During the analysis, several key points were discussed regarding the views expressed by Karoline Edtstadler. Firstly, she emphasised the need for greater recognition and opportunities for ambitious women. Edtstadler observed that women who strive for success are often viewed negatively, being labelled as pushy or attempting to replace men. She believes that society should overcome this perception and provide more support and encouragement to women with ambitious goals.

Secondly, Edtstadler underscored the value of women’s unique perspectives in leadership roles. She argued that women’s ability to perceive life from their point of view – particularly as those capable of giving birth and responsible for nurturing and upbringing – makes them special. The shared yet different life experiences, such as motherhood, contribute to their valuable insights and decision-making capabilities.

In terms of AI regulation, the European Union’s efforts were commended. The EU is taking the lead in regulating AI and prioritising the classification of risks associated with AI applications. This focus on risk evaluation aims to strike a balance between promoting beneficial AI technologies and addressing potential societal impacts.

Austria was recognised for its proactive approach to digital market regulation. Even before the implementation of the EU’s Digital Services Act (DSA) and the Digital Markets Act (DMA), Austria had already established the Communications Platform Act, effective from 1st January 2021. Under this act, social media platforms are obliged to promptly address online hate speech. Austria’s early actions demonstrate the country’s commitment to creating legal frameworks concerning digital services.

Collaboration and multi-stakeholder involvement were identified as crucial factors in addressing the challenges posed by AI, digital markets, and misinformation. Edtstadler advocated for a concerted effort involving governments, parliamentarians, civil society, and tech enterprises. She emphasised the importance of collective efforts and shared understanding in tackling these complex issues.

The analysis also highlighted the importance of education and awareness in effectively handling the impacts of social media and new technologies like AI. This includes equipping the public with knowledge and skills to navigate technology, particularly among the elderly. Additionally, it was emphasised that regulations should strike a balance between ensuring safety and privacy while still fostering innovation.

Restoring trust in institutions, governments, and democracy was identified as a crucial objective. Given the rise of misinformation and disinformation during events like the Covid-19 pandemic, Europe aims to counter these challenges through robust regulations. By addressing the issue of misinformation, trust can be rebuilt among citizens.

It was also noted that technology, including AI, should not replace human decision-making, particularly in matters like judgment in law enforcement. While AI can offer efficiency in finding judgments and organising knowledge, drawing a clear line between human judgment and AI is important.

Handling the downsides of technology was deemed necessary to ensure its benefits for society. Technologies like AI can be used for good, such as performing precise surgeries and speeding up tasks in law firms. However, challenges and risks should be addressed to make technology beneficial for all.

The analysis further underlined the importance of a multi-faceted approach in decision-making processes. Edtstadler highlighted Austria’s implementation of the Sustainable Development Goals (SDGs), wherein civil society was invited to contribute and share their actions in dialogue forums. This multi-stakeholder approach promotes inclusivity and diversity of perspectives in decision-making.

In conclusion, the analysis emphasised the need for recognition and empowerment of ambitious women, effective regulation of AI and digital markets, collaboration among stakeholders, education and awareness, addressing challenges in democracy and technology, and restoring trust in institutions and governments. These key points and insights offer valuable perspectives for policymakers and individuals seeking to promote a fair and inclusive society in the face of technological advancements.

Jacinda Ardern

The Christchurch Call to Action is a global initiative aimed at tackling extremist content online. It was established in response to a terrorist attack in New Zealand that was live-streamed on Facebook. Supported by over 150 member organizations, including governments, civil society organizations, and tech platforms, the Call sets out objectives such as creating a crisis response model and better understanding the process of radicalization.

Former New Zealand Prime Minister Jacinda Ardern believes that it is crucial to understand the role of content curation in driving radicalization. She highlights the case of the terrorist involved in the Christchurch attack, who acknowledged being radicalized by YouTube. Ardern calls for an improved understanding of how curated content can influence behavior online.

Ardern advocates for a multi-stakeholder solution to address the presence of extremist content online. She emphasizes the need for collaboration between governments, civil society, and tech platforms, recognizing that it requires a collective effort to effectively eliminate such content. The Call focuses not only on existing forms of online terror tools but also aims to adapt to future forms used by extremists. It proposes measures such as implementing a strong crisis response model and working towards a deeper understanding of radicalization pathways.

Privacy-enhancing tools play a crucial role in preventing radicalization. These tools enable researchers to access necessary data to understand the pathways towards radicalization. By studying successful off-ramps, these tools can contribute to preventing further instances of online radicalization.

One of the challenges in understanding the role of algorithms in radicalization is the issue of privacy and intellectual property. It is difficult to obtain insight into how algorithms may drive certain behaviors due to privacy concerns and proprietary rights. Despite these challenges, gaining a deeper understanding of how algorithms contribute to radicalization is essential.

Artificial intelligence (AI) presents both opportunities and risks in addressing online extremism. AI can assist in areas where there have been previous struggles, such as content moderation on social media. However, caution exists among the public due to potential harm and risks associated with AI. Ardern argues that guardrails need to be established before AI can cause harm, and the development of these guardrails should involve multiple stakeholders, including companies, governments, and civil society.

The involvement of civil society is crucial in discussions around AI in law enforcement to protect privacy and human rights. Ardern believes that civil society, alongside the government, can act as a pressure point in addressing questions regarding privacy and human rights in the context of AI deployment.

Education plays a vital role in addressing online extremism. Teaching critical thinking skills to children is essential to equip them with the ability to think critically and evaluate information. Adapting to rapid technological changes is also necessary, as the accessibility of information has significantly evolved from previous generations, leading to challenges such as disinformation and the need for digital literacy.

The inclusion of civil society and continuous improvement are important aspects of addressing challenges. The creation of a network that includes civil society may face practical obstacles, but ongoing efforts are being made to involve civil society in initiatives such as the Christchurch Call. Ardern acknowledges that learning and improvement are continuous processes, emphasizing the importance of making engagement meaningful and easy.

Overcoming the debate around privacy and safety on social media is a critical step in addressing extremist content online. Efforts to access previously private information through tools created by the Christchurch Call Initiative are underway, allowing researchers to study this information in real-time. The findings of the research will inform further action, involving social media companies in addressing the identified issues.

Disinformation is a significant challenge, and Ardern highlights factors that make individuals susceptible to it, such as distrust in institutions, disenfranchisement, lower socioeconomic status, and lesser education. Preventing individuals from falling for false information is crucial, and rebuilding trust in institutions is necessary to address the impact of disinformation.

Supporting regulators focusing on technological developments is crucial in managing the challenges presented by technological advancements. Ardern acknowledges the poly-crisis resulting from these developments and emphasizes the need to support regulatory efforts.

Ardern expresses optimism in the ability of humans to adapt and design solutions for crises. She has witnessed humans successfully designing solutions and rapidly adapting to protect humanity, giving hope for addressing the challenges posed by technological developments.

Information integrity issues, such as the lack of a shared reality around climate change, impact serious problems. Ardern emphasizes the need to address these issues to effectively tackle challenges like climate change.

In conclusion, the detailed analysis highlights the importance of the Christchurch Call to Action in addressing extremist content online. The Call emphasizes the need for a multi-stakeholder approach involving governments, civil society, and tech platforms. Privacy-enhancing tools and understanding the role of algorithms are crucial in preventing radicalization. Guardrails need to be established for AI before it can cause harm, with civil society involvement to protect privacy and human rights. Education plays a vital role in teaching critical thinking skills and adapting to technological changes. The involvement of civil society, continuous improvement, and overcoming the debate around privacy and safety on social media are essential steps in addressing extremist content. The management of disinformation, support for regulators, and human adaptability in designing solutions for crises are also key considerations.

Maria Ressa

The analysis of the given information reveals several important points made by the speakers. Firstly, it highlights the significant online harassment faced by women journalists, which hampers their ability to participate in public discourse. It is reported that women journalists covering misogynistic leaders often face considerable online harassment and are frequently told to ‘buckle up’ by their editors. This indicates a systemic problem that needs to be addressed.

The role of technology in facilitating hate speech and the dissemination of harmful content is also underscored. The Christchurch terrorist attack, for instance, was live-streamed, demonstrating the misuse of technology for spreading violent and harmful content. This highlights the need to address the role of technology in inciting hate and enabling the circulation of such harmful material.

Efforts to address these challenges require more than just asking news organisations to remove harmful content. The analysis suggests that a multi-stakeholder effort is necessary. Following the Christchurch attack, Jacinda Ardern led a successful multi-stakeholder initiative known as the Christchurch Initiative, which aimed to eliminate extremist content online. This approach emphasises the need for collaboration and coordination among various stakeholders to effectively combat online attacks and extremist content.

The analysis also highlights the importance of strong government action in addressing this issue. The New Zealand government, for instance, took robust measures to eliminate the influence of the Christchurch attacker by removing his name and the footage of the attack from the media. However, it is crucial that government action remains inclusive and does not suppress free speech.

Furthermore, the analysis points out that valuable lessons can be learned from the Christchurch approach in combating radicalisation. The approach was developed in response to a horrific domestic terror attack that was live-streamed on Facebook. It aims to understand how people become radicalised, with a focus on the role of curated content and algorithmic outcomes online.

The impact of social media behaviour modification systems and the current focus on content moderation is a source of concern. Data from the Philippines has been analysed, indicating that lies spread faster on social media than factual information. The analysis argues that current solutions, which mainly focus on content moderation, are not effective in addressing the problem. Instead, a shift towards addressing structural issues, such as platform design, is recommended.

Furthermore, the potential harms of generative AI should be prevented rather than merely reacted to. Concerns over the impact of generative AI are mentioned, and the need for proactive measures to address the harm caused by AI is emphasised.

Civil society collaboration and the corruption of the information ecosystem are seen as crucial problems. The analysis suggests that civil society needs to come together more to address these challenges effectively.

The weakness of institutions in the global south, as well as in countries experiencing a regression of democracy, contributes to these challenges. Authoritarian leaders are leveraging technology to retain and gain more power, which further exacerbates the issue.

Interestingly, the analysis highlights that even intelligent individuals can fall victim to misinformation and behaviour modification in information warfare or operations. This emphasises the need for education and awareness to combat these challenges effectively.

The integration of privacy and trust into tech design is seen as possible; however, it often fails to happen for lack of regulation and pressure from civil society.

Lastly, the analysis suggests that we are in a pivotal moment for internet governance. Maria Ressa, one of the speakers, expresses a more pessimistic viewpoint on the situation, while others remain optimistic. The importance of effective internet governance is underscored, as it directly impacts various areas, including peace, justice, and strong institutions.

In conclusion, the analysis highlights the challenges faced by women journalists in public discourse, the negative impact of technology in facilitating hate speech and harmful content, the need for multi-stakeholder approaches, the importance of strong government action, and the lessons from the Christchurch approach. It also emphasises the concerns regarding social media behaviour modification systems and the current focus on content moderation. Structural issues in platform design, prevention of harm from generative AI, civil society collaboration, corruption of the information ecosystem, weaknesses of institutions, susceptibility to misinformation, and the incorporation of privacy and trust into tech design are other noteworthy points raised. Overall, the analysis underscores the significance of effective internet governance in addressing these complex issues.

Session transcript

Karoline Edtstadler:
It’s really a big honor for me to sit on the same panel with you, Jacinda, even if you’re not here in person. You are truly a role model for women, and hearing what you said about me, I have the impression I am becoming one too, which is a pleasure. You could break it down with a joke, and it is of course only a joke, which goes as follows: for the last 2,000 years, the world has been ruled by men; for the next 2,000 years, the world will be ruled by women. It can only get better. But this is not the end of the story, because we are living in a very diverse world. We are living in a challenging world, and I think we need both the approach of women and that of men. But the difference, and Jacinda already mentioned that being ambitious is something very important, is that we women are judged and seen in a different way. If you are ambitious as a woman, you’re the pushy one, the one who wants to take a man’s position, and so on and so forth. And I think what we as a society have to learn is that we need both ways of seeing the world. And we women can make a difference, because we give birth, we are mothers, we really perceive life. I think this is also why we are different from men, and that’s good; there’s nothing bad in it. And especially in times like these, and you mentioned a few of the crises we are still going through, it’s very important to have both ways of seeing the world, both the female and the male assessment. One last thing: I think women are still not as good as men at building networks, holding together, and encouraging one another. That’s why last year, in August, I founded a conference in Salzburg called The Next Generation is Female. It’s not about going against men; it has the support of strong men. It’s really for female leaders in Europe to get together, to network, to exchange, and to encourage one another, because it’s not easy, and we will also go into detail about hatred on the internet and being judged as a woman.

Maria Ressa:
And that’s where we’ll go. For the men, I hope you find this inclusive. Part of the reason I started this way is because the attacks against women online are literally off the scale. When I talk to reporters who are, in some instances, covering male leaders who are misogynists, their editors tell them: you know, buckle up, it’s not our problem. But one of the things we want to lay out is that it is a problem of the technology, it is an incitement by the technology, and it is knocking women’s voices out of the public debate. Let me bring it back to what exactly we’re talking about: the technology that is shaping our world today. One of the most interesting things Jacinda Ardern did was a very strong reaction to the live streaming of a terrorist attack. It was the first time that a government literally asked all news organizations around the world to take out the name of the attacker. I was surprised when we got this request, but when we thought about it, it made sense. And there was also the effort to take down the footage from all around the world. Jacinda, you’ve pointed to the Christchurch Initiative as a multi-stakeholder solution for eliminating terrorist and extremist content online. What did it succeed in doing, and where can you see it moving forward, given the landscape we’re working in today?

Jacinda Ardern:
Thank you. A really big question, but I hope there are some useful lessons to be learned about where we’ve succeeded and where we have more work to do. I assume that a number of people in the room will have some prior knowledge of the Christchurch Call to Action, which is now over 150 strong, with members and supporters including governments, civil society, and technology platforms. But taking a step back, why did we create this grouping in the first place? Well, as you say, on the 15th of March 2019, we experienced a horrific domestic terror attack against our Muslim community. It was live streamed on Facebook for a total of 17 minutes, and then subsequently uploaded a number of times over the following days. It was just prolific. People were encountering it without seeking it, and you’re right to acknowledge that in some cases it was in people’s feeds because it was being reposted or referenced by news outlets. In the aftermath, of course, New Zealanders had a very strong reaction: this should never have been able to happen, but now that it has happened to us, what can we do to try and prevent it happening again? And we took an approach that was not just about addressing the fact that live streaming itself became a platform for this horrific attack, because if we just focused on that, that would be a relatively narrow brief. We know that the tools used by a violent extremist or terrorist online are going to change. Live streaming was the tool at that time, and the response to it was ill-coordinated by other tech platforms for a number of reasons. So work needed to be done there, yes, but we also wanted to make sure that we were ready and fit for purpose should other new forms of technology become the order of the day for those violent extremists. So the Christchurch Call to Action has a number of objectives. Some of them are things like creating a crisis response model, so that we are able to stand up quickly should anything like this occur again. And we have not seen anything at the scale and magnitude of Christchurch online since then, in part because we now have this almost civil-defense model. But we also asked: how does someone become radicalized in the first place, acknowledging that in our case the terrorist involved himself acknowledged that he believed he had been radicalized by YouTube. Now, people will debate whether or not they believe that to be the case, but regardless, there were questions to be asked about what we can do as governments within our own societies, and also about better understanding these pathways: how are curated content and algorithmic outcomes driving particular behavior online? So we’ve got a large piece of work now looking at understanding that better. And these, I think, are areas where our learnings will be hugely beneficial much more broadly.

Maria Ressa:
That’s fantastic. Let me follow up on that. Last week, or I guess a week and a half ago, I taught a class with Hillary Clinton and the Dean of SIPA, Keren Yarhi-Milo, where we looked at the radicalism that comes with the virulent ideology of terrorism, and how it radicalizes people. One of the things we did in the class was to show how similar it is to what we are going through now, on a larger scale, with political extremism. Are there any lessons from the Christchurch approach and the pillars that you’ve created, for example on how to deal with radicalization, that we can apply to combat the polarization we’re dealing with globally?

Jacinda Ardern:
Good question. And where I come at it from is that our starting point was: how did this individual become so radicalized that he was driven to fly to our country, embed himself in a community, and then plan an attack against our Muslim community and take 51 lives? How is it that that can happen, and what can we do to prevent it? Now, the learnings from that may be applicable across a range of different areas and a range of different sources of motivation and rationales, whatever they may happen to be as presented by the individual. One common denominator we identified was that, whatever the ideology driving the behavior, we couldn’t actually answer some of these questions, because so often there would be this issue of privacy or intellectual property. It was very hard to get an insight into how, for instance, algorithms might be driving some of this behavior, if indeed they are. And so we took a step back and, over time, pulled together a group of governments and platforms who were willing to put funding into creating a privacy-enhancing tool, which will enable researchers to look at the data we need in order to understand these pathways. That will enable researchers across a range of fields to better understand the user journey and the curated content, help understand what successful off-ramps look like, and, I hope, further prevent this kind of radicalization online.

Maria Ressa:
No, that’s a perfect example. And Karoline, you are in the EU, and the EU has been ahead, data being one of the key factors in how we are able to see the patterns and trends that influence behavior. Could you tell us about the EU’s approach to its Democracy Action Plan, and now the rollout of the Digital Services Act and the Digital Markets Act?

Karoline Edtstadler:
Well, I think at times like this we should do everything in parallel, and there are so many crises and so many challenges we should find answers to that it is really quite hard to do so. But I do think that the European Union is ahead with the AI Act, and if I say ahead, I mean we are of course also lagging behind, because we should have been quicker. But the developments of the last two years, I would say, were so quick that this is no longer possible. So now we are really trying to do something on AI: to have a framework for AI, to have a classification of the risks of AI, and I think that is something very important. To classify the risks, because there are some applications that do not harm us, that we need, for example for spam filters, which pose no risk; but on the other hand there is AI that can really harm the whole of our society. That is the one thing. The other thing is that we already have the DSA and the DMA in the European Union, and I can proudly say that we in Austria pushed that a lot; we started a process in 2020 to have a legal framework in Austria. And, to put it diplomatically, I had a lot of discussions with the European level, because they were not happy that we wanted an Austrian legal framework for this. But they knew it would take at least two years to create one in the European Union, and we were really quick in Austria: we had the Communications Platform Act in place from the 1st of January 2021. Under it, the social media platforms have to deal with this issue. They have to file reports, and they had to set up a system where someone who faces hatred on the internet can push a button and say: this is against me, do something, delete it now, because it goes around the world very quickly, and you as a victim should be helped the minute it appears. So now we have the DSA and the DMA, and of course we have to revise our legislation. But this was also my goal: first the national level, then the European, and now I am here as a member of the leadership panel, really trying to create something universal, for the whole international community. And this is not easy, because different governments, coming from different standpoints, have different assessments of the situation. But in general it is about human beings, who need to treat this as the great danger it also poses to our whole society, as Jacinda said, and as we saw in her country with that really horrifying terrorist attack.

Maria Ressa:
Based on the data from the Philippines that we have looked at and analysed, in my Nobel Lecture in 2021 I called social media and the tech companies behaviour modification systems, and I will tweet the data that shows that, as well as the impact we saw in our society. So let me ask our two leaders. Social media was the first time that machine learning and artificial intelligence were really allowed to insidiously manipulate humanity at scale, at that point maybe 3.2 billion people, deployed across platforms, because it doesn’t just stay in one. There was a lot of public debate, and a lot of lobbying money, focused on downstream solutions. The way I think about it is this: there is a factory of lies spewing lies into our information ecosystem, the river, and what we tend to focus on in public is content moderation. Content moderation is like taking a glass of water from the river, cleaning it up, and then dumping it back into the river. So how can we move away from downstream solutions like content moderation and toward structural problems like design? MIT found in 2018 that lies spread six times faster on these technology platforms than really boring facts. That design enabled surveillance for profit, a business model we didn’t even name until Shoshana Zuboff wrote Surveillance Capitalism in 2019. It meant that we were retrofitting, reacting to the problems after they materialized. Now that we’re in the age of generative AI, I wonder how we can avoid being reactive. Why should the harm come first, before we protect people? I know it’s a tough question to throw at you, but let me give you an example from the pharmaceutical industry. We were all waiting for a COVID vaccine. Imagine if the pharmaceutical companies didn’t have to test it first, and could test it on the public instead: this group, I’m going to give you vaccine A; this group here, vaccine B. Oh, group A, I’m so sorry, you died. I only say that because it is exactly what happened in Myanmar, for example, where both the UN and Meta sent teams to study genocide in Myanmar. So can we do anything to prevent these types of harms from happening? Karoline first, or Jacinda? Karoline.

Karoline Edtstadler:
Well, I would say the first thing is to raise awareness, to take it as it is, and to give people education and skills to deal with it. The second thing, and this is what we are trying to do, also in the leadership panel, is to set a legal framework in place. And I would say it should be regulation that does not hinder innovation, because we know that these developments are quick, they are needed, and they can be used for the best of us. But we have to learn to handle them, and also to handle the downsides. Now, it is easily said: put a legal framework in place. But it’s not so easy, because I’m sure we will lag behind in the future as well. I sometimes compare this with my former profession as a criminal judge. As a criminal judge you sit in the courtroom, but you never have all the information the perpetrator has. You are always behind, but in the end you have to deal with it, and you can deal with it. And I think that’s the same approach we have to take with regard to new technologies, AI, and all the things coming along. And we have already proved that it is possible, with the DSA and the DMA, and before that with the legal framework we put in place in Austria. Maybe two more sentences on that. When I started the process in 2020 and invited the social media platforms into a dialogue with me about hatred on the internet, what we can do against it, and the legal framework we wanted to put up from the parliamentary side, because we as democracies are represented by parliamentarians and ruled by governments, they said: oh no, you don’t need to do that, because we are so good at handling hatred on the internet; we are deleting all the hate postings and so on; we don’t need a legal framework from the nation state or from the EU. And now we have it, and now I think almost all of them are quite okay with it, let’s put it like that. And we are now in a process, also here in Tokyo, as we were in Addis Ababa, of exchanging our experiences and also the expectations of society, and this is a good development.

Maria Ressa:
Fantastic. Jacinda, your thoughts? Upstream solutions for generative AI.

Jacinda Ardern:
Look, I think the sentiment you shared in instigating this part of the conversation, around how we put guardrails in place before the fact, has to be one of our key take-homes of the last ten years or more. And I think we’re naturally seeing a hesitancy or a scepticism in the public as a result of the fact that we’ve been retrofitting solutions to try and prevent harms after the fact. Pew released some research, I believe recently, demonstrating that roughly half of people were quite negative about the relative benefits of AI, and that those who know more are even more negative. Now, part of that will be because we are talking so much about the potential harms, and there isn’t the same emphasis on the opportunities that exist. But I also think it speaks to the public’s experience in recent times, and to the fact that this is a relatively rare field of work where, just because you can, you do. As in: we have the ability to develop this tech, and so we push ahead, even though there are those who are flagging risks and flagging harm. I’m an optimist, though, and what I find really encouraging is that we are having these open conversations around the concerns that exist, and that included in those conversations are those who are at the forefront of the tech itself. And this is where I come back to the fact that, as a past regulator, I am not in the best position to tell you precisely the prescription for those guardrails. But I can tell you, from my experience, the best methodology for developing them. In this fast-paced environment, that will always be not to take a solely regulatory approach, although regulation is an incredibly important part of the mix, but, because of the rapid pace at which these technologies develop and the multiple intersections and perspectives we need at the table, a multi-stakeholder approach that includes companies, government, and civil society. Even if I can’t give you the prescription, I absolutely believe that will be the how. One other thing I did not anticipate when we set up the Christchurch Call to Action, and convened a group of that nature, was that the companies themselves created a natural tension amongst themselves. Those who were willing to do the least were pulled up by those who were willing to do the most. There was full exposure on those issues where, previously, a company might have said in a one-on-one: that’s not possible. You got tension there, because they knew it wasn’t possible to just speak to a regulator as though the regulator were unfamiliar with the tech or the parameters they were operating within; they were in a room with those who did understand. And I think that’s particularly important in an area that is so fast-paced and highly technical; we need that tension in the room as well. The final thing I’d say is that there are opportunities here. AI may well help us in some areas where we have previously struggled, with some of those existing issues around content moderation, social media, and so on; naturally, so many of these things collide in these conversations. So we should keep looking for those opportunities. But I, for one, always want to take a risk-based approach, and I’ll always look for the guardrails.

Maria Ressa:
Fantastic. So I’m going to ask one more question, and then, if you have questions, please just go to the microphones; we’re coming up on the last 20 minutes. So far we’ve tackled the first contact with AI, and we’ve looked at generative AI. And yes, there’s the EU’s doctrine on AI, and lots of doctrines have been pushed out already. But let’s talk about the use of AI in law enforcement and surveillance, and the concerns that have been raised about civil liberties and privacy. What guardrails can we put in place to protect human rights? I’m going to toss that first to Jacinda.

Jacinda Ardern:
Yeah, this is where we should not be starting from scratch. Liberal democracies should pull from the toolkit of human rights and privacy; these are well-established rules and norms. Now, if indeed there is any nuance in that discussion for any particular area, and often it should be relatively black and white, but if there is any nuance, that is where civil society, in my mind, has to be at the table. And again, not to harp on about the importance of the multi-stakeholder approach, but let’s first and foremost not forget that we have well-established human rights practices and privacy laws, and this should be our fallback position. Over any question mark, civil society alongside government should be a really good pressure point in those conversations.

Maria Ressa:
And this is where I would encourage civil society to come up stronger. We must, because of the use of Pegasus and Predator and the increasing conflicts all around us. Karoline, the same question to you: what guardrails can we put in place?

Karoline Edtstadler:
Well, I fully second what Jacinda said. I don’t think we have to reinvent the wheel. There is already a human-rights-based order in the world, even if we have seen, especially since February of last year, that some are disobeying everything we agreed to follow. But coming back to the internet and technology side, I think we have to guarantee a rules-based approach in this regard. And I also fully second that AI and the other technologies can be, and already are, used for the best of all of us. Think of medicine: these technologies are used for operations and can perform them much more precisely than a human ever could, and this of course helps us. And also in law enforcement, which you asked about: I recently heard a presentation in Austria, before lawyers and barristers, where it was said that in the future law firms will of course use AI to find judgments and to structure knowledge more quickly. But the question is: how far will we go? Will there, in the end, be not a judge but some technology sitting there and deciding whether someone has to be sent to prison or not? This is really where we should draw a line. And this is what we are trying to do within the European Union with the AI Act: to structure the risks of AI. I really do think this is how we can guarantee that these technologies are used for the best of all of us. Of course there is always a downside, but let’s handle the downsides, and then it’s better for all of us.

Maria Ressa:
Great. Annie, the mic is open for any questions from the audience. Yes, please. Do I have it? Okay. Say your name, and then say to whom you want to throw the question.

Audience:
I’m Larry Magid, and I’m the CEO of ConnectSafely. And I guess I’m here for some advice, because we are writing a parents’ and educators’ guide to generative AI. We’ve got a journalist here and a couple of politicians who are really good at talking to the general public. So how would you address parents, educators, people who don’t have technical knowledge of what generative AI is: to reassure them that it’s probably not the end of the world, at least initially, but also to warn them that there are significant risks, and to focus a little on what they can do within their own families and classrooms to mitigate the risks for the kids and for themselves? Thank you.

Maria Ressa:
Karoline, do you want to take it?

Karoline Edtstadler:
Well, I think it’s true. The reality is sometimes that children explain to parents how to use the phone, or they simply use their phones and do things their parents didn’t want them to do. So I think this is also something we as governments have to address, through legislation or, let’s say, information campaigns, to get the knowledge and the skills to the people. And this is of course a big challenge, because we also have to train older people, who use these things but where there is, again, always a downside. This is something we can only do together. We had campaigns in Austria and trainings for older people, and we had a lot of discussions about how to train parents. I don’t have the answer, but I think the way forward is to exchange our experiences across countries: what works and how it can work.

Maria Ressa:
Great, thank you.

Jacinda Ardern:
This is such a good question. You know, I was in the generation that sat at a really interesting transition point: we went from being students who were taught to use the Dewey Decimal System to find a book in a library, and once you’d found that book you had found your fact and your resource, to a period where we were inundated with the ability to seek information at our fingertips, but weren’t really taught, I think, as successfully that what we then found on that shelf might not necessarily be the fact we thought we were finding before. I had a history teacher who was extremely influential for me growing up, and who described it as going from a hose to a fire hydrant for kids. So regardless of the particular tech at any given time, be it generative AI or whatever else we may encounter in the future, I would hope that we teach our kids to be curious. Not cynical, but curious. Now, the tools we have may give the impression that we’re going from a fire hydrant back down to a really well-refined hose, but that water has been derived from a particular source in a particular way, and we need to teach kids to be curious about that: to go back not just to the information in front of them, but to think a couple of layers back, and to think critically a couple of layers back. So I would sum it up with curiosity in everything. I think that is going to help us with the age of disinformation and with rapid technological change, and I hope it creates a generation that is not cynical as a result.

Maria Ressa:
Fantastic.

Audience:
Hi, good morning. My name is Michael. I’m the Executive Director of the Forum on Information and Democracy. It’s very intimidating to be in front of greatness, but I’ll try to ask a good question. One of the themes I’ve heard today, and in fact yesterday, is the importance of a multi-stakeholder approach to finding solutions, and my question is specifically about the participation of civil society. It’s very easy for governments to show up. It’s very easy for companies to show up, particularly in an environment where pay-to-play is so pervasive: you pay a few hundred thousand dollars, and your CEO can show up and speak at an event, you can host a session or a panel, you can capture the narrative. It’s not so easy for civil society. You can’t just buy a business-class ticket, get on a plane the next day, and show up at an event. So if we’re really going to advance a multi-stakeholder approach, what are some solutions to ensure that civil society, especially from the Global South, can participate effectively?

Maria Ressa:
I like the Global South question. Let’s, yeah.

Karoline Edtstadler:
Well, I can only say that we really try to include civil society, and I think the understanding is there that we can tackle these problems and issues only together: not the government alone, not the parliamentarians alone, not civil society alone, not the tech enterprises, but only all of us together, and I really mean all of us, including the government. We are doing that in Austria too. I’ll give you an example from the implementation of the SDGs: I will go back on Wednesday, and we’ll have the third dialogue forum on the SDGs, where we really invite civil society to contribute and to tell us what they are doing. And it’s the same here. You can’t do it top-down. You can only do it together.

Jacinda Ardern:
Maria, you’re a really good person to speak to this yourself, so maybe you should have a punt at the question. My very brief contribution would be: Michael, I totally agree with you. Early on in the Call, most of my interactions were with civil society at the table, because we wanted what we were building to be a structure where civil society were at the table. As you say, there are some real practical things to overcome in creating a network of that nature. There may well be people from our Christchurch Call network in the room; I can’t see the room, but if anyone is there, I’d ask them to give a quick raise of the hand and, whenever it’s appropriate, to share their experience. We have certainly learned over the last four years how to make that engagement easier at a practical level, and meaningful. The fact that we are still going, and that I think the Call is still seen as a valuable network, hopefully means we’re doing some things right, but we’re also learning as we go, because we’re not perfect. But I’d hand back to you, dear moderator.

Maria Ressa:
Thanks, Jacinda. I mean, Michael, you know there are these moments when civil society comes together. We have the Paris Peace Forum coming up; over the last few years that has been one way we’ve been able to get civil society together, but frankly not enough, I think. And there are many different groups: Tallinn in Estonia has just handed over the Open Government Partnership to Kenya, right? There are all these different groups working together, some on past problems, that could evolve to take this on. You know, I’m a journalist, so information is power, and that, to me, is the top problem: if we do not solve the corruption of the information ecosystem, we cannot solve anything, let alone climate change, right? Let’s take three questions, and then our leaders can answer. Please.

Audience:
Good morning. Svetlana Zenz, Article 19. I work in a programme that engages civil society in talking to the tech sector; my main countries are China, Vietnam, and Myanmar. So the question is the following. All the European initiatives on controlling and, let’s say, monitoring the private sector, especially the ICT sector, operating in European territories are great, and of course human-rights-centric. Some of the CSOs in Europe might not agree with me, but in comparison with Myanmar, for instance, they are very good points to follow. So my question is: for all the private sector that is regulated in Europe, especially under the digital acts, how would you monitor their actions in countries with authoritarian regimes? Great. Go ahead, please. Hi, yeah, I’m Viet Vu from the Dais at Toronto Metropolitan University. Maria, we had you in March at our Democracy Exchange, on democracy and power, and my question is related to that. How do you square the fact that many of the people most susceptible to misinformation and disinformation are the kind of people who lack fundamental trust in structures and institutions? I’m sure there are strange conspiracies about what we’re doing in this room today. How do you reach those people? Great, lack of trust. And we’ll take one more. Yes, and I hope it’s not too big a question, but we are being told, as humanity, that privacy and safety cannot coexist in the online world. We are being told that, because the technology is the way it is and because of the design choices that currently exist, privacy cannot be absolute if there is any consideration of safety, and safety cannot be guaranteed to anybody because we have to really care about privacy. My question to you is: how can we take a step back, think about human rights, and start from there when thinking about design choices, instead of ending up, to be honest, in very stupid debates about little technology choices, little technology bits and pieces, that we need to work through to overcome challenges and get to a place where we can have both? We really need your help as thought leaders, so any thoughts on that would be really welcome.

Maria Ressa:
Fantastic. Let me toss it over: Jacinda, you first, then Karoline, and then I’ll pick up some of the questions too.

Jacinda Ardern:
Yeah, I’ll try, maybe, the last two; I’ll leave the first one to others. Starting with that last one, around the safety debate and the privacy debate: I shared very briefly one experience we had with that, but it persisted for years, because, as I said, with the Christchurch Call we didn’t want to just look downstream, we wanted to go upstream. We wanted to look at the things that may be contributing to radicalization. Algorithmic outcomes kept coming up, and then privacy kept coming up. Well, we’ve now demonstrated, with the establishment of this tool, that you can overcome that debate. It did take some resource to establish it, but the Christchurch Call Initiative on Algorithmic Outcomes now has researchers accessing, in real time, what we were previously told was information we would not be able to get to for privacy reasons. Now, the next step for us will be demonstrating that the research can prove valuable, and then saying to the social media companies: well, this is what we’re learning now; what are we going to do about it? That, I think, will be the critical next step. But the learning for me is that there are ways. It took too long, though: four years to really overcome that issue. I hope that gives some encouragement that we are pushing past it. And sometimes that creative tension I talked about, in the room with the other tech companies, is really helpful for those debates. The second issue: you’re hitting the nail on the head. What do we do about those who are susceptible to disinformation? We’ve seen what it can do to liberal democracies when it is writ large; we’ve had some very recent examples in a number of countries, and it is devastating. Here I track back again. There are those who are doing research on this, particularly the likes of Columbia, tracking back to look at the common themes among those who are most susceptible. But instinct will probably tell us quite a lot as well. If you’ve got an inherent distrust of the state, probably at some point the state has failed you in some form. Now, that’s a generalization. But if there’s a general view that your economic position in life is influenced by the state, and you’re in a lower socioeconomic category and disenfranchised, or you’ve had an experience with the state, for instance having been in its care at some point, these are some of the features that we see, and educational attainment as well. We as governments then need to track back and think about what we can do to re-establish trust in institutions. And that means actually delivering for our people as they expect us to. It’s as simple as that. When it comes down to the one-on-one, I’ve tried to have conversations with people who are deep in conspiracy, and it is an incredibly demoralizing experience. That’s why I always go back to the beginning: how do we stop people falling in in the first place?

Maria Ressa:
Karoline?

Karoline Edtstadler:
Well, I would like to start with the second question, because I think that’s the main question for us as politicians: how can we regain trust in institutions, in governments, in democracy as such? I would say it is also the most difficult question to answer. We are living in challenging times, as has been mentioned several times already; people are tired of crisis, and they want to believe easy solutions. This is really our problem, but democracy is hard work every day, and we have to fight for the trust of the people on a daily basis. That is the only thing we can do, and we all have to be aware of the fact that you normally cannot find a solution that is beloved by everyone. There will always be a certain group of people, call it what you will, who are not happy with a decision. But democracy means that we find majorities, and this was clear in the past and is not so clear now. One of the reasons, and this also goes to the first question, is that you find misinformation and disinformation on the internet, and you find your group echoing only your own opinion. This is really something we discovered, especially during the COVID pandemic: it is nearly impossible to get people out of such echo chambers once they are settled in their opinions and surrounded by people who share them. So what we try to do is regulate things in Europe, and we would like to be a role model for the world. That’s why I’m very happy to be part of the leadership panel, and to contribute from my experiences in Austria and at the European level. And again, we are not at the end of this story. Regarding the third point, privacy versus safety: I think we need both. It is always a challenge, and it has always been a challenge, to guarantee human rights. You always have the situation that the human right of one person ends where the human right of another person is infringed. This is something we have to work on daily, and it is what I did as a criminal judge in the courtroom on a daily basis. If someone wants to demonstrate, they can of course do that, but that right ends when the body of another person, or of a policeman, is injured. Here too you have to find the balance, and this is what we have to do. So I would not be as pessimistic as the person, I think it was a woman, who put the question to me. We can do both. We have to do both.

Maria Ressa:
Jacinda has a hard stop at the top of the hour, so let me quickly answer, and then I wanna ask Jacinda for her last thoughts before we let you go, Jacinda. The quick answer to the first question, on the weakness of institutions in the global South: the countries that you mentioned are the countries where we have seen the regression of democracy, right? And yes, in countries with authoritarian leaders, most of the time they are using this technology to retain and to gain more power. How do we deal with that? We can talk about that more after the panel. The second one, the cognitive bias that you mentioned: it is there, but frankly, smart people think that they're immune from the behavior-modification aspects of information warfare or information operations. We are all susceptible, and sometimes the smarter you are, the harder you fall, right? This is a problem. I think it's a problem for leaders. It is a problem for our shared reality. This is the reason why I have spoken out a lot more about the dangers, because without a shared reality we cannot do anything together. Finally, the last one, oh my God, I love your question, because privacy by design, trust and safety by design: when the tech companies say that they cannot, it just means they won't, because there is no regulation, no law, no civil society pressure to demand it. We deserve better. Let me throw it back to Jacinda Ardern for her closing thoughts.

Jacinda Ardern:
Oh, look, I think that you’ve traversed a set of issues that are confronting, I think, all of us in different ways and cut across a range of other incredibly serious and important issues. How do you tackle climate change unless you have a shared reality around the problem definition? The degree to which we see information integrity issues playing out in geo-strategic issues, the fact that they’re coupled with what would be considered traditional forms of warfare. There is a poly-crisis, and at every level of that poly-crisis, we see this extra layer of the challenges presented by technological developments that we’ve seen in recent times. But I’m an optimist, and I’m an optimist because in the worst of times, I’ve been exposed to the ability of humans to design solutions and rapidly adapt and implement solutions, ultimately, for the most part, to protect humanity. And we have that within our capability. We need to empower those who are specifically focused on doing that, who are dedicating themselves to it, often at great sacrifice. We need to support regulators who are focused on doing that, and we need to continue to just rally one another in what is an incredibly difficult space. So my final note to those in the room who are working in these areas, I acknowledge you and the work you do. It is incredibly tough going, but you are in the right place at the right time, and your grandchildren will thank you for it.

Maria Ressa:
Thank you. Thank you, Jacinda Ardern. Caroline, your thoughts?

Karoline Edtstadler:
Well, I can only second what Jacinda said. Your grandchildren will thank you one day, because it's the time now to create the future, and these challenging and crucial times need all of us. And I'm coming back to what I already said: we cannot do it alone as governments. We cannot leave it to the tech enterprises. We cannot do it as politicians, no matter where you serve. We need all of us. We need society to change, to be aware of the challenges ahead, and to stay optimistic. I really would like to conclude with: stay optimistic. Thinking back and learning from history, it normally took about 100 years to get used to a new technology. And we are talking about the internet, and we have the father of the internet serving as our chair in the leadership panel, and he invented the internet about 50 years ago. So we are halfway. It's the right time to set the legislation for the internet. It's the right time to make children, parents and grandparents aware of how and what to do with the internet and all these applications we already use in our daily life, and to see the positive things, how we have changed our lives for the better since we have had all these technologies in our daily life. So this is really what I try to do. I'm really proud that I have the opportunity to contribute at this level, but that doesn't mean it is more important than other levels. The contrary is the case. Everyone is needed in this process, and we can only do it together.

Maria Ressa:
Fantastic. And the last thing I would say is: everyone in this room, you are here for the Internet Governance Forum. It is a pivotal moment, and they are so wonderfully optimistic. I'm probably a little more pessimistic, but it depends on what you do, right? It comes down to all of us, and I hate to say it that way, but it is this moment in time. Thank you so much, Right Honorable Jacinda Ardern, Minister Edtstadler, you guys in the room. We move to the main session. Thank you for coming, and let's move.

Audience

Speech speed

192 words per minute

Speech length

783 words

Speech time

244 secs

Jacinda Ardern

Speech speed

182 words per minute

Speech length

3161 words

Speech time

1043 secs

Karoline Edtstadler

Speech speed

177 words per minute

Speech length

2948 words

Speech time

999 secs

Maria Ressa

Speech speed

166 words per minute

Speech length

1750 words

Speech time

634 secs

Design Beyond Deception: A Manual for Design Practitioners | IGF 2023 Launch / Award Event #169



Full session report

Cristiana Santos

The analysis focused on discussions around different aspects of e-commerce, deceptive design, dark patterns, and regulation. One of the speakers, Chandni Gupta, conducted research that had a positive influence on regulators, leading to easier subscription and unsubscription processes on platforms like Amazon. This highlights the importance of academic research in shaping policies and improving user experience in e-commerce.

Cristiana Santos brought attention to deceptive design practices from a legal standpoint. She discussed how the risk of sanctions can serve as a deterrent for organizations engaging in such practices. Additionally, she emphasized the significance of naming and shaming these practices to create accountability and discourage their use. This legal perspective sheds light on the potential consequences and strategies for tackling deceptive design in the industry.

The analysis also delved into the prevalence of dark patterns, not only within big tech companies but also in smaller and public organizations. Dark patterns are manipulative design tactics, such as interfaces that make it difficult for users to refuse or withdraw consent. The negative sentiment surrounding dark patterns was evident, as they were found to have harmful effects on users. Studies have shown that dark patterns can cause cognitive harm, result in the loss of control over personal data, evoke negative emotional responses, and create regret over privacy choices. This highlights the need to address and mitigate the adverse impact of dark patterns on individuals' well-being.

Furthermore, there was a call for better regulation and a shared vocabulary surrounding dark patterns. The speaker, Cristiana Santos, suggested that a shared understanding of dark patterns would greatly benefit user studies, decision mapping, and harm assessments. It is essential for regulatory bodies and scholars to align in their understanding of dark patterns to effectively regulate and combat their negative consequences. This emphasizes the importance of collaboration and knowledge exchange among key stakeholders to address the challenges posed by dark patterns.

In conclusion, this analysis explored important topics related to e-commerce, deceptive design, dark patterns, and regulation. It highlighted the influence of research on policy-making, the legal standpoint on deceptive design practices, the prevalence and harmful effects of dark patterns, and the need for better regulation and a shared vocabulary to address these issues effectively. This comprehensive examination provides valuable insights into the complexities surrounding user experience and the imperative for responsible technological practices in the digital landscape.

Titiksha Vashist

The analysis explores the issue of deceptive design and its negative impact on users and digital ecosystems. One aspect that is discussed is the existence of dark patterns in various online experiences, such as e-commerce apps, social media, and fintech services. These dark patterns are intentionally designed to deceive or manipulate users, ultimately influencing their decision-making. This can lead users to make choices that they would not have made if not for the deceptive design.

Another significant point raised is the harmful consequences of deceptive design on individuals and digital ecosystems as a whole. Deceptive design can result in privacy violations, financial losses, psychological harm, and wasted time and resources. These consequences not only affect individuals but also have broader implications for the integrity and functioning of digital ecosystems.

The analysis also highlights the “Design Beyond Deception” project, which spanned 18 months and involved global expert consultations, workshops, and a research series. The primary goal of this project was to gain a better understanding of how deceptive design impacts contexts that have received less attention in previous research. By shedding light on these understudied areas, the project aims to contribute to the overall understanding of the harmful effects of deceptive design.

Additionally, the analysis underscores the role of regulatory bodies in addressing deceptive design practices. The US Federal Trade Commission and the European Commission have been actively investigating deceptive practices in their respective jurisdictions. This global attention demonstrates the recognition of the need to combat deceptive design and protect users from its negative impact.

In conclusion, the analysis emphasizes that deceptive design has grave consequences and calls for global investigation and action. Its negative effects extend to both individual users and the wider digital ecosystem. Deceptive design distorts fair competition and leads to unfair trade practices. Therefore, it is crucial to address deceptive design in order to safeguard the integrity and well-being of users and digital systems.

Caroline Sinders

Harmful design patterns present a significant challenge on a global scale, particularly within the realm of the modern web. These patterns are characterized by their deceptive and manipulative nature, subverting users’ expectations. They are prevalent universally across various websites and digital platforms.

These harmful design patterns create an unequal web, where users with a design background or knowledge of user experience (UX) design are more equipped to recognize and avoid them. This knowledge gap creates a disparity between users who can navigate the web safely and those who lack this understanding.

Addressing and investigating these harmful design patterns requires a comprehensive understanding of the expected design patterns and where deception or manipulation occurs. This highlights the importance of interdisciplinary research, bringing together policymakers, regulators, and designers. The collaboration of these different areas of expertise can lead to more effective strategies to combat and mitigate the negative effects of these design patterns.

Caroline Sinders, a passionate advocate, emphasizes the need for extensive research that encompasses technical, design, and policy perspectives. Understanding the entire process of product development, including manufacturing and testing, is essential for thorough analysis of the interface. This comprehensive approach strengthens the ability to identify and address deceptive design patterns, ensuring a more user-friendly and trustworthy digital landscape.

In summary, harmful design patterns pose a global issue within the modern web, deceiving and manipulating users and compromising their online experiences. The resulting unequal web underscores the importance of interdisciplinary collaboration to address these patterns. Policymakers, regulators, and designers must work together to develop effective strategies and solutions. Extensive research, incorporating technical, design, and policy perspectives, is necessary to understand and combat deceptive design patterns, ultimately creating a more secure and user-centric digital environment.

Maitreya Shah

Deceptive design practices, particularly in accessibility overlay tools, have detrimental effects on individuals with disabilities. These tools make superficial changes to the user interface, giving the illusion of accessibility without addressing the source code. Consequently, people with disabilities are deceived into perceiving websites as accessible, when in reality, they may still encounter barriers. This not only undermines their ability to navigate and interact with online content but also hinders their equal participation in society.

One concerning aspect is that accessibility overlays can obstruct assistive technologies, which are essential for individuals with disabilities to access and interact with digital content. By impeding these technologies, accessibility overlays violate the privacy and independence of people with disabilities, making it challenging for them to fully engage with online platforms.

Furthermore, companies that use accessibility overlay tools are potentially disregarding their moral and legal obligation to create genuinely accessible websites. By relying on these tools, they sidestep the necessary steps to ensure that their digital content is inclusive, effectively excluding individuals with disabilities from participating in online activities.

A related issue is the possibility of users with disabilities being coerced into making unwanted purchases as a result of these deceptive design practices. When accessibility overlays create a false sense of accessibility, users may unknowingly engage in transactions that are not aligned with their preferences or needs. This highlights the harmful consequences of deceptive designs and the ethical responsibilities that businesses should uphold.

Deceptive designs are not limited to accessibility overlay tools but also extend to AI technologies, such as chatbots and large language models. These technologies are designed to exhibit human-like characteristics while interacting with users. However, this blurring of boundaries between humans and machines can be unsafe and misleading.

An alarming case involved a person who was influenced by a chatbot to attempt to assassinate the UK Queen. Although this is an extreme example, it demonstrates the potential dangers associated with deceptive designs in AI technologies. Additionally, the data mining practices utilized in AI can violate users’ privacy rights, further exacerbating the concerns surrounding these technologies.

Given the prevalence of deceptive designs in AI and emerging technology, there is a pressing need for regulations to address these practices. Regulators worldwide are increasingly recognizing the importance of mitigating the harmful effects of deceptive design and promoting transparency and accountability in the development and implementation of AI technologies. This regulatory intervention aims to shape discussions surrounding emerging technology and ensure that ethical considerations are taken into account.

In conclusion, deceptive design practices, whether in accessibility overlay tools or AI technologies, present significant challenges and risks. They harm individuals with disabilities, diminish their access to online platforms, and violate their privacy rights. It is imperative for companies to refrain from using accessibility overlay tools that deceive users and hinder full accessibility. Additionally, the regulation of AI and emerging technology is crucial to address deceptive design practices and ensure a safe, inclusive, and transparent digital environment for all.

Chandni Gupta

The research conducted on dark patterns has revealed a concerning trend of deceptive designs being used by businesses across various sectors on websites and apps. This is a cause for concern as these dark patterns are designed to manipulate and deceive users, often leading them to make unintended decisions or take inappropriate actions. Chandni’s research has shown that many dark patterns that exist today aren’t necessarily illegal, which raises questions about the ethics behind their use.

Furthermore, data from Australia highlights the negative consequences experienced by consumers as a result of encountering dark patterns. Research revealed that 83% of Australians have experienced one or more negative consequences due to dark patterns. These consequences include compromised emotional well-being, financial loss, and a loss of control over personal information. The impact of dark patterns on consumers' lives and their trust in businesses should not be underestimated.

One argument that emerges from the research is that businesses need to take responsibility for their actions and change their behavior towards dark patterns. The prevalence of these manipulative designs can harm consumer trust and loyalty in the long run. It is disheartening that businesses aren’t being held accountable for these practices, leading to a sense of frustration among consumers. However, some businesses have the ability to make changes today and set an example for others to follow.

Additionally, it is recognized that everyone in the digital ecosystem has a role to play in combating dark patterns. Governments, regulators, businesses, and UX designers all have a responsibility to address this issue. By working together, it is possible to create a fair, safe, and inclusive digital economy for consumers. UX designers, in particular, can share research resources with their colleagues to demonstrate the impact that better online patterns can actually have.

In conclusion, the research on dark patterns highlights the concerning prevalence of deceptive designs on websites and apps. Consumers in Australia have reported significant harm resulting from encountering dark patterns. It is crucial for businesses to take responsibility for their actions and change their behavior towards these manipulative practices. Additionally, a collective effort from all stakeholders in the digital ecosystem is needed to combat dark patterns and create a more trustworthy and inclusive online environment for consumers.

Session transcript

Titiksha Vashist:
… on this. Plainly put, dark patterns are often carefully designed to alter decision-making by users or trick users into actions they did not intend to take. Now, deceptive design is something we've all encountered on the web, right? Dark patterns have found their way into a plethora of online experiences, from e-commerce apps to social media, from fintech services to education and so forth. Now these design choices, which may seem very innocent and innocuous on the outside, have multi-sided harms actually baked into them. And by tricking, manipulating, misdirecting or hiding information from users, these patterns harm not just the single end user of the internet, but also digital ecosystems at large. Those are also findings which resulted from the work that we did on this issue. This project, called Design Beyond Deception, sought to understand the harmful impacts of deceptive design specifically in understudied contexts, because a lot of the academic work so far on deceptive design was limited to the United States and the European Union, and we wanted to look at what it looks like in other countries, where the nature of digitalization itself is different. We also wanted to see how we can replace such design practices with practices that embody values, values that consumers, companies, civil society and governments want reflected online. And that's precisely why our project also had a very strong practice or application component, and not just a theoretical one. Now, moving on to the harms caused by these deceptive design patterns: there are two ways in which we categorize these harms. One is personal consumer detriment, which is focused on harms which you and I as people can identify we have undergone. These include privacy harms, financial loss (a lot of financial loss has been documented in countries such as India), psychological detriment, and time and resource loss. But at the same time, if we look deeply into the problem of deceptive design, we also realize that there are structural consumer detriments as well as harms to the larger digital economy, including loss of trust. A lot of research showed that when websites and apps use forced registration or price comparison prevention and so on, it weakens or distorts competition in a digital market. What that essentially means is that because of the use of these deceptive patterns, there is unfair trade practice being done in the digital economy. And this currently does not find any anchoring in our laws, but that's precisely why this topic has to be discussed at a platform such as this. Next, I want to talk about why we are discussing deceptive design, which seems like a more designer-centered issue, at the UN IGF. And the simple reason is that we are increasingly seeing regulators worldwide investigating deceptive practices in their specific contexts. These include the Federal Trade Commission in the United States, and the European Commission and BEUC, which have been looking at this issue for a while and trying to understand how it can create stronger European consumer protection law. It is also mentioned in the DSA. And consumer councils in countries such as the Netherlands, Norway, Australia and, very recently, India have also issued guidelines and working papers and have been trying to push policy on deceptive design.
Finally, data protection authorities have been at the forefront in several jurisdictions in talking about the privacy and data harms which result from deceptive practices. So regulators are investigating the consumer harms, privacy and data harms, and competition harms which result from these patterns. And this is precisely where I want to move into a little bit about what our project was about. The Design Beyond Deception project was an 18-month-long project which sought to bridge the gap between theory and practice. We held more than four large group-focused consultations, engaged with over 50 global experts in various domains, and held 20-plus in-depth interviews on this issue. We also issued a research series, which is also being launched today, by authors from across the world who focused on understudied areas. And this research was very generously supported by the University of Notre Dame and IBM's Tech Ethics Lab in the United States. Now, very quickly going over the project process: we started out with, of course, a review of academic literature, given the multidisciplinary and cross-sectional nature of the issue itself. Second, to tap into the in-depth expertise of multiple stakeholders placed across fields of theory and practice, we did scoping interviews with experts, which helped give shape to the rest of the project. Third, we thought that creating a new body of work which contextualizes deceptive design specifically would help deepen the conversation significantly on the issue. That led to focus groups and workshops with stakeholders, which led us to our final goal: the creation of a manual for design practitioners, who otherwise would not have, as a part of their curriculum or training as designers, an understanding of deceptive practices and how they may harm their end users. The stakeholders we engaged with for this particular project were academics and researchers, design practitioners, start-ups, civil society and policy folk, and of course industry, which included a whole range of people, from top to bottom, who are involved in the different decision-making processes that very much impact design decisions in a company. While our manual's themes span what deceptive design is for a designer rather than for a researcher, we also look at rethinking the user, designing with values, and design for privacy. We touch upon culturally responsible design and finally look at how regulation meets design, wherein we also prompt the design practitioner to look at designing our collective future from a different standpoint. And since this manual has been made for practitioners, it is full of frameworks, activities and teamwork: things that perhaps a product team can sit together and do on their own. Very quickly, on the research series, which we are also launching today: it focused essentially on understudied areas and understudied harms, including how, for example, crafting a definition for deceptive design is harder than it may seem. And for those of you who are lawyers in this room, you would completely understand why this is a huge challenge. We also talk about how identifying anti-competitive harms in deceptive design discourse is crucial, how deceptive design plays out in voice interfaces, and further such research pieces contributed by people from across the world.
So without further ado, I would request you to explore this project online or pick up a copy of the manual and research series here from the table in the first row for you to peruse. And without taking much of the time, I would very quickly now want to invite the speakers who have graciously joined us online. We have two speakers, Chandni Gupta and Maitreya Shah, who have joined us online, and I hope they can hear me. We also have videos from two speakers who, because of time zone issues, could not join us online, but have been very generous. So, to quickly introduce the speakers: Chandni is currently the Deputy CEO and Digital Policy Director at the Consumer Policy Research Centre, which is Australia's only dedicated consumer policy think tank. She has previously worked at the Australian Competition and Consumer Commission, the OECD and the United Nations. She has over 15 years of experience in consumer policy, domestically as well as internationally, and her research focuses on exploring the consumer shift from the analogue towards the digital economy. Her work was extremely crucial in the sense that it was the first study in Australia which – I'm sorry, just – yeah, it was the first study in Australia which essentially led to policy change and consumer action on deceptive design. Maitreya, who's also joining us online today, Maitreya Shah, is a blind lawyer and researcher. His work lies at the intersection of ethics and governance of emerging technologies and disability rights. He was most recently at Regulatory Genome, a spin-out of the University of Cambridge, and was previously a LAMP (Legislative Assistants to Members of Parliament) Fellow in India. He has worked extensively in the areas of digital accessibility, AI governance, regulatory technologies and disability law. Currently, he is a fellow at the Berkman Klein Centre for Internet and Society at Harvard University, where he will be examining AI fairness frameworks from the standpoint of disability justice. We also have two recordings, from Caroline Sinders and Professor Cristiana Santos. Caroline Sinders is an award-winning critical designer, researcher and artist. They're the founder of a human rights and design lab called Convocation Research Plus Design, and she's also currently at the Information Commissioner's Office, which is the UK's data protection and privacy regulator. Finally, Professor Cristiana Santos is an assistant professor in privacy and data protection law at Utrecht University in the Netherlands. She is also an expert of the Data Protection Unit of the Council of Europe and an expert for the implementation of the EDPB support pool of experts, amongst her many varied accomplishments. Without further ado, I would request Dhaneshree to play the video by Caroline Sinders, who will touch upon deceptive design from a design practitioner's standpoint.

Caroline Sinders:
I'm a researcher and postdoctoral fellow with the Information Commissioner's Office in the United Kingdom; that's the data protection and privacy regulator. I also run a human rights lab called Convocation Research and Design. I really wish I could be there in person. I'm so sorry I can't be, so I've made this recording instead. Thank you so much to the Pranava Institute for inviting me to be on this panel. I'm one of the contributors to their recent toolkit on deceptive design patterns, and I'm excited to present to you today and talk a little bit about why design and interdisciplinary thinking is so important when it comes to creating regulation, investigations and other ways to help curb and mitigate the harms of deceptive design patterns. I've also created a very small presentation that I'm excited to show to all of you. Harmful design patterns are everywhere. They're very prolific in the modern web, and they're universally found. In all of my extensive research, I have never come across a country or region that does not have harmful design patterns. They are in fact a global phenomenon, and a global menace is the way to think about it. My article for the Pranava Institute's toolkit focuses on what we do with emergent spaces, let's say the metaverse or IoT or voice activation, where design patterns are not yet standardized for users, meaning users have not engaged with voice activation enough to understand what all of the design patterns are within that space. Or in the case of something like the metaverse, where there are not a lot of people using it and it's a really emergent space: what are the healthy design patterns within that? We haven't really come to that space yet. A lot of current design patterns exist because we've lived in this kind of flattened modern web for quite a few years, and so there have been many years of research to figure out what healthy or trustworthy or pro-user design could look like. And it's in that subversion that harmful design patterns exist. This kind of research is so important because it will impact how users create safety, and it will impact forms of regulation. And this kind of work really does require an interdisciplinary lens. So what does policy need to help combat harmful design patterns? Again, it's the understanding that design is an expertise and, as I was saying earlier, an integral part of the web. What we need is to broaden our idea of what, let's say, a researcher looks like or what knowledge looks like. One of the things that's been exciting in the many years I've been researching harmful design patterns is the ability to work with all different kinds of legal experts who recognize that design is an expertise. What this means, when we're investigating things like harmful design patterns, is actually having a knowledge of what design patterns are, what the different kinds of standardized design patterns are, and how to run different kinds of evaluations, like a heuristic evaluation or a usability evaluation or an accessibility evaluation. There are many different ways to do them, but they are agreed-upon tests, or a series of different kinds of tests people can conduct. And these are the ways in which you can look at, let's say, the health of a product, or how well or not well that product is designed.
Often when investigating harmful design patterns, what you need to find, or help surface, is where the confusion or manipulation or exploitation lies. So where is the harmful design pattern actually subverting the expected design pattern? The expected design pattern is the one the user thinks they're engaging with, right? Because that's what's being subverted, unintentionally, let's say, or intentionally. And this is where having a background in UX design is really, really important, to be able to recognize that. A paper by the European Data Protection Board, testing with a few thousand users, actually found that those who were less susceptible to harmful design patterns were the ones who had heard of UX design or knew what UX design was. And this is really important to highlight: it means we're creating an unequal and inequitable web if the only way for people to try to avoid harmful design patterns is to have a design background. So conversely, I think that to help investigate more, this kind of interdisciplinary knowledge is needed: understanding how products are made, how they're tested, and being able to apply different kinds of analysis, let's say, to the interface itself. Inconsistent design, and we see this a lot in different kinds of harmful design patterns, can confuse users. It can overwhelm them if there are too many features or too many choices, let's say. Misunderstanding a core audience can also lead to poor or unhelpful design decisions, but we'll see this in the example I'm going to show. So inconsistent design can be a product name changing, choices not illustrated the same way, or a name that doesn't match up with what the user thinks they're doing. All of these things can confuse users. This also means that sometimes, if you're calling something something too technical, a user might not understand what it is. Thank you so much for having me here. I'm so sorry that this is a short talk. But one thing I wanted to really emphasize, again, is that design can be an equalizing action that distills code and policy into understandable interfaces. What we need is more research, more collaborative and interdisciplinary research between policymakers, regulators, policy analysts, and designers.

Titiksha Vashist:
Thanks, Caroline. And now, moving on to Chandni, who's joined us online. I would request Dhaneshree to put up the slides. And over to you, Chandni. Welcome, and thank you for being here. Thank you so much. I just want to confirm that you can hear me and you can see my slides? Yes. All good.

Chandni Gupta:
Excellent. So thank you so much for the introduction earlier, and thank you so much for having me. Before I begin, I do have to say congratulations to the Pranava Institute, who have created such a practical tool, which I'm sure and I hope will become a valuable resource for the UX community from here on. I'm delighted to share with you today some of the insights from our research. One of the things that we at the Consumer Policy Research Centre do is look at the evidence-based research that can bring about systemic change, and this was one of the projects we have been working on for a number of months now. It was about 18 months ago that we started our journey into looking at deceptive and manipulative designs. And as part of our research, what we really wanted to understand were two things: what are the common deceptive patterns that Australians come across most frequently, and what's the impact on consumers? We heard Caroline say how important it is to be able to understand that impact, and what we really wanted to do was quantify that harm. Dark patterns today are so prominent across the websites and apps we use every day. They're used to influence our decisions, our choices, our experiences. Is it in our best interest? Often not. Is it illegal? Largely not. So in case you're wondering where dark patterns exist: as Caroline said as well, they are so prominent, they are everywhere. As part of our research, we asked a nationally representative sample of 2,000 Australians in our survey to list the names of the businesses they could recall using deceptive designs, and businesses from almost 50 different sectors were identified. I mentioned before that many of the dark patterns that exist today aren't illegal. Currently in Australia, we can look through the lens of misleading and deceptive conduct, unfair contract terms or the Privacy Act, but the law currently offers a very narrow lens for how regulators can act. But are consumers experiencing harm? Well, the short answer is yes. Research revealed that 83% of Australians had experienced one or more negative consequences as a result of dark patterns being used on websites and apps. Yet eight out of the ten dark patterns we looked at could be implemented here in Australia without any consequence to businesses. Consumers in our survey reported being compromised in their emotional well-being, experiencing financial loss and feeling a real loss of control over their personal information. And it was anything from feeling pressured into sharing more data than they needed to accidentally making a purchase. In fact, in the qualitative part of our research, the frustration really came through, and it came down to three elements. One, there's a lack of meaningful choice. Sometimes accepting the preferred business choice is the only way to access a product or service; for example, in our survey we saw a fitness center that didn't let you see its timetable until you created a profile on its app. Two, it's the pervasive amount of pressure that's put on consumers, especially once their personal details have been shared and suddenly they're prone to hyper-personalized content or continuous direct mail. And three, finally, there's a sense of frustration that businesses aren't being held accountable for any of these practices. When it comes to younger consumers, the impact is only compounded. Consumers aged between 18 and 28 were more likely to experience both financial and data harms.
For example, one in three spent more than they intended, and that was 65% above the national average. This demographic in Australia often has less disposable income, so the impact of harms is likely to be felt more as well. On the flip side, there's also a cost for businesses. Almost one in three of the consumers we surveyed stopped using the website altogether. Almost one in six felt their trust in the organization had been undermined, and more than one in four thought negatively about the organization. So while in the short term dark patterns may lead to financial and data gains, in the long run they will deteriorate consumer trust and loyalty. So what our research has highlighted is that everyone in the digital ecosystem has a role to play, and Titiksha mentioned this earlier as well. There's definitely a role for governments and regulators, and we've been really pleased to see some of the changes that are coming about, such as the government here currently considering introducing an unfair trading prohibition, with dark patterns included as part of that legislation, and the Privacy Act finally getting reviewed, which currently dates from the 1980s, so it not only predates dark patterns, it predates the internet. However, it's actually businesses who are in the best position right now to make changes today and lead by example, whether it's auditing their online presence or testing with consumers' best interests in mind. Even small businesses can be really mindful about the off-the-shelf e-commerce products they're choosing and which features they're turning on and off. Now, what I've heard from UX designers who have reached out to me during conferences and events is that it's often not in their hands, and much of this is a business decision that happens in another part of the company. But one of the things they can do is share this type of research and resources, such as the Pranava handbook and other work happening in this space, with their colleagues, to show the impact better online patterns can actually have, not only on consumers but also on their business. I'll end by saying we've actually all got a role to play in ensuring a fair, safe and inclusive digital economy for consumers. Thank you so much.

Titiksha Vashist:
Thank you so much, Chandni, for that presentation. And I would very much like to point out that Chandni's research, and the research done at her institute, in fact very recently helped push the case for making unsubscribing easier on e-commerce platforms like Amazon, and that's a big move, right, coming from regulators. So more power to you, and thank you so much for joining us today. I would now like to request Dhaneshree to play a recorded video we have from Professor Cristiana Santos, who will talk about deceptive design from a legal standpoint and share some of her work.

Cristiana Santos:
For the first time in a decision, we suggest that, along with this DPA, other enforcers name and publicize violations as dark patterns in their decisions. This way, we believe that organizations can factor the risk of sanctions into their business calculations, and policymakers can also be aware of the true extent of these practices, right? And naming dark patterns is now more important than ever, especially since the DSA and the DMA codify dark patterns explicitly. So it's a legal term. We also found that dark patterns are used both by big tech and by small and public organizations. Most decisions refer to the user interface, or to the user experience or user journey, and to information-based practices. Finally, we understood that harms caused by dark patterns are not yet assessed in decisions. Let's have a look at the privacy-related dark patterns we found in these decisions. In this table, you can see the data protection cases according to the practices related to dark pattern types. The majority of dark patterns refer to obstruction practices, and they are related to the difficulty of refusing and withdrawing consent: more than 30 decisions. These are followed by forced practices, where users withdraw consent but unnecessary trackers are loaded, or trackers are stored before consent is asked: more than 25 decisions. Finally, there are cases where consent and use of a service are bundled together at the same time, for example. So we understand that enforcement cases are a way toward general deterrence of dark patterns. And we showcase these dark pattern decisions on this website, deceptivedesign.org, which is being updated daily with new decisions. So, let's talk about the harms caused by dark patterns. There is a growing body of evidence from human-computer interaction studies and from computer science studies showing that dark patterns might elicit or lead to potential or actual harm. But there are also harms related to dark patterns in privacy, and several studies focused on consent interactions show several harms caused by dark patterns: labor and cognitive harms, loss of control, privacy concerns and fatigue, negative emotional responses, and regretting privacy choices. All of these provide evidence of the severity of harms. For a concrete example, scholarly works find that pre-selected purposes, pre-selected options for processing data, or even an "accept all purposes" option at the first layer of a consent banner can use users' personal data, or even very sensitive data depending on the website in question, and can share this personal data by default with hundreds of third-party advertisers. This might provide evidence of the potential severity and impact of dark pattern harms. However, consent claims, at least these scoped ones, for non-material damages are not being used within the redress system, even though there are so many decisions related to dark patterns and to violations of consent interactions. Finally, we know that dark patterns occur in different domains, not only in privacy, right? And there are several data protection regulators and policymakers that show interest in contributing to this space of dark patterns. We found at least five reports from EU, UK and US bodies published in 2022 alone. But these sources often lack citation provenance trails for typologies and definitions, making it difficult to trace where and under which conditions new specific types of dark patterns emerge.
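To make the consent-banner mechanism described above concrete, here is a minimal, hypothetical TypeScript sketch; the purpose names and the shape of the state are illustrative assumptions, not taken from any of the decisions cited. It contrasts the pre-selected defaults of the dark pattern with an opt-in baseline in which nothing is processed until the user affirmatively chooses it.

```typescript
// Hypothetical consent state; real banners list far more purposes and vendors.
interface ConsentState {
  analytics: boolean;
  personalizedAds: boolean;
  thirdPartySharing: boolean;
}

// Dark pattern: every purpose pre-selected, so one click on "accept all"
// at the first layer shares personal data with third parties by default.
const preSelectedDefaults: ConsentState = {
  analytics: true,
  personalizedAds: true,
  thirdPartySharing: true,
};

// Opt-in baseline: no purpose is active until the user chooses it,
// and no tracker may load before consent is asked.
const optInDefaults: ConsentState = {
  analytics: false,
  personalizedAds: false,
  thirdPartySharing: false,
};

// Gate every tracker behind the recorded choice; with optInDefaults this
// returns false for every purpose until the user acts.
function mayLoadTracker(consent: ConsentState, purpose: keyof ConsentState): boolean {
  return consent[purpose];
}
```

The contrast mirrors the obstruction and forced practices in the decisions described: with pre-selected defaults, inaction equals disclosure, whereas the opt-in baseline makes refusal the zero-effort path.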
On the other hand, academic literature has grown rapidly since Brignull released his original typology in 2010. In the years since, foundational work by Bösch, Gray, Mathur, Luguri and Strahilevitz has added many new dark patterns. These typologies have had some overlaps and also some misalignments. We analysed those academic and regulatory taxonomies and counted 245 dark patterns. Many of these dark patterns indeed either overlap or misalign with other types of dark patterns coming from all these different sources. And so we constructed an ontology of dark patterns knowledge. We aggregated existing patterns, identified their provenance through direct citations and inferences, and clustered similar patterns, creating high-level, middle-level and low-level patterns. This ontology of dark patterns enables a shared vocabulary for regulators and dark pattern scholars, allowing more alignment in user studies, in mapping to decisions, and in discussions of harms, and helping scholars trace the presence and types of dark patterns over time. Regulators could anticipate the presence of existing patterns in new contexts or domains and guide detection efforts. Thank you for your time, and if you have any question or suggestion, please consider sending me an email. Thank you so much.
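As a rough illustration of the three-level ontology described above, the following TypeScript sketch shows one possible shape for such a knowledge base; the field names, example patterns and citations are illustrative assumptions, not the authors' actual data model.

```typescript
// One possible encoding of an ontology entry with provenance tracking.
interface Provenance {
  source: string;          // e.g. "Brignull 2010" or a 2022 regulator report
  directCitation: boolean; // false when the link was inferred rather than cited
}

interface DarkPatternNode {
  name: string;
  level: "high" | "middle" | "low";
  parent?: string;         // name of the enclosing higher-level pattern
  aliases: string[];       // overlapping names used by other taxonomies
  provenance: Provenance[];
}

// Example entries: a high-level pattern and one middle-level child.
const obstruction: DarkPatternNode = {
  name: "Obstruction",
  level: "high",
  aliases: ["Roach Motel"],
  provenance: [{ source: "Brignull 2010", directCitation: true }],
};

const hardToWithdraw: DarkPatternNode = {
  name: "Difficult consent withdrawal",
  level: "middle",
  parent: "Obstruction",
  aliases: ["Hard to cancel"],
  provenance: [{ source: "Mathur et al.", directCitation: true }],
};
```

Keeping aliases and provenance on every node is what lets overlapping taxonomies be merged and traced back to their sources, which is the alignment problem the ontology is meant to solve.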

Titiksha Vashist:
Thank you to Professor Santos for that presentation, and for showing us very clearly how deceptive designs are increasingly part of the legal discourse as different countries across the world look at them more closely and make them a part of their case law. I would now finally like to invite Maitreya Shah to share his comments with us. Thank you so much, Maitreya, for your patience, and thank you so much for being with us. Hi Titiksha, thank you so much for having me

Maitreya Shah:
here. I hope you can see my presentation. Yes, Maitreya, you're all set. Thank you so much, and congratulations for launching this at one of the best platforms possible in the world to talk about this. So yeah, hello everyone. I'm Maitreya Shah, and thank you so much, Titiksha and Pranava, for that generous introduction. My fellow speakers have already touched upon many forms of deceptive designs, how they interact with consumers, how they pose harm to people, and what dark patterns exist on the internet and elsewhere today. You know, dark patterns and deceptive designs are quite multidisciplinary with the rise of AI and emerging technologies. I intend to talk about two things very briefly. The first is the piece that I wrote for the research series that Pranava is launching today, which deals with accessibility overlays and their harms to people with disabilities. The other relates briefly to my own work, because a lot of my work is on AI bias, fairness, and ethics: I intend to briefly touch upon the deceptive design dark patterns that are emerging through AI, emerging technologies and the new models that we see in the world today. To start with, deceptive design practices in accessibility overlay tools. I wrote an analytical piece for the ethical design research series of Pranava, in which I evaluated what are called accessibility overlay tools. Before I delve into what accessibility overlay tools and their deceptive design practices are, I'll give you a brief on accessibility. Accessibility is the idea of making websites and applications usable for people with disabilities. It is a legal right and a legal obligation under various instruments, international and domestic; I've given here a few examples. These accessibility overlay tools are basically designed to subvert the legal obligations to make websites accessible. I have tried to analyze these tools through a deceptive design lens and call out the dark patterns and how they end up harming people with disabilities on the Internet. So a generic overlay, as a lot of you who come from the design side of things know, usually sits on the UI or UX side of websites or web applications. It comes in the form of pop-ups or these JavaScript boxes that usually appear, and they tend to deviate or obstruct the attention of users on websites and shift their focus to something different, like sign-up boxes or advertisements and so on. An accessibility overlay tool is exactly like this. However, what it claims to do is make the website accessible for people with disabilities. Now, in line with a lot of international standards and regulations, the World Wide Web Consortium has come out with web accessibility guidelines and standards that guide developers and designers in making websites accessible. And these standards require a lot of manual labor and a lot of manual design input, right from the source code. These accessibility overlay tools do not end up making any changes in the source code. They only make changes on the user interface side of things. They basically only change the font, color, contrast, or size, or maybe add some image descriptions on the website, which are things that are already built into the assistive technology of people with disabilities. So accessibility overlay tools are not doing anything new. Assistive technology like the screen readers that blind people, for example, use already has a lot of these features built in. So what are the harms?
So these companies that sell accessibility overlay tools claim that they are making the website accessible. And what ends up happening is, whenever there is an accessibility overlay tool on a website, there is a toolbar and an announcement at the top of the website, on its landing page, saying that the website is accessible and that the person visiting can use this feature to get an accessible experience and interaction on the website. So people with disabilities, their trust is kindled. They tend to use the website with the anticipation that it will be accessible, and what ends up happening is that they are deceived and manipulated into choices that they do not intend to make, which is inherently the idea of deceptive design. This is done, as I said earlier, to subvert the legal obligation to make websites accessible. Companies employ designers who don't incorporate accessibility features from the very inception of the website-building process, and then they are afraid of lawsuits and paying hefty compensation. So they resort to these sorts of contrivances and shortcuts to make their websites seem accessible. So there are many issues. Before I come to the strategies for countering these tools, there are many issues that affect people with disabilities when these overlay tools are deployed on a website or web interface. Firstly, many screen readers, which blind people especially use, get obstructed by these overlay tools. These overlay tools also tend to impede the privacy of people with disabilities, because they detect assistive technology. And there are many other issues, like false and inaccurate image descriptions that might mislead or manipulate people into purchasing things that they do not want. In line with the idea of today's discussion, I have given here a few points around strategies that would move us from theory to practice. How do we counter these accessibility overlay tools? How do we ensure that companies don't use these tools and that they don't harm people with disabilities? These are a few examples that I have personally researched and gathered from across the globe that are somewhat effective strategies to counter the deceptive practices of these tools, including regulatory actions, community advocacy, tools that can counter these accessibility overlays, and, to start with, educating and sensitizing designers and web developers. This was possible through Pranava's collaboration and the consultation I could have with them, to think about how these accessibility issues manifest in the language of deceptive design and how they harm people with disabilities, and to understand this issue, which is quite marginalized and very little talked about. I'll quickly move to artificial intelligence technologies. There is a lot of hype and a lot of discussion around ChatGPT and similar tools today. We interact with chatbots and with these new forms of large language model technologies today. So these are the kinds of issues that one faces. In my presentation, I have two broad issues that I wanted to focus on, two examples that I wanted to share with you that have come up in my research so far. And I'll be very brief because I'm mindful of the lack of time.
So a lot of regulators are talking about, and making people aware of, the deceptive design practices of anthropomorphism, which is basically human characteristics being carried by non-human entities: for example, chatbots and generative AI models that take on human characteristics and blur the boundaries between humans and tech, and that tend to manipulate users and subvert users' autonomy and privacy. In the previous slide, I gave an example where a person, back in 2021, was influenced by a chatbot and attempted to assassinate the Queen of the United Kingdom. So these are the kinds of issues that one could face because of chatbots and large language models. I'm so sorry to interrupt you. Could you just very quickly wrap up? We're one minute over time. And I would just say, yeah, thank you. Thank you. This is, very briefly again, an example of data mining practices and how they tend to violate the privacy of users. I'll quickly move on: these are a few examples, again, of moving from theory to practice, of how regulators are trying to shape the discussion around AI, emerging tech and deceptive design practices, and how you or I, as lawyers, designers, or community advocates, can influence the work on this. Yeah, that's it. Thank you so much. I'm sorry for running over time.
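To illustrate the gap Maitreya describes between an overlay's cosmetic tweaks and genuine source-level accessibility work, here is a minimal, hypothetical browser-side sketch in TypeScript. The function names and specific tweaks are illustrative assumptions, not code from any real overlay product, and the second half is deliberately crude: real accessibility work happens in the source, by hand, against standards such as the W3C guidelines he mentions.

```typescript
// (1) What a generic overlay does: client-side, cosmetic changes only.
// The underlying markup is untouched, so screen readers still encounter
// the same inaccessible document (and may even be obstructed by the widget).
function applyOverlayTweaks(): void {
  document.body.style.fontSize = "120%";         // enlarge text
  document.body.style.filter = "contrast(1.3)";  // boost contrast
  const toolbar = document.createElement("div"); // the "this site is accessible" banner
  toolbar.textContent = "Accessibility mode enabled";
  document.body.prepend(toolbar);
}

// (2) A crude stand-in for genuine fixes: repairing the semantics that
// assistive technology actually consumes. Even this is no substitute for
// descriptions and labels written by a human during development.
function patchMissingSemantics(): void {
  document.querySelectorAll("img:not([alt])").forEach((img) => {
    img.setAttribute("alt", ""); // a human must supply a real description
  });
  document.querySelectorAll("input:not([aria-label])").forEach((input) => {
    input.setAttribute("aria-label", "unlabeled field"); // placeholder only
  });
}
```

The point of the contrast is the one he makes: an overlay changes what sighted users see, while accessibility lives in the source code and its semantics, which the overlay never touches.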

Titiksha Vashist:
Thank you so much for joining us, Maitreya, and for sharing your specific research at the intersection of deceptive design and disability. And I wish you all the best for your forthcoming work on AI and deceptive design. That being said, in the interest of time, let me thank everyone for joining us for this particular launch event. You see the QR code to our project right up here on the screen. And if you'd like to grab a physical copy of the manual or the research series, they're right here on the front desk. Again, I would like to extend my gratitude to both Chandni and Maitreya, who are joining us at very, very odd times. But thank you for making it to this event. And thank you to everyone for attending this particular session. We are definitely available offline if you are interested in this issue and want to talk more about it. Thank you.

Caroline Sinders

Speech speed

188 words per minute

Speech length

1099 words

Speech time

352 secs

Chandni Gupta

Speech speed

160 words per minute

Speech length

1076 words

Speech time

403 secs

Cristiana Santos

Speech speed

128 words per minute

Speech length

859 words

Speech time

401 secs

Maitreya Shah

Speech speed

143 words per minute

Speech length

1536 words

Speech time

643 secs

Titiksha Vashist

Speech speed

127 words per minute

Speech length

2361 words

Speech time

1119 secs

(Re)-Building Trust Online: A Call to Action | IGF 2023 Launch / Award Event #144


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis explored various topics related to the global information ecosystem and its challenges. One key concern highlighted was the negative impact of disinformation, whose harms extend well beyond Western contexts. The speakers emphasized the need to consider the effects of disinformation in different languages, as it can affect people’s offline lives. It was recognized that addressing disinformation globally is crucial, rather than focusing on specific regions.

The work of Wikimedia and Global Voices in creating a trustworthy global information ecosystem was appreciated. These organizations were praised for their contributions, involving individuals from different parts of the world. Collaboration and a multi-stakeholder approach were deemed essential in building a reliable information ecosystem.

A speaker, Nick Beniquista, argued for major system-level interventions to address the challenges faced by the information ecosystem. Initiatives such as Pluralis in Europe, trust initiatives for quality online information, and policy interventions like bargaining codes were mentioned. This indicates the need for a comprehensive approach and the involvement of various stakeholders to tackle the complex issues within the information ecosystem.

However, some concerns were raised about the proposed principles discussed during the analysis. These principles were deemed somewhat understated in dealing with the complexity of the challenges. Although they may be widely accepted, there are doubts about their sufficiency in addressing the depth and breadth of the issues. Therefore, comprehensive strategies and solutions are needed.

Furthermore, questions were raised about the effectiveness of a participatory, citizen-driven approach in addressing the systemic challenges of the information ecosystem. One speaker described this approach as “quaint,” suggesting doubts about its effectiveness given the scale of the challenges. This highlights the need to consider alternative strategies alongside participatory approaches.

Regulation and the differentiation between large and small online platforms were emphasized as crucial factors in addressing the challenges of the information ecosystem. It was argued that large platforms bear a special responsibility for content management and accessibility. Efforts by the Danish government and the European Union (EU) were highlighted, including partnerships with organizations like Access Now and the development of regulations that consider different local contexts outside the EU. This underscores the importance of globally applicable regulatory frameworks that also respect regional variations.

The analysis also mentioned concerns about the operationalization of the discussed principles and the potential consequences of the proposed internet safety bill in Sri Lanka. The bill, which has passed its first reading in parliament, raised concerns about censorship and the potential fragmentation of the internet. An audience member expressed opposition to the bill and sought help in collective action, emphasizing the need for collaboration and partnerships in addressing internet governance and legislation.

In summary, the analysis delved into various aspects of the global information ecosystem and its challenges. It highlighted the negative impacts of disinformation, the significance of a trustworthy information ecosystem, the need for major system-level interventions, as well as concerns about certain approaches and proposed bills. Collaborative efforts and collective action are crucial in establishing a reliable and inclusive global information ecosystem.

Moderator

The session focused on the work of a task force dedicated to promoting trustworthy information online, as well as the launch of a set of principles by this task force. The task force is a newly established multi-stakeholder entity within the Freedom Online Coalition. Its main goal is to offer policy recommendations to government institutions and lawmakers to ensure a healthy and reliable online information ecosystem.

The United States is actively promoting trustworthy information online and is committed to addressing the global issue of disinformation. They are implementing initiatives such as fact-checking and media literacy programs to combat the spread of false information. Efforts are also being made to protect and promote open and resilient information ecosystems and support the long-term sustainability of independent media outlets.

While promoting trustworthy information online, the US government emphasizes the importance of not undermining fundamental democratic freedoms. They caution against using regulatory measures to suppress peaceful dissent and silence independent media, civil society activists, human rights defenders, and marginalized groups.

The session also highlighted the importance of platforms like the Freedom Online Coalition and the Internet Governance Forum (IGF) in countering disinformation and addressing global threats. These platforms are crucial spaces for bringing together stakeholders to tackle the challenges posed by the spread of misinformation and to ensure a secure and open internet.

One significant issue discussed during the session was the consolidation of power over online speech, which negatively impacts platforms advocating for freedom of expression. The session also addressed the exclusion of participation, which can lead to the spread of misinformation. It was noted that depriving half the world’s population of involvement in knowledge spaces contributes to the spread of false information, particularly in the age of generative artificial intelligence.

The session stressed the importance of diversity in media and information, acknowledging that news framing bias is a pervasive problem, and that news organizations alone are insufficient for meeting the need for diverse and reliable information. It was also emphasized that building reliable information structures requires the involvement of civil society and the private sector through partnerships.

Governments were encouraged to play an active role in regulating the online space to promote engagement, free debates, and protect human rights. Striking a balance between regulation and trustworthiness is crucial in ensuring the effectiveness and fairness of online platforms.

The session also addressed the need for educating policy-makers and governments about platforms like Wikipedia and how they operate. This knowledge is important for understanding the value and significance of protecting and promoting such platforms.

The launch of the task force and its principles were seen as an opportunity to pave a strategic path forward and to coordinate with other international initiatives. Participants expressed the need for dialogue and engagement with stakeholders, as well as with counterparts in the ecosystem, to ensure well-informed policies and effective regulations.

The session ended with participants being encouraged to learn more about the task force and get involved. The importance of their role in contributing to the development and implementation of strategies to address the challenges related to trustworthy information online was highlighted.

In conclusion, the session covered various aspects related to the task force’s work on promoting trustworthy information online. It underlined the importance of balancing regulation and trustworthiness, the need for diversity in media and information, and the significance of multi-stakeholder engagement to address global threats and challenges. The session also highlighted the ongoing efforts by the United States and other countries to counter disinformation and promote reliable information online. Overall, the discussion emphasized the key role of collaboration between different stakeholders in building a more trustworthy and inclusive online information ecosystem.

Klara Therese Christensen

This analysis provides a detailed exploration of key points surrounding the role of the internet in relation to marginalized voices, information distortion, and the need for reliable information structures. One argument put forth is that while the internet presents opportunities for marginalized voices to be heard, it also brings about the potential for distortion and muddled reliability of information. This highlights the challenge of navigating and discerning credible information in the digital age.

Partnerships with civil society and the private sector are emphasised as vital in building reliable information structures. By collaborating with these sectors, it is believed that information can be better managed and disseminated. These partnerships can contribute to the development of robust platforms and frameworks that promote the availability and accessibility of accurate information.

Governments are seen as having a responsibility to create human rights-based ecosystems of information. This implies that governments should prioritize the protection of individuals’ rights to access and share reliable information. By ensuring the existence of a conducive environment for the free flow of information, governments can help to counteract the negative effects of misinformation and disinformation.

The analysis also discusses the need for sound regulation in managing online spaces. While it is recognized that regulation is necessary to curb harmful content and maintain order, it is crucial to strike a balance with the preservation of freedom of debate and active engagement. Finding this equilibrium ensures that online spaces remain open and democratic while effectively managing potentially harmful content.

Furthermore, community engagement is considered pivotal in determining and implementing appropriate regulatory measures. By involving and empowering communities, there is a higher likelihood of generating regulations that reflect the needs and perspectives of those affected by them. This participatory approach can foster more effective and inclusive governance of the internet.

The responsibility of large online platforms in content regulation is also highlighted. These platforms are seen as having a unique role in determining what content is published and how it is accessed. Given their influence and reach, the analysis suggests that these platforms should bear a responsibility to uphold ethical standards and prioritize reliable and reputable content.

The analysis touches upon the importance of government funding for the Global South and majority voices. Recognising the existing inequalities, it is argued that governments should allocate resources to support marginalised regions and communities, enabling them to actively participate and have their voices heard.

Noteworthy observations include the excitement surrounding the European Union’s efforts to regulate big tech. The EU is viewed as a potential model for global implementation due to the progress it has made in developing regulations that could serve as a reference for other jurisdictions.

The analysis also emphasises the necessity of collaboration with various organisations to engage in meaningful dialogue and foster improvement. By partnering with diverse stakeholders, there is a greater opportunity to address the challenges associated with information access and dissemination effectively.

In conclusion, this extended analysis highlights the multifaceted issues surrounding the internet’s impact on information reliability and the inclusion of marginalised voices. It underscores the importance of partnerships, government responsibility, sound regulation, community engagement, and the role of large online platforms. Moreover, it reflects the growing recognition that a collaborative and multi-stakeholder approach is essential for building reliable information structures and ensuring the availability and accessibility of trustworthy information online.

Alisson Peters

The United States actively promotes trustworthy information online and combats disinformation on a global scale. They support initiatives to address disinformation and emphasize the importance of digital media and information literacy in enabling individuals to freely express themselves and evaluate information. Additionally, the United States focuses on media resilience by bolstering the resilience of media outlets against legal and regulatory challenges. They support fact-checking and independent media initiatives, aiming to ensure citizens have access to accurate and reliable information.

However, there is concern about the misuse of power by governments to ban certain forms of expression. Governments around the globe claim broad powers to restrict freedom of expression, silencing peaceful dissent. Stakeholder platforms like the Internet Governance Forum (IGF) play a critical role in addressing threats to freedom of expression. These platforms are essential for finding solutions to challenges in the digital world.

The Freedom Online Coalition is a global platform working towards promoting trustworthy online information. It is important to strike a balance between promoting reliable information and upholding democratic principles. The task force’s efforts must not compromise democratic values.

In conclusion, the United States actively promotes trustworthy information online, supports initiatives to combat disinformation, and emphasizes the importance of digital media and information literacy. They also focus on media resilience and support fact-checking and independent media. However, there is concern about the misuse of power by governments to censor expression. Stakeholder platforms like the IGF are critical in addressing threats to freedom of expression. The Freedom Online Coalition promotes trustworthy information while upholding democratic principles.

Ivan Sigal

In the analysis of the given text, several key points are highlighted. Firstly, it is emphasised that online spaces should be open and interoperable, and that user agency is crucial. This means that individuals should have the freedom to access and engage with online platforms and content and have control over their online experiences. The argument is made that the healthy promotion of a wide range of participation is critical in the internet space.

Promoting voice and expression is identified as another important aspect of online spaces. It is suggested that critical thinking about how institutions and media are built is necessary to achieve this goal. The internet’s history of friction and unequal access indicates that creating spaces where people can participate more or less equally requires a proactive effort and careful consideration of the diversity of media sources, how they are funded, and how they are sustained.

Ivan Sigal, along with organizations like Wikipedia, Global Voices, and Witness, values citizen-generated participatory internet as the core of trustworthy online information. These organizations are seen as starting from an open knowledge perspective and working with communities for whom being online is not easy. However, the break in trust around large social media platforms is identified as a significant challenge.

The potential impact of internet regulations on small and medium-sized non-profit initiatives is a concern. It is argued that regulations being implemented in many global north countries could make it either impossible or expensive for civic-oriented initiatives to create new platforms.

The need for trustworthiness and authenticity in information sharing is emphasized. Global Voices and Wikipedia are highlighted as examples of initiatives that aim to create and share trustworthy information. It is stated that these initiatives are seen as a civic act by many.

Furthermore, the analysis acknowledges the pervasive and complicated bias in news framing. It suggests that news organisations alone are not sufficient to provide all the different kinds of information required in the world. Therefore, alternatives that allow easy entry into an information space and enable the addition of a diversity of voices are needed.

The importance of including a participatory side in regulatory processes is emphasized. It is argued that previous principles have not adequately emphasized this aspect. The analysis suggests that reestablishing the participatory side is crucial to make effective regulations.

The issue of disinformation is also discussed, highlighting its intentional misleading of people and groups. It is noted that disinformation affects many communities in multiple languages. Additionally, the distinction between misinformation and disinformation is highlighted, with the former characterised as ignorance and the latter as deliberate lying.

The analysis also touches upon the need for better information in other languages, particularly for marginalized groups. Initiatives such as Rising Voices, which work with indigenous and marginalized groups to identify languages and support the creation of their own trustworthy information sources, are valued.

The importance of including community voices in conversations is stressed, particularly those from communities that traditionally have less power and resources. The analysis suggests that these communities should not be ignored, and their voices should be included in discussions.

Overall, the analysis advocates for open and interoperable online spaces that prioritize user agency and promote voice and expression. It underscores the importance of proactive efforts to build equitable spaces, address the challenges related to trust on social media platforms, and consider the impact of regulations on non-profit initiatives. It highlights the need for trustworthy information, alternative news sources, and multilingual support. The analysis also underscores the significance of including a participatory side in regulatory processes, distinguishing between misinformation and disinformation, and valuing community voices.

Jan Gerlach

The discussion revolves around the topic of internet regulation and its impact on online spaces. Several key arguments are presented, highlighting the potential negative consequences of centralizing power over online speech and content trustworthiness in the hands of platforms. The Wikimedia Foundation argues that regulation is pushing the decision-making authority on online content to platforms, which raises concerns about the consolidation of power and the potential for biases.

Another argument raised is that excluding people from participating in online knowledge spaces can promote misinformation. It is suggested that when individuals are prevented from engaging in these spaces, the void left behind is often filled with inaccurate and misleading information. The discussion emphasizes the importance of a participatory approach in knowledge spaces as it is seen as essential for promoting peace, security, and combating misinformation.

In contrast to the centralized approach, the conversation encourages regulations that empower communities to make decisions about online content. Jan Gerlach argues for a decentralized approach to internet governance, advocating for regulations that distribute decision-making power among various stakeholders rather than concentrating it solely in the hands of platforms. This approach seeks to ensure a more inclusive and diverse representation in shaping the online environment.

Other noteworthy points include the concerns about laws that make knowledge more expensive, which are viewed as potentially limiting access to information. Furthermore, the discussion highlights the negative impact of regulations that primarily benefit big media houses at the expense of independent journalism and individuals in conflict zones.

The significance of collaboration and sharing best practices is emphasized to safeguard people’s ability to contribute to online spaces and tell their stories. The engagement of governments in conversations about online spaces and freedom of expression is also welcomed, showcasing the importance of multi-stakeholder involvement in shaping internet policies.

The role of Wikipedia is highlighted as an “honest broker” in supporting journalism and promoting information integrity. Moreover, the organization serves to educate policymakers about the mechanisms and functioning of Wikipedia and the potential effects of different regulations on global online spaces. This education aims to increase awareness and ensure more informed decision-making processes.

The establishment of a task force and the associated principles is considered essential for coordinating responses to challenges related to information integrity. This initiative brings together governments, civil society, and proactive private actors to strategize and coordinate processes that promote information integrity in online spaces.

Finally, the conversation encourages individuals to actively engage and join communities like Wikimedia, contributing to their development and understanding how systems like Wikipedia and citizen journalism work. It emphasizes that organizations like Wikimedia exist to support these communities, underscoring the collective responsibility in creating and maintaining diverse and accessible online spaces.

In conclusion, the discussion on internet regulation and online spaces highlights the potential negative consequences of centralization and exclusion. It calls for a participatory approach in knowledge spaces and regulations that empower communities. The conversation also raises concerns about laws that make knowledge more expensive and regulations that benefit big media houses. Collaboration, government engagement, and the role of organizations like Wikimedia are seen as critical components in safeguarding people’s ability to contribute to online spaces, promoting information integrity, and supporting diverse and accessible online environments.

Session transcript

Moderator:
Because as you can see, we are a very small group, being the first session of the day, I believe. Thanks so much to everybody for joining today. The session is Safeguarding a Trustworthy Global Information Ecosystem, and in this session we are going to focus on the work of the task force on trustworthy information online and the launching of a set of principles by that task force. We hope it’s gonna be an interactive session. I think we’re such a small group, and a number of us are very deeply involved in this work, that I think it could actually be a strategy session for the task force, for the work going ahead and for the principles. So maybe to start with I could just give some context to the task force, and then we’ll move into opening remarks and dig into discussion. So the task force on trustworthy information online is a multi-stakeholder task force that has recently been launched in the Freedom Online Coalition. The task force is continuing the work of the Action Coalition on Trustworthy Information Online that was established by the Danish Ministry of Foreign Affairs, Wikimedia, Witness, Global Voices and Salesforce under the Tech for Democracy initiative by the Danish government. While in the FOC the task force is going to be chaired by the government of Denmark and the Wikimedia Foundation, and the Action Coalition’s intention was to identify solutions to support trustworthy information online, the objective of this task force will be to carry forward that work and propose policy recommendations for governmental institutions and lawmakers with the goal of safeguarding a healthy online information ecosystem. So that’s very broadly the task force, and later in the session we’re going to get into the principles that have been proposed and the work of the task force. But to start with, first we’ll have opening remarks from Allison Peters, the acting Deputy Assistant Secretary of State in the Bureau of Democracy, Human Rights, and Labor in the US State Department. Allison.

Alisson Peters:
Well, good morning to a bunch of very familiar faces and friends, and a sincere thank you in particular to our colleagues in the Danish government for their leadership in establishing the Freedom Online Coalition’s newest task force on trustworthy information online, and also to our fellow FOC advisory network members, the Wikimedia Foundation, for taking on the role of co-chair alongside the Danish government. As the chair of the FOC, we in the United States are proud of our partnership with both the government of Denmark and all FOC members, as well as the advisory network, to advance human rights online and an open internet that is interoperable, secure and reliable for all. Digital media and information literacy empowers people to freely express themselves and arms individuals with the knowledge and skills to communicate and critically evaluate information. The United States is promoting trustworthy information online by bolstering our support for initiatives to address disinformation globally, from fact-checking initiatives to media literacy, while at the same time we seek to also bolster independent media globally. We’re promoting and protecting open and resilient information ecosystems by addressing critical needs for at-risk journalists, fostering the long-term sustainability of independent media outlets, enhancing the impact of investigative journalism and bolstering outlets’ resilience to legal and regulatory challenges, including through our journalism protection platform. And I’ll note here we’re very proud members, as are the government of the Netherlands and the government of Denmark, of the Freedom Online Coalition, and we are going to continue to work through that global platform with our partners and allies to advance these efforts. I will note for this conversation, and I think for the broader community here at IGF, that we really have to continue to be mindful that our approaches to promoting trustworthy information online, including our efforts to counter disinformation, do not inadvertently undermine the bedrock principles that undergird democracies, particularly fundamental freedoms and freedom of expression both online and offline. We’ve seen how governments around the globe continue to claim for themselves very broad powers to ban certain forms of expression, all too often misusing that power to repress peaceful dissent and silence the voices of independent media, civil society activists, human rights defenders, dissidents, and members of religious, ethnic, racial and other minority groups around the globe. That’s why platforms like IGF are so critical for us to continue to bring stakeholders together to address these threats and challenges and strengthen our resolve to tackle them. So again, I just really want to thank you all for being here bright and early for what is a really critical conversation. This is just the start of the conversation, not the end, in our work in the Freedom Online Coalition, and we look forward to an exciting year and years ahead for this task force. Thank you guys so much.

Moderator:
Thanks so much, Allison, and it’s great to hear the number of approaches the US government is taking to foster trustworthy information ecosystems; I think that really underscores the importance of taking a multi-pronged approach to this. And so maybe to just start the session, first I wanted to introduce our other panelists. We have Jan Gerlach, the director of public policy from Wikimedia; Ivan Sigal, the executive director of Global Voices; and Clara Christensen, head of section at the Danish Ministry of Foreign Affairs. They all fill different seats, private sector, civil society and government, which I think is great, because it’s important that we bring different perspectives to this conversation. And maybe to start with, it would be wonderful to hear from each of our panelists about what you see as the key challenges to fostering a trustworthy information space and how the work of the task force can help address these challenges. And maybe we can just go down the line starting with Jan.

Jan Gerlach:
Yeah, hi, everybody. Key challenges is what you asked for, I guess. So my name is Jan, I’m at the Wikimedia Foundation. We are the nonprofit that hosts and operates Wikipedia and supports a global set of communities that build Wikipedia and other free knowledge projects. And from our perspective, key challenges right now are, I think, a trend towards consolidation of power over speech online that is actually driven by lots of governments that seek to promote freedom of expression. And we’re seeing regulation that unfortunately pushes the power to make decisions about what content should be online and what is and isn’t trustworthy onto platforms, whereas this knowledge is really held by communities around the world, and if we prevent people from participating we’re really not doing ourselves a favor. I wrote down a few notes this morning, and I was really thinking, you know, when you prevent half the world from participating in knowledge spaces, this is actually just also a matter of peace and security, to make a really drastic statement here. When half the world is prevented from joining conversations and deciding what is and isn’t trustworthy, then that void will be filled with misinformation. And I think that’s a humongous challenge for all of us, especially in the age of generative AI that is powered by knowledge that is out there on the internet. And when half of that knowledge is not true, is not verifiable, is not trustworthy, then we all have a big problem. And I think that’s sort of the challenge that we’re looking at right now.

Moderator:
Yeah, thanks for that, and I think that echoes a lot of what Allison was saying as well in terms of governments asserting power and control over access to information, access to different types of information. I think you also see this from a commercial perspective as well, in terms of how companies are curating the information that we have access to. Ivan, it would be wonderful to hear from you. You do citizen journalism. From your perspective, what do you see as the challenges?

Ivan Sigal:
Good morning everyone, I’m Ivan Sigal. I’m the executive director of Global Voices. Global Voices is a large community of writers, translators, and digital activists, mostly based in and focusing on global majority communities around the world. And we are coming up on our 20th anniversary this year. So we’ve been practicing the art of identifying and finding accurate and trustworthy information in online spaces, but with a particular attention to equity and diversity of voices and languages: asking whose knowledge and whose perspectives matter, who do we hear, how are individuals represented, and how do they represent themselves in online spaces, for a very long time now. And interestingly, the basics haven’t changed that much. The core question, I think, for a trustworthy online information space is still this: you have to have an open, interoperable network that has something like a common carrier system, and you have to have user agency. That’s the first step. And then the second is, across society, a healthy promotion of a wide range of participation, because a dominant mode of expression or a dominant way of thinking about the internet is that it’s frictionless, it’s easy, and that openness somehow equates to the availability for everybody to do anything in online spaces. But when you actually think about the internet in the context of history, you realize that friction, participation and access have always been inequitable, and the effort to find and build spaces where people can participate more or less equally is actually a lot of work. It takes a lot of effort, a lot of time, to create spaces where people can come together and talk in an equitable way, and that’s a lot of what we do. And I think that that kind of promotion of voice and promotion of expression requires thinking carefully and critically about how institutions of knowledge are built, not just about freedom of expression and freedom of media, but also about whose media: thinking carefully about the diversity of those sources, about how they’re funded, how they’re sustained, and so on and so forth. I agree with everything said so far from Jan and from Allison, so I’ll stop there for the moment and we can continue.

Moderator:
Yeah, thanks, Ivan, and I think that’s a really important point: the internet creates a number of opportunities to create equal spaces, but we have to have the intention, when we actually build those spaces and use them, to have them be equal. Clara, maybe from your perspective as a government, what are the challenges to a trustworthy and safe information environment?

Klara Therese Christensen:
Now, yes, you can hear me, great. Hi, good morning everyone, thank you so much for showing up. My name is Clara Christensen and I’m part of the tech ambassador’s team at the Danish Ministry of Foreign Affairs, and I’m pretty new to the whole tech agenda. I just started this August, so I’m really excited to be here and be part of this discussion. And first and foremost I want to thank our friends and colleagues in the Freedom Online Coalition, and especially the chairship of the US, and how you sort of carried this task force forward. I think this is really exciting for us to see from the Danish perspective, and I’m really excited to be here today, because I think that online information is shaping our world and our realities, and that’s why we need to build healthy online information systems. And while, as we’ve heard, you know, this is sort of an opportunity to give voice to marginalized groups, to people who normally wouldn’t have a chance to participate, the online forum can definitely also distort information and sort of make it harder to navigate what kind of information is trustworthy and what is not, and this is why we need to build reliable information structures in partnership with civil society and with the private sector. And I think this is sort of one of the Danish key values, that we need to build these things in partnership. Yeah, so I’m really happy to be part of this task force together with Witness, Global Voices, Wikimedia, Salesforce and the Freedom Online Coalition. I think it’s gonna be a great discussion, and I’m happy to see this sort of growing out of the Tech for Democracy initiative that we launched two years back. Happy to see it grow, this is exciting. And I think, sort of as a government, we do have a responsibility to try to build human rights based ecosystems of information, and that also means regulation. And I think definitely there is a tension between, as we talked about, you know, some governments maybe wanting to take a lot of control over these online spaces in a way that might not be very conducive to a free debate and active engagement, and on the other hand the government taking a role in trying to provide some sound regulation. And we have to do that in partnership with the private sector, with civil society, with our community, to try to make regulation that works, that actually matters, and that can provide trustworthy information. So I think this is going to be exciting, sort of talking a little bit about how we do that and how we actually engage with, you know, the communities to make sure that we do this in the right way. Yeah, and I’m so happy to see these principles being launched today. I think this is really a good foundation, and I’m happy to talk about how we put them into action and how we actually build on these principles to try to have more trustworthy information online. I think that’s it for me.

Moderator:
Thanks, Clara. So as you said, the first part of the work of the task force is really the launch of these principles. It’s a core set of principles to guide the work that it will be doing. There are three principles; I think everyone’s got the paper in front of them: meaningful multi-stakeholder engagement, protect and promote international human rights standards, and a diverse, trustworthy and equitable internet. And since we have a very small group, many who are already familiar with this work, maybe we can spend some time just really digging into these principles. But first, I don’t know, Jan or Ivan, if you want to talk a little bit about the background, what went into developing them, some of the thinking behind these principles, since you were connected to the coalition as well.

Ivan Sigal:
Yeah, sure, I’ll happily do that. So something that really attracted me to this particular group is that on the nonprofit side we had Wikipedia, Global Voices, and Witness: three organizations that I think have an unusual perspective on what it takes to actually build trustworthy online spaces and trustworthy online information, because they have started from an open knowledge perspective and from working with communities for whom being online is not necessarily an easy thing, especially in the context of, say, Witness’s work and some of Global Voices’ work. But that kind of idea of a citizen-generated participatory internet is the core of a somewhat now almost naive and older idea that has since been commercialized and now sits broadly across all societies, as opposed to building communities with intention. And these three groups are all communities built with intention. So working with them is, to me, a really great place to assert or reassert a set of values as to what it actually takes to try to build trustworthy information spaces and open knowledge. And so I’m super happy that we’re doing it in this way.

Jan Gerlach:
Yeah, and I think to add to that, Ivan actually alluded to it: it’s not a given that people can contribute to these spaces, right, and can tell the stories from the world around them, from their communities. I want to emphasize that also adding knowledge to Wikipedia is not a trivial task in many places in the world. And not just because connectivity is a problem, but actually it might be dangerous to just document the places that you inhabit, in places where freedom of expression is not upheld or where governments are actively trying to suppress certain information about how their countries run, right? And that is why, again, it is very, very important that these groups come together, organizations like ours, to share best practices and, I think, strategic thinking, and why these spaces here are really important for us to come together, and why I think the engagement of governments is just so welcome, right, who need to understand how their actions in, say, North America, in Europe, in the global north, how their regulation actually affects people elsewhere too, and enables them or empowers them to participate, or, in the worst case, actually prevents them from doing so. And that’s why I think we’ve happily joined this task force, because this is a great forum to raise these issues.

Moderator:
Thank you for that. And so, I mean, there’s three principles. The meaningful multi-stakeholder engagement, which is focusing on, I think, a lot of what you were saying, Ivan, about the importance of having different stakeholders come to the table to inform the design, development, deployment, and evaluation of technologies. I think it’s interesting that this has standards and protocols relevant to the information ecosystem, which gives an important nod to the technical community. And working together to protect human rights and democracy on the front lines. Then protect and promote international human rights standards, so ensuring that regulation is in line with international human rights standards, strengthening privacy and data protection regimes across the world. And a diverse, trustworthy, and equitable internet, prioritizing a free, open, transparent, interoperable, reliable, safe, and secure internet. And so, I guess my first question is, are there any reactions to these principles as they sit right now? My understanding is that the task force will actually be fleshing them out quite a bit more. So, first question to everybody in the room is, are there reactions to these principles? They seem on target. Um. I don’t know.

Ivan Sigal:
I’ll just say really quickly, it’s a really interesting moment to try to do this because, as you said, and several speakers have said already, many governments are thinking about how to regulate the internet much more actively now, and not just regulation from a repression standpoint, though that is certainly happening. We also see lots and lots of attempts from global north countries trying to think about how to regulate, especially, the platforms and the big tech companies in ways that are potentially really complicated and difficult for small, medium-sized, citizen-driven initiatives or non-profit initiatives, or that potentially rebound in ways that make it impossible or extremely expensive to create new kinds of platforms that are civic in intent rather than commercial in intent. And at the same time, we have seen something like a break in trust around the large social media platforms. That’s been true for years, but the last two or three years have been really intense in that regard, which is both a big challenge and also a huge opportunity for us to reset, potentially, or rethink ways around instantiating and supporting these communities that have a core set of civic values in their approach to online participation, the creation of community, the creation of knowledge, the creation of information. So when we think about these statements, I think that’s where we’ve been coming from as a group. And you’ll see that not many of the previous sets of principles that have been launched over the years have really emphasized this participatory side. And I think that’s really important for us, to kind of reestablish that side of it as well as the other part. So thanks.

Moderator:
Yeah, and I agree. I think we are seeing a rise in platform regulation that can have either intentional or unintentional impacts on platforms if it doesn’t really speak to the business model or the way that the platform functions or the services that are offered and can have unintentional consequences for the rights of users. And so I guess there’s two approaches. We were thinking that we would have a larger group. We were thinking maybe we would go, each person would take a principle and talk about it, talk about why it’s important, what it might mean in practice and how it could guide the work of fostering trustworthy information ecosystems online. So we could do that or we could talk about maybe a little bit more tangibly how the task force can apply these principles to the work that it’s doing, what might be the priorities of the task force going forward. It would be great to have input of what others think the priorities of the task force should be as it starts to work within the FOC. So I don’t know if there’s a preference between those two approaches. Yes, we are. This is fully interactive. So please, questions, comments.

Audience:
Thank you. I’m Keiko; I was formerly at Global Voices, so I’m very happy to be here. It is great to see the work of Wikimedia and Global Voices on the Coalition working towards the trustworthy global information ecosystem. And I see the panel seems to sort of reify this approach to a global ecosystem in terms of its diversity and inclusion, where many of us are present. And I was wondering, because a lot of the disinformation and its harms are happening in areas outside of the Western-centric approach, how you are going to sort of scaffold the way many of these communities are shifting from oral cultures to digital cultures. And the impact of disinformation is not limited to cyberspace; it comes into the lives of people in different languages. And that is why I think places like Global Voices and Wikimedia, with all these people contributing their time and efforts in other parts of the world, are so important. Thank you.

Moderator:
Yeah, are there other questions or comments, input into perhaps challenges that you see in the information ecosystem that the task force could concentrate on? Go ahead.

Audience:
Hi, good morning. I’m Nick Beniquista. I’m from the Center for International Media Assistance at the National Endowment for Democracy. Look, the principles look fine. I’d say, if anything, they look a little innocuous; no one’s gonna disagree with these. And we work on media development as a kind of an approach to information integrity, and have argued that, over the years, we need systems-level, really pretty major interventions if we’re gonna fix the problems that we have in the information ecosystem. So, things that affect how eyeballs and money are being moved through the digital ecosystem. This includes things like Pluralis in Europe, trying to really bring massive amounts of private capital to bear, trust initiatives that are trying to really change the economic incentives for quality information online, and many, many others. And of course, policy interventions like bargaining codes that could really transform things. It’s imperfect, I know, but we’re looking to all these options. In that context, the sort of participatory, citizen-driven approach seems a little quaint, and just to be provocative: I mean, Wikipedia and Global Voices are incredible, you’ve done incredible work over the years, but faced with these sort of systemic-level challenges, how does your vision for a kind of participatory approach still matter?

Jan Gerlach:
Sure. I think it matters more than ever, probably. And I guess I need to say that, but I do believe in it as well. You’re talking about sort of changing incentives, economic incentives around eyeballs, and you’re probably alluding to supporting journalism, and I think Wikimedia can be sort of an honest broker in there, as in: if stories go away, if local journalism isn’t funded, isn’t sustainable, regional journalism, those stories cannot be on Wikipedia, right? Wikipedia is not a place for original research, but every edit, every article refers to sources out there that are verified by the people who work on Wikipedia. And that’s why we have a very strong interest in the media landscape being healthy and being diverse, right, for these stories to not just be sort of driven by engagement, as you mentioned, but really documenting the world and being trustworthy. And every story that goes away, or goes behind a paywall, is not accessible for many people around the world. We understand that journalism needs to be funded, media work needs to be sustainable, but we really have concerns about laws that basically just put a larger price tag on this knowledge per se, right? And so I think there’s a role for governments to play there, there’s a role for independent initiatives, but I think the answer cannot be, let’s move money away from all platforms and make it harder for non-profit platforms even to share and carry this knowledge, and move it to, say, big media conglomerates, right? And that’s, I think, what we’ve been seeing around the world, how this has been happening, right? It’s not independent journalism that ultimately benefits, it’s not your person somewhere in, say, a conflict zone who ultimately benefits, but it’s usually the big media houses that we see sort of pushing this kind of regulation as well. We’re really worried about that, but we see ourselves sort of as an honest broker in the middle, right? We know this must be accessible, but it must also be sustainable to actually work as media, right? And that’s why this is, I think, a super important space for us to engage in, and we welcome the question.

Ivan Sigal:
Let me just add that both of these organizations are part of a process of field building, so it’s not just about Global Voices and Wikipedia; it’s about a whole universe of people who see it as a civic act to create and share information that’s trustworthy. And that is not only about media creation; it’s also about knowledge building outside of the news. And, you know, that’s what CIMA does, you focus very much on the news and the professionalization of it. It’s really important to say that one of the reasons projects like ours got started is because of pervasive and complicated bias in news framing. That’s a history of the news media from the last 50 years. I mean, it’s not the case that news organizations are adequate or sufficient for all the kinds of information we need in the world. We do need a diversity of voices, a diversity of perspectives. And in many countries around the world, as you know if you work in the media development field, it’s been very, very hard to get that kind of diversity, even when there is financial sustainability. And so creating alternatives that allow people to have easy entry into an information space, to be able to build their own systems, their own communications platforms, their own communities, whatever initiative they might create that helps to add a diversity of perspectives and voices and more information coming from more places, is a good thing. It is not a zero-sum system. And like, yes, Global Voices is small, but we’ve had about 8,000 people participate with us and we’ve had hundreds of media partners over the years, and we typically work with about 50 at any given time. So it’s not by itself maybe as significant as you’d like it to be, but it is part of a larger way of thinking about how information works. And I think that kind of story is really important to maintain and sustain and grow. And there’s no reason why it can’t keep growing as long as there’s a fundamental framework to allow it to be true. And so that’s why sometimes these, as you acknowledged, very basic ideas, these very basic principles, need to be restated, because the alternative, which is that we build a regulatory process that’s all about big technology versus large media outlets, which are basically competing for access to information and to advertising dollars, takes the civics out of the equation. And so we’re here to try to make sure that the civics stays part of the equation.

Moderator:
I don’t know if there were additional thoughts on the comment that you made, which I understood to be about voices, multi-stakeholder voices, and maybe the power of voice as well.

Ivan Sigal:
I mean, I can address that really briefly as well, which is just: yes, you’re absolutely right, Keiko, about how disinformation does affect many communities in many languages. And I think it’s very important to make a clear distinction between misinformation and disinformation as well, by the way: misinformation, which is, in other words, generally ignorance, and disinformation, which is lying, which is intentional misleading of peoples and groups. We certainly see a lot of that, and thinking about how to buttress or support better information in other languages, in a whole range of languages, is a big part of what we do. I know Wikipedia also does that. We have an initiative called Rising Voices, which works with indigenous and marginalized groups to help identify languages and to help them build their own trustworthy information sources. And lots of others have that kind of activity as well. And I think it’s super important to keep putting an emphasis on that type of project, to stand in opposition to free-floating disinformation. Thanks.

Klara Therese Christensen:
So yeah, no, I think, just commenting on some of your thoughts on regulation: what’s the role of regulation? I think we need to distinguish between the very large online platforms and how we regulate them, versus the more not-for-profit or smaller platforms and how to give access to multiple voices, and then also recognizing that very large online platforms do have a special responsibility for what kind of content comes online and how you access it. And I think that has to be coupled with, for example, funding from governments to support Global South, global majority voices, to make sure that we try to create a more open space. And I think that’s some of the things that, for example, the Danish government is also trying to do through partners, through Access Now and International Media Support, some of these organizations that we’re partnering up with to try to make this a more open space where more voices can be heard. Because I definitely agree with you that this is something that we see as a big challenge. And sometimes, sitting in a government position up somewhere in Europe, it can be really hard and challenging to see where we have the blind spots that we have, and where we are sort of restricting information and restricting the debate. So I think that’s, for us, super important, to partner up with organizations like yours, to engage in that conversation and to get better. Then, of course, we have the whole sort of EU regulation, like a lot of regulation coming out of the EU right now, which I think is, for me, super exciting and interesting to see, because, I mean, Denmark, as a small country, we don’t do a lot of regulation ourselves on very large online platforms, for example. And seeing how the EU is trying to build some regulation, but without having a lot of big tech companies and big online platforms itself, and how the EU is trying to make some regulation that could be used worldwide, but still grappling a bit with how to do that in a way where we still take into consideration the different local contexts in the global majority and outside the EU, I think it could be really interesting to also hear some perspectives on how you see that, how we’re doing that, if it could be better, and how we, as a small country like Denmark, could engage in that discussion also in the EU and what we should bring to the table. I think that would be really interesting to hear from everyone here, and yeah, also from the panel. Yeah, that would be great.

Audience:
Hi, my name’s Michael Karanikolas. I’m the Executive Director of the UCLA Institute for Technology Law and Policy. These look really good. It strikes me that all three of these principles pose a challenge to traditional concentrations of power. Interoperability poses a challenge to large online platforms. Human rights standards restrict what governments might wanna do. And multi-stakeholder engagement, and I’m academic slash civil society, multi-stakeholder engagement is great for civil society because it gives them a seat at the table, but where it’s meaningful, obviously it restricts authority among governments to just take the actions that they wanna take, and companies to take the actions that they wanna take. So I guess my question is, have there been early responses from governments and industry? Is there a strategy for developing buy-in among the players whose power would be eroded by the adoption of these standards? Is that what we’re doing now, developing that strategy? How do you make these actionable by generating will to move towards them among the people for whom it’s not necessarily in their immediate interests to do so? Hi, my name is Guus van Zwol from the Dutch government, the Dutch MFA. Thank you for a great presentation. I mean, this is an issue where we’re very happy, as an FOC country, that this topic is being taken up. We think it’s a very important topic. That’s also the reason why last summer we presented, together with Canada, the Global Declaration on Information Integrity, which I think mimics a lot of these same principles, but is maybe a little bit more detailed. I’m just wondering, I mean, my question is the following. Now that we’re doing this work within the Freedom Online Coalition, I mean, this is a topic that’s also high on the UN agenda, with the UN Code of Conduct, for example, which is part of Our Common Agenda, and UNESCO has promoted their Internet for Trust initiative. And my question would be, how are we going to operationalize or promote these principles in those fora? Because that will be, I think, one of the key challenges that we see, which would also provide a certain rationale or pretext for other countries to start regulating more, not these international fora, but the social media companies, et cetera, et cetera. So my question would be how we’re going to operationalize these principles and how we’re going to organize ourselves in order to also address those international fora, since the FOC is, by definition, a diplomatic coalition.

Moderator:
Yeah, thanks. Maybe just to summarize, because a couple of different threads have emerged. One is the question of what's next with these principles: is there going to be buy-in, and how are they going to be used? My response right now is that the principles are meant to lay the foundation for the work of the task force, which has just been launched within the Freedom Online Coalition. The strategy around how these principles are going to be used is being built and developed, and this is the starting point: this is the foundation the task force is going to be working from. Another question, Guus, to what you were pointing to, was how we are going to coordinate with other initiatives that exist around information integrity, trustworthy online ecosystems, et cetera, and how we are going to promote the work of the task force and the principles in key international forums, debates, and processes happening at the international level. I also heard a number of suggestions of what is needed to create a safe and trustworthy information ecosystem: taking a systems-level approach, ensuring that it is participatory and citizen-driven, and ensuring that regulation is human rights-respecting and tailored to the platform. And also a number of challenges that individuals are facing at the local level with respect to the impact of disinformation. So maybe those are the different threads, and I don't know if there are any responses from the panel, or thoughts from other members of the audience who would like to build on some of those threads.

Jan Gerlach:
Well, I see the creation of the task force and the launch of the principles today as an invitation to help figure this out. I think we have to be honest here that there's no clear strategic path forward yet, and I guess this speaks to the challenge of having all these processes that are somewhat loosely related, but where the coordination and connection isn't always so clear. Having a task force that actually brings together governments and civil society, and hopefully also really proactive private actors, can help as the coordination group that maps these processes and coordinates how we all speak with one another, and maybe with others that we need to bring along. From a Wikipedia perspective, our team's main task is often to educate people about how Wikipedia actually works. Everybody uses it, but nobody really knows what's under the hood. Once we start educating policy makers and governments about that, they say: oh, wow, I didn't know this; this is something we should be protecting. We see this as an opportunity to do exactly that in an FOC context: to bring along governments who have very lofty diplomatic goals, to get them engaged on this, and, through diplomatic briefings, to help them also understand what's at stake elsewhere. It's one thing to say, yes, the EU is regulating the online spaces, and it's also still learning how to do this a little bit; it's another to show the real effects that some of these regulations have in places where Wikipedians sit in the global South and are affected by them, affected by a mechanism that forces platforms to remove content, or by laws to retain data. Having this task force as a focal point where these conversations happen is, I think, the strongest proposition that it has.

Moderator:
Reactions or thoughts? We’ve got four minutes left.

Ivan Sigal:
Just to make a final comment. Well, I want to say thank you; I thought your point was very clear and very helpful. All of these three points are in some ways a challenge to traditional stakeholder positions, and embedding that challenge within the framework of an intergovernmental group is itself a strategy. It is a way of talking about those challenges, and of bringing in communities that traditionally don't have a lot of power and are traditionally dispersed, and because they're dispersed, it's very, very hard for them to organize around some kind of considered position, and then presenting that in a framework where it is in dialogue with entities that have the potential, at least, to think about regulation and about supporting those positions. Look, this conversation has been going on for a very long time: attempts to build principles, attempts to build coalitions. The Web Foundation's Web We Want project was 12 or 14 years ago, and there are older projects as well with a lot of the same kind of language. They tend to disintegrate because there isn't a formal structure for maintaining and supporting them that engages with any kind of regulatory process. I was just sitting here doodling on the different domains of authority and knowledge where these issues take place: speech, privacy, antitrust, content moderation. Four different domains of expertise that often have conflicting goals, conflicting ends as to what they would like to see as an ideal regulatory environment or an ideal solution for some of the problems we see, and sometimes even fundamentally different understandings of what the problem is. I think our basic goal here is to make sure that the voices of the communities we work with are included in those conversations and not ignored, not skipped over because we have less power, potentially, or fewer resources, or because we don't have a profit motive underlying our activities. So I'll stop there and let you guys continue.

Moderator:
Thank you. I think we've got one question.

Audience:
One comment that I need an answer to, because I represent the Internet Governance Initiative of Sri Lanka. At the moment, there is a proposed bill regarding internet safety in Sri Lanka, which has almost completed its first reading in parliament. It is mostly framed around internet safety, but it creates regulations enabling censorship and a fragmented internet, and it is also harmful for platforms, media, and users. So when these kinds of issues come up, where do you stand? How do we reach you, and what action can we take, as people in the developing world? Thank you.

Moderator:
Yeah, thank you so much for that: one, for highlighting the upcoming bill in Sri Lanka, but also for flagging the concluding question of the panel, which is next steps, how people can stay connected to the work and get in touch. So maybe I will hand over to Klara, and then, Jan, if you could speak to the next steps and how people can stay connected to the work of the task force. But before that, did you have any concluding remarks or reactions to anything that's been said?

Klara Therese Christensen:
Yeah, thanks. Sorry, I also just wanted to comment on this issue of giving up sovereignty or authority when you work in this multi-stakeholder approach. I do think that is, of course, a challenge, but I also think this is the only way to build good, reliable regulation that actually gets implemented. If we don't have buy-in from the private sector, for example, it is super hard to make sound regulation that actually has an impact. That's why it's so important, and something we work for from the Danish side, to include more private sector engagement and more civil society engagement, to make sure that when we do make regulation, it is well informed and has the buy-in to actually work out in the real world. So I do think this is a very good example of why this is difficult and why it takes time, but also why it is the only way we can do it: states and governments can only do so much, and if we don't have buy-in from the rest of the ecosystem, it's going to be really difficult to create more trustworthy information online, because the internet is not only regulated by governments; it is so big, and it lives beyond the sovereignty of states. That provides some challenges, but also some really great opportunities, and it forces us into deeper dialogue with some of our counterparts. And I do think that is some of the important work we should continue in this task force.

Moderator:
Yeah, thanks so much. Maybe on to the last question of what’s next for the task force and how people can stay connected?

Jan Gerlach:
Well, first of all, we're excited to officially launch this today, and as a task force we hope to grow and find many more people who want to contribute, so that's one way to stay connected and be part of this hopefully growing momentum. As co-chairs, we at Wikimedia are interested in people following the work; there are spaces for discussion, like public policy mailing lists. One way to be part of this is actually to become a Wikipedian, if I can do a shameless plug here, and just understand this better. That's really my whole point: we need people around the world to understand what is going on and how these systems work, how the citizen journalism space works, how Wikipedia works, how all these civic spaces actually function, and by joining them, you're making a huge contribution. Obviously, we don't want to make this all individual responsibility; that's why there are organizations like ours as well. But staying connected through the very communities that we support is one really meaningful way to help, because at the end of the day, we are here to serve them, and by directly joining them, you're doing very helpful work. So yeah, be part of this and try to stay connected in that way. Thank you.

Moderator:
So with that, I think we are out of time. Thank you so much for everybody’s participation and your inputs. And if you are interested in learning more about the task force or even participating, please do talk to one of us up here. Thank you.

Alisson Peters
Speech speed: 163 words per minute
Speech length: 515 words
Speech time: 189 secs

Audience
Speech speed: 169 words per minute
Speech length: 1091 words
Speech time: 388 secs

Ivan Sigal
Speech speed: 179 words per minute
Speech length: 2083 words
Speech time: 698 secs

Jan Gerlach
Speech speed: 170 words per minute
Speech length: 1645 words
Speech time: 580 secs

Klara Therese Christensen
Speech speed: 180 words per minute
Speech length: 1507 words
Speech time: 502 secs

Moderator
Speech speed: 168 words per minute
Speech length: 1929 words
Speech time: 690 secs
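As a quick plausibility check on the statistics above, the reported speech speeds can be recomputed from the word counts and durations, since words per minute is simply the word count divided by the duration in minutes. A minimal sketch in Python, using the figures listed for this session:

    # Recompute speech speed (wpm) from the reported word counts and durations.
    stats = {
        "Alisson Peters": (515, 189, 163),
        "Audience": (1091, 388, 169),
        "Ivan Sigal": (2083, 698, 179),
        "Jan Gerlach": (1645, 580, 170),
        "Klara Therese Christensen": (1507, 502, 180),
        "Moderator": (1929, 690, 168),
    }  # name: (words, seconds, reported wpm)

    for name, (words, secs, reported) in stats.items():
        computed = round(words / (secs / 60))
        print(f"{name}: reported {reported} wpm, computed {computed} wpm")

For every speaker the recomputed value matches the reported one, so the three figures per speaker are mutually consistent.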

Meeting Spot for CSIRT Practitioners: Share Your Experiences | IGF 2023 Networking Session #44

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

In the analysis, the speakers emphasised the importance of building bridges between different communities to contribute to an open, free, stable, and secure internet. They highlighted the need for increased interaction and adoption of each other’s languages and processes between network operators and cybersecurity specialists. This closer collaboration would facilitate a more effective response to incidents and enhance overall information sharing in the field of cybersecurity.

The speakers also stressed the significance of finding a balance between security and stable communication. They acknowledged that while security is essential for protecting networks and data, it should not hinder the smooth flow of communication. Striking this balance ensures that individuals and organisations can communicate freely while maintaining a safe online environment.

Cooperation at both the national and global level was identified as highly beneficial for internet security. The analysis indicated that different regions have various experiences that can be shared for mutual benefit. Adopting a “defend locally, share globally” approach contributes to wider global security and promotes cooperation in tackling cybersecurity challenges.
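To make the "defend locally, share globally" idea concrete, the sketch below shows what a minimal shareable incident record might look like. The field names and the TLP marking are illustrative assumptions rather than any particular CSIRT standard; real-world exchanges typically use established formats such as STIX or MISP events.

    import json
    from datetime import datetime, timezone

    def build_incident_record(incident_id, indicator, indicator_type, description):
        """Assemble a minimal, anonymised record that a CSIRT could pass to
        peers abroad. The schema here is purely illustrative."""
        return {
            "id": incident_id,
            "observed_at": datetime.now(timezone.utc).isoformat(),
            "indicator": indicator,            # e.g. an IP address or domain
            "indicator_type": indicator_type,  # "ip", "domain", "url", ...
            "description": description,
            "tlp": "TLP:GREEN",                # Traffic Light Protocol sharing level
        }

    record = build_incident_record(
        "CSIRT-XX-2023-0042",                  # hypothetical incident identifier
        "203.0.113.7",                         # documentation-range IP, not a real host
        "ip",
        "Source of brute-force attempts against local mail servers",
    )
    print(json.dumps(record, indent=2))        # serialised form a peer team could ingest

The point of such a record is that the locally observed indicator is stripped of sensitive context before being shared, which is one way to reconcile local legal constraints with global sharing.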

Furthermore, the speakers discussed how geopolitical issues can both challenge and strengthen the cooperation of Computer Emergency Response Teams (CERTs). While geopolitical tensions can potentially hinder cooperation, recent events have highlighted how the commitment to keeping the internet secure has strengthened certain relationships despite these challenges.

The analysis also highlighted the crucial role of sharing information in tracing the origins of cyberattacks. However, it was noted that this can be difficult due to factors such as local laws and regulations and the intersection between cybersecurity and national security. Despite these challenges, the speakers emphasised the importance of sharing information to effectively combat cyber threats.

Resource limitations were identified as a constraint on international cooperation. The analysis suggested that expert-level communication specialists are necessary for continuous monitoring and for making the most of available resources. Addressing resource constraints would facilitate more effective international cooperation in the field of cybersecurity.

In times of global crises, the speakers emphasised the need to continue information sharing. They viewed the COVID-19 pandemic as a blueprint for global information exchange during crisis situations. Even amid geopolitical tensions, the speakers concluded that the continuation of information exchange is vital to effectively address cybersecurity challenges.

Overall, this comprehensive analysis underscored the importance of building bridges between different communities, striking a balance between security and stable communication, and promoting cooperation at both national and global levels. It also highlighted the challenges and opportunities presented by geopolitical issues, the significance of sharing information, the constraints of resource limitations, and the importance of continuing information sharing during global crises.

Bernhards Blumbergs

A recent meeting addressed the importance of freedom, openness, and security on the internet. While acknowledging that achieving all three aspects simultaneously may not always be possible, participants stressed the need for ongoing efforts to strive for them. The argument put forth was that the internet should be a space that promotes freedom of expression, ensures open access to information, and prioritizes user security and privacy.

Regarding information sharing, participants highlighted its crucial role in the development and progress of the internet. Even during times of geopolitical tension, it was emphasized that continued information sharing is vital. Peter Koch from the German top-level domain registry specifically emphasized the significance of maintaining information exchange despite any underlying political conflicts. Additionally, the meeting discussed how the COVID-19 pandemic served as a blueprint for prioritizing global information exchange during a crisis, showcasing that challenges can be overcome to facilitate the flow of information.

The meeting also underscored the need to understand and prioritize device and personal security. Participants agreed that enhancing cybersecurity requires individuals to have a deeper understanding of device security and personal security practices. Furthermore, they recognized the essential nature of practicing good cyber hygiene at both personal and national levels to create a safer internet environment.

Importantly, it was emphasized that information sharing should not be restricted to specific layers within the internet infrastructure. Participants argued that information should be shared not only within the technical, operational, and strategic layers individually, but also between these layers. Building understanding and effective communication across different levels of the internet infrastructure were highlighted as crucial aspects of successful information sharing.

In conclusion, the meeting highlighted the importance of striving for freedom, openness, and security on the internet, despite the challenges of achieving all three simultaneously. It also emphasized the critical role of information sharing, particularly during periods of geopolitical tension and crises. Additionally, understanding and prioritizing device and personal security, along with facilitating information sharing across various levels of the internet infrastructure, were identified as key factors in creating a better and more secure internet environment.

Adli Wahid

Adli Wahid, a security specialist at the Asia-Pacific Network Information Centre (APNIC), is actively engaged with the CERT and CSIRT community in the Asia-Pacific region. This engagement allows him to interact with various stakeholders involved in cybersecurity, fostering collaboration and knowledge sharing.

Previously, Adli Wahid gained valuable experience working for Malaysia's national CERT and for a CERT serving a financial institution. These prior positions equipped him with a strong background in handling cybersecurity incidents and implementing effective security measures.

The importance of cooperation between CERTs and CSIRTs at both national and global levels is paramount, as it ensures a wider exchange of experiences and technologies to effectively combat cyber threats. By collaborating and benefiting from one another’s expertise, CERTs and CSIRTs can enhance their capabilities in dealing with cybersecurity incidents. Despite global problems and adversarial geopolitical issues, cooperation between these entities has actually been strengthened, showcasing their commitment to making the internet a secure and safe place.

Recent geopolitical issues have, in fact, strengthened cooperation between CERTs and CSIRTs. The analysis reveals that these tensions have heightened the commitment to collaboration, as stakeholders recognise their shared interest in safeguarding cybersecurity. By uniting, these entities are better equipped to address the evolving challenges in the digital landscape.

Overall, Adli Wahid's expertise and experience, combined with the increased cooperation between CERTs and CSIRTs, contribute to ongoing efforts to ensure cybersecurity at various levels. This highlights the significance of international collaboration and knowledge sharing in effectively tackling cyber threats and promoting a secure digital environment.

Masae Toyama

Masae Toyama, a cybersecurity practitioner, has drawn attention to the pressing need for increased representation of cybersecurity workers in internet governance forums. In these spaces, Toyama noticed a distinct lack of voice for professionals in the field of cybersecurity, and they encountered difficulty in connecting with others who shared similar backgrounds during previous forums. This experience prompted Toyama to recognize the necessity for a dedicated platform where cybersecurity meets internet governance.

Toyama firmly believes that cybersecurity practitioners play a fundamental role in upholding a secure and stable cyberspace. However, despite their significance, their presence and voices are not as prominently heard among the various stakeholders within internet governance forums. Drawing attention to this disparity, Toyama advocates for a stronger representation of cybersecurity experts within these platforms.

Toyama’s positive stance emphasizes the importance of creating a space where the intersection of cybersecurity and internet governance can be realized. By fostering a greater inclusion of cybersecurity professionals within forums like the Internet Governance Forum, the collective knowledge and expertise of the cybersecurity field can be harnessed to effectively address the challenges and concerns of internet governance.

In summary, Masae Toyama highlights the pressing need for a more robust representation of cybersecurity workers in internet governance forums. Their personal experience revealed a lack of voice for cybersecurity professionals, and they emphasize the essential role they play in maintaining a secure cyberspace. Toyama advocates for the creation of a platform where cybersecurity and internet governance intersect, in order to strengthen the presence and voices of cybersecurity practitioners within these influential forums. This perspective offers valuable insights into the ongoing dialogue surrounding the intersection of cybersecurity and internet governance and underscores the significance of including diverse perspectives in shaping the future of the digital landscape.

Moderator

The need for increased representation of cybersecurity practitioners in the Internet Governance Forum (IGF) is emphasised. Currently, few individuals with backgrounds in cybersecurity, such as those working at a CERT or otherwise actively involved in the field, participate in the IGF. This lack of representation means their voices are not heard as loudly as those of other stakeholders.

The session proposed by one of the speakers is recognised as beneficial for all participants. It aims to address the need for greater involvement and voice of cybersecurity practitioners in the IGF. Such sessions are expected to provide a platform for cybersecurity professionals to share their expertise and insights among the various stakeholders involved.

Networking sessions are also implemented to encourage participants to interact and discuss their experiences and views on cybersecurity. These sessions provide an opportunity for attendees to engage with individuals they may not have spoken to before, fostering collaboration and the exchange of ideas.

Building bridges between network operators and cybersecurity specialists is considered crucial for establishing an open, stable, and secure internet. Recognising that these two professions use different languages, mindsets, concepts, and processes, there is a need to bridge the gap between them. The bridging work done by individuals like Adli Wahid in strengthening the partnership between these communities is highly regarded.

Several challenges in the field of cybersecurity are identified, such as the obstacles related to information sharing. Cyberattacks are often unpredictable, making it difficult to trace their sources. In addition, local regulations and national security issues can complicate the sharing of information. These challenges need to be resolved in order to build strong collaborations and improve cybersecurity practices globally.

Resource limitations and the need for capacity building also pose significant challenges in the cybersecurity sector. Constant monitoring, particularly through cooperation with international entities, requires specialist skills. Given the link between cybersecurity and national security, enhancing capacity building initiatives becomes imperative.

The importance of information sharing and building trusted networks for message exchange is emphasised. It is not only necessary to share information within specific layers of cybersecurity but also between those layers. By doing so, a deeper understanding can be developed, contributing to a more comprehensive and effective cybersecurity framework.

Cyber hygiene, which entails understanding device security, personal security, and learning about cyberspace, is considered essential for maintaining a secure online environment. The responsibility for practicing cyber hygiene extends to all individuals, not just technical experts. By promoting the importance of cyber hygiene, stronger global communities can be built, further enhancing cybersecurity.

In conclusion, the need for greater representation of cybersecurity practitioners in the IGF is highlighted. Proposed sessions and networking opportunities aim to address this need, facilitating knowledge sharing and collaboration among stakeholders. Challenges related to information sharing, resource limitations, and capacity building are identified, emphasising the necessity for proactive measures. The significance of information sharing, building trusted networks, practicing cyber hygiene, and ensuring widespread understanding of cybersecurity principles are all crucial for creating a secure and stable cyberspace.

Hiroki Mashiko

The analysis highlights key points about NTT DATA, a prominent system integration company in Japan. NTT DATA has an internal Computer Emergency Response Team (CERT), NTT DATA-CERT, which is responsible for handling and responding to cybersecurity incidents within the company.

One notable fact revealed in the analysis is that Hiroki Mashiko works as a forensic engineer at NTT DATA-CERT, investigating and analysing digital evidence related to cyber incidents within the company. His role as a forensic engineer underscores his technical skills and expertise.

Another point made in the analysis is that Mashiko describes himself as more focused on technical aspects than on governance-related matters, suggesting that his strengths lie primarily in technical areas rather than in broader corporate governance. The analysis does not provide further information on his specific responsibilities or tasks within his role.

The analysis overall has a neutral sentiment, indicating a lack of strong positive or negative opinions or emotions. While it offers valuable insights into NTT DATA, NTT DATA-CERT, and Hiroki Mashiko, it does not draw any further conclusions or assessments beyond these observations.

To summarise, this section highlights NTT DATA and its internal CERT, NTT DATA-CERT, as well as Hiroki Mashiko's role as a forensic engineer, emphasising his technical orientation and the neutral sentiment of the analysis.

Session transcript

Masae Toyama:
May I share my screen? Thank you. All right then, before we get started, I'd like to ask the moderators to briefly introduce themselves. First, I'd like to pass the mic to Mashiko-san, and then, later on online, to Adli and Bibi. So off you go, please.

Hiroki Mashiko:
Hi all. Hello, I'm Hiroki Mashiko from NTT DATA-CERT. NTT DATA is one of the major system integration companies in Japan, and NTT DATA-CERT is the company's internal CERT. I'm a forensic engineer at NTT DATA-CERT, so I'm more technically oriented than governance-oriented. But as you may know, governance itself is strongly connected to my work as well. So I'm looking forward to hearing your opinions in today's discussions. Let's have a great discussion today. OK, thank you.

Masae Toyama:
Thank you, Mashiko-san. May I pass the floor to Adli?

Adli Wahid:
Yep. Ohayou gozaimasu. Good morning, everyone. My name is Adli Wahid, and I am with the Asia-Pacific Network Information Centre as a security specialist. I do a lot of engagement with the CERT and CSIRT community in this region, including helping to establish newer CERTs. In the past, I worked for the national CERT of Malaysia and for a CERT for a financial institution. So I'm looking forward to discussing and chatting with everybody today. Thank you.

Bernhards Blumbergs:
Thank you, Adli. So last but not least, Bibi-san, please. Minna-sama, ohayou gozaimasu. Welcome, everyone. Good morning. My name is Bernhards, but everyone calls me Bibi, so please follow these guidelines. I am here in Japan at the Nara Institute of Science and Technology doing my postdoc, but I am a member of the national CERT team of Latvia, CERT.LV, and I'm also affiliated with the NATO Cooperative Cyber Defence Centre of Excellence in Tallinn, where I'm an ambassador and a former researcher. I'm looking forward to moderating and having productive conversations with you.

Masae Toyama:
Thank you, Bibi-san. An even stricter guideline has been presented, so please follow it. Well, thank you. My name is Masae Toyama from the JPCERT Coordination Center. I became part of the CERT community four years ago, and my first IGF was in 2020, which was fully online. At the last IGF in Ethiopia, it was actually very hard for me to find people with similar backgrounds, namely working at a CERT or doing cybersecurity. So my idea was to break out of this situation and create a place where your day-to-day work in cybersecurity meets internet governance. While cybersecurity practitioners play an important role in keeping cyberspace secure and stable, their voice in the IGF is not loud enough amongst the various stakeholders. I think that the more fellows we get, the louder our voice will be, so that we know what needs to be done. This is the background story of why I decided to submit the IGF session proposal. I am delighted that the IGF found my proposal beneficial to participants. However, if they really cared about us, I think the session should have been scheduled for later, not kicking off at 8:30 in the morning. Anyway, I don't want to spend too much time on me speaking, so now I'd like to listen to participants' self-introductions. Online first. Well, thank you. So let's have some voices from the online participants; I'd like to open the floor, especially for those online. Let me read out the names, and if you can turn your microphone on, please give a brief self-introduction. Is there a Kenny Chantre? Hello? Hello. Hi. Thank you for coming. Would you please introduce yourself and tell us what made you come to this session? Hello.

Moderator:
My name is Kenny Chantre. I am Cape Verdean, living in Cape Verde at the moment. My interest in coming to this meeting is to learn more about internet governance. I am currently a Pan-African Youth Ambassador for Internet Governance, and my interest is to know more about internet governance. Thank you. All right, thank you so much. So let's move on to, sorry for my pronunciation, Captioner Terrarin. Hello. Oh, sorry, I just messed up. So let me move on to the next person, Francisco Mostedosa. Hi. Hello, nice to meet you. My name is Francisco, from Ecuador. The networking and the multi-stakeholder initiative are very important, and in my country it is important to promote these actions and these events for all stakeholders. Thank you so much. Thank you very much. Let's move on to the next person, Amir Adas Mohammadi Koushiki. Thank you. Hello. May I ask you to introduce yourself very briefly? I'm sorry, Amir, we cannot hear you. Right, in that case we will ask you to join later in the breakout discussion. So the last person at the moment is Saudia Pina Mango. Hello. Good morning. If you can turn on your microphone, we will ask you to introduce yourself very briefly. If not, we will move on to the introductions of the on-site participants. Right, OK, she's gone. So now we have some people, which is much better compared to the beginning. OK, let's go back to the original agenda so that we can proceed to the breakout discussion. Here's a little bit of housekeeping; let me try to keep it short. As I said, this networking session asks you to stand up and walk freely to talk to someone you have not yet spoken to. It may be difficult, especially on site, but please try. We will have two or three short rounds. Each round will be 10 minutes: a seven-minute discussion plus three minutes of comments. The comment section tries to connect people on site and online, so we will exchange comments and try to understand what was discussed. Please cooperate with the moderators on timekeeping, especially because we changed the agenda, so the instructions might differ from the original slide. Thank you for your cooperation. We have prepared some guiding questions to facilitate the conversation, but besides the guiding questions you can of course introduce yourself and say what brought you to this IGF, as a kind of icebreaker. As I see, we have fewer than 10 participants on site, but I can see the sticky notes on your chests, so you can identify whom to talk to. Watch the sticky notes. All right, let's go to the first discussion. I will pass the mic to Mashiko-san. OK, so I will read the question, and I will keep the time in this session; thank you for your cooperation. This instruction is for on-site participants only: please form two or three groups and start the discussion. OK, so the first guiding question is: when do you feel that your commitment to cybersecurity is creating and sustaining an open, free, and secure internet?
Yeah, this is a bit of a difficult question, I guess. Maybe it's especially difficult for the brain in the morning. But yeah, let's go. Actually, I tried to put the easiest one first. So yes.

Audience:
Hi, my name is Pablo. I work with Adli at APNIC. Perhaps we can just have an open conversation here around these questions, because we more or less know each other, but it is good to contribute on the record, also for the webcast and transcript. So I would like to tell you a little bit about when I feel that our commitment to cybersecurity creates and supports an open, stable, and secure internet, and I think it's all about bridges. In our case, our community is mostly network operators and internet service providers. Early on, around 10 years ago, we thought about how network operators need to interact more with cybersecurity specialists, and we realized that these are really two different professions, and asked how to build bridges between them. In order to create cross-pollination between communities, it is important to be ready to explain yourself in a language that is not necessarily yours. Both network operators and cybersecurity specialists use very particular language, mind frames, concepts, and processes, and the processes of incident response are very different from the processes of patching and connecting networks. So for these communities to interact, they need to struggle a bit to explain themselves to each other and build that bridge, and we have found that fascinating. Adli has been an incredible bridge among these communities, and also with other communities, such as policymakers and other parts of the technical community. Something we have learned throughout the years is that the best way to contribute to an open, free, stable, and secure internet is not only to do your work very well within your area of specialty, but to really build those bridges. Something that is very important in incident response is cooperation and information sharing, one way or another, and the more obstacles and blockages we put on this transfer of information and collaboration, the less we contribute to an open, stable, and secure internet. In summary, I think it's all about bridges, and I think this session is an effort to bridge between different colors and specialties. Thank you for organizing this very cool workshop.

Moderator:
Thank you, thank you for that great comment. I believe it shows exactly why creating networking in this session is so important; I totally agree with your opinion, thank you. OK, so let's create some groups and have a discussion of the topic. I think the online participants have already been separated into breakout rooms. For on-site participants, please stand up, gather, and start talking. Thank you. Hi, I'm here to join the group. The group? Breakout room? OK, please stop talking now, and please pay attention. Can you show it to me? OK. So I would like to pass the mic to two people, one from on-site and one from online, and ask your opinion on the guiding question. Does anyone have an opinion on the first guiding question, or would you share what kind of opinions you exchanged during the chat?

Audience:
OK, go. Go, please. So I'm on the government side, so I'll talk about what we focus on in dialogue in the global conversation. I'm from the Japanese government, and when I talk about our experience, I focus on the balance between security and stable communication. One of the best solutions for security is shutting down, but connecting stably is also important. So when we talk with people from other governments, we like to focus on the balance between the security of communication and the stability of communication. That's my experience. OK, thank you very much. Can you please introduce yourself, your name and affiliation? I'm Masaki Nakamura from the Ministry of Internal Affairs and Communications of Japan. Thank you.

Moderator:
OK, thank you very much. So I would like to pass the mic to the online participants. Adli-san, can you please give the mic to one participant online? Thank you for waiting, online participants. I suppose there has been a conversation among you. Adli-san or Bibi-san, is there anything you heard? I'll just stop the timer and give some time to the online participants. Thank you. I'm afraid the online participants cannot hear the voice from the on-site room, because they're all in breakout rooms. OK, I think so. Someone is talking; Bibi-san is talking. All right, online participants, are you audible? Are you listening to the on-site voice? Can you talk into the mic? OK, so the online participants are still in conversation and will be back in five seconds. Sorry for the logistics. Hello, online participants. We are back. Sorry. No worries. So I would like to ask an online participant for opinions on the guiding question. Okay. Maybe this time, and for the next question, Adli will take the lead. We were discussing not only the question itself, about how our work impacts the internet and its security, but also how it impacts the user experience. So I would like to ask Adli to take the lead. Okay. Thank you.

Bernhards Blumbergs:
So before Adli takes the lead, I would like to address the question within the question: can the internet actually be open, free, and secure at the same time? I think it's not always possible to have all three of these things together, but we can always strive to reach freedom and openness, and at the same time security. With this, I pass the floor back.

Moderator:
Okay. Thank you very much. So, online participants, I guess you were already talking about guiding question two as well, right? No, we were talking about question one, yes. Okay, good. So let's move on to guiding question number two. The question is: what international geopolitical issues prevent CSIRTs from engaging in cybersecurity for an open, free, and secure digital cyberspace? And if we cooperate, how can we address this? This is also a bit of a difficult question for morning brains, but let's start talking. For on-site participants, please do not talk with your friends; please make new connections in this session. OK, so please stand up and gather again.

Adli Wahid:
So there were just three of us in the session, and we discussed a couple of things. The first part was basically participants sharing the need to always cooperate, both at the national level and globally, because we can definitely benefit from one another; we have different experiences, and when it comes to security, it is important to share quickly. So defend locally, but share globally, so that whatever experience we have in dealing with incidents, security, technology, or tools can benefit others so that they can be secure as well. The second part was on the geopolitical issues. Yes, they do have an effect on the cooperation of CERTs. But in some of the recent events, the geopolitical issues have in fact strengthened the cooperation between CERTs and CSIRTs. This is a good sign, because it shows that despite what is happening around the world, there is a community committed to making sure that the internet remains secure and safe for everyone. So that is it. If I missed anything, Bibi, you can jump in quickly. All right. No, it's all good; Adli covered everything. Yes. Thank you.

Audience:
Cheers. Okay. Thanks. So next, I would like to hear a voice from the on-site participants. Does anyone... Okay, please. Can you hear me, right? Okay. My name is Dilnath Dishanayake from Sri Lanka CERT. Let me summarize the discussion my colleagues and I had. The first point is sharing, as the online participants mentioned: sharing information. When cyberattacks happen, they are sudden, so we need to share information, because we don't know where the source is coming from; an attack may hit the Sri Lankan context, but originate in another country, and finding the source is very difficult. So sharing information is very challenging, and it is also affected by the local context, laws, and regulations, because cybersecurity is sometimes in line with national security. So sharing information is one thing. Then there is resource limitation: when we cooperate internationally, we definitely have to have expert-level communication specialists; only then can we do continuous monitoring or make the most of whatever resources we find. And also capacity building, and national and international engagement. Those are the kinds of things we were discussing, so I'll stop here. Okay?

Moderator:
Okay. Thank you very much. That is a very interesting opinion for me as well, but sorry, we do not have much time. Let's move on to the next guiding question; can you please change the slide? Question number three is: to promote cybersecurity, what is a key message you would like to convey at this IGF, which is attended by a wide range of stakeholders? We can see many participants from many organizations at this IGF, so this question is important, I guess. So let's stand up, gather, and talk again, please. Please find someone you have not yet spoken to. This is a networking session, so please don't stick to your friends.

Audience:
Okay, thank you very much for your cooperation. This time I would like to ask for remarks from on-site participants first. Does anyone have remarks on this question? If any of you heard something interesting said by someone else, you can share that as well. Thank you. Thank you for putting me on the spot; that was the price to pay for my smile, I guess. So, good morning. My name is Peter Koch. I work for the German top-level domain registry, and we are engaged a bit with the German CSIRT network. In this round, we had a conversation about, yes, the message, but that's hard in seven minutes, of course. So take this with a grain of salt, but I think what we agreed on was that, even in the face of, or maybe because we are facing, so many geopolitical tensions, it is important to keep up and continue information sharing. Maybe the pandemic is a blueprint in a way: there was a global crisis, but at the same time global information exchange. So we need to keep up that information sharing. Thank you.

Bernhards Blumbergs:
Okay, thank you. Thank you very much. Next, I would like to hear the voice of the online participants. All right, I will take this one. This time there were four of us, and we tried to exchange a lot of information; this was a very productive breakout session. Although multiple things came up, and we encouraged every participant to bring in new viewpoints, I think it can be summarized in two general key directions. First of all, and this is nothing new, we already touched upon it multiple times: information sharing. But this is not only about sharing information; it is also about facilitating where to share it: trusted networks, building the infrastructure for message exchange, but also designing the groups that can and may share the information, because just having people in a group doesn't make sense. You have to make sure the information is there and everyone is engaged. Also, information should be shared not only within certain layers, like technical, operational, and strategic, but also between those layers, to build understanding and to clarify, in simple and understandable terms, how a particular problem resonates at the operational and strategic levels. I'm talking from the bottom up; I'm mostly a techie, so I take it from tech to the higher levels. That is the first direction. The second part is directly related to this. What we identified is also understanding how to make the internet a better place: understanding device security and personal security, and learning about cyberspace, because if we are using these tools, we have to be well versed in how to use them in the best manner possible, but also securely. This comes down to personal cyber hygiene. So we start with a single individual, but this expands to the nation; it is not only for experts, it is for everyone who is part of society to understand cyber hygiene, reaching the national level and thus building a stronger global community. Adli, is there anything you would add? Nothing; you covered everything. Thank you. All right. Thank you.

Moderator:
Thank you very much. These are very interesting opinions for me, and it is striking that cyber hygiene is important not only for technical people but for everyone in the world. I totally agree with you. OK, thank you very much. That was the last of the three discussion questions, so I will pass the mic back. OK, I heard some interesting topics covered on site and online. I hope this session was a good exercise for your morning brains, and I truly hope you enjoy IGF 2023, as today is just day one. I'd like to thank again the moderators, supporters, and everyone here for making this session happen, for embracing this meeting spot for CSIRT practitioners at this IGF, and for exchanging your insightful views on internet governance. Thank you very much. Bye-bye. Bye. Awesome. Well done.

Adli Wahid
Speech speed: 162 words per minute
Speech length: 322 words
Speech time: 119 secs

Audience
Speech speed: 134 words per minute
Speech length: 1063 words
Speech time: 475 secs

Bernhards Blumbergs
Speech speed: 180 words per minute
Speech length: 634 words
Speech time: 211 secs

Hiroki Mashiko
Speech speed: 129 words per minute
Speech length: 106 words
Speech time: 49 secs

Masae Toyama
Speech speed: 111 words per minute
Speech length: 433 words
Speech time: 234 secs

Moderator
Speech speed: 114 words per minute
Speech length: 1687 words
Speech time: 885 secs

Bridging Connectivity Gaps and Harnessing e-Resilience | IGF 2023 Networking Session #104

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Bridging Connectivity Gaps and Enhancing E-Resilience: Innovative Solutions for Underserved Areas

This networking session at the IGF in Kyoto focused on addressing global connectivity challenges and enhancing e-resilience, particularly in underserved areas and during disasters. The discussion centred around innovative solutions to connect the unconnected and make existing connections more resilient.

Dr. Toshikazu Sakano from ATR introduced the LUGS (Local Upgradable and Generative System) technology, designed to restore local communication in disaster-affected areas. LUGS comprises a battery, Wi-Fi access point, and small servers, offering social networking services like chat, video communication, and information sharing. The system can be deployed quickly in areas with disrupted connectivity, providing essential communication capabilities.
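As a rough illustration of that architecture, and not the actual LUGS software, whose internals are not described in the session, the sketch below approximates the core idea in Python: a small server on the local Wi-Fi network that stores and serves messages with no internet uplink. The endpoint behaviour and the port are assumptions made for the sketch.

    # A toy "local SNS" in the spirit of LUGS: clients on the same Wi-Fi
    # network can post and read messages with no internet connectivity.
    # Standard library only, so it can run on a small battery-powered server.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    MESSAGES = []  # in-memory feed; a real system would persist to disk

    class LocalSNSHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Return the whole feed as JSON.
            body = json.dumps(MESSAGES).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def do_POST(self):
            # Accept a JSON message such as {"from": "resident1", "text": "Water at the school"}.
            length = int(self.headers.get("Content-Length", 0))
            MESSAGES.append(json.loads(self.rfile.read(length)))
            self.send_response(204)  # accepted, nothing to return
            self.end_headers()

    if __name__ == "__main__":
        # Bind to all interfaces so phones on the Wi-Fi access point can reach it.
        HTTPServer(("0.0.0.0", 8080), LocalSNSHandler).serve_forever()

A phone attached to the same access point could then read the feed at http://<server-ip>:8080/ and post to the same URL; the real system would add authentication, persistence, and the video and pages functions described above.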

Jeffrey Llanto and Glyndell Monterde from CivisNet Foundation presented the implementation of LUGS in the Philippines. They highlighted its successful testing during the COVID-19 pandemic and its adaptation to local needs, including integration with learning management systems and use as a charging station during disasters. The project demonstrated LUGS’s potential for improving access to information, enhancing coordination during disasters, and increasing community engagement.

Chandraprakash Sharma from WISFLUX India discussed the future perspectives of LUGS in developing nations, emphasising its potential for providing critical information access in remote areas of India. He highlighted the system’s versatility in addressing various challenges, including natural disasters, and its potential for integrating edge AI to offer localised services without internet connectivity.

Dr. Haruo Okamura presented an innovative solution combining LUGS with optical fibre cables to provide internet connectivity to disconnected areas. He introduced the BIRD (Broadband Infrastructure for Rural Area Digitalization) cable technology, which uses submarine cable technology for terrestrial applications. This approach allows for cost-effective and efficient deployment of fibre optic networks in challenging terrains.

The presentations highlighted several key points:

1. The importance of local communication restoration during disasters and in underserved areas.
2. The potential of LUGS to bridge the digital divide and enhance disaster resilience.
3. The adaptability of LUGS to various use cases, including education and disaster response.
4. The combination of LUGS with fibre optic technology as a long-term solution for connectivity.
5. The role of community engagement and capacity building in successful implementation.

The discussion also touched upon challenges such as infrastructure limitations, cost considerations, and the need for sustainable, long-term solutions. The speakers emphasised the importance of phased approaches, local implementation, and adherence to international standards in addressing global connectivity challenges.

In conclusion, the session presented a range of innovative solutions aimed at bridging connectivity gaps and enhancing e-resilience. These approaches, combining portable communication systems like LUGS with advanced fibre optic technologies, offer promising avenues for connecting underserved areas and improving disaster preparedness. The speakers underscored the need for continued collaboration, adaptation to local needs, and integration of emerging technologies to achieve universal connectivity and resilience.

Session transcript

Moderator:
Hello, good morning, everyone, from the Kyoto International Conference Center. Sorry, we are around five or six minutes late due to some technical reasons. We are here at the IGF; I'm Binod Basnet, Director of Educating Nepal, and we're here for a networking session on bridging connectivity gaps and harnessing e-resilience. As global stakeholders strive for last-mile connectivity, we know that one-third of the world's population still does not have access to the internet, and if we break this down to the LDCs, about 64% of the population does not have access. During COVID, it was quite evident that connectivity is a lifeline in many ways: for information access, for healthcare, for education, and so forth. But it's not only about connectivity; even in regions and countries that have connectivity, it has to be resilient connectivity. In cases of disaster, connectivity tends to get disrupted. So what is the backup plan? That is a very pertinent question. With these two issues in hand, connecting the unconnected and making the connected resilient, today we are proposing some innovative solutions for these two cases.

UNKNOWN:
So today we have a panel of speakers for our networking session. First, we have Dr. Sakano from ATR, who will be talking about the LUGS system as a solution for disaster and backup communication. Secondly, we have Jeffrey Llanto and Glyndell from CivisNet; Jeff is the executive director of CivisNet, and Glyndell is a representative of the organization. Thirdly, we have Mr. Chandraprakash Sharma from WISFLUX India Private Limited. And finally, we have one of the champions and pioneers of optical fibres and standards, Dr. Okamura, who will also be proposing an innovative solution for connecting rural and unconnected populations. So without further ado, I'd like to ask Dr. Sakano to give us an introduction to the LUGS system.

Toshikazu Sakano:
Thank you. Can you hear me?

Moderator:
Yes. Okay.

Toshikazu Sakano:
So please show my slide. Okay. Can I start? Okay. Thank you very much for the kind introduction. My name is Toshikazu Sakano, from the Advanced Telecommunications Research Institute International (ATR), based in Kyoto, Japan. I started research and development on ICT for disaster countermeasures just after the big earthquake that struck the east and north of Japan in 2011, when I was working for NTT Laboratories. I then moved to ATR, my current institution, and started a new project called the LUGS project. In my talk, I'd like to introduce our research and development and my ideas for resolving the issues that arise in disaster situations. Next slide, please. This slide introduces ATR. ATR is a private research institute founded in 1986, and its main themes are computational neuroscience; deep interaction science, which is communication robotics (at the bottom right you can see an android robot developed by ATR); wireless communications; and life science. I am from the Wave Engineering Laboratories, which does research and development on wireless communication and other ICT issues. Next, please. Let me start with the background of my research and development. This slide shows the number of disasters by continent and the top 10 countries in 2021. Looking at it, many disasters happen in the Asia-Pacific region. And when a big disaster happens, please go to the next slide, a big issue arises. Under a big earthquake or another major disaster, the communication network is often disrupted; for example, base stations and communication buildings are damaged. That means you cannot use the phone or the internet anymore, which prevents us from using the everyday services we rely on: Google, Yahoo, Facebook, Amazon, and so on. At the same time, under a big disaster, the demand for communication goes up. So a big gap opens in a disaster situation. This is the issue I wanted to resolve using ICT. Next slide. What I thought about in resolving this issue was locality, or local communication. This rather text-heavy slide shows a human characteristic: people communicate more with people at a closer physical distance. This characteristic can be called communication locality. If someone is very close, you communicate with that person more frequently. So if you restore local communication in a disaster situation where the internet and other network services are disrupted, that will help the people affected by the disaster.

Toshikazu Sakano:
That is what I wanted to do. Next, please. After starting this research and development, when I was at NTT, I proposed an architectural concept called MDRU, the Movable and Deployable ICT Resource Unit. The concept is that once a big disaster occurs, you bring the resources for restoring local communication services to the disaster-affected areas and restore them quickly. This concept was standardized at ITU-T as Recommendation L.392, while I was at NTT Labs.

Next, please. After I moved to ATR, I launched a new project called LACS. MDRU itself focused on telephone service; LACS is almost the same as MDRU, but its focus is internet services such as social networking. LACS is comprised of a battery, a Wi-Fi access point, and a small server, and I put social networking service software on the server. That is the concept of LACS. Once a big disaster occurs, you bring the LACS unit to the disaster-affected area with no internet connectivity; people can then access the LACS over Wi-Fi using their own smartphones and browsers, and use its social networking functions.

Next, please. This is the prototype of LACS. It is comprised of the LACS server, a Wi-Fi access point, a battery, a network hub, and some peripherals, all packed in a portable case. Next, please. These are the functions LACS offers as a social networking service: as you can see, chat, one-to-one or group video communication, a feed function, and a pages function. These functions are offered to the people around the LACS device, so communication is limited to the local area, but you can keep using these social networking functions.

Next, please. Our team has conducted a series of feasibility studies, mainly on seven islands in the Philippines; the details will be presented by the next speaker, Jeffery, so I will skip this slide. Next, please. This is another activity, after the big typhoon in the Philippines, and the details will also be presented later.

Next, please. I have focused on disaster issues, but LACS can be used for other issues around the world. On the potential demand for LACS worldwide: while internet use is widespread in everyday life and work for many in high-income countries, as Vinod said, one third of the world's population does not have access to the internet. That is a big issue worldwide. To bridge that gap, LACS can be put to efficient use in areas where broadband or internet connectivity has not fully penetrated.

Okay, next please. This slide explains where we stand. We have run research and development on LACS for about five years with the support of stakeholders, and, as you can see at the bottom of this slide, we should now move on to the commercialization phase. So I launched a startup company named Negro Networks; the objective of the company is to deliver the LACS system and solutions based on it. At the same time, we recently extended our R&D to include artificial intelligence in LACS so that it can be used efficiently by disaster first responders. We call this system FLOS, the Frontline Operations System. Okay, next please. (Moderator: Let's complete faster, Sakano-san, we're running out of time.) Okay, this is the summary. Thank you. Thank you for your attention.
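
For readers who want a concrete picture of what "a small server with social networking software" can mean in practice, here is a minimal sketch in Python. It is an illustration only, assuming a generic Flask setup; the route names and fields are hypothetical, and this is not the actual LACS software.

```python
# Minimal sketch of a LACS-style local service: a Wi-Fi-attached box
# serving a bulletin feed and chat over the LAN, with no internet uplink.
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)
feed = []       # bulletin posts, kept in memory for the sketch
messages = []   # chat messages

@app.post("/feed")
def post_to_feed():
    item = {"author": request.json.get("author", "anonymous"),
            "text": request.json["text"],
            "time": datetime.now(timezone.utc).isoformat()}
    feed.append(item)
    return jsonify(item), 201

@app.get("/feed")
def read_feed():
    return jsonify(feed)

@app.post("/chat")
def send_message():
    msg = {"to": request.json["to"],
           "from": request.json["from"],
           "text": request.json["text"]}
    messages.append(msg)
    return jsonify(msg), 201

if __name__ == "__main__":
    # Bind to all interfaces so phones on the box's Wi-Fi can reach it.
    app.run(host="0.0.0.0", port=8080)
```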

Moderator:
Thank you so much, Sakano-san. Now I’d like to request Jeff to make his presentation.

Jeffery Llanto:
I'll be the one. Thank you, Bino. To start with, I'm Jeffery Llanto, the Executive Director of the CVISNET Foundation, and with me is Glyndell Monterde, our Project Manager. We will talk about the implementation of the Locally Accessible Cloud System (LACS) in the Philippines. The CVISNET Foundation started as a government project under the Department of Science and Technology and evolved into a foundation in the year 2000. CVISNET was one of the pioneers in providing internet connection in the Philippines, way back in 1994, and we have been working with different partners and stakeholders like ATR and APNIC. This is about the LACS implementation in the Philippines, which ran from 2019 until March 2023. Next slide, please.

Next slide. As mentioned, we were a government project that evolved into a foundation. Next slide, please. We have local and international partners, with a strong partnership with the government, as well as with the APNIC Foundation, ATR, NTT, and USAID. Next slide, please. Our ICT efforts in the Philippines have been recognized, including at one of the IGF meetings in Mexico. Next slide.

About the LACS project itself, Dr. Sakano has already elaborated, so next slide. This is the current implementation of the project, on Hilutungan Island in the central part of the Philippines, around 7.5 kilometers away from the nearest internet point of presence, so we push the signal out to the island. Next slide.

This is the timeline of the LACS project, from 2019 until 2023. What is very significant is that this was during the pandemic era. When Dr. Sakano tested LACS in Japan, there were several use cases that could not be implemented there but were in high demand in areas like the Philippines. So it started in 2019 with social preparation; then, when the pandemic came in 2020 and 2021, new use cases were introduced, like the learning management system that was integrated into LACS with the help of software development from India. This was not part of the original plan we had back in 2019. Then, in 2022, disaster came: a big typhoon hit the Philippines, and again new use cases were introduced, especially on the hardware side, with LACS serving as a charging station for the devices on the island. Next slide, please.

These are the activities we want to show you, mostly in pictures. To implement the project, you need to reach the grassroots level, so we introduced the technology in very simple terms so that the folks on the island could take in the information immediately. Next slide. After training them, we proceeded to put up the infrastructure. Again, this is a challenge, because the island doesn't have any electricity; it relies on solar power, and it doesn't have a strong internet connection either, just weak signals from the telephone companies. Okay, next slide. Once the infrastructure was installed, the next step was the installation of the Locally Accessible Cloud System itself, together with the training of trainers.
Here in the picture, you can see we trained the teachers and also the students in how to access the system. Next slide. On usability, several stakeholders are involved: the school and the local community, which includes fishermen, housewives, students, and so on. So there is constant training for the different stakeholders. What is very interesting is that we also tried to introduce LACS in non-disaster times: we used it at our Christmas party for games, so there is an application for normal times as well. Next slide, please. For LACS data access and retrieval, we tested several areas, not only Hilutungan but also the neighboring islands, under the APNIC project. So LACS has already evolved into another project, which we call ISLET, for Internet for Sustainable Livelihood, Education, and Tourism, under a partnership with the APNIC Foundation. Next slide. Lastly, we need to empower the community. We need to train the teachers especially, because they are the ones with the capability to understand and grasp the most information, so that they can troubleshoot and even install the system by themselves. Okay, next slide. I will now hand over to Glyndell, our project manager, to present the results of the project implementation in the Philippines. Okay, thank you.

Glyndell Monterde:
Thank you, Sir Jeff. To further discuss the use cases of LACS in the Philippines during the pandemic, let me present the results of the research and development. First, it tested the full potential of LACS outside Japan. During the pandemic, in steps four and five of the research and development, we successfully tested the performance and functionality of its features, including voice, messaging, and the bulletin, among others. Secondly, it integrated Philippine use cases during the pandemic: we identified additional use cases, as discussed by Sir Jeff, including the learning management system, voice calls, solar charging for devices, and the integration of the barangay's local information systems.

Next, we implemented the local-LACS and cloud-LACS methodology. This means that all information stored on the on-premise, local LACS can sync to the cloud LACS when internet becomes available. Next, there was remote implementation to nearby islands: the system was successfully implemented on Hilutungan Island, which is more than six kilometers from the mainland of Cordova, Cebu. Next is the collaboration and deployment to the Leyte area in partnership with the Visayas State University, or VSU. VSU piloted the successful testing and usage of the learning management system and, of course, the synchronization feature across its two campuses. Next, we have the formation of the ISLET Connect project, in partnership with the APNIC Foundation of Australia and Seed4Com. The APNIC Foundation gave grants to CVISNET to connect unserved islands. It has two phases: the first phase of ISLET Connect makes LACS a key component of communication support during disasters on Hilutungan Island, and the second phase also connects the neighboring islands of Caohagan and Pangan-an. Next slides, please.

For other key results, we presented and demonstrated LACS to partners and stakeholders, including USAID BEACON, the Department of Science and Technology Region 7 of the Philippines, the Department of Education, and the Ramon Aboitiz Foundation Inc., and we were able to present at international conferences and meetings, including at UNESCO.

To give you the potential impacts of the LACS implementation: first, improved access to information, LACS provides access to critical information during disasters, which helps people make informed decisions and take appropriate actions to protect lives in the community. Secondly, enhanced coordination: LACS facilitates better coordination among the different stakeholders involved in disaster response, helping ensure that resources and assistance are effectively distributed to the community. Next, increased community engagement, allowing more inclusive and community-driven approaches to disaster management. Next, reduced isolation: LACS is a tool that can improve people's ability to seek assistance and communicate their needs during disasters. And lastly, capacity building: it develops skills in disaster communication and response and enhances resilience and the ability to cope with future disasters. That ends our presentation from the CVISNET Foundation, Inc. Thank you.
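
The local-to-cloud synchronization Glyndell describes is essentially a store-and-forward loop. Here is a minimal sketch in Python; the endpoint URL and record format are hypothetical placeholders, not the real LACS protocol.

```python
# Sketch of "local LACS syncs to the cloud LACS when internet becomes
# available": records queue up locally and are pushed whenever the
# backhaul happens to work.
import json
import time

import requests

CLOUD_URL = "https://cloud-lacs.example.org/sync"   # hypothetical endpoint
QUEUE_FILE = "unsynced.jsonl"                       # records written locally

def pending_records():
    try:
        with open(QUEUE_FILE) as f:
            return [json.loads(line) for line in f if line.strip()]
    except FileNotFoundError:
        return []

def sync_once():
    remaining = []
    for record in pending_records():
        try:
            # Short timeout: with intermittent island backhaul, fail fast
            # and retry the record on the next pass.
            requests.post(CLOUD_URL, json=record, timeout=5).raise_for_status()
        except requests.RequestException:
            remaining.append(record)   # keep for the next attempt
    with open(QUEUE_FILE, "w") as f:
        f.writelines(json.dumps(r) + "\n" for r in remaining)

if __name__ == "__main__":
    while True:
        sync_once()
        time.sleep(60)   # poll for connectivity once a minute
```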

Moderator:
Thank you, Glyndell and Jeff. Now I'd like to quickly request Mr. Chandraprakash Sharma to talk about the future perspectives of LACS for developing nations. Thank you.

Chandraprakash Sharma:
Can you put up the slides, please? Good morning, everyone. I'll start with my introduction: I'm Chandraprakash Sharma, CEO and founder at WISFLUX. We are proud to represent the Indian collaboration on this wonderful project, which has so much impact on developing nations, and I thank Dr. Sakano and the wonderful team for pursuing it. I really appreciate everyone's dedication to the resilient infrastructure we are all seeking in developing nations, especially after the wonderful traditional drums, fireworks, and party we had last night; it was probably difficult to wake up early this morning.

Moving on to the next slide. So far we have done trials in the Philippines and Japan, but the next targets are India and other developing nations. The Indian government is currently pushing very hard on the digitalization of governance and infrastructure overall, and we have some wonderful projects going on, but the challenge remains: although there has been tremendous progress in recent years in connecting people in remote areas, around 50% of the population still does not have access to the internet. And it is not just access to the internet; access to information is even more important. The internet may arrive later, but access to critical information about the government policies and schemes available to people in tribal or poor remote areas matters now. India is a very diverse country geographically as well: we have mountainous regions that are hard to reach and hard to build any infrastructure in, we have desert, and we have deep forest where many tribal populations live. To help them access the many schemes the government is offering, this kind of solution is very important. And then we have a variety of natural disasters that can occur in India, especially in the coastal regions, and we have earthquake-prone regions as well. After a disaster hits, this kind of device is very useful, as we have seen from the implementation in the Philippines.

Going faster to save time for the next presenter: you can see that this device has a lot of potential. Dr. Sakano talked about the inclusion of AI on this device, and it is not just AI access through cloud services; the wonderful thing is that you are able to access it locally. You may understand this better through the term "edge AI": AI for the people who are not connected to the internet. It also has good potential for e-education. And in India we have a public distribution system, where the government distributes rations to people who cannot afford them; digitally providing information and solutions through such a device would be very impactful. Next slide, please.

I want to give you a perspective on the future of this technology. It may seem simple, but it has huge potential. Right now, the privileged who have access to the cloud can use the servers, databases, and services from the cloud, including the very powerful AI services we have available nowadays.
Going forward, cloud providers and service providers are trying to bring services as close as possible to users by deploying content and services at the edge of the cloud. But then there is the edge where the user sits: the user's local area network, which is technically where the LACS architecture lives. Services can then be available within the user's local area network with or without connectivity to the upper layers. And finally, there is the edge where the user devices sit, which connect to that local area network to access the LACS services. Next slide, please.

As we discussed, the impact of AI is going to be tremendous across all sectors, especially if it can be made available locally to remote communities: for farmers, providing insights about crop diseases, for example, if they can upload a photo and understand their farming better; for local healthcare workers, safety workers, and emergency responders; and even in education. And the benefit of such AI being local is that it offers faster data processing and enhanced security, with less bandwidth consumption and better energy efficiency. As already discussed, this portable device is capable of being charged by solar panels as well, which was already tried in the Philippines. Next slide, please. Thank you very much, everyone. And now Dr. Okamura will present his wonderful contribution.
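
The tiered picture Mr. Sharma sketches, cloud services when the backhaul is up and on-box "edge AI" when it is not, can be illustrated with a small fallback pattern. The endpoint and the model here are hypothetical stand-ins, not any real deployment.

```python
import requests

CLOUD_AI = "https://ai.example.org/v1/answer"   # hypothetical cloud endpoint

def edge_answer(question: str) -> str:
    """Stand-in for a small model running on the LACS box itself."""
    return f"[local model] best-effort answer to: {question}"

def answer(question: str) -> str:
    try:
        # If backhaul happens to be up, use the richer cloud service...
        r = requests.post(CLOUD_AI, json={"q": question}, timeout=3)
        r.raise_for_status()
        return r.json()["answer"]
    except requests.RequestException:
        # ...but the LAN keeps working on its own: fall back to on-box
        # inference, with lower latency and zero backhaul bandwidth.
        return edge_answer(question)

if __name__ == "__main__":
    print(answer("What does leaf blight look like on rice?"))
```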

Moderator:
Thank you, Mr. Chandraprakash, and thank you for keeping it quick, because at the end of this session we are also looking to take some questions from the audience. So as not to delay the question and answer session, I think we'll go on to the last presentation. Dr. Okamura-san, the floor is yours.

Dr. Haruo Okamura:
Thank you very much for this opportunity. The title of my talk today is "Connecting the Unconnected in a Phased Manner". We have been listening to the previous presentations, mainly about using LACS in an independent manner to create not the internet but an intranet. My goal, ultimately, is to provide internet connectivity to almost all the disconnected or unconnected areas of the world, in a phased and very practicable way. So my presentation is about the combination of multiple LACS units plus optical fiber cable.

Next, please. Briefly about myself: I am the president of Global Plan and an expert in fiber optic systems, strategy, and standards. I am currently the international chairman of IEC SC86C, Fibre Optic Systems and Active Devices, and I am the developer of the BIRD solution and the corresponding ITU-T standards. The presentation I am going to give is based on three ITU-T recommendations for which I served as editor: L.1700, L.110, and L.163.

Next, please. This is the overall concept of the phased, step-by-step approach: from intranets, based on the use of independent LACS units, to internet connectivity. For example, village A one day introduces a LACS unit, which gives intranet capability to the village people, maybe a maximum of 256 people. Another day, village B introduces another LACS, and village C another, independently. What happens if we can connect those multiple intranet LACS units using broadband optical fiber cable? As you can see in the slide, there is a big mountain, difficult terrain, in between. We have always assumed that laying optical fiber cable in difficult terrain is very costly and difficult and takes a long time to construct. My idea eliminates those difficulties by using submarine cable. Submarine cable, as you can imagine, is very robust against high water pressure, so you can lay it directly on the surface of the ground. That eliminates most of the construction cost, so you can have affordable connectivity. Because this is not easy to accept at first, I worked to turn these trials into international ITU-T standards. So finally, villages A, B, and C can be connected by optical fiber cable, and one day in the future you can connect, say, village C to the internet, so that all of a sudden those communities become internet-capable, large communities. That is my idea. In the slide photograph here, local people are laying optical fiber cable by hand on the surface of the ground in unexplored jungle. That really happened in 2019 in a Nepali mountain village.

Next, please. So what is BIRD? BIRD stands for Broadband Infrastructure for Rural-area Digitalization; it is my invention. Next, please. This is the optical fiber cable. As I said, it is based on submarine cable: a thick-walled welded stainless steel tube containing up to 48 fiber cores, with a total cable diameter of only 11 millimeters, finger size. It is based on submarine cable technology, supported by ITU-T recommendations, and it is applicable to all terrain: in the air, in the water, on the ground surface, or underground. Next, please. This is one example of Japanese quality.
This is a cross-section picture of a Japanese cable and a cable from another country. You can clearly see the difference in the quality of the cable structure and of the welded wall of the stainless steel tube; the outside of the cables looks the same, but the inside is very different, which makes the other cable very difficult to use. Next one. Operator, if you can click the lower part of this slide, the helicopter will fly. Like this. Thank you. This happened in March this year, at an altitude of 5,300 meters, carrying the cable drum into this high-altitude area to lay the BIRD optical fiber cable to the Mount Everest base camp. Thank you. Next one.

On cost reduction: about 90% cost reduction has been achieved, because construction can be done on the surface of the ground. And this project won a WSIS (World Summit on the Information Society) champion award last year, because it is a real solution, opening the door for the globe to be connected practicably by using LACS plus optical fiber cable. Next one.

To summarize: the top priority for the ITU is connecting the unconnected. It is spoken about very often, but a real, physical solution has not been available, and I have now presented one here, based on Japanese technology plus ITU standards: LACS plus the ITU-compliant BIRD cable, which affordably and safely brings broadband Wi-Fi hotspots, practicably and phase-wise, on a DIY basis, do-it-yourself by local people. The CAPEX of laying the BIRD cable is about 6,000 US dollars per kilometer, a dramatically reduced cost of implementation. And the criteria for the BIRD cable and its deployment comply with the ITU standards. That concludes my talk. Thank you very much.
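
As a back-of-the-envelope check of the figures in this talk (about US$6,000 per kilometer, presented as roughly a 90% cost reduction), the implied comparison with conventional construction works out as below; the 40 km route length is an illustrative assumption.

```python
# Surface laying at ~US$6,000/km, described as roughly a 90% reduction,
# implies conventional buried construction near US$60,000/km.
BIRD_PER_KM = 6_000            # US$, figure from the talk
REDUCTION = 0.90               # "about 90% cost reduction"
conventional_per_km = BIRD_PER_KM / (1 - REDUCTION)   # 60,000

route_km = 40                  # illustrative route length (assumption)
print(f"Conventional estimate: ${conventional_per_km * route_km:,.0f}")  # $2,400,000
print(f"BIRD estimate:         ${BIRD_PER_KM * route_km:,.0f}")          # $240,000
```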

Moderator:
Thank you, Dr. Okamura, for the wonderful presentation. We now have just over 10 minutes left, and the floor is open for questions. If you want to ask any of our speakers a question, you're welcome to do so. Anyone from the audience, please come up to the mic and ask your question.

Audience:
Thank you. Hello. Good morning. Carlos Rey-Moreno from the Association for Progressive Communications. Thank you very much; I really think it's a very interesting solution, and I've been following your work on fiber at ITU-T for many years. It's very interesting how connectivity is evolving: bringing connectivity to a village or some remote area, now with low earth orbit satellites or with microwave links, then starting with LACS there, and relaying it with fiber to the next village, or even within the village, because of Wi-Fi interference. I was thinking: there is obviously a limit to the distance the fiber can cover between the LACS units, between the villages, without a repeater. How have you considered that in the model you were presenting, and what would be the added cost of putting OLTs somewhere along the way to the LACS, or the overall cost? Because it definitely has legs, particularly in mountainous regions where villages are close by, but you need repeaters too; you cannot do microwave, right? So thank you.

Dr. Haruo Okamura:
Thank you. You have raised several issues. The first is wireless versus wired. On wireless connectivity, microwave or satellite: Elon Musk has launched 12,000 satellites at a cost of around 10 billion US dollars, and yet their lifetime is only five to seven years, and the transmission capacity is at most about one gigabit per second per satellite beam. Fixed microwave is also limited to about one gigabit per second at the moment. Fiber can provide more than ten times that capacity on a single fiber, and, as you saw, 48 fiber cores can be included in a finger-size cable, so fiber is an enormous improvement. The next question is how far fiber can reach. The maximum distance without any repeater is more than 500 kilometers today if you introduce state-of-the-art technologies such as fiber amplification, or 300 kilometers, or 100 kilometers with simpler equipment. If you only need to go 50 kilometers, a very cheap commodity media converter can transmit over 50 or even 100 kilometers, so there is no issue.

Audience:
Thank you. Sorry, just to caveat some of my points. One thing is that 500 kilometers or more is fine, but if you have several villages, you start to need to multiplex, and you cannot have one single cable, so you need repeaters or multiplexers in between. And in relation to LEOs, I agree 100% on the capacity, and on microwave capacity, but at $6,000 per kilometer, for villages that cannot eat and that are far from everything, I think: long term, fiber, for sure, but in the short term maybe we need to use solutions that are more cost-effective for the backhaul to get there, and then, little by little, build up the economies, no? Okay. Okay.

Dr. Haruo Okamura:
The maximum length of one cable is about 12 to 15 kilometers, due to the size of the cable drum. With submarine cable you can reach 40 or 80 kilometers per segment, because cable ships and well-equipped manufacturing facilities are available, but for this terrestrial usage the cable drum at the moment holds only 12 to 15 kilometers. And every 12 to 15 kilometers you need a splicing box; it looks like a repeater, but inside there is only fiber splicing, that's all. So there is no difficulty in connecting villages 100 kilometers away. Thank you.
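
Dr. Okamura's two quantitative points, a splice box every cable-drum length and the aggregate capacity of a 48-core cable, can be worked through quickly; the 100 km route here is an illustrative assumption.

```python
import math

route_km = 100                 # illustrative route length (assumption)
drum_km = 12                   # conservative end of the 12-15 km drum range
splice_boxes = math.ceil(route_km / drum_km) - 1
print(f"{route_km} km route needs about {splice_boxes} splice boxes")   # 8

# Capacity comparison from the answer above: ~1 Gbit/s per satellite beam
# versus "more than 10 times" that per fibre, with up to 48 cores per cable.
per_fibre_gbps = 10
cores = 48
print(f"One 48-core cable: ~{per_fibre_gbps * cores} Gbit/s aggregate")  # ~480
```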

Audience:
Okay, thank you very much. My name is James Ndufuye from Nigeria. I have two quick questions; great presentations, anyway. First, this is a very nice, bold solution targeting underserved areas. What percentage of Nepal, of the country, would this cover, and how soon do you project it can be covered? Secondly, if you compare this to TV white space technology, which can also be applied in rural areas, what are the advantages and disadvantages? Thank you.

Dr. Haruo Okamura:
As I said, in Nepal we have been doing a project from Namche Bazaar, at the foot of Mount Everest, to the base camp, 42 kilometers. As you could see, the helicopter has already carried in this much of the cable drum. I don't know what percentage of the country this can cover, but the National Telecom Authority of Nepal has declared the use of this solution for the Mount Everest region and the Mount Annapurna trekking route, and we have already been working in the west part of Nepal for about 10 kilometers. So I cannot say what percentage it can cover, but this is an all-terrain solution: mountains, underwater, on the surface, almost everywhere, though I should not say everywhere, going to the top of Mount Everest is very difficult. As for fiber versus TV white space: the capacity difference is very large, so I don't have a strong view; maybe Sakano-san could speak about it. (Toshikazu Sakano:) I think TV white space spectrum can be used to extend the coverage area of LACS, not as a substitute for optical fiber. TV white space can cover a large area, but the bandwidth is low. So we can think of that kind of technology being included in our LACS solution. Thank you.

Audience:
Hello. Does it work? Yeah. Hello. My name is Niels Brock from DW Akademie. Great presentation. Two questions about LACS. How much open software and open hardware is in this device? I'm thinking of customization possibilities, so that local communities can get their hands on it and fit it to their own needs. And also, a question more about the region: if something breaks, what is the supply chain situation? Will this become waste very quickly, or do you have solutions for quickly replacing a broken part? Thank you.

Toshikazu Sakano:
Okay, thank you very much for the good questions. For the first one: as CP said, LACS can be used for edge computing, so if you install any software on it, you can use the functions that software provides locally. For example, in our feasibility study we installed e-learning management software inside the LACS and used it for the school. So you can use it in various ways and for various applications with our solution. And the second one?

Chandraprakash Sharma:
Yeah, I would like to add to this answer. As Jeff wonderfully described about the implementation in the Philippines, one important thing they did there was to actually train the people, so that if anything were to happen to this device, they could repair the box by themselves in a remote region, without any help from us. And it's built from off-the-shelf components that are readily available, not just in Japan or in one locality; they can be replaced with hardware available in the Philippines or in India. As for your question about open hardware: right now we don't have open hardware, but there is definitely a possibility of including capable open hardware, like a Raspberry Pi, or other open hardware that can sustain the kind of server we run in this. So yes, I hope that answers it.

Jeffery Llanto:
Maybe I can add something to that. LACS is a platform. Under the auspices of Dr. Sakano, it has evolved into different modules. It started with file repositories, calls, and those areas, and eventually use cases came in based on real scenarios in the community. For example, LACS is useless during a disaster, especially a typhoon, when none of the devices on the island have any power; where do people charge? Technology is defeated. So I asked Dr. Sakano to look for solutions; they talked with India, and LACS became a charging station for the devices on the islands. So again, LACS is a platform, and we learn a lot from it. And we have just met Dr. Okamura, who said point-to-point or wireless connections are expensive, so maybe we could look at other ways, say, low-cost fiber optics. We are learning a lot from LACS and through this project, and hopefully from Dr. Sakano's new project as well: FLOS, the Frontline Operations System.

Moderator:
Jeff, we're out of time now; that's all we can do for today. Thank you, everyone, for joining us. We also have a booth on the first floor, so you can always drop by and talk about this informally outside the session. Thank you, everyone, for joining in. Let's collaborate, let's network, and let's find solutions to bridge the digital divide together. Thank you so much. Have a good day.

| Speaker | Speech speed | Speech length | Speech time |
| --- | --- | --- | --- |
| Audience | 150 words per minute | 582 words | 233 secs |
| Chandraprakash Sharma | 166 words per minute | 1132 words | 409 secs |
| Dr. Haruo Okamura | 140 words per minute | 1608 words | 691 secs |
| Glyndell Monterde | 151 words per minute | 639 words | 254 secs |
| Jeffery Llanto | 123 words per minute | 1158 words | 567 secs |
| Moderator | 137 words per minute | 482 words | 211 secs |
| Toshikazu Sakano | 111 words per minute | 796 words | 432 secs |
| UNKNOWN | 104 words per minute | 635 words | 366 secs |

Safeguarding Processing of SOGI Data in Kenya | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Jeremy Ouma

The assessment offers an exhaustive exploration of diverse facets of Kenyan society, legislation, and practice with significant emphasis being placed on LGBT rights and data protection.

A primary cause of concern emerges from the deficient legal protections for LGBT individuals against discrimination grounded in their sexual orientation or gender identity. The current Kenyan legal framework is found lacking in offering ample defence to these marginalised groups, thus fostering an environment in which the revelation of personal demographic data could invite bias and negative repercussions. This scenario is aggravated by particular practices linked to LGBT identities being outlawed in the nation.

However, this situation isn't entirely devoid of optimism. A discernible societal shift is observed, with increasing momentum in Kenya to repeal or amend the penal code provisions criminalising acts associated with diverse sexual orientations and gender identities. Although these initiatives face hindrances, such as the refusal to register organisations, identifying as diverse is not itself outlawed. Activists highlight a long-running case between an organisation seeking registration and the regulator, the 'NGO board', as a paradigm in this advocacy.

Prominent worries regarding data protection and processing standards also attract attention in the analysis. The differential handling of sexual orientation and gender identity data under current legal structures is deemed problematic. In order to address these issues, organisations need to register with the Office of the Data Protection Commissioner. They are urged to prepare and update data protection protocols and increase awareness about privacy during data management. An internal capacity building within civil organisations is particularly underscored to foster greater awareness and engagement on data protection.

Furthermore, the analysis points out persisting challenges relating to digital platforms and their content management strategies. These include the potential amplification of harmful content and a lack of understanding of local context in content moderation processes. The scarcity of transparency within these systems further exacerbates the situation, yet progress is recognised through ongoing legal cases, such as the one involving Meta and its former content moderators. This case serves as a tangible pursuit to hold platforms accountable within the Kenyan jurisdiction. Efforts are underway to bridge the gap between local users and platforms, nudging platforms to better comprehend and respond to their user base.

Finally, the study extends its purview beyond Kenya, expressing negative sentiment around homosexuality laws in Uganda. The persistent implementation of these unfair laws, resulting in adverse prosecutions based on them, is recognised as a significant violation of human rights.

In summary, the analysis identifies a complex network of obstacles within Kenyan society, but simultaneously showcases several steps taken to address and surmount these challenges. It provides a detailed account of ongoing efforts and the dire need for progress towards a more inclusive society.

Angela Minayo

Angela Minayo emphatically discusses the significance of well-regulated data management in preventing human rights violations. She emphasises that while data affords immense opportunities, if poorly regulated, it can be manipulated to facilitate human rights abridgements. Therefore, robust data regulation legislation is essential to safeguard human rights.

Minayo affirms the necessity of a harmonised protection framework for sensitive data, encompassing gender and sexual orientation. The continuing debate about data protection, confidentiality, and personal autonomy prompts her to argue for more comprehensive legislation that provides protection not just for conventional data but also for sensitive information regarding an individual’s gender identity and sexual orientation.

Conversely, Minayo voices her reservations concerning the efficacy of Kenya’s existing Data Protection Act. Despite the law’s progressive human rights perspective, she regards it as deficient in its coverage of gender identity under sensitive data and its contradictory approach in handling health data and sexual orientation.

Minayo spotlights non-profit entities and highlights the need for these organisations to comply with data protection regulations. She proposes that while data protection often associates with corporate compliance, both for-profit and non-profit entities need to ensure adherence to the regulations. Significant mishandling of personal data, especially sexual orientation and gender identity data, can result in severe human rights implications.

She goes on to emphasise the complexity of data protection and espouses an increase in the resources geared towards effective data protection, notably for non-profit entities that perhaps lack the necessary resources for comprehensive data security. Minayo further calls upon data entities in various countries to register data controllers and processors to legitimise their roles and allow for efficient allocation of budgets and responsibilities for data protection.

Minayo highly commends the use of data processing templates from Article 19, designed to assist non-profits. She affirms these templates as serving as a checklist for various data protection procedures. She also underscores the importance of organisations soliciting consent for personal data usage and documenting said consent.

In the context of protecting sensitive data, she references the severe implications for marginalised groups, pinpointing the increase in online homophobia following a Supreme Court ruling in Kenya that authorised LGBTQ organisations to be formally registered. She asserts that existing as queer in Kenya is akin to a political act, endangering individuals with stigma, and even death.

Finally, the digitisation of sectors such as sex work and resultant data protection concerns that emerge due to Kenyan users’ data being handled outside the country, leads Minayo to recommend improved awareness of data protection laws, specifically for the evolving digital economy.

In conclusion, Minayo centres her discussion on the importance of contemplating data protection from a broad spectrum, ranging from the necessity for robust regulations to protect sensitive data, the urgency for more resources for non-profit entities, the relevance of data protection across all sectors, and the significance of stakeholder awareness.

Audience

The discourse primarily centres around pressing issues related to data sensitivity and the recognition of diverse gender identities within Kenya’s legal framework. One of the main criticisms in the conversation pertains to the relatively rigid categories of gender identities officially recognised in Kenya, which currently include Male, Female, and occasionally, Intersex. This limited classification, as suggested in the discussion, disregards the broad spectrum of gender identities, leading to negative sentiment surrounding this lack of inclusivity.

Additionally, the classification and handling of sensitive information in the country are scrutinised. Specific emphasis is on the differential treatment of data regarding sexual orientation and gender identity under the Data Protection Act. Whilst the data on sexual orientation is treated as sensitive personal data, data on gender identity is viewed as general data. This discrepancy raises concerns about the absence of a specific law, offering protection against discrimination based on sexual orientation or gender identity in Kenya.

The conversation further extends to the issue of data protection measures, particularly with respect to sex workers. A unique point highlighted by the audience is the necessity to safeguard data associated with this group, emphasising non-profits’ interaction with sex workers and the requirement to guarantee adequate protection measures.

Another point of interest stemming from the dialogue pertains to the influence of Uganda’s Anti-Homosexuality Act on Kenya’s data protection laws. Particularly, concerns are raised about whether the stringent regulation enacted by neighbouring Uganda might impact the LGBT community’s data protection in Kenya.

Platform accountability in Kenya also draws concern from the audience, with particular focus on the efficiency and procedure of incident reporting in cases of data breaches. The police’s involvement in such instances is queried, implying an underlying need for more robust incident response protocols.

A significant part of the conversation is dedicated to the management and confidentiality of health data. Clarification is sought on the mechanisms for sharing health data between facilities, with additional questions being raised about whether individual consent should be obtained each time data is shared. Participants inquire about the legal coordination between the Health Privacy Act and the Data Protection Act, seeking to understand which of the two pieces of legislation primarily governs patient privacy. These discussions shed light on the evident gaps and ambiguities in the country’s data protection and privacy laws, highlighting the public’s demand for a more transparent and protective legal system.

Session transcript

Jeremy Ouma:
I’m going to ask Angela Minayo to introduce herself. I will probably allow my colleague to introduce herself first, then we can get into it. Also, another colleague of mine is on the way. I think he’ll introduce himself as soon as he gets here. Thank you. Over to you.

Angela Minayo:
My name is Angela Minayo and I'm the Director of the Center for Human Rights. I'm interested in the topic because I believe that data unlocks a lot, but it can also be an area which, if not well regulated, can lead to further human rights violations. So I'm grateful for this panel, and I await questions from Jeremy.

Jeremy Ouma:
Thank you, Angela, and thank you to all of you for coming. We have done some research, a very brief brief, by Article 19 Eastern Africa; as I've said, I work with Article 19 Eastern Africa, and we work on issues of freedom of expression, association, and, of course, gender identity. Specifically for this session, we have this brief, produced as part of our Our Data, Our Voice project, supported by the GIZ Digital Transformation Center in Kenya. The paper provides an overview of the processing of sexual orientation and gender identity (SOGI) data specific to the Kenyan context, and of data protection in Kenya, particularly for LGBTQ people, who face heightened privacy and discrimination risks in comparison to cisgender and heterosexual populations in Kenya. What we hope to do with this paper is to increase awareness of data protection among different stakeholders in Kenya, be they regulators, data controllers, or data processors. Through this awareness, people get to understand their rights, for whoever is part of the community; that's one. Two, people also understand the obligations that data controllers and data processors have and adhere to these data protection laws, building the trust the community has in them.

So I'll briefly go into some of the findings we had before we get to some questions with my colleague here. One is the insufficient legal protections for LGBT people in the Kenyan context: it is not identifying as LGBT that is outlawed in the country, but the practice, the act, that is outlawed. This has negative impacts on the disclosure and collection of this kind of information, sexual orientation or gender identity data, for example at a hospital or a bank, and the legal framework doesn't provide protection from the resulting discrimination. There is blanket protection for everyone under the law, but no specific protection from discrimination on the basis of sexual orientation or gender identity. That's one; plus, the socio-cultural context continues to have a great impact on the treatment of sexual and gender minorities in the country. There is this line that keeps being thrown around by government and leaders, that "it's not our culture", so it's not a very good environment.

The second finding is the differentiated treatment of SOGI data under the legal framework currently in place. Sexual orientation is classified and processed as sensitive personal data, but gender identity is classified and processed as general personal data under Section 2 of the Data Protection Act, which covers most matters of data protection in the country. In effect, this means that data controllers and processors handling this kind of data must differentiate and accord higher levels of protection to sexual orientation data, despite gender identity also exposing data subjects to similar risks and consequences. That's finding number two. Number three is the restriction of SOGI data collection to legally recognized categories in both the public and private sectors.
In the country, if you go, for example, to a bank or a hospital and data needs to be collected, the categories are mainly male and female; sometimes they'll include intersex, sometimes they'll just put "other". This is largely attributable to the failure of the law to recognize other gender identities and sexual orientations. The paper was guided by contributions from key stakeholders: we did key informant interviews and held focus group discussions with key industry players specific to Kenya. I'll leave those as the key findings; we also have some other findings, as well as conclusions and recommendations.

So I'll go straight to some of the recommendations. I've divided them into two: recommendations for data controllers and processors, and recommendations for, let's call it, civil society. For data controllers and processors, the first is basically about compliance. All public and private organizations and individuals processing personal data are required to register with the regulator, the ODPC, the Office of the Data Protection Commissioner, as stipulated under the Data Protection Act. So we encourage anyone that is processing data, be it government, a hospital, or a business, to register as required before processing data. Recommendation number two is to implement technical and organizational measures for compliant processing of this kind of sensitive data, including conducting a data protection impact assessment prior to processing, and engaging these communities. Next, appoint a data protection officer to oversee compliance with the Data Protection Act and other relevant privacy and data protection laws. Then, for entities in the public and private sectors, prepare and update data protection policies and notices so that they stay current with the needs of the community. Finally, awareness: internal awareness, so that these entities have a privacy-aware culture and know what to do when processing this kind of data and how to handle it in the right way.

For the last group, civil society actors: one, build internal capacity and undertake training, first to understand the data protection frameworks and the impact processing has on the communities we work with, and second, with that knowledge, to engage public and private sector entities to create awareness of the impact this processing has and of the obligations under the laws. And, importantly, advocate for the Data Protection Commissioner to expressly recognize gender identity as a form of sensitive personal data, in light of the risk of significant harm that processing may cause to data subjects; it's important to have this captured and acknowledged by the Data Protection Commissioner, with frameworks to protect this kind of data.
And finally, advocate for better laws, removing the discriminatory ones, whether by repealing them or amending them to be in line with international standards. So those are the key findings and some of the recommendations we have in this brief. If anyone has any questions up to that point, I'll happily take them before we go to my colleague; then we can have a discussion about the broader experience in the country. Yes, please.
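
As an illustration of the record-keeping these recommendations point to, here is a minimal sketch of a data-inventory entry an organisation might keep per processing activity before collecting SOGI data. The field names are illustrative assumptions, not a form prescribed by the ODPC or the Data Protection Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProcessingActivity:
    purpose: str                 # why the data is collected
    categories: list[str]        # e.g. ["sexual orientation"]
    lawful_basis: str            # consent, legal obligation, ...
    is_sensitive: bool           # triggers the higher safeguards
    dpia_completed: date | None  # impact assessment done before processing
    retention_months: int        # data minimization: delete after this
    dpo_contact: str             # the appointed data protection officer

activity = ProcessingActivity(
    purpose="community health outreach survey",
    categories=["sexual orientation", "gender identity"],
    lawful_basis="documented consent",
    is_sensitive=True,           # treat both categories as sensitive
    dpia_completed=date(2023, 9, 1),
    retention_months=12,
    dpo_contact="dpo@example.org",
)
print(activity)
```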

Audience:
Why can't you use the mic? So, from your presentation, it seems there is a kind of embedded contradiction: on the one hand, diverse gender identities are not officially recognized, but at the same time there is that risk when, say, banks and medical facilities collect the data, which is the focus of data protection. In parallel, is there a movement or a drive towards having these identities formally recognized and instituted? Is something happening in Kenya?

Jeremy Ouma:
Okay, thank you. I think that's the only question so far. Yes, there is a drive to do that. Over the past couple of years there has been a push to repeal sections of the penal code, I think it's two sections, 162 and 163, that criminalize the act, not necessarily the person. In Kenya, identifying as LGBTQ is not criminalized; it is the act that is criminalized. But most times the law is abused: people are denied registration for their organizations, among other things. There has been some good precedent, though; there's been a long court case going on between an organization and the regulator, is it called the NGO board?

Angela Minayo:
So there was a petition to the Constitutional Court to declare section 162 of the penal code unconstitutional on the ground of discrimination. That petition was not successful, and section 162 is still operational in our country. Kenya's policy around it is to act like these people don't exist: whenever asked about it, the government says that it is not a priority for Kenya, that we're a third-world country with more pressing development issues to be concerned about. What has helped Kenya and the LGBTQ community is that our constitution is very progressive. The penal code is a relic of the colonial laws we inherited from the British system, but the constitution dates from 2010, very new in terms of constitutional law practice, very new in constitution-making. Our constitution enshrines the right to non-discrimination very strongly, and the Bill of Rights enumerates human rights. So when LGBTQ organizations were denied registration based on the penal code, they went to court, and the case went up to the highest court, the Supreme Court. The court ruled that while what they are doing might be a violation of the penal code under section 162, they have the constitutional rights to assembly and association, and therefore, when the registrar of NGOs refuses to register them, the registrar has contravened the constitution. So it is the robustness of Kenya's constitution, including the right to privacy under article 31, and now the Data Protection Act, that gives a very progressive human rights outlook; but we still have this elephant in the room, section 162, and you can see the contradictions. I hope that gives you an idea of what we are working with.

Jeremy Ouma:
And to also mention that there's a lot of pushback from government; for example, in the petition, the registrar tried to argue that you're not supposed to do this. They are not ready to have these conversations. So yeah, I hope that answers you. Thank you. Then I'll go straight to a couple of questions for my panelists here; the other panelist should be on the way. So we can start by looking at the legal framework for the processing of data, not necessarily just sexual orientation or gender identity data, but in general: what is the framework governing this processing of data?

Angela Minayo:
Yeah, so as I stated earlier, it starts with our constitution and the right to privacy. Then we have international human rights commitments, some of which you know: the International Covenant on Civil and Political Rights and the African Charter on Human and Peoples' Rights. These are the bases and the frameworks for the right to privacy. Then, in 2019, we operationalized the Data Protection Act. Like many other African countries' laws, it is a replication of the EU GDPR, and that comes with pros and cons. Some of the pros are that the GDPR provided a very good framework, with a complaints-handling mechanism and an independent office. I'll put "independent" in quotes, because you can say it's independent, but who does the appointing? So I won't say independent; I'll just say it has a body, because the independence is questionable. So we have the Data Protection Act, and it's a very elaborate framework. I'm not going to focus so much on the downsides, other than to say that, just like the EU legislation, we expect that when data is transferred from Kenya to other countries, there are enough safeguards to provide equal or similar protection for the data being handled in those third countries. I hope that answers it.

Maybe we can now go to sexual orientation and gender identity. I work at KICTANet as a gender digital rights programs officer, and last year we also had conversations around gender and data protection; that conversation cannot be had without talking about sexual orientation and gender identity data. Just a fun fact before I delve into the Kenyan ecosystem: there is a very progressive approach to data protection from the southern African region. I don't know if you knew this, but SADC, the Southern African Development Community, the regional bloc, came up with a model framework for privacy law for its member countries, and they put gender among the personal or sensitive data. So while Kenya just talks of sex, they talk about gender, and there's a difference: when you talk about sex, you're referring to biological sex, what is assigned at birth, male, female, or intersex; but when you talk about gender, you're talking about someone's expression, which might not align with their biological sex. That's very progressive. I always try to talk about the SADC model law because it took a different approach, even from the GDPR, and that is something we want to see more of: regional blocs and countries taking their own approach to data protection in a way that makes sense for them, but also in a progressive way, in a feminist way.

So in Kenya you will find that gender identity is not specifically provided for under sensitive data; it is treated as just personal data. That means it can be processed, one, when there's consent, and two, even without consent, when the data processor or data controller can prove that the data is necessary for performing certain tasks. They'll say you entered into a contract, and part of my obligation was to do A, B, C, D, which obviously means I have to process your data to do that. So there are aspects of personal data processing in which no consent is necessary. How is this a problem?
It is a problem because we can still see how gender data or gender identity identifiers can lead to human rights violations. It could be job applications, it could be loan applications; it could lead to further violations. Now, for sexual orientation, I think we have already set the scene for you to understand how sexual orientation is grappled with in our legal system: it is treated as a crime, and yet that same data is protected in the data protection framework, which sends contradictory signals. And it's not just sexual orientation data, if I may add. Health data has also been one of the things we've grappled with, because health data is dealt with in different frameworks. There's our Health Act, which empowers health practitioners to collect the data necessary to perform their work, and at the same time health data is sensitive data that cannot be processed unless certain safeguards are in place. This is something you will keep seeing in countries that pass a Data Protection Act but don't review or reform the laws that existed before: you end up with a very interesting set of laws, if I may say. So, when data is deemed sensitive, it means there are more safeguards for its protection. You'll find that if there's no consent, then the data controller or the data processor must prove the necessity of collecting or processing this data. Again, that falls under data minimization: we want you to collect only the data that is truly necessary for what you're trying to do. So what happens when you treat sexual orientation data and gender identity data separately? Let me give you an example. You're saying sexual orientation data is protected, right? But gender identity data is not sensitive data; it can be processed in other ways. Yet when we create links between datasets, we can tell that this is Angela or this is Jeremy. Jeremy's gender, male or female, is not protected, so we can tell: male. Jeremy is on a dating app that is for the queer community. So while Jeremy's record only says male, we can identify Jeremy's sexual orientation from the apps he's using. We need a harmonized protection framework that protects Jeremy both for his identity as male and for his orientation as queer. And this is of course just for example purposes; after this meeting, please don't harass Jeremy. But you get the point: what I like to say is that we need an equilibrium, a spectrum of protection that cuts across and doesn't stop at a certain point.
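To make the linkage risk concrete, here is a minimal sketch in Python of the kind of join described above. The datasets, names, and column labels are entirely hypothetical; the point is only that a "non-sensitive" attribute plus a second dataset can reveal a sensitive one.

```python
import pandas as pd

# Dataset A: gender identity held as ordinary, non-sensitive personal data
profiles = pd.DataFrame({
    "user_id": [101, 102, 103],
    "name": ["Angela", "Jeremy", "Asha"],
    "gender": ["female", "male", "female"],
})

# Dataset B: separately held app-usage records (hypothetical)
app_logs = pd.DataFrame({
    "user_id": [102, 103],
    "app": ["queer-dating-app", "news-app"],
})

# A simple join links the two: the app's audience lets us infer sexual
# orientation, even though dataset A never recorded it.
linked = profiles.merge(app_logs, on="user_id")
print(linked[linked["app"] == "queer-dating-app"])
#    user_id    name gender               app
# 0      102  Jeremy   male  queer-dating-app
```

This is why a protection regime that puts the two attributes under different rules can be defeated simply by combining datasets.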

Jeremy Ouma:
Thank you, Angela. You've preempted my next question, which was going to be about your experience with the practice of data processing, especially of sensitive data.

Angela Minayo:
Thank you. For this talk to be useful for the people in the room, I'd really like to talk about the processing of data in non-profit entities. For a long time we've been talking about data protection as a matter of company compliance, but the message that sends is that regulation and compliance are things that companies do. How do non-profits think of compliance? When they're sending financial reports to donors; compliance is a very foreign word to non-profit entities. Yet you'll find that non-profit organizations process a lot of sexual orientation and gender identity data which, if mishandled, has serious human rights ramifications. So we need to understand data protection as something that applies both to non-profit and to for-profit entities, the companies in this case. Even the conversations we have around data protection are all about big tech, and of course I understand why we do this, because those are the examples that make the most impact and the most sense to the people in the room. But we also need to start talking about the processing of data by non-profit entities. What will happen, for instance, if a non-profit like Article 19 operates in Kenya, and Kenya becomes a draconian state where the state can get access to your documents? Do these organizations have a plan? Do they know how to fight back when this data is requested from them? We keep saying Apple is such a good company because it will never comply with requests for information from governments. Do we ask the same questions of non-profit entities? So I think it's very important to have those conversations about data protection from a non-profit point of view as well. From practice, what is very worrying is this idea that data protection is a concern for companies and not for non-profits, yet the people who handle most of the sexual orientation and gender identity data, who do the research and collect data in these areas, are non-profit entities. Another worrying practice I've seen in my country, and I'm speaking from Kenya's perspective, and during the question and answer session I'd like to invite you to give us perspectives from your countries, is that there are so many myths and misconceptions around data protection and processing. I'll give you an example. Last month, let me just say last month because I've lost all sense of time, our Office of the Data Protection Commissioner issued penalty notices to three companies for breaching the Data Protection Act. One of the entities fined was a club, a place people go to have fun, a partying joint, and they were taking photos of revelers. In Kenya, for some reason, clubs have this obsession with taking photos of revelers having fun at their joint. I don't know why; I don't know if they feel they wouldn't make enough sales without it. It's a whole data minimization and necessity question: do you really need it? But they did, and they ended up being fined because the data subjects complained about them to the Data Protection Commissioner. I'm also made to understand that we should not take it for granted that our Commissioner can issue penalties; apparently, in some jurisdictions the investigative powers and the powers to issue penalties are curtailed. So I just wanted to add that as a side note.
And what the other clubs have understood from this penalty notice is to put emojis on the faces in the photos they take, and they did this immediately after the penalty notice. That's how unserious a country Kenya is; we use humor to get through, it's a very tough place to live in. Anyway, the point is they think putting emojis on the photos is complying with data protection. It's that bad. There's a very pedestrian approach to understanding data protection, because if there are applications that can remove the emojis and de-anonymize the photos, then they have not complied with the Data Protection Act if there is no consent and no necessity. I'll end it at that, because it's a light note and it tells you the kind of problem we're dealing with.
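As a technical aside on why an emoji overlay is not anonymization: redaction only works if the underlying pixels are irreversibly destroyed in the saved file. A sticker that an editing app stores as a removable layer, or a light blur, can be taken off or reversed. Below is a minimal sketch using the Pillow library; the file names and the face region are hypothetical.

```python
from PIL import Image, ImageDraw

def redact_region(src: str, dst: str, box: tuple[int, int, int, int]) -> None:
    """Overwrite a rectangular region with solid black and re-encode.

    Unlike an emoji sticker layered on top in an editing app, the
    original pixel data inside `box` no longer exists in the output.
    """
    img = Image.open(src).convert("RGB")
    ImageDraw.Draw(img).rectangle(box, fill=(0, 0, 0))  # pixels destroyed
    img.save(dst)  # Pillow omits EXIF metadata unless passed explicitly

# Hypothetical usage: black out a face found at (120, 80)-(220, 200)
redact_region("revelers.jpg", "revelers_redacted.jpg", (120, 80, 220, 200))
```

Even proper pixel redaction only addresses the image itself; surrounding context such as location, companions, and timestamps can still identify someone, which is the deeper point about consent and necessity.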

Jeremy Ouma:
Thank you. If you have any questions, I will take them at the very end. I'll just throw a few more questions at you. Building on what you've just highlighted, are there worrying impacts that you have seen from this kind of processing of sensitive data, or of any data in general?

Angela Minayo:
Actually, I'll give an example from an activity we were doing at KICTANet. Last year we ran something known as a Gender Internet Governance Exchange under the Our Voices, Our Futures project by APC, and we had people from the queer community among the participants. Before, we used to work only with women, and we would take photos and post them as part of the reporting but also for the social media campaigns. They told us that some of them are closeted, and putting their photos online in an activity that is clearly for queer people would put them at risk. I get comments that data protection should be about, how can I say this, serious things: penalties of 50 million in the EU, Facebook and Meta. But that minimizes the harm such data breaches can have on ordinary people who are not celebrities, who are not in the EU, and whose complaints cannot attract the penalties that exist in the European Union. So understanding that, in the context of stigma and even the deaths of queer people we've seen in our country, this is a serious risk we need to keep in mind. I'll give another example, of the homophobia we saw online once the Supreme Court made a ruling allowing LGBTQ organizations to be formally registered. We had a lot of disinformation online, and what people understood that ruling to mean was that Kenya had formalized LGBTQ relationships, which was not even the case. We wish that were the position; it is not. And the messaging online was "we will kill them, we are never going to accept this", and I kid you not, it was not just from people online, it came even from the leadership at the national level. So when there's this understanding that these are undesired people among us, it justifies, motivates, and incites hatred against that group. Existing as queer is, on its own, a political act in Kenya and in certain other countries. So let me just end it at that.

Jeremy Ouma:
Okay, thank you. I think the final question should be: do you have any recommendations or insights on best practices for the collection and subsequent processing of this kind of data?

Angela Minayo:
Yes. First of all, I would like Article 19 to actually publish the resource; I just want to call you out here. You need to publish it, because they have annexed amazing templates that people can use when processing data. Data protection is a very complex field. I always have to remind people that we can't cover all the bases in one talk or in a 45-minute panel session. But there's a need for more resources, and not just for for-profit entities; those have enough money to hire data protection officers (DPOs) and the people to help them comply. What happens to non-profits, whose resources are quite minimal? What Article 19 has done is come up with templates for non-profits, like a checklist, which is what we need, because this is such a complex process. It tells you: you have this data, have you obtained consent? If you don't have consent, do you have another basis for it? Have you documented it? Documentation is so important in data protection, because you need to preempt what can happen in future. Will you ever need to provide proof of consent, for instance? Those are things we might not be thinking about, especially operating as non-profits, but that is the age of data protection we're in: you need to be documenting consent, and documenting the contracts you have with data controllers and data processors. Let me just explain this. Sometimes you'll find there are two entities involved in the processing of data. There's a data controller, the entity that collects the data and also directs how it is going to be processed. And then you can have another entity as the data processor, the one that is going to store the data, anonymize the data, analyze the data, and so on. They have different functions depending on whether they're a data controller or a data processor. Sometimes we use these words among people in the data protection field without explaining what the ramifications are. To put it more simply, a data processor is like an agent or an employee of the data controller. In the end, responsibility for data protection issues, breaches, consent and so on, sits with the data controller, not the data processor. Having them registered with the data protection authorities in their countries is very important, because it also gives them the justification for having budgets for compliance. So let's have this resource online, please, because I think for non-profits it's very important.
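As an illustration of the documentation practice recommended above, here is a minimal sketch of a consent record a small non-profit might keep. The field names and example values are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One documented consent event, kept as evidence for the future."""
    data_subject_ref: str   # pseudonymous reference, not the person's name
    purpose: str            # what the data will be used for
    lawful_basis: str       # e.g. "consent" or "contract"
    controller: str         # entity deciding why and how data is processed
    processor: str | None   # entity processing on the controller's behalf
    obtained_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical usage: a non-profit logging consent for a research interview
record = ConsentRecord(
    data_subject_ref="participant-042",
    purpose="gender and digital rights research interview",
    lawful_basis="consent",
    controller="Example Non-Profit",
    processor="Example Transcription Services Ltd",
)
```

Keeping records like this, per data subject and per purpose, is what makes it possible to answer "can you prove consent?" later, and it maps onto the controller/processor distinction just described.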

Jeremy Ouma:
Okay, thank you. And just a disclaimer: the resource will be published by the end of October or sometime in November, when it will be fully ready. It's ready; it's just that we haven't yet published it. There's a whole process to go through, but it will be published. The main aim of this resource is to create awareness about data protection. Today we looked at some of the challenges and the impacts that this processing has on specific groups, but the paper also looks at creating awareness about data protection and offers recommendations on best practices for data controllers, data processors, and civil society actors in general. So I might want to leave it at that, but I'll take questions if there are any. We can take them all at the same time, and then we can end after that. Over to the floor.

Audience:
Wait, where's the microphone? Is this live? Is it being filmed? OK, it's for transcription and it's on YouTube, so I'll ask my question now. Hi, I have three questions, I hope that's OK; I have many questions. My first question has to do with how data from sex workers is being stored, used, and protected, because we know that data from sex workers tends to be more sensitive, and non-profits also work with sex workers, perhaps in Kenya as well. So I would like to know if there's a difference, or if you have any particular remarks on the privacy of sex workers' data. My second question is how the Anti-Homosexuality Act in Uganda has affected Kenya and Kenya's data protection laws. And my third question is: what is it like for you, as civil society, to work with platforms on platform accountability? We know we have the data protection laws, but how accountable are platforms in Kenya when you register a report or an incident? How does it work, also together with the police and the judiciary? So those are my three questions. Thank you.

Jeremy Ouma:
Any other questions? Yes, please.

Audience:
Hi. This is more of a clarification, because you gave the example of the health data sharing scheme. My question is: is there a Health Privacy Act? And in that case, when healthcare providers are getting the data, what is the sharing mechanism? Say one hospital takes you on as a patient, and there is some kind of electronic health record: are they sharing and uploading it to a repository that is accessible across facilities, so every facility has access to that data? Is that how it works, or do they need to get consent from the patient every time? That's one clarification. And second, if there is a Health Privacy Act and there is also a Data Protection Act, how does coordination between the two work, and which regime does the patient's privacy fall under? Are patients covered under the Data Protection Act or the Health Privacy Act? Thank you.

Jeremy Ouma:
Thank you. Is that the last question? OK. Do you want to take any of the questions? OK. Then take the two questions.

Angela Minayo:
I'll start with Paisa's question, and this is a topic I really like, so I hope I don't get too passionate and talk too much. It's very interesting that you raised it: there is currently a bill tabled in parliament called the eHealth Act. The eHealth Act talks about telemedicine, but it also talks about the protection of health data, which is very interesting, and I think people need to stop doing this; they should just have called it a health privacy act, because that's what it does. It provides a framework where, first, collection will be consent-based. Two, there will be sharing of data across health facilities. Three, there will be health identifiers: they are going to assign unique numbers to both patients and health facilities. And four, they want the data to be portable, to give control of the data to the patient, so the patient will have all their records in a portable format. We don't yet know what this portable format will be, but you know, data portability. It's still being debated, but that's what they have in mind. On how it is going to operate together with the Data Protection Act: most of the time, the caveats and exceptions say "if prescribed by any other law". That is normally how we interlink laws: if a provision talks about prescription by another law, we go to that other relevant law. But we can talk about this afterwards. On the question about sex workers: again, sex work is penalized in our country, and of course we understand that the situation differs from place to place. But we also understand that sex work is becoming digitized. There's OnlyFans and so many other, what are they normally called, webcam-based apps, so sex workers are still part of the digital economy. That also means this data is sometimes being processed outside Kenya. Again, the level of awareness of data protection accountability is low even within Kenya, so how bad can it be for a user based in Kenya whose data is being processed outside Kenya, for example in the EU? Those are questions and conversations we are yet to grapple with in Kenya. But I'm glad that KICTANet is part of the OVOF project, and this could be part of the research we conduct to understand how it is being dealt with. So I'll just give you that context for those two things, and I'll let Jeremy take the question on the homosexuality laws in Uganda and how they affect Kenya, and on platform accountability.

Jeremy Ouma:
Okay, thank you. I'll start with platform accountability. First of all, there's an ongoing case at the moment between Meta and some of its, let's call them, former content moderators, about matters of accountability. There was, let's call it, good precedent: platforms can now be held accountable for their actions within the Kenyan jurisdiction. That case is still developing, but we see it as good progress. There are also one or two things we've tried to do from our side. First, there's a coalition we've tried to bring together specifically around content moderation. We did some work on the current practices of content moderation in a couple of countries, with a specific focus on Kenya, so I'll talk specifically about Kenya: basically understanding the experiences and challenges of Kenyans around content moderation, takedowns, and all that. Some of the things we found are that platforms are potentially amplifying harmful content; that there is a lack of understanding of local context, which is why we are pushing for some decentralization of content moderation, so that at some point we can hold platforms accountable for what happens on their services; and that there is insufficient transparency in content moderation. Finally, we are trying to bridge the gap between local stakeholders and users and the platforms, to get some kind of conversation going on how we can make the platforms better. So, in the interest of time, there was a second question, on Uganda. For the case of Uganda, it's not somewhere we want to be. We've recently been hearing of people being prosecuted based on this law, and there are some very bad cases. In Kenya, the situation is similar but not as bad. But in relation to how it has affected Kenya, there's been some…

Angela Minayo:
There's some potential legislation. We have the Culture Bill, which started out as a family values protection bill. Just to wrap up: these are funded by Eurocentric, far-right evangelical radicals, and it's really sad. It's not African; it's actually Western ideals being imposed on Africans. We'll end at that. Thank you so much for attending our session. Thank you.

Angela Minayo

Speech speed: 168 words per minute
Speech length: 3903 words
Speech time: 1395 secs

Audience

Speech speed: 153 words per minute
Speech length: 607 words
Speech time: 238 secs

Jeremy Ouma

Speech speed: 145 words per minute
Speech length: 2503 words
Speech time: 1034 secs

Digital sovereignty in Brazil: for what and for whom? | IGF 2023 Launch / Award Event #187

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The concept of digital sovereignty is being examined from various angles. Brazil, known for its investment in digital sovereignty, has a long tradition of supporting it through the production of technological equipment and open-source code. This highlights their commitment to maintaining control in the digital realm. However, there are concerns regarding the practicality of achieving complete independence in technology, as no country can truly be independent in terms of technology, food, or energy.

Another point of discussion is the need for clarification on how the current project will relate to the research conducted by FGV-CTS, a prominent research institution that has been studying digital sovereignty for a considerable time. This indicates the desire to build upon existing knowledge and ensure consistency in efforts related to digital sovereignty.

The legitimacy of internet governance in a multi-stakeholder environment is considered a complex issue. This is because digital sovereignty and the involvement of various stakeholders in governing the internet raise questions about who should have the authority and power to make decisions and set policies. It requires careful consideration to strike a balance between different interests and ensure fair representation in decision-making processes.

It is important to note that the term sovereignty inherently denotes exclusivity. When people talk about digital sovereignty, they often focus on their own or their community’s interests, without considering the broader implications. This highlights the challenges of defining and implementing digital sovereignty in a globalized and interconnected world.

On a positive note, the removal of ICANN from the sovereign system is seen as advantageous for certain aspects of internet coordination. By separating it from the influence of individual nations, ICANN can operate in a more impartial manner, contributing to more effective and neutral coordination of the internet.

However, there are differing opinions on the feasibility of digital sovereignty. Some argue that it is a fantasy, as sovereignty in a political and legal sense implies exclusivity. They believe that true sovereignty cannot be achieved in the digital realm.

Misunderstood or poorly implemented digital sovereignty may have significant consequences for the fundamental characteristics of the internet. Legislative or regulatory measures imposed under the guise of digital sovereignty could jeopardize the open and decentralized nature of the internet, hindering innovation and limiting access to information.

In contrast, others view sovereignty as a means of reasserting control and empowering individuals in the digital sphere. It is seen as a process term that allows people to assert control over what occurs in the digital realm, enabling them to shape the digital landscape according to their needs and interests.

In summary, discussions around digital sovereignty are multifaceted. While Brazil’s investment in digital sovereignty demonstrates a commitment to control and independence, challenges regarding practicality persist. The relationship between current projects and existing research needs clarification, and the legitimacy of internet governance in a multi-stakeholder environment is a complex matter. The term sovereignty itself carries connotations of exclusivity, and differing perspectives exist regarding the feasibility and implications of digital sovereignty. Ultimately, the aim is to achieve a balanced approach that preserves the fundamental characteristics of the internet while enabling individuals to have control and influence in the digital world.

Flavio Wagner

Brazil, as one of the world's 10 largest economies, possesses a robust industry across a variety of sectors, including the digital sphere. This positions Brazil as a significant player in the global economy. Brazil has also implemented laws such as the Marco Civil and a privacy law similar to the General Data Protection Regulation (GDPR) in the European Union. These regulations highlight Brazil's commitment to safeguarding privacy and ensuring justice in the online realm.

However, there are growing concerns about internet fragmentation and digital sovereignty within Brazilian legislative bills and public documents. These concerns indicate a potential risk of unwanted internet fragmentation. Discussions on platform regulation and cybersecurity proposals often emphasise the importance of digital sovereignty as a motivation for proposed bills. The term “digital sovereignty” is increasingly mentioned in Brazilian legislative bills and public documents.

To address these concerns, a project in collaboration with CEPI aims to facilitate academic and public policy debates about sovereignty, connecting the in-depth analysis of the Brazilian context to the regional and global levels. This partnership with CEPI seeks to expand the understanding and knowledge of the academic and public policy discourse on sovereignty issues.

Moreover, the project collaborates with FGV São Paulo, a renowned academic institution with numerous groups working in different cities. This collaboration aims to enrich the academic and public policy debates on sovereignty matters.

Despite diverse areas of focus, the project maintains an ongoing dialogue with various groups, including Luca Belli's team. This continuous engagement enables the exchange of insights and perspectives, contributing to a comprehensive understanding of digital sovereignty within the Brazilian context.

By analysing public documents, bills, and policies, the project aims to comprehend the impact of digital sovereignty on the evolution of the Internet in Brazil. This comprehensive examination is crucial to evaluate potential implications of legislations or regulations tied to digital sovereignty, as there is a concern that they may threaten fundamental characteristics of the Internet.

As a precautionary measure, it is crucial to exercise awareness and discernment in comprehending how sovereignty is utilised and interpreted in relation to the digital economy. The project follows the approach advocated by the Internet Society to ensure a thoughtful and well-informed discussion on digital sovereignty.

Additionally, the project aims to educate and inform individuals about the diverse perspectives on digital sovereignty in Brazil, as well as the social, legal, and technological implications associated with different definitions of sovereignty. This comprehensive understanding will enhance the overall discourse on sovereignty, not only in Brazil but also in other parts of Latin America and globally.

Flavio Wagner recognises the importance of thoughtful evaluation when proposing public policies, regulations, or legislations to address sovereignty issues. This careful consideration is crucial in understanding the potential consequences and impacts they may have.

In conclusion, Brazil's significant role in the global economy, along with its robust industry and commitment to internet regulation, has sparked discussions on digital sovereignty and its possible implications for the Internet's evolution in the country. The project, in collaboration with CEPI and FGV São Paulo, aims to foster a comprehensive academic and public policy debate that encompasses various perspectives and potential consequences. By analysing the Brazilian context and connecting it to regional and global levels, the project strives to contribute to a more informed and well-rounded discourse on digital sovereignty.

Ana Paula Camelo

The term "digital sovereignty" is frequently mentioned in Brazilian legislative bills and other public documents. However, it lacks a shared definition, creating ambiguity and confusion about its meaning and implications. To address this issue, research has been conducted in partnership with the Brazilian chapter of the Internet Society and CEPI, the Center for Education and Research in Innovation at FGV São Paulo.

The research aims to map and discuss the various narratives and stakeholders involved in Brazilian debates surrounding digital sovereignty. This comprehensive analysis, based on desk research, study group discussions, and expert interviews, will contribute to a deeper understanding of the topic.

In addition, an online training course on the issues of digital sovereignty is set to be launched. The course, accessible through the websites of the Brazilian Internet Society and CEPI, will feature recorded lectures, suggested bibliography, and interactive discussion activities. This initiative aims to enhance knowledge and awareness about digital sovereignty among a wide range of individuals.

Digital sovereignty is considered essential for several reasons. It plays a crucial role in self-determination, the regulation of state power, national security, and technological and scientific development. It also intersects with areas such as artificial intelligence, misinformation, and fake news, and serves as a means of protecting the rights of citizens, including those belonging to consumer and minority groups.

Ana Paula Camelo highlights the importance of understanding the Brazilian context in the global discourse on sovereignty. By doing so, it becomes possible to contribute to a more comprehensive and inclusive global narrative. Camelo encourages individuals to engage with the ongoing research, offering feedback and suggestions to facilitate collaboration.

In summary, the term “digital sovereignty” is widely used in Brazilian legislative bills and public documents, but lacks a shared definition. Through research conducted in partnership with the Brazilian Internet Society and CEPI, a comprehensive understanding of the topic is being developed. An upcoming online training course will further promote knowledge and understanding of digital sovereignty. Digital sovereignty is crucial for self-determination, state power regulation, national security, and technological and scientific development. Understanding the Brazilian context is emphasized to contribute to a broader, global narrative on sovereignty.

Raquel Gatto

The concept of digital sovereignty and its connection to internet fragmentation is a major concern for the Internet Society. They have defined fundamental principles and values for the internet and developed a toolkit to assess the impact on these principles. This approach involves continuously evaluating the situation in different regions and countries. They have created impact briefs that provide specific case studies, such as a proposed bill for a law regarding content moderation and fake news in Brazil. This highlights the significance of digital sovereignty in Brazil, where it is a major issue that has led to increased research.

Digital sovereignty encompasses political, technological, and economic aspects, and it is intricately connected to the issue of internet fragmentation. The Internet Society emphasises the importance of understanding different interpretations of sovereignty in the digital sphere. They recognise that capturing these interpretations is essential to safeguarding the fundamental aspects of the internet and ensuring that digital sovereignty does not compromise its core principles.

In their work, the Internet Society also focuses on the broader implications of digital sovereignty, including technological, political, and legal aspects. They highlight the need to consider the potential risks associated with claims of digital sovereignty and the impact they can have on industry, innovation, and infrastructure. By understanding these implications, they aim to contribute to peace, justice, and strong institutions in the digital realm.

Furthermore, researchers like Luca Belli have joined the conversation, bringing their expertise in cybersecurity and a nuanced understanding of digital sovereignty as part of national security protection. This highlights the significance of digital sovereignty in safeguarding nations from potential cyber threats.

Overall, there is a neutral stance towards understanding the implications of digital sovereignty. The Internet Society and other researchers place great importance on gaining a comprehensive understanding of its various dimensions. By doing so, they can contribute to the preservation of a free, open, and secure internet that upholds the fundamental principles and values defined by the Internet Society.

Session transcript

Raquel Gatto:
I'm going to introduce myself: my name is Raquel Gatto, I'm the vice president of the Internet Society Brazil chapter (ISOC Brazil), and I have the pleasure of having the president of ISOC Brazil, Flavio Wagner, on site. Online, I have my colleague who is also going to present this session, Ana Paula Camelo from CEPI FGV. Ana, welcome. And also online, I have the pleasure of co-moderating this session with Pedro Lana, our director at ISOC Brazil. I also want to acknowledge Lohri Schippers, who is online and is our rapporteur, as well as other members of ISOC Brazil who are on site. Thank you very much to everyone joining. So, without further ado, in my role as moderator I'm just going to give a little bit of context on why this is one of the topics the Internet Society is tackling, within the flow of the global vision that the Internet Society is putting forward. Then I will have Flavio present the Brazilian situation, the Brazilian scenario, and how we are approaching the research; then Ana is going to give us the detailed version of this project; and then I'm opening up for questions. We have 45 minutes to go through all of this. Let me start with the question that was put on the table by the Internet Society, the international organization: what is the biggest threat to the Internet? It is such a simple question, in a way, but with very complex answers, and it's interesting that one big concern arose from the answers that were collected: the splinternet. I'm not sure everyone is familiar with that term, but the splinternet is what you get when you no longer have the network of networks. The internet is made of smaller networks whose voluntary adoption of a common protocol, TCP/IP, makes them internetwork as one open, globally connected, secure, and trustworthy internet. When it comes to threats that would divide it, that would fragment the internet, there are multiple ways it can take place: from a political view, say jurisdiction issues; from digital sovereignty issues; from national security issues; from technical and infrastructure challenges. But in all of those there is a prior question, which is what makes the internet the internet, and what we are going to protect from being divided, from being splintered. So the Internet Society started precisely by defining those fundamental characteristics, the principles, the main values that make the internet what it is, and it published a policy brief back in 2019, then followed up with an impact assessment toolkit. So it's not only about describing what it takes to keep an open, globally connected, secure, and trustworthy internet; it is also about how you can assess what is going on in your country, in your region, and so on. And now the chapters and the global community are taking this up and creating impact briefs, which are basically documents taking one case study each. For example, in Brazil we had one case study about a proposed bill regarding content moderation and fake news. That's one example of the impact briefs being populated under this main project. So, taking this lead: here's the biggest threat, here's what we want to protect. Then, what are the main
overarching themes we are seeing in the public discourse? First of all, you have internet fragmentation. It is in ISOC's DNA to take the technical view of internet fragmentation: it's when the internet is no longer using a common protocol, for example, or when those smaller networks are not connected to the whole internet, and so on. But it needs to be recognized that other forms are being discussed, including here at the IGF: the policy network on internet fragmentation is also taking up other concepts of fragmentation, which cover the user experience and the fragmentation of internet governance. We are not going to do a deep dive on those yet, but just to acknowledge that there is an overlap between internet fragmentation and digital sovereignty, where digital sovereignty is one of the conditions, one of the situations in which internet fragmentation takes place or risks taking place. And then I come to the main topic of today's session, which is digital sovereignty and the different definitions it can take. Digital sovereignty is a political matter when we take up the nation-state concept. It can be a technological matter when we are talking about appropriation, for example developing countries becoming producers rather than just receivers. It is also an economic issue, and it is also a power struggle, in historical terms, over new shapes of digital colonialism. This is just an overview of how the discussions have evolved: assessing the risks to the internet, understanding what internet we want to protect, and the shapes these discussions are taking in the internet ecosystem. And now I'm going to give the floor to Flavio, who is going to tell us how this is playing out in the Brazilian scenario. Thank you very much, Flavio.

Flavio Wagner:
Thank you, Raquel. So, hi everybody, nice to have you with us here this morning in Japan. Brazil is a very large country, one of the 10 largest economies in the world, with a strong industry in various sectors, and in the digital realm Brazil also has a lot to show. Because of this context, Brazil has over time proposed and implemented various local regulations regarding the internet. For instance, the so-called Marco Civil, the Internet Bill of Rights, approved in 2014, set out a whole set of rights; it's a principles-based law. Later, in 2019, we approved our privacy law, very similar to the GDPR in the European Union. These laws are very compatible with international standards regarding rights and duties. There are many other discussions going on in Brazil. There are regulatory proposals being discussed in the National Congress, such as the Artificial Intelligence Bill and the Fake News Bill on content moderation. There are discussions in the country about a local law on cybersecurity, and about various dimensions of platform regulation, not only content moderation and fighting disinformation but also economic issues. And there are open discussions, not yet in the form of bills in Congress, about data sovereignty. This was a large discussion when we approved the Marco Civil many years ago, for instance on data localization, so these discussions come back again and again in the country. And the term digital sovereignty is increasingly cited in Brazilian legislative bills and public documents. If we take, for instance, the current discussions on platform regulation and cybersecurity proposals, they explicitly include sovereignty as a strong motivation for the proposed bills. But, of course, this is not only a problem in Brazil: overall, there is no clear or shared definition of what sovereignty means. There are many flavors of sovereignty, for instance the economic issue, as Raquel presented before, the technological question, data sovereignty, and so on. So, what was our proposal then? Next slide, please. The Brazilian chapter of the Internet Society partnered with CEPI, a research center at FGV, a very prestigious academic institution in Brazil, to develop a project which is partly funded by the Internet Society Foundation. It started about one year ago and is ongoing. The main project objective is to qualify the academic and public policy debate on sovereignty and, starting with an analysis of the Brazilian context, explore the sociotechnical dimensions of this debate and its technological and legal challenges. Because there are so many flavors of sovereignty, and sovereignty is invoked for various proposals in the country and elsewhere, we try to identify notions of sovereignty built from the narratives of various stakeholders and sectors, taking into account the legal, social, economic, and political implications, and trying to connect this analysis of the Brazilian context not only to the local level but also to the regional and global levels. As concrete goals, the project aims to map and discuss, first, whether and in which sectors the discussion on digital sovereignty has emerged as a trend; second, which narratives have guided this debate in the attempt to secure public support, and based on which justifications.
So, why is sovereignty being used as a motivation to secure public support for various proposals in Brazil? Third, how does the relationship between digital sovereignty and the internet play out in the Brazilian debate? And the final goal is to understand how the creation or change of policies and legislative instruments linked to these narratives from the various sectors implies local and global challenges in technical, political, and social terms. As Raquel said, one of the global challenges we have is how to avoid fragmentation, and sometimes regulatory or legislative proposals motivated by a fair claim of digital sovereignty may have unintended consequences for the fundamental characteristics of the internet. We are trying to explore those relationships. So now I hand over to Ana, who is online and will continue this presentation.

Ana Paula Camelo:
Thank you, Flavio. Thank you, Raquel, for introducing the broader and the Brazilian context that creates the great opportunity for our research. I wish I could be with you in Kyoto, but I'm glad that we have this online participation that allows me to be with you from São Paulo. So hello to everybody attending, and thank you for your time as well. As mentioned before, the term digital sovereignty is increasingly cited in Brazilian legislative bills and other public documents, but without a clear, shared definition, and this is something we want to work on and deepen our discussions around. It creates a great and important opportunity for us as academic researchers to identify and relate these understandings, as Flavio mentioned just before. Most cases refer to digital sovereignty in relation to the internet, and we are also open and interested in connecting our local impacts and discussions to the global challenges, so being here is a great opportunity for our project. Could you go to the next slide, Raquel? Thank you. To achieve the goals and objectives mentioned, the research is based on three main sources of data and information. First, I highlight the desk research, to collect public documents and other types of publications from different sectors and to map the narratives and stakeholders involved in Brazilian debates. This is the base of our research. The database built from all the documents collected has been studied through systematic analysis: we want to identify which narratives are at play, who has been part of this discussion, which instruments are considered, and for what reasons. At the end of the project, we will share an impact brief and other documents presenting these relations and results. Alongside this effort, we have a study group on digital sovereignty and internet governance topics that meets monthly, and we have conducted several interviews with experts and researchers to advance some of these debates. I must say some of our colleagues are there with you in Kyoto and others are attending this panel online, so they are also welcome to join the discussion later. That summarizes our main methodological approach. We are now almost at the one-year project milestone, as Flavio mentioned; our conversations, and even the study group, started before the project, and now we are moving to an important second phase. You can go to the next slide, please, Raquel. Here you can see our project timeline for reference. The preliminary results I will share on the next slide are based mainly on this great effort of the first year: the document mapping, the interviews, and the conversations and discussions with researchers and experts. We are also very excited that soon we will launch an open and free online training course exploring digital sovereignty issues, drawing on topics we discussed and collected during the research. This is the first course of its kind that CEPI and ISOC Brazil have run together. It works through public calls for applicants, and participants are selected with gender, race, sector, professional, and regional diversity in mind: we try to bring a very diverse public into the initiative with us.
We are also working hard to have public officials and journalists with us, so we can expand the debate and help them reflect on the theme in their daily activities. The course will be based on recorded lectures, a suggested bibliography, and discussion activities, and at the end all the content, videos, and materials will be shared on the ISOC Brazil and CEPI web pages, so it will also be open content for everybody interested in this theme. The project is also focused on community outreach: we will have webinars, public events, and the impact brief I mentioned, which will be launched at the end of the project. If you follow us, you can keep up with news and updates about the project. But I cannot end my participation here without talking about the main results we have so far in the research. The desk research and literature review surface the main challenges regarding the theme, chiefly the compatibility between the traditional sovereignty of states and the open, borderless nature of the internet, as Raquel mentioned. But many of the concepts and impacts discussed from the global North do not necessarily apply to the reality of the global South, so this is a very important space for us to discuss our perspectives and realities and to connect these agendas. Our mapping effort is an ongoing task, given the theme's current relevance in the country; by now we have more than 245 documents gathered and analyzed under our methodological approach. This database contains bills, laws, reports from different instances, and also news and media articles that help us understand how digital sovereignty has become an important issue in the Brazilian context. Early in the mapping we found that using "digital sovereignty" as a keyword did not give us meaningful results, so we broadened our strategy and looked for related themes, sometimes without the explicit use of the term, so we could reach more publications connected to the agenda. From this, multiple and diverse understandings emerged, which we grouped into some main contexts worth sharing with you: relations between digital sovereignty and self-determination, data self-determination, the state's power to regulate, jurisdiction, technological and scientific development, national security, and open-source software, among others. These are just some of the key themes related to the discussion, but they are the main ones. Regarding the 15 semi-structured interviews we carried out, they confirmed some of the results we had collected and analyzed from the desk research. The main one is that only one interviewee indicated he could identify a consensus on the definition of digital sovereignty in Brazil; he associated it with the capacity of government and society to steer the development of the country and to use digital technology to collect data. He was the only one: the other 14 interviewees shared different perceptions, and all of them were very clear that we still don't have a consensus, a common, shared definition or understanding of it.
But many of them associated it with artificial intelligence and with the misinformation and fake news agenda, as the context Flavio shared at the beginning of his presentation shows. The interviews also gave us very different perspectives through political, legal, and technical lenses, as Raquel mentioned. And it's interesting that interviewees described digital sovereignty as an instrument to guarantee rights to citizens in different areas, such as consumer and minority groups' rights, while at the same time being very afraid of power imbalances and of impacts beyond Brazil's borders. So this is something I would like to highlight. But I'm mindful of my time, so I'll stop here, reinforcing my invitation: if you are interested, follow the CEPI and ISOC Brazil websites and social networks so you can stay updated on our next steps. Thank you all for your attention. I'll be happy to answer any questions or comments later.

Raquel Gatto:
Thank you very much, Ana. You were missed here in Japan, but next time, hopefully, we will all be together. So, with that, we finish the presentations. Just to recap: the idea of this session is to share the ongoing project between ISOC Brazil and CEPI FGV, in which we are looking into digital sovereignty. The documentation and interviews being collected will feed a course, as well as materials and documents we are shaping to understand all the nuances of how digital sovereignty is understood in the country. It is also grounded in the Internet Society's global work on internet fragmentation and on the understanding of digital sovereignty worldwide. Our intent was not only to share what we are doing, but also to collect your views and to hear about any other work underway that could be useful for us to consider in this project. So I'm now opening up the microphone. Thank you, Mark is going first. There are two microphones available; if anyone wants to ask a question, please go ahead.

Audience:
Hello, Mark Derisca speaking, I'm an internet governance consultant. Thank you for sharing the project; I have been following it, and it's actually very interesting and deep. One question that I do have: Brazil has a long tradition, prior to the bills we are discussing, of investment in digital sovereignty, in the production of technological equipment, in the development of open-source code, and in a series of actions that predate the current discussion. So, in a way, the country was already ahead of the curve, like some other countries that preceded this movement. Has the group started looking into that more historical approach, trying to understand whether it has any correlation with the current developments, or whether these are different phenomena happening in different places and times? Thank you. Thank you, Mark. I'm going to take both questions and then... I think we have time for one more question, and then we can go back to the presenters. Raul? I have two questions. One is that I saw very recently a paper published by Luca Belli from FGV about digital sovereignty in Brazil and India. I wonder whether it has any relation to this work or is an entirely parallel line, because I couldn't tell whether it was related; it reads like an anticipation of the results of the work you are doing. And the second question is whether, within the framework of the project, you are also questioning the expression "digital sovereignty" itself. What if the expression doesn't make sense at all? In fact, every time I read things about digital sovereignty, it's about independence, or technological independence, and I don't know of any country in the world that has such a thing as food sovereignty or energy sovereignty, because no country in the world is absolutely independent in everything, in relation to everyone. So I just leave the question open. Thank you. Thank you very much, Raul, very valuable questions. I'm going to go to Ana and Flavio, and I might contribute. It's a really short one; it has to do with what was just asked. Ale, can you introduce yourself? Yeah, definitely. I'm Alexandre Costa Barbosa. I'm also joining the discussions of the CEPI FGV-ISOC research group on digital sovereignty, and I'm a fellow at the Weizenbaum Institute. I'd like to hear a bit more about, allow me to ask this question, how this is going to relate to the research conducted by FGV-CTS, because the research group at CTS has been working on digital sovereignty for a long time. So how can those efforts be combined with what is being conducted by São Paulo? Thank you very much.

Raquel Gatto:
Thank you very much, Ale. I'm going to give a brief answer, putting on my hat as part of the ISOC Brazil chapter, to say that we invited Luca Belli, and he joined one of the local group sessions precisely so we could find synergies between the projects. Luca Belli's work focuses on cybersecurity, so on digital sovereignty understood as national security protection, while in this project we are looking into a wider, broader view of digital sovereignty that also includes the technological issues and the political and legal implications. I'll pass to Flavio and Ana if they also want to comment, but I think it's important to be clear about the parallel work being done. It's not a competition; we are at a moment in Brazil where this is such an important issue that more and more research is blossoming, which is a good thing, and we are looking for synergies in how to work together. So that's my reaction. Flavio?

Flavio Wagner:
Yeah, Raul, maybe you are not aware, but FGV is a very large academic institution, with many different groups, even in different cities. Luca Belli is in Rio, in one group, and we are partnering here with FGV São Paulo, which is a different group. And, as Raquel said, we are trying to keep a dialogue with Luca Belli and his team — we invited him to discuss with us — but we are taking really different directions, I suppose. Regarding the question from Mark: we are looking at all public documents, bills, and public policies that explicitly use the concept of digital sovereignty as a motivation, or that are related to digital sovereignty. In this regard, of course, Brazil has a long tradition of digital sovereignty approaches — the development of local technologies back in the 70s and 80s, when we had a very strong industrial policy for the development of local technology. So this is also of interest for the mapping we are trying to build, but it is more a historical matter; we are more interested in what is happening now — the current discourses of different stakeholders and how these can impact the evolution of the Internet in the country. And regarding the other question from Raul… Self-determination.

Raquel Gatto:
You want to… Let's have Ana first, and then I can tackle this one if needed. And we have more questions in a moment — Milton, I'm taking note of it. Ana, do you want to react?

Ana Paula Camelo:

Yes, just a brief contribution regarding the historical approach that Mark asked about. I can reinforce Flavio's point about our more short-term interest in the present, in the ongoing controversies and debates. But I am pretty sure we will be able to build an interesting timeline of the main topics and trends in the Brazilian discussion, which, in the framing you are asking about, is quite recent. Also regarding FGV: thank you, Raquel and Flavio, for explaining that we are a big institution with very different but connected centers. We are very open and connected to the CTS agenda; they have great materials and research on this theme. But our research takes, I would say, a broader approach: we want to stay connected with the CTS perspective without treating it as the only approach on the table. Other, sometimes conflicting, perspectives are circulating, and we want to understand them and go deeper into the narratives and the questions and issues behind them, so we can even contribute to CTS research in some way. Finally, regarding Raul's point that the expression sometimes doesn't make sense at all: it's a very good point, Raul, thank you for sharing it. My personal view is that some expressions are used as hype; people attach them to very different reasons and subjects, and they lose the connection with the core meaning they could represent. That is one thing we are very attentive to. Still, the main issues remain connected to critical infrastructure — as with your examples of food and energy — and they carry that same sense of relevance and urgency: let's talk about it, let's make it happen. But something we have discussed is that we are still at the level of words rather than concrete agendas and initiatives. Beyond the discussions about AI and the fake news that we mentioned, it is still something more debated than implemented, without concrete impact yet, although many things are happening.

Raquel Gatto:

Thank you very much, Ana. As for Raul's second question — whether this will end up as a patchwork, with an organized view of these meanings, or as a crazy kaleidoscope — that is going to be answered by the end of the research. And then we have Milton. You want to take the floor?

Audience:

Sure. I missed the first part of your talk, but I know that we're having a conversation about the legitimacy of Internet governance in the multi-stakeholder environment and also digital sovereignty. One thing I want to ask you about: when most people talk about digital sovereignty, they think about it only for themselves or their community, right? The problem with that is that the notion of sovereignty, in a political and legal sense, inherently means a kind of exclusivity. If I have sovereignty as, let's say, the United States, then you don't. And if Brazil has sovereignty over its Internet, then Venezuela and Europe and the U.S. don't. So I would like to encourage you to think about the international relations aspect of sovereignty, and not hold it up as some kind of fantasy where everybody can control everything for themselves without having to worry about anyone else. That's just a fantasy. And the other thing is, you're all involved in ICANN, so you all know that one of the best things we did when we created ICANN was to get it out of the sovereign system. We said names and numbers are not going to be governmentally run, and that has been very successful. I think the best way to do this is to make sure that certain aspects of Internet coordination are not politicized. So I want to make sure people understand: the term sovereignty has an appeal, but it's not a one-way street. There are going to be competing claims of sovereignty, and those claims are not necessarily going to be compatible.

Raquel Gatto:
So I think that’s a good point, Milton. I just want to make sure that also to understand part of the work we are doing right now, the first phase, let’s say, is really capturing the photographs. So how it’s being used in the documents and how people are really seeing sovereignty. So it doesn’t mean that this is the understanding that we have, but it’s really capturing how it’s being used. So it’s very exciting that as a citizen and as business citizen our digital sovereign economy is being so and so I want to make sure that all the citizens who watch this, you know, have somebody heartbreaking memory, and the people who are watching this, they may be thinking about those claims of digital sovereignty possibly hurting some fundamental aspects of the Internet.

Flavio Wagner:
And, of course, when most people talk about sovereignty — about self-determination, about local technological development, or about local control over the data of Brazilian citizens — that is what they have in mind. But they may not be aware of the possible implications of the legislation or regulations that can be imposed. We in the project are very much aware of those things, and, following the line of the Internet Society's approach, we know that digital sovereignty, if not well understood and not well implemented, may hurt some fundamental characteristics of the Internet. We are very much aware of this.

Raquel Gatto:

Thank you very much, Flavio. And we have one last question before we wrap up.

Audience:

Thank you very much. My name is Peter Bruck. I'm the chairperson of the World Summit Awards. I'm from Austria, and I initiated the Data Intelligence Initiative, where we talked a lot about the issue of data sovereignty, especially in the European context. What Milton was saying was very interesting, because he gives us the legal definition and the legal implications. I see the term sovereignty — and I think this is also very much reflected in some of what you have said — as a way of asserting control in a situation where you don't feel that you have control. So it is a process term in that sense: a process term on the way to seeing whether you can empower yourself to gain control over something. You are striving for sovereignty. And then, I think, what would be important is that you get to a situation of negotiation with the others. This is something we really need to see. I'm very interested, at the IGF and in all the conversations here, in looking at things that are doable and not fantasy. And I think it's very important that the term sovereignty is seen as a term enabling people to claim their rights and control over whatever is happening in the digital sphere, on many, many different levels. So I just wanted to add this to Milton's intervention, and thank you very much also for your smart conversation and replies.

Raquel Gatto:
Thank you very much for this contribution. It adds to our goal here, which is precisely to first understand how the term is being used, and then to help educate and decide where we want to take it from there. I'm going to give a one-minute wrap-up to Flavio and Ana to share a few final thoughts, and then we need to close the session and make way for the next one. Thank you.

Flavio Wagner:
Thank you, Raquel. And thank you all for coming and sharing your thoughts with us. As I said, we are trying to take a photograph of the narratives coming from the different sectors in this area in Brazil, to relate this to the global discussion on sovereignty, and also to use it as a means to educate people in Brazil. We are collecting information, and then we will spread the conclusions of the project to the wider community — not only in Brazil, but in the region, in Latin America, and globally — so that people understand these different flavors of sovereignty and the legal, technological, and social implications of those proposals and definitions. We are really trying to contribute to the debate and to show the implications of the different definitions of sovereignty, and of the public policies, legislation, or regulations that are proposed following this motivation of sovereignty. And I conclude here. Thank you.

Raquel Gatto:

Thank you, Flavio. Ana?

Ana Paula Camelo:

Well, I would also like to thank you all for your contributions; they were very insightful. I want to reinforce that, at the same time as we look at the Brazilian context, the Brazilian debate, and its implications, our goal is not to keep looking only inward, at our own reality and context. This kind of dialogue with other perspectives, other realities, other understandings and impacts is very important for us, and we will keep this aim until the end of the project: to make the discussion broader, but also to contribute, as Flavio mentioned, to our country. So I reinforce that you are very welcome to share feedback and suggestions and to stay connected to our research, not only during this panel but also afterwards. It will be a pleasure to keep in touch with you. So thank you again. And Raquel?

Raquel Gatto:
Thank you very much, Ana and Flavio, and also Pedro and Lori, who are supporting online. We need to wrap up because the next session is going to start soon. Thank you very much to everyone for the contributions. We remain available here at the IGF for any conversations you want to have and for further inputs to the project. Next is a session on the legitimacy of multistakeholderism in IGF spaces. Thank you very much.

Ana Paula Camelo

Speech speed: 129 words per minute
Speech length: 1629 words
Speech time: 755 seconds

Audience

Speech speed: 172 words per minute
Speech length: 1249 words
Speech time: 437 seconds

Flavio Wagner

Speech speed: 137 words per minute
Speech length: 1329 words
Speech time: 584 seconds

Raquel Gatto

Speech speed: 137 words per minute
Speech length: 2300 words
Speech time: 1007 seconds