Safeguarding the free flow of information amidst conflict | IGF 2023 WS #386

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Rizk Joelle

Digital threats and misinformation have a significant negative impact on civilians residing in conflict zones. The dissemination of harmful information can exacerbate pre-existing social tensions and grievances, leading to an increase in violence and violations of humanitarian law. Furthermore, the spread of misinformation can cause distress and a psychological burden among individuals living in conflict-affected areas. This hampers their ability to access potentially life-saving information during emergencies. The distortion of facts and the influence of beliefs and behaviours as a consequence of the dissemination of harmful information also contribute to raising tensions in conflict zones.

One concerning aspect is the blurred line between civilian and military targets in the context of digital conflicts. Civilians and civilian infrastructure are increasingly becoming targets of digital attacks. With the growing emphasis on shared digital infrastructure, there is an increased risk of civilian infrastructure being targeted. This blurring of lines undermines the principle of distinction between civilians and military objectives, which is a critical pillar of international humanitarian law.

Moreover, digital threats pose a threat to public trust in humanitarian organizations. Cyber operations, data breaches, and information campaigns not only damage public trust but also hinder the ability of humanitarian aid organizations to provide life-saving services. This erosion of trust compromises their efforts to assist and support individuals in need.

To address these challenges, it is crucial for affected communities to build resilience against harmful information and increase awareness of the potential risks and consequences in the cyber domain. Building resilience requires the involvement of multiple stakeholders, including civil society and companies. Information and communication technology (ICT) companies, in particular, should be mindful of the legal consequences surrounding their role and actions in the cyber domain. It is important that self-imposed restrictions or sanctions do not impede the flow of essential services to the civilian population.

In addition to community resilience and awareness-building efforts, policy enforcement within business models is crucial. Upstream thinking in the business model can help reinforce policies aimed at countering digital threats and misinformation. However, the discussion around policy enforcement in business models is challenging. It requires expertise and a feedback loop with tech companies to find effective and efficient solutions.

In conclusion, digital threats and misinformation have dire consequences for civilians in conflict zones. The dissemination of harmful information exacerbates social tensions and violence, while digital attacks on civilians and civilian infrastructure blur the line between military and civilian targets. These threats also undermine public trust in humanitarian organizations and hinder the provision of life-saving services. To tackle these challenges, it is essential to build community resilience, increase awareness, and enforce policies within business models. Collaboration between stakeholders and tech companies is key to addressing these complex issues and safeguarding the well-being of individuals in conflict zones.

Speaker

In conflict zones, technology companies face a myriad of risks and must carefully balance the interests of multiple stakeholders. These companies play a critical role in providing essential information and functions but can also unintentionally facilitate violence and spread false information. One major challenge is responding to government demands, such as granting access to user information, conducting surveillance, or shutting down networks. These demands can come from both sides of the conflict and may lack clarity or have excessively broad scope.

Dealing with government demands during peace is limited in conflict situations due to associated risks. Companies can request clarity on demand legality, respond minimally or partially, challenge the demands, or disclose them publicly. However, in conflict settings, these actions may pose significant risks.

To navigate these challenges, technology companies can implement various measures. These include establishing risk management frameworks, clear escalation procedures, and consistent decision reviews. By doing so, companies can better manage risks of operating in conflict zones. Collaboration with other organizations in coordinating responses in conflict regions and consulting with experts to understand potential implications of decisions can also help.

Respecting international humanitarian law is a key principle of corporate responsibility in conflict situations. Companies are expected to respect human rights and require guidance on respecting international humanitarian laws when conducting business in conflict-affected areas. Enhanced due diligence, considering heightened risks and negative human rights impacts, is recommended by the United Nations Guiding Principles on Business and Human Rights.

Further articulation is needed on what international humanitarian law means for technology companies, indicating further guidance is needed in this area. To address design issues in platforms, companies should consider building the capacity to apply a conflict lens during product development, better identifying and resolving issues in conflict zones.

Addressing information topics requires considering both upstream and downstream solutions. This comprehensive approach takes into account the flow of information from sources (upstream) to distribution and consumption (downstream).

Overall, technology companies operating in conflict zones face unique challenges and must navigate complex risks. Implementing effective risk management frameworks, respecting international humanitarian law, and incorporating a conflict lens into product development can better address the multifaceted issues they encounter. Further guidance is needed in certain areas to ensure operations in conflict zones align with established principles and standards.

Chantal Joris

The analysis delves into the challenges surrounding the free flow of information during conflicts. It starts by highlighting the digital threats that journalists and human rights defenders face in such situations. These threats include mass surveillance, content blocking, internet shutdowns, and other forms of coercion aimed at hindering the dissemination of information. The sentiment towards these challenges is negative, as they pose a significant threat to the values of freedom of expression and access to information.

Another significant aspect explored in the analysis is the role of tech companies in conflicts. Digital companies have become increasingly important actors in these situations, and the analysis argues that they have a responsibility to develop strategies to avoid involvement in human rights violations. This neutral stance reflects the need to address the complex ethical dilemmas faced by tech companies, balancing their business interests while safeguarding human rights.

The analysis also discusses the reliance of civilians on information communication technologies (ICT) during conflicts. Civilians often use ICT to ensure their safety, gain information on conflict conditions, locate areas of fighting, and communicate with their loved ones. This neutral sentiment highlights the significance of ICT in providing vital communication channels and access to information for affected civilians.

The analysis further sheds light on the attempts made by the army and political parties to control the narrative and shape the discourse during conflicts. Conflict parties often aim to manipulate information and control the narrative for various reasons. This negative sentiment highlights the detrimental impact of information control on the public’s understanding of conflicts and the potential for shaping biased opinions.

A key observation from the analysis is the necessity of a multi-stakeholder approach in conflict contexts. It stresses the importance of different actors, such as ICT companies, content moderators, and organizations like the International Committee of the Red Cross (ICRC), working collaboratively to tackle the diverse threats to information flow. This positive sentiment reflects the recognition that no single entity can address the complexities of information challenges during conflicts alone.

Moreover, the analysis calls for identifying gaps in understanding and addressing the issues related to information flow during conflicts. This neutral sentiment highlights the need for more clarity and targeted efforts to bridge these gaps. The conclusion emphasizes the importance of comprehensively addressing the challenges and harnessing the potential of information communication technologies to ensure the free flow of information during conflicts.

In conclusion, the analysis explores the various challenges and dynamics surrounding the free flow of information during conflicts. It highlights digital threats, the role of tech companies, civilian reliance on ICT, information control by conflict parties, the necessity of a multi-stakeholder approach, and the need for identifying gaps for clarity. With this comprehensive understanding, stakeholders can work towards developing strategies and policies that uphold the values of information access and freedom of expression in conflict situations.

Khattab Hamad

Sudan is currently embroiled in a civil war between two allied forces that began in 2013. However, the conflict has been riddled with challenges and disagreements, particularly regarding security agreements and the unification of the armies in Sudan. These disagreements resulted in the conflict’s end on April 15th. Unfortunately, the sentiment surrounding this war is negative.

Information control has played a significant role in the conflict, with internet disruptions and the spread of misinformation being notable events. Internet shutdowns during exams and civil unrest have been used by authorities to manipulate public opinion. The sentiment towards these events is negative.

Another issue in the conflict is the misuse of social media platforms, which have been exploited by both sides to spread their own narratives and manipulate public opinion. This misuse has prompted concerns about information imbalance and led platforms like META to take down accounts associated with the Rapid Support Forces. The sentiment towards this misuse is negative.

The RSF (Sudanese Armed Forces) and the Arab Support Forces have been criticized for their harmful practices towards civilians and the nation’s infrastructure. Privacy violation cases, including the use of spyware, have been reported. The RSF imported the predator spyware of Intellexa, while the National Intelligence and Security Service (NISS) imported the remote control system of the Italian company hacking team in 2012. The sentiment towards these privacy violations is negative.

The conflict has also had a significant impact on the ICT (Information and Communication Technology) sector in Sudan. Power outages have impaired network stability and e-banking services, forcing ICT companies to rely on uninterruptible power supply systems and generators. The sentiment towards this situation is negative.

On a positive note, telecom workers have been recognized as crucial for maintaining access to information infrastructure during conflicts. It is argued that they should be given extraordinary protection, similar to doctors and journalists, due to their vital role in ensuring the continuous flow of information. The sentiment towards this proposal is positive.

In conclusion, Sudan’s civil war has had far-reaching consequences, impacting security agreements, information control, privacy rights, the ICT sector, and the protection of key players in the information infrastructure. Efforts to address these challenges and protect these key players are essential for promoting peaceful resolutions and mitigating the impact of future conflicts.

Tetiana Avdieieva

During the armed conflicts in Ukraine, there have been severe restrictions on free speech and the free flow of information. Since the war began in 2014, the country has witnessed a decline in the protection of free speech and access to information. This has resulted in mass surveillance, content blocking, Internet shutdowns, and sophisticated manipulation of information.

Digital security concerns have also arisen during these conflicts. Attacks on media outlets and journalists largely originate from Russia, with DDoS attacks on websites disrupting connectivity. Coordinated disinformation campaigns on social media and messaging platforms further exacerbate the situation, influencing public opinion and spreading false narratives.

One key issue highlighted is the control over narratives and the free flow of information during armed conflicts. The ability to shape public opinion becomes a powerful tool in these circumstances, with the potential to influence the course of the conflict and its outcomes. It is crucial to address this issue by formulating an exit strategy that lifts restrictions from the outset of the armed conflict. This strategy should consider the vulnerability of post-war societies to malicious narratives and work towards reestablishing human rights that were restricted during the conflict.

Another significant concern is the gap in international law regarding the handling of information manipulation during peace and conflict. Current legal frameworks do not adequately address the issue, leaving room for exploitation and the spread of disinformation that incites aggression and hatred.

There have also been attempts to shift the focus away from the harm inflicted upon civilians and the suppression of opposition during these conflicts. These attempts to change the narrative divert attention from the atrocities committed and the need to protect the rights and safety of civilians.

The extensive support for the invasion among the Russian community is a cause for concern. According to data from Meduza, a significant portion of Russian citizens, ranging from 70% to 80%, support the invasion. This highlights the challenge of countering misinformation and disinformation within Russia and addressing the narratives that drive aggression and illegal activities.

The role of ICT companies in moderating harmful content in conflict settings is crucial. These companies need assistance, both globally and locally, to effectively combat harmful information. This includes distinguishing between harmful information and illegal content, as well as understanding the localized contexts in which they operate. Local partners can provide valuable insights into regional issues, such as identifying and addressing local slur words and cultural sensitivities.

However, it is important to approach the role of tech giants with caution, avoiding a strategy of blaming and shaming. Over-censorship and driving people to unmoderated spaces can be unintended consequences of such an approach. Instead, a collaborative approach that involves ICT companies, multi-stakeholder engagement, and responsible corporate practices is necessary to foster a safer online environment.

In conclusion, the armed conflicts in Ukraine have led to significant restrictions on free speech and the free flow of information. Digital security concerns, information manipulation, and the spread of disinformation within Russia pose additional challenges. It is crucial to adopt an exit strategy that lifts restrictions and safeguards vulnerable post-war societies from malicious narratives. Efforts should also be made to address gaps in international law regarding the handling of information manipulation. The support for the invasion among the Russian community and attempts to divert attention from civilian harm and opposition suppression further complicate the situation. ICT companies play a crucial role in moderating harmful content, and a collaborative approach is necessary to strike a balance between curbing misinformation and ensuring freedom of expression.

Audience

An analysis conducted by Access Now reveals that prevailing trends in content governance are endangering freedom of expression and other fundamental rights. Several issues have been identified in relation to parties involved in conflicts, highlighting the dangers faced by these rights.

During times of crisis, content governance has been exploited in various ways, breaching international humanitarian law. One concerning practice is the intentional spread of disinformation as a warfare tactic. Additionally, platforms have been used for population movement, and sharing content depicting prisoners of war illegally has been observed. These actions not only violate international laws but also contribute to the erosion of freedoms.

While internet restrictions exist in conflict zones, it is interesting to note that Russia maintains significant accessibility to various platforms. Many Ukrainian media and telegram channels continue to be effectively available in Russia. Furthermore, despite restrictions, information can still flow through various social media and messaging platforms. This highlights the complexity of internet restrictions and the need for further examination.

The analysis also underlines the need for international laws addressing informational warfare. Both Russia and Ukraine face internet warfare, yet there is a lack of legal frameworks specifically designed to address this issue. The absence of such laws creates a significant gap in addressing and countering the threats posed by disinformation campaigns and cybersecurity breaches.

Russia particularly faces numerous cybersecurity threats and disinformation campaigns, primarily originating from Ukraine. Instances of Russian citizens’ personal data being leaked and published online have been identified, along with the identification of over 3,000 disinformation narratives against Russia. These threats pose challenges to the integrity and security of information in the country.

Social media platforms’ over-enforcement is flagged as a major problem for media and journalists, with many legitimate news sources having their accounts suspended or restricted. This issue is particularly prevalent in cases involving conflict settings, such as Palestine and Afghanistan, where the presence of dangerous organizations contributes to heightened enforcement measures.

The complexity of platform rules is highlighted as a concern in conflict settings. In such situations, rules can be confusing and easily violated, with typical infractions including the posting of images depicting dead bodies. This observation sheds light on the challenges faced by content creators and users as they navigate restrictive guidelines during conflicts.

Addressing misinformation requires the implementation of upstream solutions, as highlighted by Maria Risa. This approach focuses on addressing misinformation at its root causes, rather than solely addressing its dissemination. By focusing on upstream solutions, it is possible to create more effective strategies to combat misinformation and its harmful effects.

The analysis raises questions about the design of platforms and the role of algorithms and business models in managing information. It suggests the need to reconsider and possibly redesign these aspects to ensure fairness, accuracy, and accountability in content dissemination. This observation emphasizes the ongoing need for innovation and improvement within the digital landscape.

BSR, a leading global organization, provides a toolkit for companies on how to conduct enhanced human rights due diligence in conflict settings. This initiative aims to promote the respect and protection of human rights, even in challenging circumstances. The toolkit, developed in collaboration with Just Peace Labs, offers detailed guidance, making it an invaluable resource for responsible business practices.

Furthermore, the analysis advocates for human-centered approaches in digital transformation, particularly in conflict zones. Stakeholder consultation can be challenging in war zones, highlighting the importance of ensuring that the interests and needs of all individuals are considered and that no one is left behind in the process.

There is a noted lack of focus on countries like Afghanistan and Sudan in discussions surrounding these issues. This observation emphasizes the need to broaden the scope of discourse and pay equal attention to conflicts and human rights violations occurring in these regions.

Global media platforms play a substantial role in shaping public opinion, primarily through their recommendation algorithms. However, concerns arise regarding the impartiality and bias of these algorithms. The analysis reveals that global media platforms often alter their recommendation algorithms to favor one side in informational wars, despite presenting themselves as neutral. This highlights the potential influence and manipulation of public opinion through these platforms.

Given the significance of global media platforms, the analysis argues that global society should exert more pressure on these entities. Increased accountability and transparency are necessary to ensure that these platforms operate in an unbiased and fair manner, considering the critical role they play in shaping public discourse.

In conclusion, the prevailing trends in content governance pose a threat to freedom of expression and fundamental rights. Exploitation of content governance during times of crisis, the need for international laws addressing informational warfare, and the over-enforcement by social media platforms are among the challenges highlighted in the analysis. The complexity of internet restrictions and the design of platforms also warrant further consideration. Additionally, the importance of upstream solutions, human-centered approaches, and the inclusion of marginalized regions in discussions emerge as key insights. Efforts towards increasing platform accountability and transparency are crucial to safeguarding a fair and unbiased digital landscape.

Session transcript

Chantal Joris:
You Good afternoon, everyone, all the participants in the room, and also good morning, afternoon or evening for those who join online. My name is Chantal Joris. I’m with Freedom of Expression organization, Article 19, and I will be moderating the session today. In today’s session, we want to explore some of the current challenges posed to the free flow of information, specifically during armed conflicts. And I want to start with making a couple of opening remarks as to where we are at. And we do know that conflict parties have always been very keen to control the narrative and shape the narrative during conflicts, perhaps to garner domestic and international support, to maybe portray in a favorable light how the conflict is going for them. And of course, also often to cover up human rights violations and violations of international humanitarian law. So this is nothing new, yet what has changed, of course, is how armed conflicts look like in the internet age. We see an increased use of digital threats against journalists. and human rights defenders, mass surveillance, content blocking, internet shutdowns and even the way that information is manipulated has become much more sophisticated with the tools that parties have available today. And of course at the same time, civilians really rely at an unprecedented level on information communication technologies to keep themselves safe, to know what’s going on during the conflict, where fighting takes place and also to be communicating with the people, with their loved ones and see that they are okay. And also I want to emphasize a little bit that these issues are not necessarily limited to just sort of the top 5 to 10 conflicts that tend to make the headlines but there are currently about 110 active armed conflicts in all regions of the world and also beyond conflict parties, even states that are not part of the conflict have to grapple with questions, for example we’ve seen recently should they sanction propagandists, ban foreign media outlets so this is really an issue that concerns every state, all states and the whole world and also what we have seen is that digital companies have become increasingly important actors as well in conflicts and they do need to find strategies to avoid to become complicit in human rights violations and violations of humanitarian laws. So to discuss some of these challenges I’m very happy to introduce the panelists of today also I do want to make a quick remark in this context that we notice that many of our partners from conflict regions have not been able to come to IGF in person and have these discussions in person although we talk a lot about the need for an open and secure internet, including of course during conflicts, and they are often the stakeholders that are most affected and they are not really able to join these discussions except online. Similarly, some of our speakers, most of our speakers on this topic that we really wanted to have at the table are also joining us online today. The first speaker joining us online is Tetiana Avdievjeva. She is Legal Counsel at the Digital Security Lab Ukraine, an organization that has been established to address digital security concerns of human rights defenders and organizations in Ukraine. We also have Khattab Hamad, an independent Sudanese researcher focusing on digital rights and internet governance, who is working with the Open Observatory of Network Interference and the Code of Africa. We have Joëlle Rysk joining us. She is Digital Risks Advisor at the Protection Department of the International Committee of the Red Cross. And next to me here in person is Eleni Hickok. She is Managing Director of the Global Network Initiative, of which Article 19 is also a member. I will also introduce what this multi-stakeholder initiative is all about. Also, we were supposed to have here Irene Kahn, Special Rapporteur on Freedom of Expression. Unfortunately, she had to be in New York at the same time in person, and we were struggling to remove her from the program, so apologies for that. But she has been focusing on these questions as well, and I encourage you to read also her report from last year on disinformation in armed conflicts, and she continues to engage in this discussion as well. So, a quick breakdown of the format of the session. So, we have… We have about 75 minutes to discuss these challenges. I will address a couple of questions to the speakers, but it is really meant as an interactive discussion, it is meant to be a roundtable, so I will also be asking some of the questions to you as well, after the speakers have been able to express themselves on the issues, so throughout the discussion, and then at the end also there will be a chance obviously to give input also what we might have missed, what open questions there are for the speakers. So perhaps let’s start with discussing the main digital risks that we see, and also the risks to the free flow of information during conflicts, and I will first have Tatiana from Ukraine and Khattab from Sudan talk about this, but then also again I will be very keen to hear from you what, in your areas of work or from the regions you are from, what you have been observing as sort of the key challenges in this respect. So Tatiana, if I can start with you.

Tetiana Avdieieva:
Yeah, hi everyone, and it’s my great pleasure to be here today and to talk about such an important topic. So first of all I wanted to share a brief overview of what is going on in Ukraine currently, regarding the restrictions on free speech, free flow of information and ideas, which were introduced long before the full-scale invasion, since the war in Ukraine started in 2014 with the occupation of Crimea, and after the full-scale invasion as a rapid response to the changing circumstances. So basically restrictions in Ukrainian context can be divided into two parts. The first part concerns the restrictions which are related to the regime of the martial law and derogations from the international obligations. And the second part relates to so-called permanent restrictions. For example, there is a line of restrictions based on origin, particularly concerning Russian films, Russian music and other related issues. Also, there are restrictions serving as a kind of follow-up of Article 20, for example, prohibition of propaganda for war, prohibition of justifications of illegal aggression, etc. The problem is, especially with the restrictions which were introduced after the full-scale invasion, that restrictions drafted in a rush are often poorly formulated and therefore there are lots of problems with their practical application. However, what concerns me the most in this discussion is the perception of the restrictions of the kind by the international community. The problem often is that people don’t take into account the context of the restrictions. And when I’m speaking of the context, it is not only and purely about missiles flying above someone’s head. It is about the motives which drive people to be involved into the armed conflicts. And that is a very important reservation to be made at the very beginning of this discussion, because we have to speak about the root causes. And I often make this comparison for me, armed conflicts can be compared to the rules of saving energy, that armed conflicts do not appear from nowhere and they do not disappear anywhere. So when, for example, a certain situation starts, we have to understand that there are motives behind the aggression on the side of the aggressor. And therefore we have to work with those motives to prevent further escalation and to prevent repetition of the armed conflict, to prevent re-escalation basically. In this case, assessment of the context is, unfortunately, not a basic math, it is rather a rocket science. Because for example, in Ukrainian context, the preparation of the fertile ground for propaganda for Russian interference has been done in the information space for at least the last 30 years of Ukrainian independence, when on the entire European level it was said that Ukraine is not basically a state and that there is no right to sovereignty and that was basically a gift to Ukrainian nation, that all the representations in front of international community from the side of the post-Soviet countries were done by Russia, etc. What does it mean? It means that there was a particular narrative which was developed and narrative with which we have to work. Why this is important? Because usually restrictions are treated, I would say, rather in vacuum. So we are trying to apply the ordinary human rights standards to the speech which is shared, developed, to the narrative which is developed in the context of the armed conflict. And it is very important because at the very end of the day, what any country which is in the state of war faces is the statement that as soon as the armed conflict is over, all the restrictions have to be lifted. And here we miss a very important point, the point about the transition period, so-called exit strategy, which is very frequently substantiated by automatic cancellation of the restrictions. And that actually is a part of the discussion on the rebuilding of Ukraine in terms of reinforcing the democratic values, re-establishing human rights which were restricted, etc. So at this particular point, it is very important to mention that we have to think about the transition period of lifting the restrictions from the very beginning of the armed conflict. Because when the restrictions are introduced, we have to understand that they cannot end purely when there is a peace agreement. Otherwise, it won’t make any sense from the practical standpoint because narratives will still be there in the air. Therefore, we have to develop this exit strategy and understand that post-war societies are very vulnerable towards any kind of malicious narrative. And they cannot be left without protection even after the end of the war. And finally, a brief overview of the digital security concerns. I will try to summarize it in one minute, not to steal a lot of time. Currently, there are lots of problems from the digital security side. For example, there are attacks on databases, attacks on media, which not only target the media as website for sharing information, but also target the journalists, which is more important because people experience chilling effect and they’re super afraid of sharing any kind of idea because they potentially might be targeted. Indeed, I mean, from the side of the aggressor state because currently in Ukraine, at least in Ukrainian context, the biggest threat is stemming from Russia, especially for those journalists who are working on the frontline and who can be captured, who can be tortured, who can be killed. And there were lots of examples of such things happening. Also, there is a problem of DDoS attacks on websites, which actually interrupts the work of the websites and disables the sustainable connection. There were attempts to share Melbourne spyware again in order to track in the videos, in order to check what- that they’re working on and in order to prevent the, basically the truth to be distributed to the general public. And finally, there are coordinated disinformation campaigns on the social media, on like platforms, messaging services, including Telegram, which is another important topic and probably this topic is a topic for the separate discussion. So I won’t be stopping on that, like for my entire speech, but just mention it for you to understand that this discourse is very extensive and there are lots of things to talk about. I will stop here. I will give floor back to Shantala. Thank you very much to listen to me and we’ll be happy to share further ideas in the course of the discussion.

Chantal Joris:
Thank you very much, Tatiana. Khattab, if I can bring you in and have you share also your observations about the situation in Sudan, also following the recent outbreaks of hostilities a couple of months ago.

Khattab Hamad:
Thank you, Shantala. Hi, everyone. So I wanna welcome you and the other participants and it’s really an honor for me to speak at the IGF. So to keep the attendees updated, Sudan is going through a war between two forces that have been allied since the year of 2013. And the disagreement came to an end on April 15th due to differences in the security agreements related to the unification of the armies in Sudan. So this put the Sudanese in a position in a bad position due to the parties to the war, because the parties of the war are not following the laws of war, in addition to its impact on basic services, including electricity and communication. So this contributed to widespread manipulation of war narrative and the spread of misinformation in addition to the intense polarization. So to answer your question, in Sudan right now we have international downs, we have targeting of telecom workers, we have also disinformation campaigns and also we have privacy violation. And unfortunately, these practices are used by both sides of war, not only one side like RSF or the Arab Support Forces, RSF or Sudanese Armed Forces, the official military. So regarding internet disruption, internet disruption is not a new experience for the people in Sudan. The authorities used to shut down the internet during exams and civil unrest. And this time, due to the ongoing conflict, there were numerous and periodic internet disruptions in Khartoum, the capital of Sudan, and the cities of Nyala, Zalingei, and Al-Junaina. These events are considered as effort of information control during the war. However, some disruption cases in Khartoum are related to security concerns of… the telecom engineers and other telecom related workers as they may face violence because of their movement towards maintenance. So the absence of internet connection opened a wide door to applying this information as people cannot verify information that they got from local sources. Moreover, this information during the conflict also exists in cyberspace and it has several actors, but there are two main players here. They are SAF, the Students Armed Forces, and RSAF. Both parties are using proxy accounts and influencers on social media platforms to promote and to propagate their narrative regarding the war. Actually, this practice puts civilians at risk because getting wrong information may impact their decision to move around their neighborhood or the decision of displacement. Moreover, what I observed is that this information is threatening the humanitarian response. So, for example, the ICRC office in Sudan posted on Facebook warning the people not to follow, do not follow this information. So also during this war, several privacy violation cases happened, such as physical phone inspection, a lot of cases of physical phone inspection by soldiers from both sides. And also the use of spyware. Actually, we couldn’t verify. the use of spyware until now, but there are claims of that. But the important thing here is we have to mention that RSF imported the predator spyware of Intellexa. And Intellexa is an EU-based company that is providing intelligence tools. And also, this is not the first time of using spyware in Sudan. The NISS, the National Intelligence and Security Service, imported the remote control system of the Italian company hacking team in 2012. So I think that’s it from my side, Chantal. Back to you.

Chantal Joris:
Thank you very much. And thank you also for this account and explaining how these information threats can also really lead to offline violence and concrete harms to civilians. So same question to the people in the room. What have you seen or what have you perceived as being the main, in your experience, the main risks to the free flow of information, be it through surveillance, propaganda, internet shutdown? What’s your perspective?

Audience:
Hi. Thank you so much for a great presentations. I’m Alishka Birkova from Access Now. And we are also working on the issue of content governance in times of crisis. And we have been recently mapping a number of prevailing trends in the field that, in one way or another, are not necessarily related to the content governance. And we are also working on the issue of content governance in times of crisis. And we have been recently mapping prevailing trends in the field that, in one way or another, put either freedom of expression and other fundamental rights in danger. And we looked specifically. at this issue from the perspective of international humanitarian law and so we are witnessing several issues especially parties to the conflicts that are actually very much in this instigators of those. One of them is of course the intentional spread of disinformation as a part of worldwide tactic where we noticed number of cases that we are now so we have these different case scenario that we are supporting with the case studies too that really happen in the field such as for instance claiming or warning that there will be invasion taking place and in reality this invasion has never occurred there is a very specific example from Israel in 2021 where even international media were convinced and believed that this invasion take place and reported on it which was just a part of military strategy and there are a number of other examples from different regions around the world where we see that. Another one is of course using platforms for the purpose of moving the parts of population from one territory to another which from the perspective of international humanitarian law is not at least in the context of non-international armed conflict it’s not even permitted and so we see those cases as well. Of course the whole entire issue of the content depicting prisoners of war that was very largely reported and that can again put in danger the privacy of those people identity and so and so on so the safety and security of those individuals depicted on that video content that is being shared and there may be other two or three case scenarios that we identified in the field and that we are now still gathering case studies and this will be all summarized in our upcoming report that we’re hoping to publish in following weeks I don’t want to overcome it but I am happy to elaborate further without going much in and give space to others as well.

Chantal Joris:
Thank you very much for the excellent points. Anyone else?

Audience:
Thanks for giving me the floor and opportunity to speak and express myself. I’m Tim from Russia and what I can say about internet shutdowns, internet restrictions in terms of conflict, it’s pretty obvious that any country involved in the conflict will ensure that there will be some restrictions on internet websites, media and so on, but frankly speaking it is not so restricted as it could be seen from abroad as long as there are plenty of like, you can stop information from flowing around through like telegram messengers from some social media and stuff and lots of Ukrainian media and Ukrainian telegram channels are still and effectively available in Russia So I can’t say there is a super restricted environment in the Russian media sphere. So far we face lots of, as the same as Ukrainian speaker said, we face lots of like cybersecurity threats coming obviously from Ukraine the same way, like denial-of-service attacks, like some sophisticated attacks on governmental and non-governmental like private web services companies and we have lots of like data leaks, for example recently Ukrainian hackers published a leaked database from the company who was a service provider for all the airline tickets and airline connections and stuff. So basically all the imaginable personal data including like names, dates and all the flying information of Russian people, Russian citizens was published in the internet, in the telegram and was available for any malicious actors and so far we see a lot of threats and insecurities from disinformation campaigns and threats and fakes which are used as a weapon in the informational war happening aside of real war in between Russia and Ukraine. And it’s so sad that this kind of informational war and this kind of weapons and weaponry used in informational war is not described in any international law and is not even somehow imagined and prescribed what’s, because there is, you know, it’s, the station is like that. There is, say, international law for wars, for real wars and for real warfare, but there is no international laws for informational warfare, and both of the countries and both of the, like, all the citizens of our countries, both Ukraine and Russia, suffer from this internet warfare. So the situation is like that. So the situation is like that, that both parties use this kind of weapons in the informational war in between our countries. For example, for this year, working in a, like, non-profit organization, which is, which focuses on countering disinformation and fakes in Russia, we have found more over 3,000 disinformational narratives, threatening Russian Federation and Russian citizens in some different ways. And this is about, like, number of narratives, but separately, we have counted each, like, post and message in social media, and the number of messages and posts and reposts placed in social media, an overwhelming 10 million copies in the Russian media sphere.

Chantal Joris:
Thank you. I think there will be probably a. quite some disagreement in the room and also I will let Tetiana perhaps respond and react to some of the remarks. Certainly there is a gap in international law as to how to deal appropriately with information manipulation actually both in times of peace and in times

Tetiana Avdieieva:
of armed conflicts. I don’t know if we have any… Yes. Yeah, just a brief response. First of all, I find it particularly interesting when the discussion around the incitements to aggression, propaganda for war and incitements to hatred turns into the discussion around the disinformation campaign spread inside Russia which for me is slightly a shifting of the context because when we are speaking of the aggression issues per se, we have to take into account the narratives which are primarily aimed at actually instigating their own conflict and also narratives which are shared inside Russia connected to the, for example, inviting people to join in Russian armed forces or connected to actually incitements to commit illegal activities which predominantly are shared in Russian media especially those which are state-backed. Also as regards the digital security threats and digital security concerns, what concerns me the most is the attempt to basically substitute the actual topic of harming civilians and the topic of basically trying to suppress activists, opposition, human rights defenders and journalists by the fact that there are restrictions which affect the entire community in Russia. First and foremost, because among the Russian community itself, there is an extensive support towards the invasion. Even Russian independent media Meduza, it’s in its findings and its research stated that from 70 to 80% of Russian citizens actually support the invasion. When assessing the restrictions in this context, the proportionality analysis, in my opinion, would a little bit differ comparing to the situation when we are just declaring the like facts without providing the appropriate context for some. So I will stop here and I won’t probably create the bottle out of this discussion here. But I think that it’s a very important topic to clearly define the things we are talking about and to clearly indicate in which context they’re done to whom they’re attributable and what are the specific consequences of the actions which are taken and what is the reasoning behind those actions which are taken. Thank you.

Chantal Joris:
Thank you. Hello? Yes, thank you very much. As mentioned, when we go to the factual scenarios of specific conflicts, for sure, there can be a lot of disagreements as to what specifically the issues are. I will take one more contribution and then I will, I will, and then let’s hear from Joel Risk from the ICRC.

Audience:
Hi, I’m Rafik from Internews here. This may be more of a niche issue. potentially, but one of the biggest frustrations that we hear from our media and journalist partners particularly, though also from civil society, is around over-enforcement from social media platforms where legitimate news reporting or commentary on conflict is taken down and legitimate news sources have their accounts suspended or restricted from amplifying or boosting content. Sometimes it’s through automation in cases like Palestine or Afghanistan where you can’t report on the news without mentioning dangerous organisations. We find a lot of media outlets wind up getting their pages restricted, and then other times it’s through mass reporting and targeting of these news sources that result in incorrect, having their pages taken down. Sometimes people do actually violate the rules of the platform too, maybe posting pictures of dead bodies and things like that that do violate the rules, but in a conflict setting it’s often complicated. So yeah, just in terms of the free flow of information, that’s another issue. Thank you, yes, absolutely. I mean, also promoting a certain narrative or sharing

Chantal Joris:
violations for propaganda purposes, for example, is obviously something very different than reporting on them to make them publicly known, but given how often automated tools are also involved in content moderation, it’s very difficult to make that distinction properly. Joelle, let me… turn to you and perhaps ask you as well hearing from from the situation in in ukraine and sudan does that um is that also what the the sort of threats that you that you have perceived globally as a humanitarian organizations and and what sort of specific risks um has the icrc identified in terms of how these digital threats can harm civilians

Rizk Joelle:
thank you shantan and thank you for just you know contributions they make to ukraine uh sudan i will maybe focus a little bit more on the harms to civilians to us um rather than on the nature of the uh of the threats so because of course our concern is not only about the use of digital technology but also that the lack of access to to that especially to connectivity particularly when people need reliable information the most to make potentially life-saving decisions the we share the information dimension of conflict that also becomes i’m sorry we have a little bit of you’re breaking up a little bit i don’t know if there’s anything uh i don’t know if it’s the connection or if there’s anything you can do with the with the mic that will make it a little let me change the mic setting um is it better like that okay yes i see you nodding oh okay all right great thank you sorry i it was a mic setting i believe um so i was saying that the information dimension of conflict have also become part of in a way of digital front lines um because digital platforms are used to amplify um a spread of harmful information at a wider scale reach and speed than than we’ve ever seen before and that is a concern because it compromises people’s safety their safety, their rights, their ability to access also these rights and their and their dignity. And this, the difficulty is that this happens in various ways that are very difficult to prove that Jenna spoke of attribution a little bit. It’s very difficult indeed to even not only to do that but also to prove how harmful information is actually causing harm to civilians affected by conflict and I’ll try to speak about that a little bit. And I see that different actors, whether they are state or non state are leveraging the information space to achieve information advantages, you, you had said earlier, but also to shape public opinion shape the narrative, the dominant narrative, and but also to influence people’s beliefs, their interests and their, their behaviors, which is where in situations of conflict is really becomes an issue of risk, potentially to other civilians. The information space in that sense is an extension of the conflict domain, and impacts people that are already in a vulnerable situation because they’re already affected by conflict. So, the digitalization of communication system then becomes basically a convergent of the information and digital that being said, not all harmful information and distorted information, whether it’s misinformation this information, malinformation and hateful information, not all of it is a result of organized information operations right, not all of it is state sponsored, but the use of digital platforms, really have has a mix of state and non state actors and to an organized spread of narratives but also an organic spread of information and harmful information. And what I’ve seen in the past years. And maybe also just to caveat on that that makes it very complex from humanitarian angle. Again, to identify to detect that that is a harmful narrative but also to assess what is the harm to that to the civilians and then to think of an adequate response to these complexities that I just mentioned. And what I’ve seen in the past years is that how countries affected by armed conflict. countries the spread of misinformation and disinformation, and also hateful and offensive speech can already aggravate tensions and can intensify conflict. dynamics, which of course will have a very important toll on civilian population. For example, harmful information can increase pre-existing social tensions, pre-existing grievances. It can also even take advantage of pre-existing grievances to escalate social tensions and exacerbate polarization, violence, all the way to a point where it’s a disintegration of social cohesion. Information narratives can also encourage acts of violence against people or encourage other violations of humanitarian law, and you already mentioned quite a few examples. Alishka also mentioned a couple of examples. The spread of misinformation and misinformation can increase vulnerabilities to those affected by conflict. The distress, the psychological weight it can cause, which is often invisible. For example, think of how harmful information may feed anxiety and fear and also mental suffering of people that are already under significant distress. We fear that the spread of harmful information can also trigger threats, harassments, which may lead to displacement and evictions, and I think a couple of examples were already given in the room. We also worry about stigmatization and of discrimination. Think of survivors, for example, of sexual violence. Think of families that are thought about as belonging to one or the other of a group or one or the other an ethnic group, for example, where they may be stigmatized about people being denied access to essential services as a result as well, only because they belong to a group that is subject to an information campaign or a narrative. We also fear that distorted information in times of emergencies and people’s ability to access potentially life-saving information is heavily compromised today. People may not be able to judge what information they can trust, at what time when time when they really need accurate and timely information for their safety and for their protection. For example, to understand what is happening around them, where danger and risks may be coming from, roads that are open or not safe or not, locations of checkpoints, et cetera, and how and where they may find assistance, whether it’s medical or other types of assistance, or take measures and make timely decisions to protect themselves or to even search for help. So the digital information space can also become a space where behavior that are counter to international and humanitarian law may occur, including, and I will not give contextual examples, including the incitement to targeting of civilians, to killing civilians, making threats of violence that may be considered as terrorizing the civilian population, but also information campaigns, whether they are online or offline, and I would like to underscore online and offline, can also disrupt and undermine humanitarian operations. Hattab spoke a bit about that, but I wanna say that when this happens, when undermining humanitarian operations may also hinder the ability to provide these humanitarian services to people that are most in need for it, and of course also compromise safety of humanitarian aid workers. One last point I’d make on this is that even the approaches that are adopted to address this phenomenon in themselves, and Chantal, you mentioned that in the beginning, it may also, intentionally or not, impact people’s access to information. It may fuel crackdown, more surveillance, more tracking of people, crackdown on freedoms, on media and journalists, and of course also on political dissent and potentially also on minorities. So as a humanitarian actor, we believe that this isn’t the issue that requires. a bit of a specific attention, not only because of the implication it has on people’s lives, their safety, and their dignity, but also because of how complex the environment is. And from that angle, a conflict-sensitive approach will be necessary. We’re used to discussing a lot on the impact of disinformation, for example, from a point of view of public health campaigns, election campaigns, freedom of speech, et cetera. When it comes to conflict, a conflict-sensitive approach will be necessary. In other words, an approach that really helps us ask how to best assess the potential harm in the information dimension of conflict, and also how that may have impact on civilians that are already affected by several other types of risks, mostly offline. And of course, think of adequate responses that will not cause additional harm or amplify harmful information, whatever the type of that information will be. And happy, of course, to talk a little bit more about that and how it connects to other risks later in the hour. Thank you.

Chantal Joris:
Thank you very much, Joelle. I do find this point very interesting about, as a freedom of expression organization, we look at something like disinformation, obviously, through the lens of the human rights framework and the test to apply to restrict freedom of expression. But it’s interesting to think about it from the perspective, again, of the potential harm, what are the adequate responses, and whether they are the same as the ones we would identify normally as a freedom of expression organization, as the adequate responses to this information that do not have any unintended negative consequences. With that, let me move to Elanai. So, I know that some GNI members are obviously telecommunication and internet. service providers or also hosting platforms and so I’m just curious to hear like what discussions have you had at the GNI specific to two conflicts and perhaps can you talk a bit about what pressures have companies reported to be facing if they operate in these conflicts from from the conflict parties

Speaker:
yeah sure thanks Chantelle and thanks for the opportunity to be on this panel maybe to start just to say GNI is a multi-stakeholder platform working towards responsible decision-making in the ICT sector with respect to government mandates for access to user information and removal of content we bring together companies civil society academics and investors and all of our members commit to the GNI principles on freedom expression and privacy and our company members are assessed against these principles in terms of how they are implementing them in their policies and their processes and their actions and we also do a lot of learning work and and policy advocacy and so as part of our some of our learning work we started a working group on the laws of armed conflict to examine responsible decision-making during times of conflict and the challenges that many of our member companies were facing and then we are also holding a learning series organized by GNI ICRC and CIPRI which is meant to be an enable and honest conversation around the ways that ICT companies can have impact and be impacted in the context of armed conflict and that’s really you know to say that I’m I’m coming to this conversation as GNI not really being or not necessarily being an expert in IHL or working in times of armed conflict but we are trying to bring together the right experts ask their write questions and have the conversations that are necessary to help companies and other stakeholders navigate these really complicated situations. So I think to answer your question, Chantelle, as we’ve heard from a number of our speakers today, armed conflicts are really complex and there is a lot at stake. Technology companies may offer services that support critical functions, provide critical information for citizens, but they can also be used to directly or indirectly facilitate violence, spread false information, potentially prolong and exasperate conflicts. And that’s just a few of the potential impacts. There are a number of different risks that companies may need to navigate during times of conflict and they often have to take difficult decisions that require the balancing of a number of stakeholder interests. This includes risks to people, individual users, journalists, vulnerable communities, societies. As well as navigating risks to a company, including its infrastructure, services, equipment, but probably most importantly, their personnel. And especially for telecom companies who have offices on the ground, often their personnel are at risk. And I think companies may need to navigate a whole range of questions about if they operate in a context and what that impact might be. I don’t think it’s a clear-cut answer. They, on one hand, may be providing access to critical information, they might be a more rights-respecting alternative, but they also might be used to facilitate the violence. They have to navigate questions about how they operate and function during times of conflicts, including how they’re responding to government demands. These can take many different forms, including requests for access to user information, giving access to networks for surveillance purposes, shutting down the networks, carrying messages on networks, removing content, and more. more. I think that we’ve seen that these demands may be informal. The legal basis for the demand may be unclear. The duration of the measure being required may not be specified. For example, it might not be clear when a network shutdown should be ended. The scope of the demand may be extremely broad. And I think something that was said by another speaker that’s important is that these demands can come from both sides of a conflict and not just one government. And so I think as companies manage risk to people and their company, their ability to respond to government mandates in other ways that might be available to them during times of peace can be really limited. For example, during a time of peace, you could say a company should request clarity of the legality of the request and communicate with the government to determine exact requirements. They should be responding in a way that is minimal, refuse to comply, partially comply or challenge a request through legal channels, disclose information about receiving the request to the public or notify the user, maintain a grievance mechanism when the privacy and freedom of expression of users is impacted by complying with the request. But I think in times of conflict, as they face these different risks that they have to manage, it can be really difficult for them to undertake these measures. And I think just from discussions that we’ve heard, things that are useful include companies having risk management frameworks in place, clear escalation channels, clear thresholds to understand what triggers different actions, working with other actors to understand the legality of requests, working with other companies to coordinate actions in a specific context, and importantly, engaging with experts, including to understand the implications of different decisions and ensuring formal and constant review of decisions on how to improve their actions going forward. And I think another challenge that we’ve heard in our discussions is that it can also be challenging to understand when to pull back or to de-escalate different measures that are in place because it’s not always clear when a conflict ends.

Chantal Joris:
Thank you very much and I do also really support in in these contexts that the necessity of a multi-stakeholder approach because perhaps say the ICRC might not be an expert classically with content moderation or maybe not yet maybe that’s still to come ISP providers are not necessarily experts in conflict settings they don’t maybe understand both of them maybe don’t understand the typical threats around this information so I do think it’s extremely important that different actors work together. Let me go back to to Tetiana maybe focus this sort of second half of the discussion a bit more on trying to identify gaps where we need more clarity and and also have Tetiana and Kattab speak to the role of ICT companies specifically in the context of their

Tetiana Avdieieva:
conflict. Tetiana over to you. Yeah thank you very much and I particularly liked how the discussion is currently going I mean what I wanted to briefly follow up and maybe start the discussion around how the ICT companies how platforms generally have to respond is that we have to make the clear distinction when organic spread of harmful information turns into spread of actually illegal content and probably this line has to be specifically identified for the context of our own conflict where the effect of the organic harmful information is amplified by the very context in which it is put. As regards the ICT platforms, for me, since in Ukraine there is no actual mechanism to engage with the platforms on the state level, in terms of we do not have jurisdiction over most of tech giants, and that creates the biggest problem, because there is no opportunity to communicate with the platforms otherwise, except for the voluntary cooperation from their side. That is probably the biggest challenge we as an international community have to resolve, because usually states which face armed conflicts or which face civil unrest, and we can expand this context even to other emergency situations, they do not have the legal mechanisms to communicate with the platforms, and that is the primary stage for the discussion. We have to understand when companies have to respond to the governmental requests, to governmental requests of which governments the companies have to respond, especially when there is suspicion or when we actually know that the government, for example, is authoritarian one, when the government has, the state generally has, the very high index of human rights breaches, whether the companies have to be involved into the discussions with such governments, with such states at all. So that is the primary point probably we have to think about. The second thing is to what extent IHL and IHRL have to collaborate when we are speaking about the activities of the ICT. For example, and I can share the link. in the chat, our organization Digital Security Lab Ukraine has done an extensive research on disinformation, propaganda for war, international humanitarian law, international criminal law and international human rights law. There is a big discourse about what are the definitions, which legal regime is applicable and how the states generally and international community have to react when these kinds of speech is delivered. With companies, it is even more difficult just because for them, they’re rather, and I mean, I can absolutely understand why it happens. They’re rather waiting for international organizations. For example, the UNESCO, the OSCE, the Council of Europe to say, well, there is a reason incitement to genocide. Whether the threshold is reached or not. And that is actually point like, it’s a big plus hundred to multi-stakeholder collaboration. Because there are certain actors which are empowered, which are put in place to say, to call particular legal phenomena by its own name. We have to understand that like, I mean, I wish I could say that there are incitements to genocide in what Russia does in Ukraine. But unfortunately, domestic NGOs won’t probably be the most reliable source and the most trustworthy source in this case. So that’s the point in time when international organizations have to step in. I mean, both international, intergovernmental organizations, international NGOs who can elaborate on those issues. And that might be a potential solution how ICT companies might deal with the prohibited types of content. The prohibited kinds of behavior, which is usually called coordinated innocent behavior online. So… most probably they need assistance on the more global level, as well as assistance on the local level in order to better understand the context. For example, when we are speaking about the slur words, most probably it is more reasonable to resort to the assistance of the local partners. And finally, it is about the issue of enforcement. And here, my main point at any discussion is that we are usually trying to, unfortunately, we are trying to blame and shame companies which are already good phase one. For example, we are constantly pushing Meta to do even more and more and more. And it is nice that Meta is open to a discussion. But on the other hand, we have such companies as Telegram, as TikTok, which are more or less reluctant to cooperate, or in case of Telegram, they’re absolutely closed for cooperation with either a government or civil society. And we also have to solve this issue in particular, because there is a big problem of people migrating from the safe spaces, which are moderated, but have certain gaps in moderation, to the spaces which are absolutely unmoderated, just because people feel over-censored in the moderated spaces. And this over-censorship is often caused by our blaming and shaming strategy. And the very same approach has actually been seen when Meta, for example, was blamed by the increased moderation efforts in Ukraine. I mean, it is good that the ICT companies finally started to do something. And our main task is not to blame and shame them for not doing the same in other regions, but rather to encourage them to apply the very same approach in all the other regions and situations, to develop crisis protocols, to think about… to initiate basically discussions about IHL and IHRL perspectives, to say like publicly what kind of problems they face, probably to launch the public calls for cooperation when local NGOs can apply, when local NGOs can themselves engage with content moderation teams, policy teams, oversight teams in case the ICT company has any. So that’s my main point probably to all the actors involved that when we see the pattern of the good behavioral pattern on behalf of the ICT company, we have to encourage them to expand this good behavioral pattern to other contexts rather than to shame them that they acted in this way only in one situation.

Chantal Joris:
Thank you very much. And I do echo the calls on companies to take all situations of conflicts equally serious and not focus on the ones more that tend to make headlines or where there’s bigger geopolitical pressures behind. So also over to Kataab, then I have two last questions for Elonay and Joël. If you can keep your interest in the topic, if you can keep your interventions relatively short. So we have a couple of minutes also for any questions for the audience that will be appreciated. Kataab, over to you.

Khattab Hamad:
Thank you Chantal and thank you Tatiana for the great intervention. So I will start with the challenges that the ICT companies face during the conflict in Sudan to be specific. So the major challenge that the ICT companies are facing in Sudan during the war is electricity, to be honest. Before the war. the national grid of electricity was only providing 40% of the citizens with power. And after the war, it’s clear that there was a huge shortage in power supply. And this impacted the network stability, the network, by network, I mean the telecom network, not the power network. And the data centers availability, which affected the e-banking service in Sudan and other basic governmental services. However, the ICT companies normalized the power shortage by equipping the devices, stations and data centers with uninterruptible power supply, as well as UPS and power generators. But due to the circumstances of the war, as I mentioned earlier, the companies could not deliver the fuel to the power generators because of security concerns of the workers. So this led a company like MTN Sudan, MTN Sudan, it’s an ISP in Sudan. It led MTN to announce that they had service failure due to the disability of delivering the power fuel. And I will translate to the role of social media platforms in the ongoing conflict. So social media platforms, actually, they played a major role in ousting the National Congress Party of Sudan, which was ruling Sudan for 30 years. And also it assisted us in our pro-democracy movement. But however… These platforms are the main tools of opinion manipulation during the ongoing conflict, as both conflict parties are using these platforms to promote their narrative of the war. However, the new event here is that there is a foreign actor, which is playing a major role in the cyberspace in Sudan, which is META. META took down the official and other related accounts of Rapid Support Forces, and they justified that by saying RSF is considered a dangerous organization, according to the Middle East Eye website. And yes, I confirm that RSF is a dangerous organization, and we know its human rights record and how it’s bad. But this step from META contributed to the efforts of SAF to control the information and the narrative of war, as nowadays only one way of information. You can get information from SAF while RSF is suppressed. My concern is that, yeah, both sides are bad, but making a free environment of information, and then people can get the information that they want, and they can filter by themselves, not making decisions that contribute indirectly. to prolonging the war, and also assisting in the process of polarization. So taking a decision without considering the local context is a big mistake, as I also have another concern, as RSF itself was a part of SAF, as SAF founded RSF in 2013. So it makes sense that both are dangerous organizations. How can you take down one organization and leave the other? Also, the decision impacted the free flow of information. So for example, fact checkers cannot find information to provide verification to the claims, as there is one way of information, and it also has security impact on the people on ground. So there are some gaps that I want to raise, and I think it should be filled. So in this era, the right to access information is related to cyberspace. So the front liners of accessing the information are the telecom workers, the telecom engineers, and other telecom-related workers, because they are the people who provide and operate the infrastructure which allows us to access information. Those workers should be considered by the international law to be extraordinarily protected, like doctors, journalists, and the human rights defenders. Moreover, in Sudan we need more and more training for our people because unfortunately we don’t have enough human resources to grow our internet governance and their knowledge is limited to specific people. And unfortunately these people are using their knowledge to restrict the free flow of information and freedom of expression. And also we have to amend our laws like the Right to Access Act, the Cyber Crimes Law and the Law of National Security as they were being abused using victims by the same people who have this knowledge. So I think that’s it from my side, Chantal and others, back to you Chantal, thank you.

Chantal Joris:
Thank you very much. Yeah, it’s interesting, we’ve heard now twice of these complications around ICT companies potentially sort of de facto asked to choose sides between the parties to a conflict, also like Elonay mentioned earlier. And also I think very interesting point about the key importance of the staff that is in charge of keeping these ICT systems going and perhaps them needing even specific protections to be able to do that. Elonay, so the GNI does refer to the guiding principles on business and human rights, which are key also to the GNI principles as to how companies should respect human rights. They only make very brief reference to humanitarian law, so maybe just an open question as to do you feel that there is a sense from companies that they need more guidance as to what it means for them to respect humanitarian law in addition to human rights?

Speaker:
I mean, yes. I think that is very central to a number of conversations that happen at GNI. I guess I would say so many technology companies approach risk identification and mitigation through the lens of business and human rights. And this includes relying on frameworks such as the OECD guidelines for multinational enterprises and in the UN guiding principles like you just mentioned. And I wanted to highlight that there are a couple of relevant principles and parts of the commentary of the UNGPs for companies and states with respect to operation in conflict affected areas. Importantly, according to the UNGPs, a core principle of the corporate responsibility to respect human rights is that in situations of armed conflict, companies should respect the standards of international humanitarian law. And then also the UNGP state that when operating in areas of armed conflict, business should conduct enhanced due diligence, resulting from potentially heightened risk and negative human rights impacts. And there’s emerging guidance from civil society organizations on how companies can undertake this EHRDD through a conflict lens. I think IHL can help inform tech companies operating in situations of armed conflict about the risks that they might expose themselves, their personnel, as well as other people too. But like you mentioned, I think that more guidance is needed as to how due diligence processes can incorporate IHL, as well as more work can be done on the articulation as to what IHL means for ICT companies.

Chantal Joris:
Thank you very much. Joelle, as the main guardian of IHL, I know the ICRC is looking into some of these also legal and policy challenges that have arisen through these cyber threats. And can you talk a bit about this global advisory board which has? supported the ICRC in addressing some of those. Can you perhaps share some of the initial findings?

Rizk Joelle:
Of course. Would you like me to focus more on ICT companies since that’s where the discussion went? Yes, yes, sure. Okay. So yeah, thanks. It’s a good question, Chantal. The ICRC has set up a sort of a global advisory board about two and a half years ago. So between 2021 and 2023, we brought together at a high level, really at senior level, advising the president and the leadership of the ICRC on basically experts from legal, military policy, tech companies, and also security fields to advise on the emerging digital threats and new digital threats, and to help us improve our preparedness to engage on these issues, not only with parties to armed conflict, but also with new actors that we see are very, that play a very important role in complex situations, including, of course, civil society, but also tech companies. So for these, throughout these two years, we’ve hosted about four different consultations with the advisory board, and hopefully next week on October 19th, we will publish the list of discussions and recommendations. They’re not ICRC recommendations. They won’t be ICRC recommendations, but they will be the advisory board recommendations on digital threats to civilians affected by armed conflict. So I will maybe broadly mentioned the four different trends that were discussed in these consultations between the global advisory board and the ICRC, and then I will focus a little bit on the recommendations linked to the information space, and then to ICT companies. And I’ll try to be quick, because I’m aware of time. So the first trend that was discussed between the ICRC and the global advisory board. board is the harm that cyber operations have on civilians during armed conflict. So focusing again on the emerging behavior of parties to armed conflict in the cyber space, but also other actors in that space by disrupting infrastructure services and data that may be essential to functioning of society, but also to human safety. And there we consider that there’s a real risk that cyber operations will indiscriminately affect widely used computer systems that are connecting and connected civilians and civilian infrastructure, but in a way that goes beyond the conflict. So as a result, it may interrupt access to essential services, but also hinder the delivery of humanitarian aid and cause, of course, offline harm and injury and even death to civilians. The other issue or the trend that was discussed is the question that we are discussing today, and that is connectivity and the digitalization of communication systems and the spread of harmful information. And similar to what we already discussed at length in this session, recognizing information operations have always been part and parcel of conflict, but the digitalization of communication systems and platform is amplifying the scale, reach, and speed for the spread of harmful information. And that, of course, leading to distortion of facts, influencing people’s beliefs and behaviors and raising tensions and all what we have already discussed, but really stressing that the consequences of this is online as well as offline. The third issue discussed, and this is really an issue that we hold very close to heart as the ICRC, and that is the blurring of lines between what is civilian and what is military in the digital dimensions of conflict. And seeing that civilians and civilian infrastructure becoming more targets of attacks in that. space in the digital dimension of conflict. And of course, this is an issue that is of growing concern as digital front lines are really expanding, and they’re expanding also, let’s say, conflict domains. The closer that digital technologies move civilians to hostilities, the greater the risk of harm to them. And the more digital infrastructures or services are shared between civilians and military, the greater the risk of civilian infrastructure being attacked, and of course, as a consequence to that, a harm to civilians, but also undermining the very premise for the principle of distinction between civilians and military objectives. And finally, of course, not by any way the least important, the fourth issue, very important to us as a humanitarian actor and to all humanitarian organizations, the way in which in the cyber domain, cyber operations, data breaches, and also information campaigns are undermining the very trust that people and societies are putting in humanitarian organizations, and as a result, the ability to provide life-saving services to people. So some of the recommend, of course, the board had 25 recommendations. I will, of course, not go through them now, but I will invite you to have a look and read that report that will be launched on October 19th. I think it’s really a beginning of an important conversation between multiple stakeholders in that field. I will maybe speak a little bit on the recommendations in relation to information, to the spread of harmful information, and maybe after listening now to you, I will also add a few on recommendations specific to ICT companies. So, of course, in addition to recommendations on parties to respect their international legal obligations, but also assess the potential harm that their action… and policies are causing to civilians and taking measures to mitigate or prevent that. This is, of course, a broad recommendation. But more specifically, a recommendation to states to build resilience and societies to build resilience against harmful information in ways that uphold the right to freedom of expression, protect journalists, and really improve the resilience of societies. And by a resilience approach, we, of course, understand that this is a multiple stakeholder approach that also involves civil society and companies alike. So thinking about it as a 360-degree approach to addressing the information disorder. Another recommendation to the platforms is recognizing the fact that a lot of this misinformation, disinformation is spreading through social media and digital platforms and calling on them to take additional measures to detect signals, analyze sources, analyze methods of distribution, different types of harmful information in contextual approaches to managing that and analyzing what may exist on their own platforms in this context. But particularly in relation to situations of armed conflict, I think Khattab’s example is a classic example of the importance of contextualizing these policies. And that these policies and procedures, including when it comes to contact moderation, as Khattab mentioned, should also really align with humanitarian law and human rights standards that Shantag also have mentioned. And, of course, lastly on that is a recommendation to us and to humanitarian organizations at large to strive to detect signals of the spread of harmful information, but also assess the impact on people. And keeping in mind that any responses to harmful information does not or must not amplify harmful information in itself. or cause additional or other unintended harm. And of course, a call to contribute to, again, the resilience building of affected people in conflict settings. If I still have a couple of minutes, I’ll maybe just mention some of the recommendations to ICT companies that are at large and more linked to the cyber domain and not necessarily to information operations or harmful information. And some of these recommendations include the segmentation of data and communication infrastructure between what is providing military purposes and those that are used by civilians. So segmentation of communication infrastructure where possible. Also awareness of risk for companies and awareness of the legal consequences around their role and their action and the support they may provide to military operations and private clients as well. And that awareness of the consequences that their involvement and the use of their products and services in situations of conflict may have. Also ensuring that restrictive measures that may be taken in situations of conflict, sanctions or others related to sections or other or self limitations as well, do not impede the functioning and maintenance of medical services and humanitarian activities. And of course the flow of essential services to the needs of civilian population. I’ll stop here. Thank you Chantal for giving me the opportunity to elaborate on that.

Chantal Joris:
Thank you very much. I know we’re basically out of time, but I do wanna, before we get kicked out, see if anyone has something they would like to add, something that you think has been missing from the discussions and should be taken into account by the people working on this, or questions of course also to the speakers if they can stick around. around for five more minutes.

Audience:
Yeah, thank you. My name is Julia. I work for German Development Corporation and I would have one question. Yesterday morning, Maria Risa said we need more upstream solutions for this information topic. And we heard a lot now about more downstream solutions, so content management, taking down certain profiles, et cetera, et cetera. So my question would be, what are your views about questions of design of platforms? So why do we talk, how do we talk about redesigning algorithms, business models, et cetera, and what your perspectives are on these aspects? Thank you.

Speaker:
I mean, I would just say that I think it’s really important that companies start to build in the capacity to apply a conflict lens to the development of their products. And I know that ICRC, for example, is working on building and working with companies to build out this capacity. So I think we have to consider both upstream and downstream solutions.

Chantal Joris:
Cut up, Joelle, Tatiana, do you wanna come in on this question quickly?

Rizk Joelle:
I will just say very briefly, it is in line with a 360 degree approach, of course, that involves not, I mean, in the upstream thinking that the very business model is reinforcing in a way, the way that these policies can be enforced. So from that angle, I would tend to, of course, agree, but realistically, I think this would be a very challenging discussion that also requires expertise that may not be in the hands of those that are currently conducting that feedback loop with the tech companies.

Chantal Joris:
Thank you very much. I will perhaps see if there’s. Any other quick questions in the room? Yes, go ahead.

Audience:
Hi, I’ll be super quick. Lindsay Anderson from BSR. For those who don’t know, we help companies implement the UNGPs and conduct human rights due diligence. And I just wanted to flag a resource that might be useful for folks on this topic. About a year ago, we published a toolkit for companies on how to conduct enhanced human rights due diligence in conflict settings, which we developed alongside Just Peace Labs and other organization. And it’s very detailed, obviously targeted to companies, but it might be useful for those who are advocating with companies who wanna understand under the UNGPs specifically what they should be doing and what enhanced human rights due diligence looks like in practice. So if you Google BSR, conflict sensitive due diligence, you’ll find that resource. Hi, I’m Farzaneh Badi. So I’m working on a project related to USAID, and they are looking at human-centered approaches to digital transformation. And they want to know and understand what’s, how it can look like, and how can they can actually engage with the local communities when they are doing this actual digital transformation work. And some, one part of that is dealing with crisis. But the challenges that we see in human-centered approaches and human rights analysis is that in, especially in countries that are in war zones, getting in touch with the communities and receiving their feedback and have that kind of stakeholder consultation is extremely difficult. And I want to know if there are actual recommendations out there. And also, how can we use these mechanisms, these human rights mechanism, human-centered approaches to not to leave anyone behind? Because we are not talking about Afghanistan anymore. And like this is maybe, so thank you so much for this session because I’ve been thinking about Sudan and I’ve been thinking about Afghanistan and how sanctions affect them and how they’re in crisis. But in this meeting, we need to talk more and more about them so that they won’t be forgotten. So thank you for this session, but also like the recommendations to get in touch with the community and kind of address their needs as well. When we are doing the digital developments and after that, during the crisis, that would be great, thank you.

Chantal Joris:
Thank you very much. And I know a lot of material has been mentioned that will come out and then some of them, I think also focuses on stakeholder engagement, but I think you’re absolutely right. There is still a lot more to be learned and improved. So I mean, if anyone has anything in this sense to offer. Yes.

Audience:
Yeah, thank you for giving me a space for I want to support Tatiana’s words and I think that international society should do more pressure on global media platforms because they basically control what people think with their recommendation algorithms. Facebook actually can do a revolution in the click by altering the news feed in some social accounts like in some country. So that we analyze that and we see that global media platforms are extremely unsupported as they’re extremely against publishing their recommendation algorithms. and it was mentioned before that some global media platforms take sides in the informational war happening all across the globe and that’s like some bad condition because they are tend to be neutral because like there is no bad and good side there’s like side A and side B in every conflict and we see that global media platforms tend to take side to tend to alter recommendation algorithms for the profit of one of the war sides but they are not doing it publicly so they try to shadow it out so they pretend to be non-biased and neutral but they are not so I think that the global society and here I support Tatiana for 100% should do more pressure on global media platforms globally thank you so much yes thank you very much and I do think I mean there have been long standing calls around more transparency when it comes to the recommender systems we’ve had digital services act just adopted in the EU let’s see if this will bring improvement and I know that Eliska has strong views on this as well I mainly wanted to since a couple of us mentioned several resources so together with article 19 you kindly co-drafted and so Tatiana actually the joint declaration of principles on content governance and platform accountability in times of crisis we did not manage to come up with a shorter title this is still documented it’s available on our website it’s a joint effort of number of civil societies that have either first hand experience with crisis or similarly to access now an article 19 have global expertise in this area, and I think there are a number of, even though it’s a declaration, we still managed to put together 10 pages of relatively, at some instance, detailed rules for platform accountability. The declaration, why am I mentioning it? It is specifically addressed to digital platforms that find themselves and operate in the situation of crisis. It has different recommendations for what should be done prior to escalation, during the escalation, and post-crisis, emphasizing correctly, as the speaker from GNI mentioned, that there is no clear end or starting point of any crisis. So there are a couple of detailed rules without going into the details. The document was launched at the IJ last year, so it’s already one year old, but I think some important principles and rules can be found in there that can serve at least as a guiding light. Thank you.

Chantal Joris:
Thank you so much. I’ve been told to close, also perhaps to say that Article 19 is also working on two reports, one specific to propaganda for war and how it should be interpreted under the ICCPR, and the other one also trying to identify and address some of these gaps that exist when it comes to the digital space and armed conflict. So as you can tell, a lot more material is coming out, still not enough quite yet, or it’s just the start of a process. So thank you to our excellent speakers, Roel, Tatiana, Kataaf, Elunay. Thanks, it was a pleasure to have you. And thank you for everyone in the room and online who participated. And we will be speaking about this topic for years to come, for sure. Thank you so much. Thank you.

Audience

Speech speed

162 words per minute

Speech length

2320 words

Speech time

861 secs

Chantal Joris

Speech speed

157 words per minute

Speech length

2388 words

Speech time

911 secs

Khattab Hamad

Speech speed

118 words per minute

Speech length

1460 words

Speech time

743 secs

Rizk Joelle

Speech speed

157 words per minute

Speech length

3062 words

Speech time

1168 secs

Speaker

Speech speed

169 words per minute

Speech length

1338 words

Speech time

476 secs

Tetiana Avdieieva

Speech speed

150 words per minute

Speech length

2699 words

Speech time

1080 secs

Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Ololade Shyllon

The utilization of sandboxes, which are regulatory frameworks that permit controlled experimentation and innovation in the financial technology (FinTech) sector, has encountered challenges in Africa and the Middle East. Presently, there are only one or two FinTech-related sandboxes in the region, indicating a slow start in this field. This lack of progress is viewed negatively.

However, there is recognition of the necessity for positive outcomes regarding sandboxes across the entire region. Sandboxes can provide a conducive environment to test new ideas, products, and services. Fostering innovation in the FinTech sector is considered crucial for economic growth and development.

In terms of regulatory collaboration and policy-making, there is a positive sentiment towards regional cooperation. This collaboration can enhance the understanding of the FinTech ecosystem and enable stakeholders to learn from one another. By working across borders, stakeholders can share insights and enrich their collective understanding. Moreover, the existence of global treaties provides a basis for common rules, despite variations in individual legal systems. This regional collaboration is seen as a proactive step towards achieving the Sustainable Development Goals (SDG) related to industry, innovation, and infrastructure (SDG 9) as well as partnerships for the goals (SDG 17).

Advocates for a harmonised approach to regulation and policy-making believe that this method can yield positive outcomes. Specifically, the organisation META supports and promotes a harmonised approach, emphasising the importance of collaboration and experimentation. By identifying basic principles applicable globally, a harmonised approach can help to create a more cohesive regulatory environment.

However, the likelihood of increasing harmonisation beyond the national level is deemed to be complex. This complexity arises from various challenges, such as differences in legal systems and the unique data governance challenges faced in the region. Despite the challenges, sandboxes are considered crucial in stimulating innovation within Africa and the Middle East. Implementing sandboxes requires significant resources and time, given the nascent stage of data governance in the region. Nevertheless, the potential benefits and importance of fostering innovation drive the push for sandboxes.

In conclusion, sandboxes in Africa and the Middle East have faced challenges in their establishment. However, there is a recognised need for positive outcomes regarding sandboxes across the region. Regional collaboration in regulation and policy-making is seen as a means to better understand the FinTech ecosystem. Advocates for a harmonised approach believe it can contribute to a more coherent regulatory environment. Despite the complexity and challenges, sandboxes are seen as crucial for stimulating innovation.

Dennis Wong

Singapore has been implementing sandboxes as a policy mechanism to experiment with uncertain applications and technologies. These sandboxes are widely used to explore frontier technologies and collaborate with the industry to ensure clarity and compliance. The use of sandboxes has proved beneficial, providing confidence in data protection, accelerating the deployment of technologies, facilitating regulatory guidance, and promoting business collaborations. Sandboxes also contribute to regulatory understanding and transparency, as the findings from sandbox experiments are published, offering insights into regulatory issues.

It is important to note that sandboxes are not designed for volume but for specific cases with clear objectives. They are intended to provide a safe environment for experimentation and to understand the underlying technology and industry needs more clearly. Sandboxes also facilitate the publication of the experimental findings, enabling regulators and other interested parties to gain a deeper understanding of the regulatory landscape.

However, the process of identifying technology players for companies can be time-consuming, particularly when companies have specific requirements such as a need for privacy-enhancing technology. In such cases, the process becomes longer and more involved.

While sandboxes offer valuable insights and guidance, there are also other policy innovation tools like policy clinics that can provide quicker advice on accountabilities. Policy clinics can expedite the process by offering timely guidance on accountability matters.

Coordinated efforts among regulators are crucial to address sector-specific challenges. If a regulatory question arises in the finance or healthcare sector, the respective authority is brought in to work jointly on addressing the issue. This emphasizes the need for collaboration and partnerships among regulators.

Furthermore, discussions related to sandboxes are primarily domestic but include industry players who operate globally. The sharing of learning and experiences from sandboxes is seen as essential, with the transferability of such knowledge being highly valued by stakeholders.

Dennis Wong, the Data Protection Deputy Commissioner and the Assistant Chief Executive of IMDA, supports broad conversations and principles that everyone can agree on. As interest in sandboxes as a regulatory tool grows, it leads to more tech conversations and meetings with interested regulators, promoting international collaboration.

It is important to understand that the regulatory sandbox is not a decision-making or exemption-providing mechanism. Instead, it serves as a dialogue-based guidance tool to explore areas of regulation where there may be uncertainty. The emphasis is on dynamic and agile regulatory development involving ongoing engagement and a back-and-forth process, rather than providing a final answer at the end.

To conclude, Singapore’s use of sandboxes as a policy mechanism for experimentation and regulation has proven beneficial in facilitating innovative solutions, promoting compliance, and fostering collaboration between industry and regulators. The findings from sandbox experiments offer valuable insights into regulatory issues, supporting the development of transparent and effective regulatory frameworks. Coordinated efforts, both domestically and internationally, are necessary to address sector-specific challenges and promote the transferability of knowledge gained from sandboxes. The regulatory sandbox, as a guidance tool, contributes to dynamic and agile regulatory development by facilitating ongoing engagement and dialogues.

Kari Laumann

During the discussion, the speakers emphasized the significance of learning from the experiences of others when it comes to implementing and operating sandboxes. They highlighted the importance of reaching out to experts in the field, such as the British Data Protection Authority (ICO), to gather insights and knowledge. The speakers stressed that sharing information and learning from established sandboxes, like the one implemented by ICO, can greatly contribute to the success of a new sandbox.

The speakers also highlighted the need to adapt sandboxes to fit specific contexts when transferring them from one place to another. Cultural and other differences were cited as factors that necessitate customized adaptations. The speakers shared their experience of ensuring that the sandbox they learned from ICO was tailored to suit their own context, making it more effective in achieving their objectives.

Another key point raised during the discussion was the importance of tailoring the sandbox to the needs of the target audience. The speakers emphasized that while sharing information is crucial, it is equally important to create a sandbox that is tailored to the purposes and needs of the group it is meant for. This ensures that the sandbox effectively addresses the specific challenges and requirements of the target audience, maximizing its impact.

The regulatory sandbox was explored as a tool that offers guidance and clarity to companies. It allows for the exploration of areas of regulation where uncertainty exists. The speakers clarified that regulatory sandboxes do not provide exemptions or approvals, but rather facilitate the examination of regulatory gray areas within laws like the General Data Protection Regulation (GDPR). It was emphasized that regulations, including GDPR, continue to apply within the sandbox, ensuring that the applicable regulatory framework is not compromised.

Additionally, it was noted that organizations’ regulatory powers, such as those of META (presumably a regulatory authority), are strictly regulated by GDPR. This serves to maintain the integrity and accountability of regulatory bodies, ensuring that they comply with case handling and enforcement actions under GDPR.

In conclusion, the discussion highlighted the importance of learning from others’ experiences and adapting sandboxes to specific contexts. Tailoring the sandbox to the needs of the target audience and ensuring compliance with relevant regulations, like GDPR, are crucial factors in the successful implementation and operation of sandboxes. The exchange of insights and lessons learned from established sandboxes can greatly contribute to the effectiveness and impact of new sandboxes.

Pascal Koenig

During an online discussion on regulatory sandboxes, the participants emphasized the importance of learning from experiences and promoting international collaboration. There was a consensus on the need for sharing knowledge and transferring sandboxes from one context to another, while also acknowledging the need for adaptation. One example cited was Denise’s sandbox, which provided inspiration to others. The significance of cross-border data flows and enabling collaboration between regulators and authorities were also highlighted. The possibility of increasing harmonization of sandboxes on a regional level was discussed, with different perspectives on likelihood. Overall, the discussion focused on the importance of learning, collaboration, and potential harmonization to advance regulatory sandboxes globally.

Lorrayne Porciuncula

This comprehensive analysis delves into the topic of regulatory sandboxes, which are viewed as a means of policy prototyping for experimentation purposes. It highlights several key points that demonstrate the significance and potential of sandboxes in various contexts.

One important aspect discussed is the diverse skills required for the successful deployment of sandboxes. The analysis emphasizes that there is no single skill or set of skills that is universally applicable to all sandboxes and use cases. Instead, the skills needed depend on factors such as national jurisdiction, institutional framework, and the specific issue being addressed. This insight underscores the flexibility and adaptability of sandboxes, allowing them to be tailored to different circumstances.

Stakeholder engagement is another critical factor highlighted in the analysis. It argues that sandboxes should be designed to engage stakeholders from the very beginning, during the design phase. This approach fosters institutional trust and ensures that the sandboxing process is inclusive and representative of diverse perspectives. The analysis contrasts this approach with the current state of sandbox development, which often involves merely posting a consultation online and then leaving it. Instead, it suggests a more iterative and hands-on process that actively involves stakeholders throughout the sandbox implementation.

The analysis also focuses on the importance of capacity building and the creation of a community of practice to share best practices and reduce the cost of implementing sandboxes. It mentions a project in Africa that aims to build such a community through a Sandbox Forum. The forum’s approach prioritizes direct engagement and practical application over theoretical discussions, reinforcing the need for a collaborative and action-oriented approach to sandboxing.

Evaluation of sandbox implementation is another crucial aspect discussed in the analysis. It emphasizes the need to measure and monitor sandbox success using different methods. Factors influencing sandboxing success include stakeholder involvement, risk mitigation, and the technology used. Sharing this knowledge and evaluating sandbox outcomes can lead to improvements in the sandboxing process overall, enhancing its effectiveness in promoting innovation and achieving desired outcomes.

The analysis also explores the role of sandboxes in regulatory frameworks, particularly in the fintech sector. It highlights how sandboxes allow regulators to go beyond traditionally regulated entities, as exemplified by the success of open calls for different companies and innovative solutions in fintech sandboxes, such as Brazil’s PIX payment system. Ensuring fairness and avoiding regulatory capture are identified as important considerations in sandbox implementation.

Mitigating the risk of bias and regulatory capture in sandboxes is further discussed in the analysis. It suggests that regulatory frameworks should be aware of these risks and develop appropriate measures to anticipate and address them. Open conversations about best practices and framework setup are considered essential in this regard.

The analysis also underscores the impact of international collaboration in the deployment of regulatory sandboxes. It highlights the potential of cross-border perspectives to enhance the understanding and deployment of privacy-enhancing technologies and data intermediaries. Furthermore, it notes that new trade agreements can create opportunities for testing business, societal, and regulatory issues among participating countries. This observation emphasizes the crucial role of international cooperation in addressing complex issues related to innovation, data protection, public health, and climate change.

In conclusion, this analysis advocates for a comprehensive and inclusive approach to regulatory sandboxes. It emphasizes the need for diverse skills, stakeholder engagement, capacity building, evaluation, fairness, and international collaboration. By adopting such an approach, regulatory sandboxes have the potential to foster innovation, reduce inequalities, and tackle complex global challenges. The analysis provides valuable insights and recommendations for policymakers, regulators, and stakeholders involved in the design and implementation of regulatory sandboxes.

Moraes Thiago

The speakers highlighted several important points regarding sandbox initiatives in the analysis. One of the main points emphasized the need to foster dynamic discussions on strategies that stimulate innovation while upholding human values. It was acknowledged that sandbox initiatives play a significant role in promoting innovation and ensuring adherence to fundamental values of humanity. The primary goal of this session was to encourage a dynamic discussion among all relevant stakeholders.

Another significant point discussed in the analysis was the launch of the ANPD Regulatory Sandbox on AI and Data Protection. This initiative was created in collaboration with partners like CAF Consultants, aiming to provide a space for innovative ideas while safeguarding individual privacy and data protection. It was recognized that striking a balance between promoting innovation and protecting privacy is crucial in the development of sandbox initiatives.

The importance of international collaborations in shaping the future landscape of sandboxes was also emphasized. It was acknowledged that international collaborations play a crucial role in shaping the future of data governance and AI innovation. Collaboration among different countries and stakeholders is seen as a key driver for advancing regulatory sandboxes and ensuring collective progress.

Furthermore, the analysis highlighted that the call for contributions for the ANPD Regulatory Sandbox will be inclusive by accepting submissions in English. This inclusivity in language aims to make the dialogue more accessible and enable a broader range of stakeholders to participate. By accepting submissions in English, the call for contributions aims to reduce barriers and promote a more inclusive and diverse discussion.

In conclusion, the analysis underlined the significance of sandbox initiatives in stimulating innovation and upholding human values. The launch of the ANPD Regulatory Sandbox on AI and Data Protection aims to strike a balance between innovation and privacy protection. International collaborations were recognized as an essential element in shaping the future of data governance and AI innovation. Lastly, the call for contributions being inclusive and accepting submissions in English adds to the accessibility and diversity of the dialogue.

Axel Klapp-Hacke

Data is considered a critical asset for economic growth and sustainable development. It provides valuable insights for decision-making in areas such as food security, climate change mitigation, and health policies. Data empowers policymakers and private organizations to allocate resources effectively, solve problems, and prepare for risks. However, to ensure the fair and responsible use of data, regulatory frameworks that protect data sovereignty and security need to be strengthened. These frameworks strike a balance between reaping the benefits of data utilization and safeguarding citizens’ rights. Additionally, data and artificial intelligence (AI) have great potential in achieving the Sustainable Development Goals (SDGs). They can facilitate the delivery of medical services, increase efficiency in agriculture, and improve food security, contributing to broader sustainable development objectives. Regulatory sandboxes are also discussed as a means to promote a free, fair, and open data economy. These sandboxes provide a controlled environment for testing and developing innovative solutions while complying with regulatory requirements. By embracing the full potential of the data economy through regulatory frameworks and innovative approaches like sandboxes, we can harness the transformative power of data for economic growth and sustainable development.

Agne Vaiciukeviciute

GovTech sandboxes have emerged as a key component of Lithuania’s innovation ecosystem. These sandboxes, initiated in 2019, have received recognition at the European level for their positive impact on public governance. They provide a controlled environment for testing and implementing innovative solutions in the government sector. Several artificial intelligence (AI) solutions are currently being used by the Lithuanian government, demonstrating the success of GovTech sandboxes in driving technological advancements.

Lithuania places great emphasis on the potential of 5G technologies for innovation. With 90% coverage of the population, Lithuania has invested over 24 million euros in 5G-based projects, with more than 53 projects worth over 124 million euros in the pipeline. The government’s proactive approach to investing in 5G technologies reflects their commitment to harnessing the power of emerging technologies.

The Lithuanian government advocates for a flexible and adaptive regulatory framework that responds to technological innovation. The Sandbox regime in Lithuania enables the government to adapt regulations in line with advancements in technology. This fosters a regulatory environment that supports innovation and allows for the exploration of new possibilities.

To ensure unbiased and inclusive solutions, Lithuania mandates the participation of diverse stakeholders, including higher education institutions and civil society, in the sandboxes. This approach prevents a one-sided approach in the sandbox solutions and promotes fair outcomes in the innovation process.

In Lithuania, sandboxes primarily focus on mature technologies and ideas rather than early-stage testing. This strategic approach ensures that the sandboxes are effectively used to advance technologies with strong potential for real-world implementation.

While collaboration with other countries, such as the United Kingdom, for the establishment of sandboxes is valued, Lithuania recognizes that harmonization may not be necessary in the short-term. Cross-border collaboration is seen as more beneficial, allowing countries to work together and learn from each other’s experiences.

Learning from experiences and sharing knowledge is considered crucial for the regulation of innovations. Collaborations with the UK have provided valuable insights into the establishment and operation of sandboxes. The importance of learning from experiences is highlighted, although it is too early to implement harmonization as the concept of sandboxes is still actively being discussed.

Sandboxes are viewed as a vital tool in Lithuania to test and validate innovations. Government policies are closely aligned with the process of sandbox testing, and policy-makers work closely with those involved in testing systems. This reflects the country’s commitment to fostering innovation and ensuring that policies and regulations are effective in real-world scenarios.

The need for regulations to be dynamic and adaptable to reality is emphasized in Lithuania. Existing regulations without practical use cases indicate a disconnect with the evolving technology landscape. Additionally, sandbox testing may uncover failures or unforeseen challenges, further highlighting the necessity of regulatory adaptability.

In conclusion, GovTech sandboxes have become a central part of Lithuania’s innovation ecosystem, receiving recognition and awards for their positive impact on public governance. The country’s focus on 5G technologies, flexible regulatory frameworks, diverse stakeholder participation, and testing mature technologies in the sandboxes demonstrate their commitment to fostering innovation. Collaborations with other countries, learning from experiences, and the importance of dynamic regulations contribute to Lithuania’s progressive approach to driving technological advancements.

Audience

The discussion focused on the use of sandboxes in different sectors and explored the advantages and concerns associated with their implementation. One pertinent aspect was the AI Act, which stipulates that a national authority should operate a national sandbox. However, concerns were raised regarding the practicality of implementing this legislation. Specifically, there were apprehensions about the significant amount of time and resources required to study and create a test base for each use case.

Sandboxes were also discussed in relation to their potential role in combatting misinformation. CNET, for example, has developed a sandbox specifically designed to address this issue. An audience member raised a question about how civil society can utilise CNET’s misinformation sandbox beyond government use. This prompted consideration of the broader applications and benefits of sandboxes, including their potential use in tracking and analysing the spread of technology-driven misinformation, as well as developing countermeasures.

The value of sandboxes as a space for companies to engage with civil society and build trust was highlighted. It was suggested that sandboxes could serve as a crucial preliminary step before implementing regulations. This approach allows for flexible collaboration between companies and civil society to find appropriate solutions and establish trust-building efforts.

The sandbox approach was deemed particularly useful in the early stages of policy development or policy interrogation, particularly for framing the problem at hand. This experimental tool offers a unique opportunity to explore different policy options, and was seen as an effective way to address complex regulatory challenges.

However, limitations in participation were raised as a potential issue. Due to their nature, the number of firms that can participate in a sandbox is inevitably limited. This could restrict the diversity and inclusivity of the sandbox ecosystem.

Ensuring fairness and preventing distortion of competition were also identified as important considerations when implementing sandboxes. It was questioned how to guarantee that participation in sandboxes does not result in unfair advantages for certain companies. This issue underscores the importance of maintaining a level playing field and reducing inequalities.

Moreover, concerns were expressed about potential regulatory capture in the sandbox process due to the close interaction between regulatory authorities and participating companies. It was highlighted that mechanisms need to be established to prevent regulatory capture and maintain impartiality.

Additionally, the timeframe for operationalising a sandbox was raised as a concern. Some participants questioned the readiness and strictness of regulators in intervening effectively and efficiently.

Overall, the discussion called for advocacy towards adopting flexible, scalable, and dynamic regulatory methods. Sandboxes were viewed as one of the tools to achieve these objectives. While they offer important benefits, such as fostering conversations between regulators and breaking concentration in smaller financial sectors, the limitations and challenges associated with their implementation must be carefully considered to optimise their potential impact.

An interesting observation from the discussion is that sandboxes can facilitate the growth of digital banks and electronic money issuers. As seen in the case of Pakistan, sandboxes enabled these emerging financial entities by providing a regulated environment in which they could operate.

In conclusion, the use of sandboxes in various sectors offers both benefits and challenges. While they provide a space for experimentation, innovation, and collaboration, concerns exist regarding implementation, participation limits, fairness, regulatory capture, and operationalisation. Efforts must be made to address these concerns, and sandboxes should be integrated into a broader regulatory framework that promotes inclusivity, fairness, and effective policy development.

Armando Guío

Regulatory sandboxes are gaining attention as an effective solution for addressing regulatory concerns related to Artificial Intelligence (AI) and data. These sandboxes, which are being implemented worldwide, provide a controlled environment for testing and evaluating innovative technologies without rigid regulatory constraints. They have the potential to facilitate the development and implementation of responsible and ethical AI and data practices.

Different countries have adopted unique approaches to implementing regulatory sandboxes. The fintech sector, in particular, has been a strong advocate and driver of regulatory sandboxes. The experiences of countries such as Brazil, Lithuania, Ethiopia, Germany, Norway, and Singapore have been discussed in relation to their sandbox implementations. These discussions aim to learn from the successes and challenges faced by these countries and inform the development of best practices.

Regulatory sandboxes offer the opportunity for authorities to better understand the real impact of emerging technologies, such as AI and data, particularly in areas like privacy protection, misinformation, and digital power concentration. By providing a controlled environment, sandboxes enable authorities to assess the effectiveness of their regulatory measures and develop capacities to effectively tackle these major regulatory questions. However, there are ongoing debates about whether regulatory sandboxes alone are enough to develop the necessary capacities, and whether expensive and time-consuming sandboxes are beneficial for all authorities.

The value of data is highlighted as an important consideration in future discussions regarding regulatory sandboxes. The experiences of Latin American governments, who have been studying the Singaporean Sandbox, have been particularly influential. The Singaporean Sandbox is regarded as pivotal and offers a balance between flexibility, responsibility, and unlocking data value. By studying its implementation, other countries can gain insights into how to effectively leverage data and strike the right balance between innovation and regulation.

In addition to addressing AI and data concerns, sandboxes also play a crucial role in tackling misinformation. They provide a flexible and neutral space for collaboration between companies, governments, and civil societies to explore and develop effective measures to address the harmful impact of misinformation. By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implementing robust regulatory measures.

Advocates stress the importance of a multi-stakeholder approach in tackling misinformation, involving civil societies, companies, and governments. Civil societies, in particular, have been recognized for their valuable contributions in this area. By working together, these stakeholders can collaboratively develop effective strategies to combat misinformation and promote responsible information sharing.

Overall, regulatory sandboxes are regarded as valuable tools in building trust and understanding before introducing regulatory measures. They create a space for experimentation and collaboration, allowing authorities to assess the impact, feasibility, and effectiveness of their regulations. However, caution must be exercised in terms of their costs and effectiveness. It is crucial for countries to consider their individual capacities and circumstances before implementing sandboxes as a regulatory solution.

Session transcript

Axel Klapp-Hacke:
all of you here to today’s session on Sandboxes for Data Governance, Global Responsible Innovation. My name is Axel Klapp-Hacke. I’m a Director for Economic and Social Development and Digitalization at GIZ headquarters based in Germany. Data for sure is one of the most strategic assets for both economic growth and sustainable development. It can provide key insights to make better decisions around food, security, climate change mitigation, or health policies. Hence, data can help policymakers and private organizations to better allocate resources, solve problems, and prepare for risks. And as the backbone of AI application, its potential for the achievement of the SDGs cannot be underestimated. But for the use of data to benefit all, data sovereignty and data security need to be strengthened. We need regulatory frameworks that help reap the benefits of data while protecting citizens, and I think that is the key assumption of this and the starting point of this session. This panel gathers experts from around the world to discuss how regulatory sandboxes can unlock the value of data for all and promote responsible innovation in AI. I’m very delighted to welcome on this panel today Deputy Minister of Transport and Communication Agne Wajciukiewicz. So, I knew it would be very challenging and I was trying to pronounce it a bit correctly. Now, very welcome, very happy to have you here. From Deputy Minister from Lithuania and she focuses among others on innovation and open data and she will share her perspectives in a few minutes time. We also welcome Dennis Wong as the Deputy Commissioner at the Personal Data Protection Commission of Singapore. She manages the formulation and implementation of policies relating to the protection of personal data. We also welcome, I think she is joining virtually, Skadi Lauman. She is the Head of Section for Research Analysis and Policy and Project Manager for a regulatory sandbox for the Norwegian Data Protection Authority. She collaborated with stakeholders in the AI industry in Norway and is one of the team members ahead of AI regulations in her country. And then we also welcome Lorraine Poggiankula. She is here on the panel. She is the Co-Founder and Executive Director of the Data Sphere Initiative. An international non-profit foundation with a mission to responsibly unlock the value of data for all. She is an affiliate at Harvard Berkman Klein Center for Internet and Society at Harvard University. And last but not least, we have Olalade Shilon. She’s also here on the panel. She’s the Head of Privacy Policy across Africa, the Middle East and Turkey for META. She is a human rights lawyer who has focused on privacy, access to information and freedom of expression. Our panel this afternoon will be moderated by our friend Armando Guillo, who is an affiliate at Berkman Klein Center for the Internet and Society and doctoral candidate at the Technical University of Munich, focusing on social sciences and technology. And finally, we also welcome our online moderator. Hello Pascal, Pascal Koenig, GIZ colleague. He is a Planning Officer at the GIZ Headquarters. He has served as the John F. Kennedy Memorial Fellow at the Minna de Gunzburg Center for European Studies. And he’s also a postdoctoral researcher at TU Technical University Kaiserslautern. Together, they will discuss now the roles of regulatory sandboxes in the promotion of responsible data governance and AI innovation. Secondly, a regional perspective on the enablers and challenges of implementing those sandboxes. And thirdly, the issue of international collaboration on those regulatory sandboxes. As GIZ, we are very, very happy to facilitate this discussion and to support this session. Regulatory sandboxes can be really a great tool to promote regulation for a free, fair, free and open data economy. In this way, the potential of data and AI can be used to achieve the SDGs. They can facilitate medical service delivery, increase efficiency in agriculture and improve food security. Thank you very much, and please enjoy this wonderful session. And now, over to you.

Armando Guío:
Thank you very much. Thank you, Axel, for your kind introductions. And it’s a real pleasure to be here in such a distinguished panel with these experts on the area of regulatory sandboxes, which are gaining a lot of attention and a lot of traction now. There is a lot of fuss about regulatory sandboxes becoming more important nowadays to deal with many of the regulatory questions there are regarding AI, data, many other technologies and innovations, and of course, that will have an impact on technology. And here, perhaps briefly, just as an introductory remark, I would like to provide this context on regulatory sandboxes. It’s not a comprehensive one in the way in which basically we have, and that’s one of the biggest challenges we have right now, a lot of definitions on what a regulatory sandbox is, how they work, how they’re being implemented, and these kind of questions that we’re going to be answering today perhaps are opening the floor for these kind of discussions to take place. So, we want to start, and that’s one of the basic elements that we have to be have very much in mind, is that regulatory sandboxes are having a lot of definitions, and there are many different ways on defining what a regulatory sandbox can be. You can see regulatory sandboxes that look like innovation labs, or that look like many other projects, which are not necessarily even related with regulation. Some others are related with regulatory questions, but are dealing with them in a very different way. So, here, just to take an approach of what the UN Secretary General’s Special Advocate for Inclusive Finance for Development defined as a sandbox, is that a sandbox is a regulatory approach, not even a space, but an approach typically summarized in writing and published that allows live, time-bound testing of innovations under a regulator’s oversight. That’s a definition. Yeah, and that’s perhaps a definition that some share, some will say not necessarily. I don’t see that it has to be a regulatory approach. Perhaps it’s a regulatory experimentation space or an ecosystem of experimentation. That’s one of the challenges that we are facing right now, and that authorities around the world are facing with their approach to this kind of tool to deal with innovative regulatory measures. From there, we have this big question of how have authorities designed and implemented regulatory sandboxes around the world? And that’s a very interesting thing to analyze, and I have been able to look into this in some of my previous work. So, I have seen sandboxes that have been developed mainly by two people within an authority working on learning more about a technology, and this is called a sandbox. In some other countries, a whole sandbox unit is prepared for developing these kind of projects and developing and deploying an adequate sandbox, and we will hear from experiences from all around the world. We have the sandboxes also, and this is something interesting. Of course, we’re going to talk more on the data sandboxes, but we have seen sandboxes, of course, developing on the fintech sector. On the generative AI, of course, there’s all more attention on why sandboxes can be beneficial to understand many of the challenges posed by generative AI systems. And, of course, on the GovTech and public sectors. So, we have seen these areas and these areas of work as areas that can be of interest for many stakeholders that have been working on this. The fintech sector, of course, has been one of the leading sectors on developing regulatory sandboxes around the world, and that has been perhaps one of the biggest promoters of having sandboxes. Other authorities are trying to follow the same path now on many questions about IP, data protection, antitrust, and many other topics. We have seen, for example, in Latin America, sandboxes being developed. For example, in Brazil now, we have this public announcement, and we will hear from the colleagues from the Brazilian Data Protection Authority. They’re going to tell us a little bit more of these new generative AI sandbox and data protection that is going to be developed. At the same time, Columbia has this fintech regulatory sandbox, which has been also quite big, and a privacy-by-design and by-default sandbox being developed there. We have also sandboxes all around the globe. In Ethiopia, for example, we have seen a sandbox unit being developed there, which is going to be a big unit within the Central Bank of Ethiopia that is going to create some kind of regulatory experimentation environment. Germany, of course, also promoting many of the sandboxes, almost all of them at a regional level, and, of course, with this sandbox handbook that was developed some years ago, which has been quite influential not only in Germany, but in many other countries. At the same time, we have seen sandboxes in Kenya, so the Capital Markets Authority, they’re working on a very interesting fintech sandbox, which has also been quite important to develop the fintech ecosystem in the country, and Lithuania, of course, with the GOP regulatory sandbox for the public sector that we will hear more from the Vice-Minister. So that’s perhaps the whole representation that we want to have here, and many of the experts that are here have been very much involved into these kind of projects, have been working on them, so we have also, for example, the experience of Norway and Singapore working on data protection sandboxes, Singapore developing one of the first frameworks on how to have a regulatory sandbox on data protection and on AI governance, which was also very interesting, and Norway trying to open the black box and trying to develop this idea of more transparency with a regulatory sandbox in Norway for this specific purpose. So with this brief introduction and this brief context and definition of what a sandbox can be, it’s that we are facing now this big question on the relationship between regulatory sandboxes and internet governance. What’s there? Why are we talking about regulatory sandboxes in this specific forum, and when we are talking about technologies such as AI, and when we are talking about the future of data and data protection? Basically, because we are having a lot of questions, so for example, three big topics such as privacy, protection, mis- and disinformation, and digital power concentration, which we definitely have to analyze. How are we going to analyze that, and the authorities are going to analyze that? That’s the biggest question. What are the decisions and the regulatory decisions to be made? That’s where sandboxes perhaps can be helpful to understand the real impact of these technologies, and what can be achieved with the current regulatory frameworks that we have? But that’s the question perhaps. Are regulatory sandboxes enough in order for authorities to develop capacities to deal with many of these big regulatory questions? What has been the experience of other countries that we have here and many other experts that have been working on different contexts that can help us to understand a little bit more about that? And that’s perhaps one of the other big questions that we have. Are sandboxes for all authorities around the world? Are sandboxes effective in any country, or there have to be some initial capacities within some countries and some initial elements for these kind of projects to be developed? With the GHC, with the German corporation, we have also been working on this, and with my colleague Pascal Koenig, also trying to answer some of these questions, because we believe that sandboxes can be expensive. You can spend a lot of time working on them. Are they effective? Are they going to be effective to answer many of these internet governance and many other questions about regulation of technologies such as AI and the use of data and data cross-border data flows and many other big questions on the future of these technologies? That’s what we would like to answer and discuss today. So, with that, I would like to start briefly with a video of the Data Protection Authority from Brazil, that they were very generous to send to us this video. They were very much involved in the preparation of this event. Unfortunately, they were not able to join us, but I think it’s also good to hear from them, and then we will start with the questions with the experts here and the experts on the Zoom room. So, I think we can start. Thank you.

Moraes Thiago:
Ladies and gentlemen, esteemed colleagues and distinguished guests, I stand before you today on behalf of the ANPD, the Brazilian Data Protection Authority, filled with immense gratitude and excitement as we co-organize this workshop in collaboration with our esteemed colleagues from the Berkman Klein Center and the Data Sphere Initiative. It’s a privilege to have the active engagement of representatives from various government bodies and methods. Together, we are embarking on a journey that’s not only significant, but crucial for the future of data governance and AI innovation. Our primary goal in this session is to foster a dynamic discussion among all relevant stakeholders. We aim to deliberate on strategies that can pave the way for the development of sandbox initiatives. Initiatives that not only stimulate innovation, but do so while upholding the fundamental values of humanity. In this session, we will delve into three key areas. First, we will explore the pivotal roles that regulatory sandboxes play in promoting responsible data governance and fostering innovation in the realm of AI. Second, we will examine a regional perspective, shedding light on the enablers and challenges faced in implementing these sandbox initiatives. Lastly, we will discuss the importance of international collaborations in shaping the future landscape of sandboxes. I am thrilled to announce a significant milestone in our journey towards responsible innovation. The launch of the call for contributions for the ANPD Regulatory Sandbox on AI and Data Protection. This initiative, crafted in collaboration with esteemed partners like CAF Consultants, including the distinguished Armando Guilho, who is today’s moderator in this session, seeks to create a space where innovative ideas can flourish while ensuring the safeguarding of individual privacy and data protection. I invite our esteemed panelists and the entire audience to contribute actively to this endeavor. Your valuable insights can shape the very foundation of how we approach AI and data protection. You can submit your contributions via our webpage, which you can access via the QR code presented on this screen. I am delighted to inform you that submissions can be made in English, allowing for a broader and more inclusive dialogue. As we embark on this collective journey of exploration and innovation, let us remember the profound impact our discussions can have on the future. Let us collaborate, ideate, and inspire one another. Together, we can create a future where innovation and ethics coexist harmoniously, fostering progress that benefits all of humanity. With that, I wish you all a very productive session. May our discussions today be illuminating, and may they pave the way for a future that we can all be proud of. Thank you.

Armando Guío:
Thank you. With that, we have this invitation from the Data Protection Authority from Brazil, this very exciting sandbox. We can move then to our first question, and perhaps here for our panelists and vice ministers, I would like to start perhaps with your approach to sandboxes and your experience on this work. For you, what is your practice concerning sandboxes? What are the benefits of sandboxes that you have seen in your experience in Lithuania and the work you are developing right now? It will be very much interesting to hear how the sandboxes have been evolving in your experience and what you have learned from that.

Agne Vaiciukeviciute:
Thank you very much for having me here. I think sandboxes is one of my passions, and while it’s very important to speak about the future of the Internet, it’s sometimes very important to speak on the practical matters, how all those innovations will bring closer to us. In Lithuania, you mentioned one of the good practices is GovTech sandboxes. These are a little bit more on my colleague’s side, but this is already an award-winning way of looking into problem solving. It started in Lithuania in 2019. I think last year it got an award on European level of sandboxes that helps for public governance, to solve issues within the governance, to make it more accessible to the customers. I just figured out I will maybe just tell you some of the examples. For example, there are some examples based on AI solutions to measure the quality of digital government in an innovative way, Kodami solution to automate the detection of illegal gambling operations online, Burbi solution to improve the environmental risk assessment of companies, Open Assessment Technology solution to perform remote examination for civil servants, and many, many solutions that are already used in Lithuanian governance in one or another way. I think that platform was so successful that from the government side, the investments into these kind of sandboxes grew, and now it became a huge part of innovation ecosystem in Lithuania. But what I would like to talk a little bit more, which is more on communication side. Countries these days invest a lot into infrastructure, especially infrastructure for 5G technologies, and we are doing the same. In Lithuania, we do have the coverage of 90% of population, almost the same as here in Japan. But when we want to see the value cycle, so to see the demand side, we do not see enough of technologies there. So I think that’s where the need of Sandbox is coming from. So what we did in this sense, we dedicated more than 24 million euros for applications and solutions based on 5G. And it concerning not only innovations in transport sector, but in any sector. So we are very happy of this possibility to do it a bit in a niche way. So it’s not coming from the whole innovation policy within Lithuania, but the initiative comes from the Ministry of Transport and Communication. So we really want to see what the 5G technology is capable of. And there is a lot of interest from business side, where we just called the tender. So just imagine maybe 53 projects are in the pipeline, more than 124 million euros worth of projects of testing. Testing within the Sandbox regime and in Lithuania, those new technologies and applications. I think why it was so interesting for the companies, because we created the Sandbox in the manner that technology and the result of the innovation will belong to the owners. The only wish from the government side is that the application, the testing side would be in Lithuania. And the idea is that we want as a policy makers to be able to be very flexible and dynamic and respond to all the innovations and changes needed in the regulation framework. And I think this is not only to create more applications on 5G technology based solutions and to solve some of the problems in Lithuania, as it is more of the exercise for the government as well to adapt on the regulation matter as well. So we are very, very excited on this Sandbox regime, because we do believe that now we kind of fill the whole value chain. So we’re not only creating the infrastructure, but we’re encouraging private sector as well as public companies to participate and create applications in autonomous driving, in healthcare, in all other industries. And we’ll see what’s gonna happen. I’m very happy and hope that at the middle of next year, we will see some very great results and we will be able to share about it. So maybe it is for first intervention, that’s it. And later on, we can continue. Thank you.

Armando Guío:
Thank you, Vice Minister. Very interesting to hear many of those, some of those points, especially on the flexibility, attracting the private sector, presenting the results of a Sandbox, which seems to be sometimes an easy task, but it is not as easy as we can imagine. And from there, that I would like to join to perhaps one of the Sandboxes I have been studying the most, and that basically I have been working with governments, especially in Latin America, and they always say, look at the Sandbox in Singapore. What are they doing in Singapore? And how the Singaporean Sandbox is working? How do they were able to achieve these results? And from that, Denise, we would like to hear from you, because your experience, of course, in Sandboxes has been pivotal for Sandboxes to become a reference around the world. I would like to hear, and we would like to hear perhaps some elements on that experience, and how do you think, especially a data protection Sandbox has been helpful to achieve this balance between being responsible, being also flexible, but at the same time, like unlocking the value of data, which is also very important for many of these future conversations that we’re having. So the floor is yours.

Dennis Wong:
Thank you. Thank you very much. And thanks for having me. As you’ve seen, Singapore has experimented in Sandboxes for quite some time. It’s been a very useful tool for us in policy experimentation, and also in experimentation of frontier technologies generally. We tend to use it as a policy mechanism where there are uncertainties in application, as well as use cases. And it’s very much a tool that we use in partnership with industry, where we need clarity on certain technologies or solutions surrounding different types of use cases. We also look at it where organisations need support for compliance, and also to understand the integrity of their business use cases, and their intended sort of business commercial pathways forward. I wear two hats, both as the Data Protection Deputy Commissioner, but also as the Assistant Chief Executive of IMDA. And in that role, I also look at data promotion and growth. And those are, to us, two sides of the same coin. And so we view Sandboxes as a crucial tool to support industry, but to also help them to find appropriate safeguards, guardrails, and protections for the end user. We’ve had a few Sandboxes for a while now. We specifically had a Data Regulatory Sandbox that eventually grew to become the Privacy Enhancing Technology Sandbox. And that’s been something that’s been running for about a year now. We’ve just closed the first stage of it. And I’d just like to highlight sort of pockets of benefits that we saw. There were certainly benefits to individuals because it gives them assurance and confidence that data’s not being misused. It helps with transparency and to flesh out sort of questions of ethical use. We find that with Sandboxing, experimenting in a safe environment cuts down time and efforts for technologies to be deployed. We also see benefits to the organizations that participate in our Sandboxes because they can safely experiment with cutting edge technologies that give them a competitive advantage. And of course, I mean, realistically, that’s what companies are trying to do. We find that organizations very often come to us to provide regulatory support and guidance. They want to understand the potential of technology solutions, but they also want to comply with what the regulator wants. And I think, interestingly, we also find, and this is talked about a little bit less, it also creates opportunities for B2B data collaborations. Very often, companies come with their own use case. They may not necessarily understand the ecosystem the way we see it from a more central point of view. And a lot of what we do in Sandboxes is also putting together different parties within that ecosystem, matching them to technology providers or to end users or to intermediaries that allows that sort of ecosystem to be created in a specific sort of sector or specific use case. That’s not to say we don’t benefit at all. We benefit a lot because it helps us as regulator understand about technology, understand what industry needs, and it allows us to focus on areas that could potentially require regulatory guidance. But I just want to clarify that we don’t necessarily think that Sandboxing must lead to regulatory guidance. For us, it’s just one of a broad range of policy levers and tools that we have. We do, as a modality, I don’t know whether I’m jumping forward a little bit, but do tend to publish use cases and reports at the end of each sort of experiment. And that in itself, sometimes it just ends there, but it gives the sort of sector and people who are interested a sense of what were the regulatory issues, what were the obligations and allocations of responsibility that arose out of us working through that use case. I would just say that as regulator, we do get our hands quite dirty. We do spend a lot of time working through the mechanics of each individual use case to try and understand what the concerns are, what the issues are. We bring other regulators on board where there are issues that don’t fall within our sort of purview. So it is quite an intensive process for us. Thank you.

Armando Guío:
Thank you, and it’s a very amazing experience, and of course, elements that you shared there. And with that, I think, so Lorraine, we have heard about this case. I don’t know if we can already call it a successful case of a sandbox being applied to data protection. We have seen some of the elements that have been used also in Lithuania for the development of the sandbox in Singapore. In your experience, you had work from the DataSphere Initiative, working with different governments, working on reports on how to build this kind of projects. What do you think governments should do? What are like that checklist of elements to develop sandboxes that have the capacities, that have the impact that we would like to see on such projects that have a lot of work to do, that have a lot of resources to be used? We want to be effective on those. What do you see are the best practices, perhaps?

Lorrayne Porciuncula:
Thank you so much for the question. And it’s a pleasure to be here in this panel because sandboxes is also a passion of mine. And so seeing one workshop that we get to discuss this in the IGF, it’s just a pleasure. So on the question on the skill, I think that there isn’t a particular set of skills or a skill that is needed for you to deploy a sandbox. I think there are as many skills as there are sandboxes and there are as many sandboxes as there are use cases because no sandbox is going to be the same depending on the national jurisdiction where it’s located, what’s the institutional framework, what are the core partners that we need to be involved? What is the issue that you’re trying to solve? What’s the timeframe? And the complexity of all of this is just makes it exponential, the number of different skills that you need to have and the people you need to bring in the house. And I think that’s sort of an important step into this mystifying what sandboxes are. And that’s sort of the campaign that I’m trying to lead from my own corner in the Datastore Initiative. We have a report that we published last year called Sandboxes for Data, Building Agile Spaces Across Border to addressing the issue. And in that report, we try to look into the good practices and I consume a lot of the reports that are coming out of experiences such as the one from the Singaporean government, but also in terms of what other actors are doing in different countries and trying to be systematic about understanding what has worked and what hasn’t. We’re still at the early stages of understanding how that can be deployed to other use cases, right? But there is a maturity in terms of trying to understand what are sandboxes and we can all agree that it’s an umbrella term that captures a whole lot of things, right? And I think depending on who you ask what sandboxes are, they’re going to have a different kind of definition and that’s okay. And we should be okay with it as well in terms of seeing it as an anchor for a policy prototyping for experimentation. And the second aspect that we’re looking to also is in terms of the potential of using this internationally, which I’m going to come to later in the panel. And what I realized is that having done that study and that analysis of the experiences internationally and then talking to a number of governments, there’s still a lot of, people are still very much afraid of what it means in terms of resources and skills that are necessary because they’re under the impression that’s something that you need to be a very sophisticated regulator in order to be able to deploy. And I think the first step is trying to exactly say that actually it should be simpler. It should be about looking at a different way in before you design policy and then regulation in terms of engaging stakeholders rather than doing something where it’s sufficient for you just to post a consultation online and then forget about it. How do you actually engage stakeholders from the design phase onwards? And how do you build that trust, that institutional trust with the private sector and civil society and technical community and government and regulators in order to come together and as Denise said, get their hands dirty. And that’s not something that a lot of institutions are prepared to do or have the frameworks that allow them to do it. So for me, it’s less about the skills in itself but rather than being allowed to do that, to actually engage purposefully with stakeholders. And this is an important part of the capacity building that we’re doing now and we are with the support of the Hewlett Foundation now started a project in Africa and through Africa Sandboxes Forum where we’re bringing together stakeholders to create that community of practice in terms of sharing what can be done and what are the issues that you would like to solve from a multi-stakeholder iterative fashion. And doing that in terms of we have a course which we design that takes you through in terms of what sandboxes are and their potential. So that’s an important part into sort of building that skill in terms of words and vocabulary that we are using in the space but also in terms of how do we turn this into practice. So rather than just being a talk shop where we’re talking to them about sandboxes and what it should be, we are actually in the best way of a sandbox bringing them together in terms of can we identify an issue that we can address and can we do so in a way that helps with issues that are relevant among different countries at the same time or different stakeholders through dedicated sandboxes that we are piloting and we’ll be simulating until next year onward. And I think that is a step in terms of just being able to define what are the appropriate stakeholders that need to be involved depending on the use cases, what are the technologies that might be necessary if you’re looking to operational sandboxes and to be transferred data as was mentioned. But also in terms of what are the arrangements in terms of mitigating risks that may emerge, what are the different kind of ways for you to look into measuring and monitoring and evaluating the success of that sandbox as well. And so we are in a process where we are, and I like to say, and I’m not joking, but it’s sandboxing sandboxes really in terms of how they can best function. And I would like to see a space where we are able to share more of those good practices so that we can reduce the cost of actually implementing those sandboxes in sharing resources among each other.

Armando Guío:
Thank you, Lorraine. And I really like this idea of exactly of sharing and we definitely should talk a little bit more later on the global forum for sandboxes and sharing these kind of ideas and having these kind of forums for this interaction. We have two colleagues, actually three colleagues that are on the Zoom connection and I think they’re in Africa and Europe and we would like to very much say good morning to them, I think. So I will start with Kari Laumann from the Data Protection Authority in Norway. Is Kari there?

Ololade Shyllon:
In terms of having like a, you know, good outcome in terms of sandboxes across the board. And just to flag, like I said earlier, that there’s been challenges with getting this going in the region. The only data related, so we have one or two FinTech related sandboxes in Africa as well as in the Middle East, but the only data related one so far started a couple of months ago in Saudi Arabia, still at very early stages of development. So there’s not much to sort of share about, you know, the lessons that were learned in that regard. I think I’ll stop here. Thank you very much.

Armando Guío:
Thank you. We wanted to also finish this first round with your remarks, which I think are very interesting because that idea of building trust among different stakeholders, it’s always a big challenge and I think it has a lot to do with the design of many of the sandboxes and the participation spaces that we have. And talking about also participation, we would like to open the floor for this first round on questions that you have. I think you can stand up over here to the microphones and please, if you could present briefly yourselves, give your name and your questions, we will be more than happy to hear you.

Audience:
Good morning. My name is Claudio Agosti, I’m a platform auditor. And mostly my question is for the expert from Singapore and from Finland. I’m concerned because soon the AI Act will be in place so that we exist a national authority, this national authority need to run a national sandbox. So the question is, in average, for a use case, how many days per person is necessary to study it and to create the test base? Because it seems that is the potential bottom line to handle a lot of cases.

Dennis Wong:
I think you’re right, we don’t handle a lot of cases. So to us, a sandbox is not a tool for volume. It’s not meant to, it’s not like a framework or policy where you set at the general principle level or even an obligation level, and then it applies to thousands or hundreds of thousands of cases. It’s a tool for a lot of cases. In a year, maybe we work on six to 10 cases where we are really just working through what the use case is. I think one of the things we find helps a lot is to set very clear use case objectives. So if it’s fairly tight in scope, the parties already know what they want to do. It’s really about just working through the accountabilities, then it is more straightforward, easier to do. If it’s about the use case, easier to do. If it’s about helping companies to find players, technology players, they know they need, they have a data problem, they want to use a privacy enhancing technology, they don’t know which one, that becomes a longer process, a more involved process, and it can take many months to sort through. So I would say, unfortunately, the way we do it, at least it’s fairly customized to the use case, and it can take usually an average of maybe three to six months to work through a use case. Sometimes even longer than that. But of course, we have other policy innovation tools such as policy clinics, where we’re just giving quick advice on how on accountabilities that one can be much faster.

Armando Guío:
Thank you. I have also additional small remark, but later.

Audience:
Good afternoon. This is A H M Bozulu Rahman, come from Bangladesh IGF. Thank you panel, thank you moderator and honourable minister. We learn so many thing regarding the sandbox from this session. I learn from your presentation, Mr. Moderator, CNET has developed one sandbox regarding the misinformation. So how can we utilize this sandbox regarding the misinformation from the civil society side apart from the government? Thank you.

Armando Guío:
Thank you. Well, that’s a big question. I think that we’re having sandboxes on misinformation. Definitely what we would like to analyze is to gather some good evidence on how these technologies are actually spreading misinformation and what kind of measures can be used. I think that’s one of the biggest questions that we have right now, like what are the kind of measures that can be used and how to implement some of those. That’s where sandboxes become so attractive because you have this kind of flexible space in which basically you can interact with some companies and basically try to make them get involved into these kind of questions, concerns. Let’s work together, let’s involve civil society that has been doing some great work on this area, and let’s try to show you basically what could be the measures there to put into place. I don’t know if the sandbox in misinformation is actually a sandbox on providing flexibility. I think it’s more on providing trust-building efforts and perhaps this multi-stakeholder approach, but that’s how I see it. I think there’s interest in many countries to start with this kind of work even before regulating because of course there’s a lot of regulatory pressure also. Why don’t we regulate these kind of practices? Sandboxes are seen perhaps as a first step before going into that. That’s how I think we will see some sandboxes and misinformation more and more, I think, in the near future. Thank you.

Audience:
Good afternoon. My name is Bertrand de la Chapelle. I’m with the Data Sphere Initiative. I just want to make a quick comment. The word umbrella term has been used, and I think it’s an illustration of the fact that the sandbox approach is a spirit of experimentation, and there is a growing toolkit or toolset for governments to experiment various approaches depending on the topics. You mentioned the clinics and so on. The consequence is that it is particularly adapted to the early stages of any policy development or policy interrogation, which is the agenda setting and the issue framing, which is a stage that is usually skipped because the moment people have identified a problem, they run to say, my solution is A, my solution is B, instead of taking enough time early on to frame the problem as a problem that people have in common rather than a problem that they have with each other. Thinking about sandboxing as sometimes an early tool to identify how to shape the problem before you get into drafting whatever guidelines, regulation, or just code of conduct is probably an important element in the sandbox approach.

Armando Guío:
Thank you. I don’t know if there are any reactions to that. Okay. Thank you.

Audience:
Yes, please. My name is Christian Rumsfeld from the OECD, and I have a question related to one of the risks or potential risks of sandboxes. Given their very nature, the number of firms that can participate in the sandbox are obviously limited, so the question is how can we make sure that there are no distortion of competition going on that is favoring those companies participating, and also how can we avoid regulatory capture given exactly that closer interaction between the regulator and the companies? And so, in general terms, how can we make the sandbox more fair and non-discriminatory? Thank you.

Lorrayne Porciuncula:
Thank you so much, Christian, for a great question. I know very well, having worked and written about sandboxes and all the risks that we actually need to balance, and that’s one of them, right, in terms of competition and regulatory capture, and I think that’s part of the process of trying to ensure that you’re building trust with a broad spectrum of stakeholders, and what’s interesting about sandboxes is that it does allow the regulator usually the flexibility to go beyond the traditionally regulated and regulated entities. That’s been the case around fintech, right, and so for those of you that know the experience around fintech, I mean, it’s a very regulated sector, right? Central banks have banks that they regulate, and that’s, I mean, financial institutions, and that’s a very tightly knit group. Here with the experience with fintech sandboxes, what happened is that they did open calls for different startups and companies to come in and provide different services and innovative services to answer to a demand or to a problem, and here there were telecom companies that came in, startups, a whole bunch of innovators, and the solutions that came through those fintech regulatory sandboxes has been really, really impressive in terms of providing, in the case of Brazil, for example, instantaneous payment system that right now four out of five adults use, so it’s the fastest growing payment system in the world. It’s called PIX, fastest growing than the ones in India and China, surprisingly, and it was the concerted effort that went beyond the traditional companies, the traditionally regulated companies, so in terms of spirit, it is something that it’s meant to be more encompassing than what a traditionally regulated sector looks like. Of course, there are risks that’s not going to be the case, that we’re going to choose our champions and just invite the ones that we know best, but I think being cognizant of that risk is a first step in terms of trying to mitigate it, particularly so there isn’t a regulatory capture, which is always a concern when we look into healthy regulatory frameworks. How do you build the governance of the spaces? Having more conversations in terms of good practices and also on the frameworks that we need to set up at least the minimum condition for regulatory sandboxes, I think is the first step to go to mitigate those risks and anticipate them.

Agne Vaiciukeviciute:
If I may just very shortly to add from Lithuanian perspective, what we’ve done so far as an obligation to participate in sandbox and get the financing for any testing purposes, there should be a group of stakeholders involved, so it’s obligatory to involve higher education institutions, someone from civil society, so there is a range of compositions that is an obligation to be a part of, so we don’t want to, I mean, I clearly understand the threat there, we don’t want to see one side of sandboxes and solutions, therefore the broader stakeholders group has to be involved, and I think that we clearly put it into the rules of participation just to avoid this obstacle.

Armando Guío:
Thank you. I will have to perhaps provide space for one more question, yes, and of course at the end I will, we will have the space, please, I’m so sorry about that, because we have also the online moderator and everything, so thank you.

Audience:
Thank you. Sandboxes in my experience actually break away the concentration that takes place usually in smaller, you know, like financial sector. I was on a committee as a tech lawyer for the Middle East, for Pakistan’s central bank, we did a, exactly right, innovation challenge fund, so there was money as well as the ability to have your idea, you know, sandboxed and approved, and what we noticed was that basically by going through that process we got, you know, start-ups, et cetera, nobody was interested in the money as much as they were interested in the approvals, and then the most amazing thing was that it had a multiplier effect, and I’ll speak about that in a second, but the more important thing was that it started having conversations between regulators, saying you’re not the only ones, you need to actually get approval from another regulator, so the conversations broadened, that was helpful for the ecosystem, and as a result of these things that happened, the central bank was confused about things like should we allow cloud in the financial sector, should we do core treasury systems on cloud, and electronic money issuers and digital banks were enabled because of this exercise, so that was very, very helpful, but I have a question, my question was what I just mentioned regarding the learnings between regulators, have you found that that has been something that you’ve also experienced, that, you know, there’s one regulator maybe that’s doing financial services and there’s other regulatory approvals that are required, and how do you interact and coordinate that effort when you do a sandbox, I’d love to know, thank you.

Dennis Wong:
It’s a great question, we do work, I would say more domestically, because as IMDA we hold the horizontal sort of regulations for data protection, but obviously in a use case of, often they are sectoral and vertical, so for example where we have a finance use case and a finance regulatory question comes up, we will bring in the monetary authority, for example, to sort of work out joint guidance, if it’s a healthcare one, then we’ll bring in the relevant regulator, because very often from the business’s point, or the industry’s point of view, there are regulatory questions, they don’t really care which regulator is going to answer the question, or they realise that it crosses different silos. So that’s also been a fairly sort of interesting way to solve problems, and it’s been quite a helpful exercise, not always the easiest, but I think quite important to move things forward.

Agne Vaiciukeviciute:
If I may just very shortly, it’s a very interesting topic, we could talk about it hours and hours. Once again, in Lithuania case, where we were focusing mostly, was there technologies or ideas which would be at the very high TRL, so we are not talking about sandboxes where the ideas are tested or tried on the very not mature sense, because the money is quite huge, we’re talking about the last TRLs that would later on be scaled on. So it’s really sometimes important to speak on what side of the sandboxes and the ideas maturity we’re talking at.

Armando Guío:
Thank you, thank you for all the questions, and hopefully we have some final minutes for those questions that are left, and many others. I will give the floor then to the online moderator, to Pascale Koenig, from the GAC, Pascale, the floor is yours, I know you have also some interesting questions and the challenge of making this in 30 minutes or less, so the floor is yours, and thank you for also joining, I know it’s early morning for you.

Pascal Koenig:
Yeah, thank you Armando, it’s my pleasure to join you online and to now direct you and guide you through the next set of questions, and I would like to shift the attention a bit and pick up on something that especially Lorraine has already commented upon, and that is I want to adopt a bit more of a regional perspective and look at aspects of international collaboration and cooperation. So my first question is, I’m interested in how important is it to learn from other experiences when implementing and operating a sandbox, and perhaps more specifically you can also say something about how transferable are sandboxes from one context to another, how much work has to go into adapting them when you transfer them, and since I’m online I would like to direct the first question to Kari.

Kari Laumann:
Yes, so I think we were one of the first data protection sandboxes in Europe, but there was one before us and that was the ICO, the British Data Protection Authority, so when we were starting our sandbox we did reach out to them and they were very generous in sharing their experiences and even documents, so we learned so much from them. Of course we had to adapt, we didn’t just implement because there are cultural differences, there are so many differences, so we did adapt it, but that was super useful, and I think also the spirit of sharing we have kind of carried with us, and we’ve had so many different countries in Europe and beyond reaching out to us because we’re one of the first sandboxes, so we’ve also tried to share all that we can from what we have learned and what we have built, and I think it’s been really useful since the sandbox concept is kind of new and a little bit fussy for a lot of people, so I think you know sharing their experiences that are there is very important, and I also agree with what has been said earlier in this panel that there’s not like one definition of sandbox, you can make it your own and make it fit your own purpose, so I think you know sharing is important, but also you know listening to the needs of the target group that you’re trying to reach is very important and tailor it to your own purposes.

Pascal Koenig:
Thanks very much for these insights, and for the panellists in presence I will also direct the question at you, also perhaps at Denise, since your sandbox has been an inspiration to others as we’ve heard before, what’s your perspective on the importance of sharing learning from experiences and the transferability of sandboxes?

Dennis Wong:
No, it’s a great question, I think so far I would say honestly a lot of it has been domestic focus, there isn’t an APEC or an ASEAN framework in the areas that we operate in, a lot of it was about helping industry, and we do work with industry players who operate all over the world, so there is an international element, but I think more and more as we have tech conversations like this, as we meet more and more interested regulators, as the interest in sandboxes grows as a regulatory tool, I think there is a lot that we can learn from each other, and a lot that we can learn from the use cases that we all sort of get our hands dirty on it and do, so very supportive of the sort of broader conversations and principles that we can all buy into, and I think absolutely a lot of these questions about data protection or misinformation or AI are transferable, just by the very nature of the theme, and so we have a lot that we can learn from each other.

Pascal Koenig:
Thank you very much, I would go one step further and also ask about in what ways can international collaboration and exchange on the regulatory sandboxes be most helpful for regulators, for authorities, what do you think are important areas for collaborating, which areas are especially important currently to advancement, and since Lorraine has already said a bit about the importance of exchange and collaboration, I would direct the question to you.

Lorrayne Porciuncula:
Thank you so much for the question, and I think it’s important to consider that while sandboxes have been deployed nationally, there’s so much potential, not only for sharing those experiences internationally, but also on co-constructing and building those internationally from a cross-border perspective. In the report that I mentioned that we published last year, we list a number of different areas where they could be tested, so for example in testing privacy-enhancing technologies, which was already mentioned here, but from a cross-border perspective, by looking as well through issues like new data intermediaries as well that are emerging, so think about the role of data fiduciaries, or for example data commons, data collaboratives that may exist in one country and may want to be certified or recognized in another jurisdiction, so how do we do that, how do we create that space that actually allows for this exchange of what are the minimum requirements, how do you actually get that transferred across borders as well, so we can think through technologies and issues that are more transversal that are emerging within the digital space, but also within more vertical sectors in terms of how cross-border sandboxes could be used, for example to address issues that are already included within trade agreements. Actually DIPA, which is a trade agreement, one of the new trade agreements, Digital Economy Partnership that Singapore is actually a signatory to, together with New Zealand and Chile, with Canada also acceding to it, includes already a provision on the potential of having a data sandbox within DIPA. Now no one knows how to do that right now, but it’s already included as a provision, and I see that I mean this new generation of trade agreements may as well include beyond the lengthy process that it takes to negotiate and balance multiple interests into a static text, that it actually creates the fora for us to test what are the issues that businesses and society and regulators within those different countries care about, care about enough to work together to solve a solution, right, so it’s very much around how do we get, how do we operationalize a lot of those issues that we spend a lot of time negotiating under closed doors, and so trade agreements for me it’s an issue and it’s one that we include in the report. The other one is around health, so think about the issues around transferring sensitive data across border, but also on the opportunities of using that for research and innovation, particularly in the moment of pandemics, right? But also on the complexity of balancing those objectives of innovation and research in public health with issues around data protection and other regulatory systems that somehow interact with health objectives. About the issue of climate change, which is the most transversal challenge that we have in our planet, how are we going to actually get through working on solutions if we don’t have the space to collaborate together, right? And what for me is very encouraging is that we can use this as a blueprint to think about international cooperation in a different way. So I have a career having worked in different international organizations, at the ITU, at the OECD, before I co-founded the DataSphere Initiative. And for me, we need to think about not ways to supplant multilateral processes, but at least to collaborate with them and create a space where we think about solutions and we are concrete about it, right? And so for me, that’s where it lives, the opportunity for cross-border sandboxes, for us to create that space where we are between just do nothing and regulate and forget. We have, we find this sweet spot, so it’s sort of the go-to-lock spots where we can actually work, test solutions.

Pascal Koenig:
Yeah, thank you so much for these interesting comments. Certainly important issues and I have several GSF colleagues who also were interested in this question of enabling cross-border data flows. So that’s certainly something to continue the discussion on. Now I would also like to invite a private sector perspective on the question of international collaboration and those areas that are especially important. Ololade, would you also say a bit on that, perhaps?

Ololade Shyllon:
Thank you, thank you so much, Pascal. I fully agree with what Lorraine has said. Excuse me, I think by their very nature, sandboxes are, they require stakeholder collaboration and there’s a lot of things that can be learned across the board if they’re given a chance. So definitely broadening this kind of collaboration across borders will definitely enrich the learnings and help policymakers definitely better understand the ecosystem and be able to figure out the kind of policies and rules that would apply in different contexts and in different environments. And this in a way would help with harmonization. So for us at META, we believe in having a harmonized approach to regulation and policymaking. And so in a way, whilst we know that different countries will have different rules and different laws or legal systems, but there’s a lot to be learned in terms of working together and collaborating on this kind of approaches because at the end of the day, we all, I mean, globally, there are a lot of treaties that exist even though each country has their own like domestic legal systems. So in the same kind, working together across borders with regulatory sandboxes and the like, for us it’s very, very important to ensure that there’s widespread collaboration across the board and consensus. Of course, there’s cultural nuances, there’s specific nuances, but at the end of the day, at high level, there are basic principles that apply across board and that one can learn from experimenting and collaborating in this space.

Pascal Koenig:
Thank you, Ansel. And yeah, moving, maybe going a bit further in that direction, what are your observations regarding the need, but also how likely it is that there’s an increasing harmonization of sandboxes both beyond the national level, either through new sandboxes that are being created on the regional level, or perhaps through a stronger harmonization of existing sandboxes?

Ololade Shyllon:
Oh, likelihood is a very tough question because I think it’s a complex issue. There’s a lot of factors that come into play in this room. Like I mentioned, differences in legal systems, but like zeroing in on the region that I cover, which is Africa and the Middle East, there is a lot of challenges that I think that exist with sandboxes that are probably more acute in the region. So things around the time it takes for this to be executed, things around the costs that it involves. And the reality is that we have data protection, if you’re talking about data governance related kind of initiatives or sandboxes, it’s fairly nascent in the region. And so most of the regulators are literally trying to figure out exactly how to build the infrastructure. Build their organizations. And at the same time, there’s a lot of impatience from ordinary people with them being able to enforce and being able to show that they’re actually relevant in the ecosystem. And so you find many of them trying to say, okay, how do we prioritize being legitimate and being able to do what we need to do, what we are established to do? In that case, if we have to prioritize that, we don’t have enough resources, financial or technical to be able to focus on sandboxes, which take too much time for us to be able to see any benefit. So that’s, I think, one of the challenges that we’re seeing in the region, but we’re hopeful that with organizations such as Doreen working on this issue in the region, we can see some push and some movement towards having sandboxes, because there’s no doubt that they are very important for ensuring innovation in the ecosystem in the region.

Pascal Koenig:
Okay, great, thanks. And maybe to get a perspective also on a different region, and to get a bit of an insights on the perspective from Lithuania. Darkne, what is your perspective on the need for harmonization on a regional level and how likely is this to be?

Agne Vaiciukeviciute:
Thank you very much for the question. I think that there was already a lot of discussion and there was already a lot of good things said. I think that if we talk in the short-term perspective, harmonization, maybe it’s not the way to go. Maybe I would use a better word, collaboration across borders. That’s what I would expect more happening in the short period. I think that harmonization is always better for those who are not first movers, for countries like Singapore or others who has a lot of experience already there and openly shares it with other countries. This is something that would be maybe not so interesting in a short perspective. I think that we are talking about innovations at this point. So innovations are very important, not only to have a safe space to test it, but also to have a freedom to explore the potential there. I think what our experiences in this field, we also were not unique in the sense of with our sandboxes and I’m proud to say that we got the experience, of course, from UK. We had a very close collaborations. We went there, we invited them. We had a huge conference on the sandboxes just to share their experience because there was a lot of things which they said that we should not do. So it was also very valuable for us. So I think that the harmonization, and maybe it’s too early to have this question at this point. I think now, today, we’re talking a lot about what is the concept of sandboxes, what kind of sandboxes we do have. Then we have some just good initiatives already. So I think what we really need, we need to catch up with the scale on sandboxes in so many different levels and just to show maybe for other policy makers how valuable it is. I’m convinced already, but that’s not enough. I think if we want to make huge changes within the governments, we need to think further. So during this panel, I got so many ideas about how fast we need to go to Singapore with our minister and so on. So I’m joking, of course, but thank you very much.

Armando Guío:
Thank you, Minister. Thank you, Deputy Minister. Yeah, I have more questions and I would love to hear more from you, but I think while keeping an eye on the clock, I think we should leave some time for another round of questions from the audience. And I can see questions online, but of course I cannot see them in the room. So Amanda, you can gladly go ahead. Great. So I will start with a question here in person and I would like to get the questions in the Zoom room because I don’t have them. I don’t know if you can help me with that, but please.

Audience:
Thank you very much. Thanks for the very insightful panel. I’m Claudio Lucena, Paraíba State University Law School in Brazil. And I’m also the co-coordinator of the Open Loop Experience in Brazil. We are addressing privacy enhancing technologies. I’d just like to add a bit on Lorraine’s comment about the happiness of having sandboxes discussed in a privileged space like this. For years, we have talked about the necessity to regulate in a more adequate, dynamic, flexible, scalable way. And bringing sandboxes to this privileged space means that we’re considering it one of the tools to operationalize that smart regulating. Yes, for the digital space, but definitely not only to it. So my question is a little bit more mundane though. It’s a question about timeframe. And I’d like to have these experiences here of Lithuania, maybe Norway and Singapore. You have a framework to operationalize a sandbox and there is a space where you weigh back in as a regulator to say if and which measures are going to be taken out of the experience you have. How strict, the question is how strict you intend to be or have been in these measures? Do you wait until the whole process is finalized as the regular framework foresaw or are you ready to intervene in a point where something stands out as very important not to be waited for? Thank you very much.

Kari Laumann:
Thank you. I don’t know, Carrie, if you want to start there. Yes, I think this is a very good question and very relevant for us as regulators. I think for META it’s a bit different if you’re a private actor and you have a sandbox, but as a regulator, our kind of powers are very strictly regulated in the GDPR. So we are to case handle, we are to do enforcement action and we are to give guidance. And for the sandbox for us, this is a guidance tool. So we call it dialogue-based guidance. So for us, it’s very important to be clear that this is not a decision. We only give guidance in the sandbox and then the company who is participating can decide themselves for what they will actually do. And also we are very clear that we don’t give any exemptions from the regulation. So even if they’re in the sandbox, the regulations still apply. So our sandbox is more about exploring those area in the regulation where there might be questions or uncertainty of how it should be implemented in practice. It’s not about giving exemptions or giving like a stamp of approval. It’s basically guidance. So I think that’s an important kind of also, it’s important to be clear about what the sandbox is and clearly define that for anyone who participates or wants to take part in it.

Dennis Wong:
It’s a great question. I would say that for us, it’s a fairly sort of dynamic process because we waited right from the start of the use case. Very often, right from the get-go, we’re trying to understand what is the regulatory issue they’re trying to solve for. And well, at the end of the process, we come up with the case study or with the published report. So obviously there is that process. But I think throughout in the engagements, we are working on the ground with them to work out what are the regulatory issues, where are the inter-jurisdictional issues, where are the interdisciplinary issues. And we are going back and forth on that process all the time throughout. So it’s definitely in the realm of guidance. For us, it is a fairly agile and dynamic process. And I agree completely with what you said earlier in your speech, which is really, it’s about agile policymaking. So it’s very much in that space for us. And we don’t really sort of see this as, okay, you go figure it out and then we’ll give you an answer at the end of it. It doesn’t really work like that.

Agne Vaiciukeviciute:
So thank you very much. Very good question. I think our perspective is a bit from different angles just because I’m not from the regulatory kind of authority. I’m from the policymaker’s kind of side. This was initiated from our side. So we understand sandboxes as a part of work, working very closer with the ones who are testing all those systems. You know, testing all those innovations. And once again, obviously it’s a very dynamic process. Nobody wants to implement or change any rules that is absurd or whatever. But the idea was to kind of open it and be dynamic in regulation as well. Because we have some of the regulations already in place, but there are no usage cases. So it means that it’s written on the paper, but the reality does not work. So that’s what we are having. So our perspective with sandboxes try to kind of close this gap. And obviously we understand that nothing could be taken for granted or fully within the process because we did not even touch upon the fact that while doing those sandboxes, there could be some not usable cases in the future. You’re just testing some, there could be some failures as well. So we are looking into this more in a relaxed manner, just to see what’s going to happen, just to create a playground for everyone. Thank you.

Armando Guío:
Thank you. This has been an amazing panel, very much on a topic that is still on the making. I think there are many things coming on the way, global forum, sandboxes, Lorraine, perhaps, of course, that will be coming and projects on different sites, Lithuania working on these, Norway still continue the good work, although I think META is going to be also very important actor on many of these conversations and as a participant of the sandbox, Singapore continue the great work. So this has been already amazing. Also with GIC working on this assessment on how to help countries to be more efficient on implementation of sandboxes, something that we’re working with Pascal. So this has been already an amazing experience. Hope that you continue the great work. Hope that you continue with all the great questions and thank you again for joining and this has been an amazing experience. Thank you again. Thank you very much. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.

Agne Vaiciukeviciute

Speech speed

151 words per minute

Speech length

1687 words

Speech time

670 secs

Armando Guío

Speech speed

193 words per minute

Speech length

3113 words

Speech time

969 secs

Audience

Speech speed

172 words per minute

Speech length

1071 words

Speech time

375 secs

Axel Klapp-Hacke

Speech speed

169 words per minute

Speech length

718 words

Speech time

255 secs

Dennis Wong

Speech speed

189 words per minute

Speech length

1707 words

Speech time

542 secs

Kari Laumann

Speech speed

166 words per minute

Speech length

541 words

Speech time

196 secs

Lorrayne Porciuncula

Speech speed

173 words per minute

Speech length

2284 words

Speech time

794 secs

Moraes Thiago

Speech speed

169 words per minute

Speech length

464 words

Speech time

165 secs

Ololade Shyllon

Speech speed

199 words per minute

Speech length

756 words

Speech time

228 secs

Pascal Koenig

Speech speed

164 words per minute

Speech length

491 words

Speech time

179 secs

Quantum-IoT-Infrastructure: Security for Cyberspace | IGF 2023 WS #421

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Wout de Natris

The lack of cybersecurity measures in Internet of Things (IoT) devices is a pressing issue that demands attention. While the technical community has made efforts to address this concern, the majority of governments and industries have not yet prioritised security by design in IoT. This oversight has resulted in widespread vulnerability and the potential for malicious attacks.

Initially, cybersecurity was not a concern during the early days of the internet, as worldwide connectivity was limited. However, with the rapid expansion and integration of IoT devices into our daily lives, the need for robust security measures has become increasingly evident. Unfortunately, IoT devices are often designed without adequate security measures, making them susceptible to cyber threats and potentially compromising users’ personal data.

One argument put forth is that governments and large corporations should play a crucial role in setting the standard for security in IoT. An example of this proactive approach is seen in the Dutch government, which has taken the lead by imposing the deployment of 43 different security standards. This demonstrates the importance of demanding high levels of security in IoT devices.

Another concerning aspect is the lack of rigorous security testing before new technology, including ICT, enters the market. The fast pace of innovation and the urgency to bring products to market often result in inadequate security measures. It is argued that security should be a fundamental consideration and undergo formal testing before any form of ICT is released, minimising risks for users.

On a more positive note, international cooperation and information sharing are emphasised as pivotal factors in staying ahead in terms of cybersecurity. The power of the internet lies in its ability to facilitate global discussions, enabling the sharing of knowledge and experiences across borders. Governments and larger industries need to be made aware of their role and potential influence in addressing cybersecurity challenges, fostering collaboration and cooperation on a global scale.

In conclusion, the lack of cybersecurity measures in IoT devices poses a significant challenge that needs to be addressed urgently. Efforts from both the technical community and various stakeholders are required to push for security by design and the implementation of robust standards. Governments and large corporations hold the responsibility of leading the way, setting the standards for security in IoT. In addition, rigorous security testing should become a prerequisite before any form of ICT is introduced to the market. Furthermore, international cooperation and information sharing are critical for staying ahead in the ever-evolving landscape of cybersecurity. Only through collaboration can we tackle the challenges and vulnerabilities inherent in the interconnected world of IoT.

Moderator – Carina Birarda

This extended summary highlights the main points and arguments presented in the given information on cybersecurity. It also provides more details, evidence, and conclusions drawn from the analysis.

The first argument states that there has been a significant increase in cybersecurity incidents at the international level, which is viewed as a negative trend. This can be attributed to the global connectivity that has become a key factor behind this increase. Additionally, the emergence of sophisticated criminal activities, such as crime as a service, has further contributed to the rise in cybersecurity incidents. The supporting evidence for this argument is the fact that cyberattacks are often conducted by actors in multiple countries, indicating the global nature of the issue.

The second argument emphasizes the fundamental challenge of adopting internationally-recognised cybersecurity best practices. It is highlighted that only a few organisations currently practise these standards, and the lack of adoption is a global issue. The evidence supporting this argument includes the observation that just a small number of organisations implement these best practices, indicating a need for widespread adoption to enhance cybersecurity at both national and international levels.

The third argument stresses that cybersecurity is a global issue that necessitates international collaboration for effective mitigation. The fact that cyberattacks do not respect borders or jurisdictions is put forward as evidence for the need for international cooperation. Additionally, it is stated that information sharing at the international level is imperative for combating cybersecurity threats. This argument highlights the importance of collaboration between countries to establish a robust global cybersecurity framework.

The fourth argument suggests that understanding the threats facing IoT, web, and quantum technologies is essential for implementing proper cybersecurity practices. By gaining a comprehensive understanding of these threats, appropriate best practices can be selected and implemented. The evidence supporting this argument is the observation that proper implementation of cybersecurity practices can only be achieved by addressing the specific threats posed by emerging technologies.

In conclusion, the extended summary highlights the increasing number of cybersecurity incidents on an international scale as a negative trend. The adoption of internationally-recognised cybersecurity best practices is identified as a fundamental challenge, with only a small number of organisations currently practising these standards. It is established that cybersecurity is a global issue requiring international collaboration for effective mitigation. Understanding the specific threats posed by emerging technologies is emphasised as crucial for implementing proper cybersecurity practices. Overall, the analysis underscores the need for international cooperation and comprehensive measures to address the growing cybersecurity challenges.

Maria Luque

Quantum technologies, specifically quantum computing, present challenges and opportunities in terms of cybersecurity. The concern is that quantum computing has the potential to break current cryptographic systems and expose sensitive information. To combat this threat, researchers are developing technologies like Post-Quantum Cryptography (PQC) and Quantum Key Distribution (QKD). PQC, although not yet standardized, can be applied today as a software-based solution, while QKD requires substantial investment and the creation of new secure communication infrastructures.

It is argued that governments and the technology industry need to continuously and significantly invest in quantum technologies to ensure data security in the face of the quantum threat. QKD, in particular, requires high investment and the establishment of entirely new infrastructures for secure communication. On the other hand, tech companies have already started implementing PQC into their solutions, showing their recognition of the need to adapt to quantum technologies.

Organizations also need to assess and adapt their information security structures to prepare for the quantum threat. They should understand their information architectures, level of encryption, and capabilities necessary for transitioning to quantum security. The approach for organizations may vary depending on their size, with smaller ones potentially adopting PQC and larger ones engaging in quantum communication networks.

For small tech companies, the infrastructure provided by large tech companies like AWS, Microsoft Azure, and Google is crucial for addressing the challenges posed by quantum technologies. These platforms serve as a foundation for smaller companies to navigate the complexities of quantum computing.

Deploying PQC algorithms in the cloud is considered a potential solution for securing data for small companies in the next five to ten years. Despite not being favoured by some, it is argued that deploying PQC algorithms in the cloud offers optimal data security for small companies. However, there is debate regarding this approach, with some opposing the practice for maintaining data security.

Countries are encouraged to focus on their strengths and specialties when planning their national quantum strategies. For example, Spain has chosen to invest in areas where it excels, such as optics and mathematics, to drive its quantum technology development.

In conclusion, quantum technologies pose both challenges and opportunities in cybersecurity. Addressing the quantum threat requires significant investments in quantum technologies, assessments and adaptations of information security structures, and consideration of alternative solutions like deploying PQC algorithms in the cloud. Additionally, countries should strategically focus on their strengths and specialities to plan effective national quantum strategies. Ongoing research and discussions are needed in this rapidly evolving field.

Olga Cavalli

Latin America faces unique technological and internet infrastructure challenges due to economic and distribution inequalities. These challenges stem from the disparities in wealth and resources within the countries of the region. As a result, access to and the quality of technology and internet infrastructure vary greatly across Latin America.

To address these challenges, there is a need for increased participation in policy dialogues related to the internet in Latin America. Olga Cavalli, a university teacher at the University of Buenos Aires, has played a key role in creating a training program for professionals to learn about the rules of the Internet, understand its challenges, and participate more actively in policy dialogues. This initiative aims to empower Latin American countries to have a stronger voice in shaping internet policies that are suitable for their specific needs and circumstances.

Furthermore, the rapid adoption of Information and Communication Technology (ICT) and Internet of Things (IoT) devices in Latin America has raised concerns about increased vulnerabilities due to the lack of initial security designs. It is estimated that there will be between 22,000 million to 50,000 IoT devices in the region next year. The fast pace of adoption leaves little time for proper security measures to be implemented, which could lead to potential breaches and threats in the future.

Argentina has taken proactive steps in addressing cybersecurity concerns. The national administration has implemented binding resolutions that require the preparation of a security plan, the assignment of a focal point for contact, and information sharing in the event of a cyber incident. Additionally, a manual has been developed to guide the national administration on how to respond to such incidents. A new cybersecurity strategy has also been approved, showcasing Argentina’s commitment to ensuring security in the digital realm.

Developing countries and small to medium enterprises (SMEs) face significant challenges in keeping up with rapid technological changes. These challenges include restrictions on importing certain products and hardware, as well as a lack of human resources, as trained professionals often migrate to developed countries in search of better opportunities. The combination of limited resources and a lack of technical expertise hampers their ability to understand and afford new technologies, creating a widening technology gap.

Moreover, developing economies and small to medium enterprises are often consumers of technologies developed elsewhere, which raises concerns about the global technology gap. While major technology companies like AWS, Microsoft Azure, and Google are expected to provide solutions based on emerging technologies like Post-Quantum Cryptography (PQC) algorithms and cloud computing, developing economies and SMEs rely on these technologies without actively contributing to their development. This dependence on technologies developed elsewhere puts them at a disadvantage.

To address these challenges, capacity building and awareness are advocated as essential measures. By investing in the development of local technological capabilities and creating awareness about the importance of technology, Latin American countries can reduce their reliance on technologies developed by other countries. This would help narrow the global technology gap and allow them to actively contribute to technological advancements that suit their specific needs.

In conclusion, Latin America faces unique challenges in technological and internet infrastructure due to economic and distribution inequalities. Increasing participation in policy dialogues, addressing cybersecurity concerns, and bridging the technology gap are crucial steps towards creating a more inclusive and technologically advanced region. Additionally, capacity building and raising awareness about technology will empower Latin American countries to shape their own technological future.

Nicolas Fiumarelli

During the discussion, the speakers emphasised the necessity of implementing security technologies, such as RPKI, DNSSEC, IoT security standards, and quantum-resistant algorithms, through legislation. They pointed out that the rising number of Internet of Things (IoT) devices and the advancements in quantum computing pose significant security risks. These risks can be mitigated by the adoption of robust security measures.

The speakers also highlighted the existence of security standards developed by the Internet Engineering Task Force (IETF) specifically for IoT devices. These standards provide guidelines and best practices to ensure the security of IoT networks and data. However, one speaker questioned why these security technologies are not universally enforced in all Information and Communication Technology (ICT) systems through legal obligations.

It was acknowledged that the implementation of advanced security technologies comes with a high cost. This cost may pose a challenge to widespread adoption. Nonetheless, the importance of safeguarding critical infrastructure and personal information against cyber threats and data breaches justifies the investment in these technologies.

Overall, the sentiment during the discussion was neutral, indicating a balanced examination of the topic. The speakers’ arguments and evidence provided a comprehensive understanding of the urgency to implement security technologies, alongside the challenges associated with their implementation. The discussion aligned with SDG 9: Industry, Innovation and Infrastructure, as it emphasised the need for secure and resilient ICT systems to support sustainable development.

Through this analysis, it becomes evident that the adoption of security technologies through legislation should be encouraged and prioritised. This will help ensure the protection of IoT devices and networks, while also addressing the growing threat of quantum computing to traditional encryption methods. Additionally, the development and enforcement of security standards can play a crucial role in enhancing cybersecurity practices across various industries.

In conclusion, the discussion underscored the significance of deploying advanced security technologies and standards to safeguard ICT systems. Although challenges such as high implementation costs exist, the speakers highlighted the urgency to address these concerns and apply security measures throughout the industry. By doing so, they aimed to emphasise the need for a comprehensive approach to cybersecurity, simultaneously addressing both technological advancements and legal enforcement.

Carlos Martinez

The discussion centres around the vital role of DNSSEC (Domain Name System Security Extensions) and RPKI (Resource Public Key Infrastructure) in securing the fundamental structure of the internet. These security protocols are instrumental in safeguarding the integrity and authenticity of DNS responses and BGP (Border Gateway Protocol) announcements, respectively.

DNSSEC and RPKI operate by utilising digital signatures to verify the legitimacy of DNS responses and BGP announcements. This verification process ensures that the network delivers data packets to the correct destination, maintaining the proper functioning of the internet. The speakers unanimously recognise the crucial importance of DNSSEC and RPKI, highlighting their shared responsibility in both signing and validation processes.

On a related topic, there has been a debate concerning the potential weakening of cryptographic algorithms and the inclusion of backdoors to enable access. However, Carlos, one of the speakers, expresses a negative sentiment towards this notion. He asserts that such actions would be unwise, potentially compromising the security of cryptographic systems. This viewpoint aligns with SDG 16, which focuses on ensuring peace, justice, and strong institutions.

A positive aspect discussed is that both DNSSEC and RPKI have algorithm agility built into their design. This feature ensures that they can adapt to incipient post-quantum cryptographic scenarios. Consequently, when post-quantum cryptographic algorithms are standardized, they can be effectively incorporated into DNSSEC and RPKI, providing continued security measures against quantum threats.

The debate also encompasses the challenge of mandating technology, with the speakers highlighting instances where such endeavors have proven unsuccessful. They note the issues surrounding cost and benefit discrepancies, particularly in the context of the Internet of Things (IoT) and DNSSEC/RPKI implementation. Furthermore, while post-quantum algorithms have been proposed, they have not yet achieved a satisfactory level of performance.

In conclusion, the speakers collectively emphasize the importance of DNSSEC and RPKI in securing the core infrastructure of the internet. Their positive sentiment towards the efficacy of these protocols underscores their significance in maintaining a properly functioning internet. Nonetheless, there is a negative sentiment towards weakening cryptographic algorithms, highlighting the potential risks associated with such actions. The speakers also acknowledge the need for flexibility and tailored approaches when addressing different technologies, rather than enforcing a one-size-fits-all mandate. Ultimately, this discussion highlights the ongoing challenges and complexities associated with internet security and the need for continued research and adaptation to effectively counter emerging threats.

Session transcript

Moderator – Carina Birarda:
Okay. We are going to start it. Good morning, everyone. Good afternoon. Good night. I want to interpret my gratitude for sharing this workshop, Quantum IoT Infrastructure Security for Service Space. It’s an honor to moderate such a disengaged colleagues and friends. I am Karina Virarda from Argentina, a member of the Multiple Stakeholders Advisory Group of ICF, co-facilitator of the Best Practices Forum on Cybersecurity. I’m passionate about technology and all things related to digital protection. As we know, in recent years, we have seen a significant increase in cybersecurity incidents at the international level, which alarming statistics is showing and consistent wise. Global interconnectivity depends on technology and the sophistication of criminal as crime as a service are the key factor behind the trend. So maybe we have more work. The lack of adoption of internationality-recognized cybersecurity best practices is one of the fundamental challenges. Recognizing cybersecurity as a global issue is essential as cyber attacks do not respect borders or jurisdictions. Organizations such as the UN’s World Economic Forum, ICF Forum, promote internationally-recognized cybersecurity standards. Sorry. Such a need cybersecurity framework and ISO 27001 Information Security Guidance, which provide a solid framework for protecting digital assets. Collaboration and international cooperation are equally essential as a cyber attack often involves actors operating in multiple countries. Sharing information about threats and cybersecurity tactics is vital to on a step ahead in the fight against these attacks. In summary, the increase in international cybersecurity incidents is a challenge that requires a global response. The adoption of the cybersecurity best practices and international collaboration are the fundamental pillars to addressing this growing threat and protecting our digital assets and increasing interconnected world. In order to determine with the best practice can be implemented, it is essential to understand the threats we are facing. So we have two opening questions for all the panelists, which are as follows. Number one, what are the leading cybersecurity threats across the IoT critical internet infrastructure, web, and quantum technology, and what are existing best practices to counter this threat? Number two, how can diverse stakeholders, including the ICF community, the Best Practices Forum on cybersecurity, dynamic coalitions, and the other relevant groups collaborate and contribute actively to development and implementation of these best practices? And number three, in the context of the continuality involving cybersecurity landscape, what key considerations are essential to ensure a safer and more trustworthy internet for our users across these areas? I kindly request that each of you introduce yourself. And you have 10 minute limit for your presentation. And the number one, please, Wouter Natrisch. Your turn. Thank you.

Wout de Natris:
Thank you, Carina. My name is Wouter Natrisch, and I am a consultant based in the Netherlands. And as such, I am the coordinator of the dynamic coalition at the IGF called Internet Standards Security and Safety Coalition. And this coalition has one primary goal that is to make the internet more secure and safer for all users, so whether public, private, or individuals. We do that through different working groups. And these working groups focus on different topics on the topic of cybersecurity. So we have a topic called on internet of things, so security by design built into the internet of things. And I’m sure that Nicolas will tell more about that later. We published our first report yesterday morning here in Kyoto, which can be found online. We have a working group on procurement and supply chain management. And I think that’s what we’re going to focus on most in a moment. That we have a one on education and skills to make sure that tertiary education deliver what industry needs in this field and not codes programs from 20 years ago. We have one on data governance. We have one on the consumer protection. We have a working group on emerging technologies and one on deployment of two specific standards, but then focusing on not the technical side, but I’m sure that also what we’re discussing here is not about the technique. It’s about political, economical, social, and security choices that we have to make in a society. I think that what we try to aim to do, and I think that that answers one of the questions that I heard, is that when governments and larger industries start demanding security by design, when they procure their ICT services, devices, or products, that would mean that any company is not able to deliver these demands, will not get big assignments. And that would be a major driver for getting everything, including IoT, more secure by design. What I think is important to understand is that The internet works as it does and let’s face it, it works fantastically because anybody in the world can at this moment follow us, they can ask questions to us, they can use the chat to interact with us and it’s all because of the way the internet functions and the way it is scalable. But unfortunately when they built these rules, security was not an issue because people who were then connecting were working at either the U.S. government department of defense or they worked in some U.S. universities and everybody knew each other, so there was no need for security. And then the world came online on the same principle and then showed that it was inherently obscure. The technical community has made reparations, they made changes to the code that runs the internet and that code running the internet is the public core of the internet that people talk about. So when you talk about protecting the public core of the internet, you’re not just protecting undersea cables or land cables or server parks, you’re also protecting the software that makes it work. And that is the weird thing about this story that software that makes the internet and IoT more secure is not even recognized by any government in the world as such. So if you talk about standards, they talk about government bodies making standards or they talk about organizations like ISO making standards, but not about the internet standards. They are made by the technical community on a voluntary basis, but that is what makes the internet run and not ISO because that is an administrative ticking box. So if we get governments to understand that it’s the other standards they have to recognize formally as well, but also use them when they procure their services, their services, their products, their devices, the world will change. And what is the current situation? The current situation is that there’s not a living playing field for industry. When industry is not asked for a level of security built in, apparently they don’t do it. And what if I was a single company and I decided I am going to deploy all these standards? That costs me money, it costs me time, it costs me effort. I have to train people. And if the competition does not do it, it means my product becomes more expensive and most likely governments won’t buy it because they go for the cheapest option. So in other words, I would be out of business. So there’s no living playing field. There’s no demand from the big players. So there’s no interest to deploy. So all the IoT devices coming to the market are usually insecure by design. And from that moment on, are a threat factor for everybody in society. So if we don’t put this pressure on industry to deploy, nobody will most likely, except a few that are more idealistic. And this is shown in the research that we’ve done on IoT security by design. And I will not take anything away from what Niklas will be telling us, but we found that there’s no pressure to make IoT secure. There’s no pressure from the outside. We’ve seen it also in the procurement study we’ve done. We’ve analyzed the documents around the world on procurement. And if security is mentioned, it is not always cybersecurity. And if it’s cybersecurity, it’s seldom on internet standards. There’s one big example that does, that’s the Dutch government. They’re mandatorily have to deploy 43 different standards when procuring or explain why they cannot do that. And that is reported to the Dutch parliament once a year. So why is this relevant? I think this is extremely relevant because we’re discussing our future. IoT is already among us. AI is among us for far longer than most people realize. And who knows what is coming with a metaverse or quantum and who knows what is invented tomorrow, because we’re in a society that changes every two hours. And it looks like that time and time again, the same mistakes that we made. The product is invented and it comes into markets usually untested for security. So is that something that we should be discussing that when a new technology enters the market, that at least be tested formally in one way or another? Probably not legislated because you can’t legislate what you don’t know. You can at least demand a certain amount of testing. So ICT in whatever form is allowed to the markets from outside, usually it’s also almost irreparable. So when they find the flaws, it’s almost too difficult to repair them in some cases. So they remain a threat factor for sometimes decades. And with AI, perhaps with quantum or the metaverse and all else that is in store, we can demand at least security from the outset. Demand is before we start procuring it and certainly before we buy it. So large corporations and governments can set that example. And when they do, they become a standard and the security will become available for all of us. So if we make governments and larger industry aware of their role, their potential influence, and to provide them with the information they perhaps lack now, they will change the world for us. And that’s our ICT goal, to make the internet more secure and safer by the widespread deployment of security-related internet standards and ICT best practices. And if you’re interested to join, you can do that at is3coalition.org, and the three is the number three. Our reports are there, also the report Nicolas will be telling about. And I think that is about what I would like to contribute for now. So thank you very much for the opportunity.

Moderator – Carina Birarda:
Thank you very much. The second panelist is Carlos Martinez. He’s online. Carlos, I can see you online. Hello, how are you? I am very well, thank you.

Carlos Martinez:
can you guys hear me? Yes. Okay. I have like four or five slides that I would like to share. I hope that I can share my screen. Yes. Okay. Okay. So, I’ll be right to the point. Well, my name is Carlos Martinez. I work for LACNIC, the Regional Internet Registry for Latin America and the Caribbean. I’ve been working for LACNIC the best part of the last 15 years. I’m currently the head of technology or the CTO for LACNIC. One of the things that has initially caught my attention when I started working for LACNIC was the need for deploying two technologies that at the time were just not very well-known actually. These are DNSSEC and RPKI. I’m sort of grouping them because I believe that there’s a common theme between them, which is securing the infrastructure or securing the core of the internet. I would describe, I would say, a bit of a dire situation regarding the security on IoT, but that’s one part of things. When you have devices, the devices maybe secure themselves, but you still have to traverse the internet to get information from one point to another. I will try to go through this very quickly. When I speak about internet infrastructure, I’m not thinking about the physical layer in this case, not about fibers, cellular, or satellites, but I’m thinking particularly about what I used to call the three pillars of a properly functioning internet. The internet to work, as we know, it depends on three functions, basically. One is routing, the other is control and forwarding, basically the ability of the network to have one packet on ingress and deliver that packet destination to the proper destination, and a complementary function, which is domain name resolution, or DNS, okay? So the three things are necessary. There’s a subtle difference between routing and forwarding. Forwarding is the actual decision of a router when it has a packet and needs to analyze the packet and decide which interface it should be sent off, and routing, which is a control function where the router learns a table that it uses to decide how to forward packets. Both things are necessary, of course, are complementary. So this is a very high-level thread overview of these two or three functions, and each, you could probably identify more than this. Name resolution, for example, suffers from domain spoofing, where a server pretends to host a DNS zone that it shouldn’t, or it’s not authorized to hold, and this is widely used, for example, for phishing attacks. Cache poisoning is another very well-known thread to DNS, and where a specially crafted packet can poison, in a way, a server and allow an attacker to actually… they instruct a server to lie to its customers. This has been widely discussed in the industry and has in a way caused a bit of, I would say, loss of trust on the part of users, something that we’ve been in different industries and in different ways. Routing suffers from something in a way similar, if you will. Route hijacking is probably one of the most well-known effects on attacks on the routing system where an autonomous system publishes a network it shouldn’t, or it doesn’t have authorization to do so. Recently, we have witnessed some instances of internet instability due to hijacks or to a related situation called route leaks where there is a network within the internet that announces some prefixes, but it cannot fulfill the promise of actually carrying the traffic to the destination. It usually happens when a small network announces the whole routing table of the internet and it basically cannot transport all the traffic that every other network starts sending through it. So, as it was mentioned previously, security on some of these protocols was in a way an afterthought. These protocols were created when the internet was a much, I would say, naive place and some security had to be, I would say, backported into them. DNS, for DNS we have the DNS security extensions or DNSSEC, which introduced this. digital signatures within the DNS responses. And this allows a resolver to actually verify a response. This is, of course, not supposed to be a complete explanation of the NSA. This is just the general idea. And RPKI does a similar thing for routing. Again, there is some cryptography introduced into the BGP protocol and some additional decision points that are introduced in the BGP algorithm that allows a router based on some signatures, which I’m going to call ROAS because that’s the name they have, allows a router to make a decision on whether route is a correct one or not. So again, this is RPKI particularly has a lot of complexity that I’m not describing and I don’t have the time to get into, but there’s a lot of documentation in their internet. So a few considerations regarding, for example, the use of cryptography within these protocols. Some people have the misconception that every time you use cryptography is to ensure encryption or ensure secrecy in a way. Both RPKI and DNSSEC make heavy use of cryptography, but they not encrypt messages. They are not intended to provide privacy per se. Maybe privacy is a consequence of implementing these protocols, but they are not. Cryptography in DNS and RPKI is not used for providing secrecy. What is it used for? Cryptography here is used for authenticating and verifying signature chains that ensure either a correct DNS response or a correct BGP announcement. There is a slight difference between them. A PKI requires a well-defined PKI or a public key infrastructure with a trust anchor and CRLs, all the complexity that comes with a PKI. The RIRs have taken the role of operating the trust anchors of this RPKI. On the other hand, the NSSEC uses a simpler chain of trust because it can depend on some features that the DNS already has, for example, the tree-like structure. These technologies are basically useless unless the community, I would say, realizes that there is a shared responsibility here. In both RPKI and DNSSEC, there is a function, which is the signing, either of the DNS or the routes, and the validation. And both are necessary. Signing becomes useless if no one validates, and the other way around. If you’re validating but you have nothing to compare these signatures with, again, it’s useless. And there’s a shared responsibility here. And this is probably my, if you remember one thing of what I’ve been saying, please remember that the message of shared responsibility, in this case, it’s something that we need to get across the industry. Regarding quantum, the previous panelists mentioned that security was sort of an afterthought, and that’s completely true. And there’s a silver lining to it, which is that this afterthought was implemented in the form of an overlay. The core protocol remains unchanged. And there is, I would say, a layer of. cryptography applied over it. The cryptography here didn’t exist before. It was added afterwards. And it was added in a way that can be replaced. There’s a term that is technically used here, which is algorithm agility. And all this, both the NSAIC and RPKI have algorithm agility built-ins. So eventually, when a post-quantum cryptographic algorithm is designed or is standardized, it will be positively applied to both the NSAIC and RPKI. I don’t have it here in the slide, but I have another thing that I would like to mention, which is that I have a strong position on initiatives that point towards weakening of cryptographic algorithms. They’re having some discussions in governments and other fora regarding the necessity of weakening or providing backdoors to algorithms. And I think that would be a very poor decision to implement something like that. So that’s all I have for now. Thank you.

Moderator – Carina Birarda:
Thank you very much, Carlos, for your presentation. Very clear. I am thinking the same. I am support very strongly. And the third panelist is Maria Luque. She’s online. Maria, the floor is yours.

Maria Luque:
Good morning, everyone. Good morning from Madrid, actually. Very glad to be here with you today. It’s 2 AM in the morning in Madrid. And today, it seems that we are going to speak about software. It’s a key point of our discussion. So give me a second to find my presentation, see if I can share my screen. Okay. Can you see it? Yes. Perfectly. Okay. I take it as a yes. So, we’re starting today. I was saying that we were speaking about software, and software is at the core of my presentation about quantum security. First of all, I am Maria Llocke, and for the past three years, I have been working on quantum security. Software is at the core of my presentation about quantum security. First of all, I am Maria Llocke, and for the past 10 years, I have been advising national governments, local government agencies, and mostly in Spain and in the European Union on what to do with emerging technologies, for example, new technologies, space connectivity, or quantum technologies, and how to do it so that whatever we do with these technologies can benefit society in great ways. So I’ve also been working with quantum organizations, quantum startups, and national quantum strategies for the past three years, and I’m very glad to be here. So, the focus of today. Today, for me, we have a challenge, and the challenge is understanding how quantum technologies are going to disrupt not only cybersecurity, but our entire conception of how we process and how we store and how we communicate information. As you may have probably seen in the media, the protagonist is quantum computing. Now, its potential is immense to bring about new solutions to all challenges, computational or not. But once it is live, it will somehow imply that our current cryptographic systems are unsafe and won’t be able to safeguard our privacy. So let’s try to understand today in 10 minutes. how to look at the quantum threat and how to take advantage of quantum to actually be quantum safe. Now, we’re in the IGF, and the IGF’s motto this year is an internet for everyone. An internet for everyone is possible through universal access and privacy. And the fact that our communications can be kept secret is the base of our integrity as individuals and as nations, of course. And to keep the confidentiality of our online interactions, we trust what we call cryptographic algorithms, what Carlos was speaking about. And this trust is built on something we call computational harness assumptions. The fact that they will be able to withstand a cyber attack no matter what. But the truth is that a breakthrough in cryptanalysis can make the system vulnerable in one night. Now, we all know of a company who suffered a cyber attack in the past three or four months. And as my mates were saying, when it’s not a cyber attack on a company, it’s a cyber attack on a national health system or a security infrastructure. We do live in cyberspace. Thanks to 5G, among others, of course, we rely each time more on cyber physical systems, such as IoT, the critical infrastructure, and the web. And the more digital our infrastructure is, the more attack vectors we have to withstand. And each domain is vulnerable in its own very unique way. For example, as Carlos was saying before, critical infrastructures depend on scarce systems that are normally very outdated. IoT environments have very limited computing resources by design and very limited security schemes by design. as my mate, Buddha Nadir was saying. And also when we’re speaking about the internet and telecom networks, we are shifting subtly to our software defined networks, meaning that they will be more susceptible to cyber attacks. So we can say in a way that the cryptographic systems that protect our data infrastructure are shaky ground. Today, we can really say that they are a weak point to watch. And during the past decades, we’ve discovered quantum algorithms. Quantum algorithms with a crypto analytical potential that can break the cryptographic techniques that we use today to protect our data. We just need quantum processor that are big enough to run them. Quantum processors, meaning quantum computers. A new type of computing device, you’ve heard about it, that is capable of performing very specific calculations. Some of which are actually intractable by current classical computers. And quantum computer is truly a game changer. Uses the principles of superposition and entanglement, whatever they mean, to change the way we store and process information. And while large scale quantum computers are not a reality, they’re not available yet, of course. The fact is that creating a strong computer, quantum computer, can accelerate our process of solving the schemes we use in public key cryptographic algorithms to protect our data. I’m gonna give you an example. Thanks to a quantum algorithm like Shure, we could store RSA encryption. And this can break and destabilize us. And it’s not about data breaches. And it’s not only about financial loss. It’s about losing the integrity of digital documents, all of them, losing the sanctity of our personal data. data and losing control over the health and the financial systems that keep us together. And the truth is that we don’t have to wait for quantum computing to come because by harvesting now, decrypting later, which I assume you’ve heard a million times by now, someone can store encrypted information to decrypt it once quantum technology becomes more advanced. And this means that the impact of quantum computing truly started yesterday, as we can say. Now the paradox is that quantum can also give us the key back to our integrity. And in fact, quantum technologies and some classical techniques are the bet of the tech industry and governments when it comes to cybersecurity in the future to come. Now today, as you can say in the presentation, we’re going to focus, we don’t have time, we’re going to focus on the tools we are developing today to be quantum safe in the short term and in the midterm. The first one is post-quantum cryptography, Karloff was talking about it before, and the second one is quantum key distribution. Now let’s focus on the solution that we have more at hand. We were saying that encrypted communication that is intercepted today can be decrypted in the future by a quantum computer that is strong enough. Now post-quantum cryptography, what it offers to us is new classical algorithms that we believe to be secure against a quantum threat. There’s nothing quantum in these algorithms, but we have seen computational hardness that can withstand the brute force of a quantum computer that tries to decipher it. PQC is software. PQC is a short-term solution. We’re making an effort. to standardize them, guided by the NIST from the U.S. And also you probably heard of them, there’s Kyber for secure key exchange, and there is Lithium, Sphinx, and Falcon for digital signatures. And the interesting thing here, talking about best practices, is that the tech industry can enforce these algorithms into the solutions they offer to us today, even though they haven’t been standardized. And in fact, they do this, which is interesting, for example, for government agencies that use technologies in the cloud or store sensitive data on the cloud. Here we’re gonna see a couple of examples of major tech companies taking a hybrid approach via the cloud. For example, AWS has a cloud commercial environment, but it allows you to apply this algorithm Kyber within your security shell, and that’s nice. Google has started combining classical cryptography algorithms with potential quantum resistant algorithms for the FIDO2 standard, which is the standard that you use to authenticate yourself when you initiate your session on a website. And Cloudflare, for example, has done something that’s more or less the same, right? So PQC, what I want you to get from this is that it requires new software stacks. It can be started, it can be implemented starting now. And due to the comparatively low cost doing that, the private sector can take the lead, guided by standard, but it can take the lead. Now, we get to QKV, which is a crown jewel to me, is my favorite. QKV, quantum key distribution, can be the midterm solution to the quantum threat to cybersecurity. It is hardware-based, it is not software. base. Now QKD uses the principles of quantum mechanics to establish a shared secret random key between two parties that have a secure communication channel and alerts you of any eavesdropping attempts. Now for QKD, what I want you to imagine, because we love to talk about the quantum internet but we’re not close to that, what I’d like you to imagine for QKD is an entire infrastructure like those of the ISPs of the internet, tier 1, 2, 3 for telecom networks, by using quantum information processing techniques. That is a quantum network and if we are successful in implementing quantum networks we’re going to have unhackable networks for secure communications. Now I’m optimistic about the future of QKD but it’s definitely not a stable pallet and there are many challenges to solve before it’s deployed at scale. It’s a bumpy road for a start and it is very costly. QKD is a moonshot because we need to have entirely new infrastructures for secure communication. There is still these limitations, for example if you have a quantum network that is hyper big you will probably, I mean your quantum states of the photons can be degraded and the information maybe cannot make it, so we have to work on that. Also these quantum networks, they have to be integrated in classical telecom networks because that’s the interesting thing that we can go about and it requires compatibility, it requires us to work on interoperability and this is such a technical challenge. And also scalability and the potential for the service to work 99% of the time. Why? Because quantum networks are going to be designed for the first use case to be secure government communications. It’s going to be defense, it’s going to be intelligence and they need to work. But the thing is, despite the limitations, I want you to understand that Quantum networking is starting to work. We can see that in Madrid, in the Madrid quantum communications infrastructure, because it is able to send info over a radius of 40 square kilometers. We can also see that in New York with Connect and the NYU, because they have a quantum network that actually works. And also in China, you already seen the news, they’re very good at doing ground segment to space segment communication with quantum teleportation. So with QKD, we have PQC for the short term. With QKD, the investment needs to be very big and very continued, and only nations and federations can kickstart design and deployment of these technologies. For example, the European Commission has the Euro QCI program, and the strongest use case, as I was telling you, is secure government communications. Now, I have one minute for this. What I want you to get from this presentation is that, of course, there is a threat that may come with quantum computer in 10 to 15 to 20 to 25 years, but there are things and techniques that we can implement, standardize, and use together in a phased approach in this 20 years till quantum computing comes. The first one to me is going to be PQC, because it’s classical and we can do it now. The second one is going to be quantum networking. And the end game is going to be full deployment of quantum communication infrastructure networks, and also quantum computer, the quantum internet, sensors, computers, everything connected protecting your data. So, taking this into mind, how can we participate in- making this happen, we can do many things, right? But first of all, for me, is always thinking about yourselves and think about yourselves means that you have an organization, you need to think about how we can be quantum safe. And the way you can do this is understanding what you have in terms of information architecture, not that we were used to mix on premise and cloud services to house and communicate your data, understand which infosecurity scheme you’re following, your level of encryption, as Carlos was saying, is it robust, is it not? Have an inventory of your cryptographic algorithms and also see how much you can invest in your transition to quantum security. If you’re a small organization, you may get to BQC and that’s all for the next 10 years. If you are a stronger, bigger organization, maybe you can also try to understand how to engage in quantum communication networks. The industry is already busy working in interoperability and compatibility together with governments for PQC and also for quantum networking. The governments are already launching national strategies and engaging quantum solutions into their cybersecurity strategies. For example, the European Union is working on this right now. There is some box in PQC and QKD to have software stacks, to have hardware that actually works. And for the IEF community and I’m counting me in the IEF community, I would tell you that quantum is still a mystery to most of us in the policy community. So what I think we need is to engage, we need to learn, we need to study this, we need to understand this, we need to create spaces for discussion and engagement. I think it’s on us to introduce something else beyond policy thoughts on how to collaborate and then some that is. standardize these technologies. And also, let me finish with this. I think that quantum technologies bring both light and darkness to our lives, because our lives are digital. And that our privacy is our health, is our identity. And the digital rights of the people cannot be lost in translation in a global race towards being quantum safe and hackable that no one understands. So I hope we can work together on this. And thank you very much for listening.

Moderator – Carina Birarda:
Thank you very much, Maria, for your presentation. And we thank you for sharing your ideas. And we invite you to ask questions, to have an interactive session. And Olga is our next panelist. The microphone is yours.

Olga Cavalli:
Thank you both. Thank you. Thank you for inviting me. This is extremely interesting. And I have a question for the experts once we have the questions and answers as part of the session. Thank you for inviting me. I would like to bring to you a different perspective now, first from the capacity building concept and then from the public policy concept. First, let me tell you, my name is Olga Cavalli. I am a university teacher at University of Buenos Aires. I teach internet infrastructure and telecommunications infrastructure, which is where I have worked most of my first stage of my career. Then for 20 years, I’ve been working in public policy in Ministry of Foreign Affairs. I’m now in the Secretary of Innovation in Argentina. Presently, I am the National Director of Cybersecurity. So I want to bring you some ideas from these two perspectives. The school was created 15 years ago because we realized that the participation of Latin America in all these dialogue spaces where the policy related with the internet are defined was very scarce, was few and was perhaps not so much relevant prepared to participate in dialogues and comments and shaping the policies that are totally different from perspective from Latin America to other regions. Latin America has a different challenge from other regions. It’s extremely unequal in relation with economic distribution, infrastructure distribution. So our problems are not the same like other regions. So this is why we created this space, to train professionals at any age, and any background is welcome, whether technical, policymakers, journalists, lawyers, in order to learn all the rules that make the Internet work and how to participate and understand the problems and challenges that Latin America has. We have been doing that for 15 years, and for the first time this year we went out from big cities. We rotate among the Americas, and we had one totally focused on cybersecurity in the venue of the Organization of American States. That was very interesting. This year, for the first time, we went away from big cities and we went to a city inside one state in Brazil, the city of Campina Grande, with 400 fellows. So you can find information in our website, governanceinternet.org. And what I would like also to talk about is the extremely fast pace of the adoption of ICT technologies by human beings. There are different estimations. Maybe Nico will know more details about that. I had a report from Ericsson that next year we will have 22,000 million of IoT devices, and then I found another one from Cisco saying that the number will be 50,000. So the difference is interesting, but I think that the amount of devices is enormous compared to what we have been dealing with up to now, which is a reasonable number of devices per person. Considering that the population of the world is 88,000 million people, the pace of adoption of all these digital infrastructures, especially the new ones, is very, very fast. It’s much, much faster, five times faster than electricity and telephony. Much, much faster. Also, it was already mentioned by Wode and colleagues that most of these technologies were not designed with concept of security from scratch. They were designed in a different environment, in a different time, and with different ideas. So that’s, it’s extremely challenging. And I would like to consider now some public policy that we have been implementing in Argentina, although I am participating here as an academic, I have a public policy role. So I want to tell you what we have been doing in Argentina. Our role in the national government, we have a target, which is the national administration. So for that, there is a resolution that establishes minimum requirements of cybersecurity for them. What they have to do, they have to prepare a security plan. They have to share it with us. We have a database with all the security plans. And the most important thing is that they must design, assign a one focal point. That focal point is in contact with us in a permanent basis. We provide training for them every month and sometimes more frequently with news about technology and also we share with them all the vulnerabilities that the national assert that depend on our administration also can detect. We share with them all this information on a daily basis. If they have an incident, they have to share that with us and the national assert and our experts can help them. And this communication and this establishment of the security plans and the communication is mandatory for them. So there is a binding resolution. It’s not that voluntary or aspirational, but it’s mandatory for them. Also, we have developed a manual on what to do if they have an incident. So it describes the different stages that they have to go through. they have an incident. And I think that that would fit into the question about best practices and also the public policy that I mentioned to you. Also, we have published the new or approved the new cybersecurity strategy for Argentina. This is the second one that was produced after a public comment period during the month of January this year. And let me check if I’m forgetting something. That would be all that I want to share with you. I have a question for Maria, for Wout and for Nico. What I see now, it’s an increasing gap and challenging for developing countries, especially for small and medium enterprises in catching up with all these new changes in technology. And I see this gap really being very, very big, not only because of understanding technology, but also about buying it. It’s extremely expensive. And in some countries, we have some restrictions for import some products and some hardware. And also the lack of human resources that we all know that it’s a big challenge for all countries, not only for developing countries, but also for developed ones. But some human resources go away. Like my son is living in Europe because he was captured by a company that thought that he was very well prepared. So he was trained in Argentina in a public university and now he’s working in another country, which is good for him, but maybe not good for developing economies. Just an example of the challenge that we are facing. And looking at all these quantum technologies that are being developed, how do you see the small and medium enterprises or developing countries catching up with this changing, fast changing technologies that will be used and will be implemented very quickly? Thank you. I did two things. I spoke and then the question.

Moderator – Carina Birarda:
Thank you very much, Olga. We have only seven minutes for questions. If you want to answer the questions, this is okay. Yes, Olga? Yes, yes, go ahead. Let me see. Mohamed, do you have any questions in the chat? No, no, we don’t have any questions yet.

Nicolas Fiumarelli:
Yes, maybe I could accumulate one question, and we could, the panelists could respond as well. Because you all talked about different technologies. It’s known that the IOT number of devices is increasing. And in the case of the quantum computing, it’s already been developed. And also, ICT is not showing, deploying the best practices for security in every service. And as Olga said, it’s so expensive to have all of this. So, yes. So my question is, do you think that, also in the case of RPKI and DNSSEC, do you think that law enforcing these technologies is a good way to go? What are the threats or the risks, maybe commercial risks, in having this? Why are we not having this as a mandatory thing in the case of DNSSEC and RPKI for the networks? In the case of the IOT security standards made by the IETF, sometimes for these constraint devices, there are solutions already in standardizing the entities. And also for ICT, right? Why this is not like quantum resistant algorithms that we are seeing in the core Internet? Why these technologies are not applied for all the ICTs by a mandate, by a law enforced thing? Maybe if you want to have two minutes. per panelist to try to respond and also accumulate on the other questions we have had from Olga and the rest of the panelists. Thank you. Maybe starting with Carlos, then yes.

Carlos Martinez:
Those were a bunch of questions in a single one. I will try to make a couple of points. I personally don’t believe that mandating technology is a good idea and I’ve seen many examples where that has failed. That said, I think the situation for DNS second RPA is vastly different than the situation from IOT. IOT has a serious issue with cost, with cost per device. There’s a race to the bottom in cost per device because since you have so many million devices, it makes sense to have the cheapest device that you can actually manufacture. There’s a race to the bottom that this certainly doesn’t help in developing new technologies. DNS second RPA, I think there’s a difference there. I think one of the issues that the internet has faced over the year in deploying many new technologies, it happens for IPv6 as well, is that the thing that mainly affects in the internet are externalities. Those are things that you as part of the internet have to do at your own cost on behalf of another party to benefit another party. Sometimes that is commercially a hard sell. I think that’s what has been one of the barriers in deploying new technologies on the internet. I think there’s two different phenomena there that need to be addressed differently. You mentioned about why you’re not seeing post-quantum algorithms being applied. In my opinion, I mean, the post-quantum algorithms that have been proposed so far are less than satisfactory. They’re basically variations of elliptic curve algorithms with very, very long keys that are simply not practical. I mean, they exist, but they are not practical. They would create these huge signatures that are a threat in themselves. So sorry, I think I took more than two minutes. Sorry about that.

Nicolas Fiumarelli:
So now going to Maria, two minutes, please, and then Olga.

Maria Luque:
OK. OK, so thank you very much, Olga, for your question. I think it’s very interesting, and I would like to expand on this with you for an hour and a half. Regarding what you say about BIMES, basically, like small companies faced with the challenge of trying to keep up with these quantum technologies and all of the buzz that comes with it, and also with something very interesting, because in Spain, for example, we have the National Security Scheme, which was updated on October 2022 last year. And it doesn’t speak about quantum yet, but the standards that it enforced for information security are very high. It talks about, for example, multilevel security schemes, and it talks about path for hardware, et cetera. And I can see this strategy, for example, in Spain being updated with PQC requirements and best practices. And the thing here, although I don’t like it and I don’t think it’s positive, the thing here is that a small company, given that normally a small company, if it’s a tech company or a normal company, they rely on the infrastructure of big tech companies. And that infrastructure providers, to serve themselves, they don’t have problems. proprietary technology architecture scheme. So they rely on AWS, Microsoft Azure, they rely on Google. And these companies are going to be able to offer this solution that Carlos and I don’t like very much, which is PQC algorithms inserted in the cloud as an option for you to try to make your data safer in the place that it is. So this is going to be the option in the next five to 10 years for small companies, although I don’t like it, but I can see it as a way. And also regarding national quantum strategies for developing countries and for any country in general, I can tell you that the tendency is to be very, try to be very specialized and try to prioritize the one thing that you think you can invest in. For example, you can see that in the European Union, everybody’s very ambitious in the European Union, every country, but what we see is, for example, Spain says, hey, we have, we’re very good at optics. We’re very good at, we have very good mathematicians. So we’re going to go for developing quantum algorithms and we’re not going to invest so much on quantum computer because maybe we don’t have the resources, right? So different countries are trying to understand which role they can play in the quantum supply internationally. And it can be betting on talent workforce. It can be betting on developing algorithms or it can be betting on theoretical physicists. It really depends and it’s a challenge for every country and I would love to expand on it more with you. Thank you.

Olga Cavalli:
Thank you, Maria. I take your word of expanding this in among us. I may, I may get. touch with you. So it’s interesting what you said first about that the most important companies in the world will develop some technologies that others will start using, which is true and which is happening now perhaps with cloud computing and other technologies. My fear is that developing economies and small and medium enterprises will be just consumers of technologies developed elsewhere, mainly in the States and China, which are the main poles where all these technologies are being developed now. But that’s something that we can change with capacity building and awareness. And I’m always positive about technology. So I think that we have to go in that way. Thank you. Thank you for inviting me and for comments and Maria Carlos and both the left. Thank you.

Nicolas Fiumarelli:
Okay. Thank you so much. So we are ending the session here. Good insights about the law enforcement. Maybe it’s not the solution, but the capacity building and awareness are there. And we need to be in the loop, in the loop of what is happening regarding requirements on the national agencies and all these entire world of different technologies are approaching. So thank you so much to all the panelists and see you next year in hopefully with new news about these technologies. Thank you so much. Thank you very much. Thank you very much. Have a great day.

Carlos Martinez

Speech speed

143 words per minute

Speech length

1822 words

Speech time

763 secs


Arguments

Importance of DNSSEC and RPKI to secure the core of the internet

Supporting facts:

  • The DNS and routing, or the ability of the network to deliver a packet to the proper destination, are three necessary functions for a properly functioning internet.
  • DNSSEC and RPKI are security protocols that use digital signatures to verify DNS responses and BGP announcements respectively
  • Both DNSSEC and RPKI have a shared responsibility between signing and validation


Both DNSSEC and RPKI are prepared for a potential post-quantum scenario

Supporting facts:

  • Both DNSSEC and RPKI have algorithm agility built-in
  • When a post-quantum cryptographic algorithm is standardized, it can be applied to both DNSSEC and RPKI


Mandating technology is generally not a good idea

Supporting facts:

  • Mandating technology has failed in past instances
  • Issues with cost and benefit discrepancy in IOT and DNSSEC/RPA
  • Post-quantum algorithms proposed so far are less than satisfactory


Report

The discussion centres around the vital role of DNSSEC (Domain Name System Security Extensions) and RPKI (Resource Public Key Infrastructure) in securing the fundamental structure of the internet. These security protocols are instrumental in safeguarding the integrity and authenticity of DNS responses and BGP (Border Gateway Protocol) announcements, respectively.

DNSSEC and RPKI operate by utilising digital signatures to verify the legitimacy of DNS responses and BGP announcements. This verification process ensures that the network delivers data packets to the correct destination, maintaining the proper functioning of the internet. The speakers unanimously recognise the crucial importance of DNSSEC and RPKI, highlighting their shared responsibility in both signing and validation processes.

On a related topic, there has been a debate concerning the potential weakening of cryptographic algorithms and the inclusion of backdoors to enable access. However, Carlos, one of the speakers, expresses a negative sentiment towards this notion. He asserts that such actions would be unwise, potentially compromising the security of cryptographic systems.

This viewpoint aligns with SDG 16, which focuses on ensuring peace, justice, and strong institutions. A positive aspect discussed is that both DNSSEC and RPKI have algorithm agility built into their design. This feature ensures that they can adapt to incipient post-quantum cryptographic scenarios.

Consequently, when post-quantum cryptographic algorithms are standardized, they can be effectively incorporated into DNSSEC and RPKI, providing continued security measures against quantum threats. The debate also encompasses the challenge of mandating technology, with the speakers highlighting instances where such endeavors have proven unsuccessful.

They note the issues surrounding cost and benefit discrepancies, particularly in the context of the Internet of Things (IoT) and DNSSEC/RPKI implementation. Furthermore, while post-quantum algorithms have been proposed, they have not yet achieved a satisfactory level of performance.

In conclusion, the speakers collectively emphasize the importance of DNSSEC and RPKI in securing the core infrastructure of the internet. Their positive sentiment towards the efficacy of these protocols underscores their significance in maintaining a properly functioning internet. Nonetheless, there is a negative sentiment towards weakening cryptographic algorithms, highlighting the potential risks associated with such actions.

The speakers also acknowledge the need for flexibility and tailored approaches when addressing different technologies, rather than enforcing a one-size-fits-all mandate. Ultimately, this discussion highlights the ongoing challenges and complexities associated with internet security and the need for continued research and adaptation to effectively counter emerging threats.

Maria Luque

Speech speed

158 words per minute

Speech length

3182 words

Speech time

1205 secs


Arguments

Quantum technologies, especially quantum computing, pose both significant challenges and opportunities in terms of cybersecurity

Supporting facts:

  • Quantum computing has the potential to break our current cryptographic systems and expose confidential information
  • Technologies like Post-Quantum Cryptography (PQC) and Quantum Key Distribution (QKD) are being developed to combat this threat
  • PQC, while not yet standardized, can be applied today and is software-based, while QKD is hardware-based and might serve as a mid-term solution


Small tech companies rely mainly on the infrastructure of large tech companies when it comes to meeting the challenges of quantum technologies

Supporting facts:

  • Small companies generally use platforms like AWS, Microsoft Azure, Google, etc.


Despite being not favourable, PQC algorithms inserted in the cloud will be the optimal solution for small companies to secure their data in the next five to 10 years


Different countries should focus on their strengths and specialties when it comes to planning their national quantum strategies

Supporting facts:

  • Spain chooses to invest in areas they excel at such as optics and mathematics


Report

Quantum technologies, specifically quantum computing, present challenges and opportunities in terms of cybersecurity. The concern is that quantum computing has the potential to break current cryptographic systems and expose sensitive information. To combat this threat, researchers are developing technologies like Post-Quantum Cryptography (PQC) and Quantum Key Distribution (QKD).

PQC, although not yet standardized, can be applied today as a software-based solution, while QKD requires substantial investment and the creation of new secure communication infrastructures. It is argued that governments and the technology industry need to continuously and significantly invest in quantum technologies to ensure data security in the face of the quantum threat.

QKD, in particular, requires high investment and the establishment of entirely new infrastructures for secure communication. On the other hand, tech companies have already started implementing PQC into their solutions, showing their recognition of the need to adapt to quantum technologies.

Organizations also need to assess and adapt their information security structures to prepare for the quantum threat. They should understand their information architectures, level of encryption, and capabilities necessary for transitioning to quantum security. The approach for organizations may vary depending on their size, with smaller ones potentially adopting PQC and larger ones engaging in quantum communication networks.

For small tech companies, the infrastructure provided by large tech companies like AWS, Microsoft Azure, and Google is crucial for addressing the challenges posed by quantum technologies. These platforms serve as a foundation for smaller companies to navigate the complexities of quantum computing.

Deploying PQC algorithms in the cloud is considered a potential solution for securing data for small companies in the next five to ten years. Despite not being favoured by some, it is argued that deploying PQC algorithms in the cloud offers optimal data security for small companies.

However, there is debate regarding this approach, with some opposing the practice for maintaining data security. Countries are encouraged to focus on their strengths and specialties when planning their national quantum strategies. For example, Spain has chosen to invest in areas where it excels, such as optics and mathematics, to drive its quantum technology development.

In conclusion, quantum technologies pose both challenges and opportunities in cybersecurity. Addressing the quantum threat requires significant investments in quantum technologies, assessments and adaptations of information security structures, and consideration of alternative solutions like deploying PQC algorithms in the cloud.

Additionally, countries should strategically focus on their strengths and specialities to plan effective national quantum strategies. Ongoing research and discussions are needed in this rapidly evolving field.

Moderator – Carina Birarda

Speech speed

114 words per minute

Speech length

644 words

Speech time

340 secs


Arguments

There has been a significant increase in cybersecurity incidents at the international level.

Supporting facts:

  • Global interconnectivity is a key factor behind this trend.
  • Emergence of sophisticated criminal activities like crime as a service


Adoption of internationally-recognised cybersecurity best practices is a fundamental challenge

Supporting facts:

  • Just a few number of organizations practice these standards
  • The lack of adoption is a global issue


Cybersecurity is a global issue that necessitating international collaboration for combating it

Supporting facts:

  • Cyberattacks do not respect borders or jurisdictions.
  • Information sharing in international level is imperative.


It is essential to understand the threats we are facing for proper implementation of cybersecurity practices.

Supporting facts:

  • By understanding threats that IoT, web, quantum technologies are facing, best practises can be selected.


Report

This extended summary highlights the main points and arguments presented in the given information on cybersecurity. It also provides more details, evidence, and conclusions drawn from the analysis. The first argument states that there has been a significant increase in cybersecurity incidents at the international level, which is viewed as a negative trend.

This can be attributed to the global connectivity that has become a key factor behind this increase. Additionally, the emergence of sophisticated criminal activities, such as crime as a service, has further contributed to the rise in cybersecurity incidents. The supporting evidence for this argument is the fact that cyberattacks are often conducted by actors in multiple countries, indicating the global nature of the issue.

The second argument emphasizes the fundamental challenge of adopting internationally-recognised cybersecurity best practices. It is highlighted that only a few organisations currently practise these standards, and the lack of adoption is a global issue. The evidence supporting this argument includes the observation that just a small number of organisations implement these best practices, indicating a need for widespread adoption to enhance cybersecurity at both national and international levels.

The third argument stresses that cybersecurity is a global issue that necessitates international collaboration for effective mitigation. The fact that cyberattacks do not respect borders or jurisdictions is put forward as evidence for the need for international cooperation. Additionally, it is stated that information sharing at the international level is imperative for combating cybersecurity threats.

This argument highlights the importance of collaboration between countries to establish a robust global cybersecurity framework. The fourth argument suggests that understanding the threats facing IoT, web, and quantum technologies is essential for implementing proper cybersecurity practices. By gaining a comprehensive understanding of these threats, appropriate best practices can be selected and implemented.

The evidence supporting this argument is the observation that proper implementation of cybersecurity practices can only be achieved by addressing the specific threats posed by emerging technologies. In conclusion, the extended summary highlights the increasing number of cybersecurity incidents on an international scale as a negative trend.

The adoption of internationally-recognised cybersecurity best practices is identified as a fundamental challenge, with only a small number of organisations currently practising these standards. It is established that cybersecurity is a global issue requiring international collaboration for effective mitigation. Understanding the specific threats posed by emerging technologies is emphasised as crucial for implementing proper cybersecurity practices.

Overall, the analysis underscores the need for international cooperation and comprehensive measures to address the growing cybersecurity challenges.

Nicolas Fiumarelli

Speech speed

158 words per minute

Speech length

380 words

Speech time

145 secs


Arguments

Enforcing the adoption of technologies like RPKI, DNSSEC, IOT security standards, and quantum resistant algorithms via legislation

Supporting facts:

  • Increasing number of IoT devices
  • Development of quantum computing
  • Existence of security standards made by IETF for IoT devices
  • The cost of implementing advanced technologies is high


Report

During the discussion, the speakers emphasised the necessity of implementing security technologies, such as RPKI, DNSSEC, IoT security standards, and quantum-resistant algorithms, through legislation. They pointed out that the rising number of Internet of Things (IoT) devices and the advancements in quantum computing pose significant security risks.

These risks can be mitigated by the adoption of robust security measures. The speakers also highlighted the existence of security standards developed by the Internet Engineering Task Force (IETF) specifically for IoT devices. These standards provide guidelines and best practices to ensure the security of IoT networks and data.

However, one speaker questioned why these security technologies are not universally enforced in all Information and Communication Technology (ICT) systems through legal obligations. It was acknowledged that the implementation of advanced security technologies comes with a high cost. This cost may pose a challenge to widespread adoption.

Nonetheless, the importance of safeguarding critical infrastructure and personal information against cyber threats and data breaches justifies the investment in these technologies. Overall, the sentiment during the discussion was neutral, indicating a balanced examination of the topic. The speakers’ arguments and evidence provided a comprehensive understanding of the urgency to implement security technologies, alongside the challenges associated with their implementation.

The discussion aligned with SDG 9: Industry, Innovation and Infrastructure, as it emphasised the need for secure and resilient ICT systems to support sustainable development. Through this analysis, it becomes evident that the adoption of security technologies through legislation should be encouraged and prioritised.

This will help ensure the protection of IoT devices and networks, while also addressing the growing threat of quantum computing to traditional encryption methods. Additionally, the development and enforcement of security standards can play a crucial role in enhancing cybersecurity practices across various industries.

In conclusion, the discussion underscored the significance of deploying advanced security technologies and standards to safeguard ICT systems. Although challenges such as high implementation costs exist, the speakers highlighted the urgency to address these concerns and apply security measures throughout the industry.

By doing so, they aimed to emphasise the need for a comprehensive approach to cybersecurity, simultaneously addressing both technological advancements and legal enforcement.

Olga Cavalli

Speech speed

151 words per minute

Speech length

1399 words

Speech time

555 secs


Arguments

Latin America faces unique technological and internet infrastructure challenges due to economic and distribution inequalities

Supporting facts:

  • Olga Cavalli is a university teacher at University of Buenos Aires, teaching internet infrastructure and telecommunications infrastructure
  • She works in public policy in Ministry of Foreign Affairs
  • She’s currently the National Director of Cybersecurity


Latin America needs to increase its participation in policy dialogues related to internet, as it’s different from other regions

Supporting facts:

  • Cavalli helped in creation of a training program for professionals to learn the rules of the Internet, understand its challenges and to participate more in policy dialogues


The pace of ICT and IoT adoption is very fast, likely leading to increased vulnerabilities due to lack of initial security designs

Supporting facts:

  • There are estimates that there will be 22,000 million to 50,000 IoT devices next year


Argentina has implemented several cybersecurity policies for national administration

Supporting facts:

  • In Argentina, there’s a binding resolution that the national administration must prepare a security plan, assign a focal point for contact and share information in case of an incident
  • A manual has been developed for them on what to do in the event of an incident
  • New cybersecurity strategy has also been approved


Expresses concern over developing economies and small to medium enterprises being consumers of technologies developed elsewhere

Supporting facts:

  • The most significant technology companies, such as AWS, Microsoft Azure, Google, are expected to provide solutions based in technologies like PQC algorithms and cloud computing
  • Countries like Spain specializing on certain aspects of quantum technology development due to lack of resources


Report

Latin America faces unique technological and internet infrastructure challenges due to economic and distribution inequalities. These challenges stem from the disparities in wealth and resources within the countries of the region. As a result, access to and the quality of technology and internet infrastructure vary greatly across Latin America.

To address these challenges, there is a need for increased participation in policy dialogues related to the internet in Latin America. Olga Cavalli, a university teacher at the University of Buenos Aires, has played a key role in creating a training program for professionals to learn about the rules of the Internet, understand its challenges, and participate more actively in policy dialogues.

This initiative aims to empower Latin American countries to have a stronger voice in shaping internet policies that are suitable for their specific needs and circumstances. Furthermore, the rapid adoption of Information and Communication Technology (ICT) and Internet of Things (IoT) devices in Latin America has raised concerns about increased vulnerabilities due to the lack of initial security designs.

It is estimated that there will be between 22,000 million to 50,000 IoT devices in the region next year. The fast pace of adoption leaves little time for proper security measures to be implemented, which could lead to potential breaches and threats in the future.

Argentina has taken proactive steps in addressing cybersecurity concerns. The national administration has implemented binding resolutions that require the preparation of a security plan, the assignment of a focal point for contact, and information sharing in the event of a cyber incident.

Additionally, a manual has been developed to guide the national administration on how to respond to such incidents. A new cybersecurity strategy has also been approved, showcasing Argentina’s commitment to ensuring security in the digital realm. Developing countries and small to medium enterprises (SMEs) face significant challenges in keeping up with rapid technological changes.

These challenges include restrictions on importing certain products and hardware, as well as a lack of human resources, as trained professionals often migrate to developed countries in search of better opportunities. The combination of limited resources and a lack of technical expertise hampers their ability to understand and afford new technologies, creating a widening technology gap.

Moreover, developing economies and small to medium enterprises are often consumers of technologies developed elsewhere, which raises concerns about the global technology gap. While major technology companies like AWS, Microsoft Azure, and Google are expected to provide solutions based on emerging technologies like Post-Quantum Cryptography (PQC) algorithms and cloud computing, developing economies and SMEs rely on these technologies without actively contributing to their development.

This dependence on technologies developed elsewhere puts them at a disadvantage. To address these challenges, capacity building and awareness are advocated as essential measures. By investing in the development of local technological capabilities and creating awareness about the importance of technology, Latin American countries can reduce their reliance on technologies developed by other countries.

This would help narrow the global technology gap and allow them to actively contribute to technological advancements that suit their specific needs. In conclusion, Latin America faces unique challenges in technological and internet infrastructure due to economic and distribution inequalities.

Increasing participation in policy dialogues, addressing cybersecurity concerns, and bridging the technology gap are crucial steps towards creating a more inclusive and technologically advanced region. Additionally, capacity building and raising awareness about technology will empower Latin American countries to shape their own technological future.

Wout de Natris

Speech speed

158 words per minute

Speech length

1388 words

Speech time

528 secs


Arguments

Lack of deployment of cybersecurity measures in IoT is a major issue

Supporting facts:

  • Cybersecurity was not an issue in the early internet, but has become a problem with worldwide connectivity
  • The technical community has adjusted the code, but most governments and industries do not demand security by design
  • IoT devices are usually insecure by design


Security should be inherent in all forms of ICT and should undergo formal testing before entering the market

Supporting facts:

  • When new technology enters the market, it is usually untested for security
  • ICT cannot be legislatively controlled because of the rate of innovation


Report

The lack of cybersecurity measures in Internet of Things (IoT) devices is a pressing issue that demands attention. While the technical community has made efforts to address this concern, the majority of governments and industries have not yet prioritised security by design in IoT.

This oversight has resulted in widespread vulnerability and the potential for malicious attacks. Initially, cybersecurity was not a concern during the early days of the internet, as worldwide connectivity was limited. However, with the rapid expansion and integration of IoT devices into our daily lives, the need for robust security measures has become increasingly evident.

Unfortunately, IoT devices are often designed without adequate security measures, making them susceptible to cyber threats and potentially compromising users’ personal data. One argument put forth is that governments and large corporations should play a crucial role in setting the standard for security in IoT.

An example of this proactive approach is seen in the Dutch government, which has taken the lead by imposing the deployment of 43 different security standards. This demonstrates the importance of demanding high levels of security in IoT devices. Another concerning aspect is the lack of rigorous security testing before new technology, including ICT, enters the market.

The fast pace of innovation and the urgency to bring products to market often result in inadequate security measures. It is argued that security should be a fundamental consideration and undergo formal testing before any form of ICT is released, minimising risks for users.

On a more positive note, international cooperation and information sharing are emphasised as pivotal factors in staying ahead in terms of cybersecurity. The power of the internet lies in its ability to facilitate global discussions, enabling the sharing of knowledge and experiences across borders.

Governments and larger industries need to be made aware of their role and potential influence in addressing cybersecurity challenges, fostering collaboration and cooperation on a global scale. In conclusion, the lack of cybersecurity measures in IoT devices poses a significant challenge that needs to be addressed urgently.

Efforts from both the technical community and various stakeholders are required to push for security by design and the implementation of robust standards. Governments and large corporations hold the responsibility of leading the way, setting the standards for security in IoT.

In addition, rigorous security testing should become a prerequisite before any form of ICT is introduced to the market. Furthermore, international cooperation and information sharing are critical for staying ahead in the ever-evolving landscape of cybersecurity. Only through collaboration can we tackle the challenges and vulnerabilities inherent in the interconnected world of IoT.

Resilient and Responsible AI | IGF 2023 Town Hall #105

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Mariam Jobe

The Africa Youth Internet Governance Forum was recently held, highlighting the significant role of young people in shaping the digital future. The forum covered various topics, including cyber security, data privacy, digital inclusion, and the need for comprehensive data laws. One key argument was the lack of knowledge among young people about these important issues, emphasizing the need for educational outreach efforts.

The forum also emphasized the importance of internet access and digital literacy in underserved and rural communities. It recognized that improving internet access and digital literacy is crucial to ensuring equal opportunities and promoting socio-economic development.

Discussions addressed the issue of cyber crimes and the need for safe spaces to report such problems. The importance of an ethical framework surrounding artificial intelligence was also highlighted. It was noted that some countries lack comprehensive data laws, hindering their ability to effectively address cyber crimes.

An intergenerational session between the youth and Members of Parliament (MPs) fostered collaboration and highlighted the importance of government-youth partnerships. Involving young people in policy development and decision-making processes is crucial.

In conclusion, the Africa Youth Internet Governance Forum underscored the pivotal role of young individuals in shaping the continent’s digital future. Increased education and awareness, inclusivity, ethical considerations, and citizen participation were identified as crucial components. Internet access and digital literacy in underserved communities were recognized, along with the need for collaboration between the youth and the government. The forum provided a platform to address pressing issues and generate innovative solutions for Africa’s digital transformation.

Audience

The analysis covered a range of topics discussed by various speakers. The first speaker expressed a disagreement with the commonly held belief that advanced tech elements such as AI and blockchain are the keys to innovation and development. Instead, they emphasized the importance of isolating, understanding, and tackling diseases like COVID-19. The speaker pointed out their seven years of engagement with individuals afflicted with COVID-19 and emphasized the importance of isolation in dealing with such diseases. They considered AI and blockchain distractions when it comes to public health crises.

Another speaker focused on the role of traditional forms of innovation and governance in driving improvement. They highlighted the contributions of African engineers and economists who are actively tackling COVID-19. The speaker emphasized the crucial role played by telecommunications regulators and considered traditional forms of governance, such as the rule of law, essential for improvement.

The role of AI in technological advancement was also discussed by a speaker with 40 years of experience in technology. They cited the example of human genomics and how integration of technology did not eliminate medical jobs but enhanced precision medicine. The speaker viewed AI as just another technology that should be adapted and integrated, rather than feared.

Legislators’ role in adapting technology and their potential to get distracted by job loss fears were highlighted by another speaker, who was both a lawmaker and technologist. They emphasized the importance of focusing on adapting technology quickly to avoid being left behind.

The importance of sharing reports and learnings with the leadership of each respective National Assembly was emphasized by a participant who presented a report in Abuja. They urged not to turn legislative participation into a mere holiday or jamboree but to make meaningful contributions.

Another participant suggested the need for a directory of ongoing initiatives at the continental level to be shared among all parliamentarians. They mentioned learning about several initiatives at the continental level for the first time during the meeting.

The need for international development partners to customize their support based on the priorities of each country was emphasized by a participant. They believed that generic support often does not address the priorities of individual countries and asserted that each country should determine its own priorities and approach development partners accordingly.

Concerns were raised over the limited participation of African countries in hosting the Internet Governance Forum (IGF), with less than 20 out of 54 African nations being active in hosting it. The speaker expressed the need for African nations to be more involved and accountable in terms of hosting IGF.

The establishment of an accountability framework within IGF for multi-stakeholders and countries was advocated by a speaker. They urged the need for a mechanism to hold stakeholders accountable.

The need for a vision and strategic plan for growing and strengthening IGF within Africa was also highlighted by a speaker. They emphasized that having such a plan would be instrumental in achieving the goals of IGF.

The potential contribution of assistive technologies to the GDP was mentioned, highlighting the importance of utilizing these technologies to serve disabled communities.

It was noted that African meetings and conferences often neglect the discussion on disabled communities, indicating a lack of attention and inclusivity in these forums.

The utilization of traditional African communal values to ensure the realization of IGF goals was suggested by a speaker, emphasizing the importance of cultural context in achieving the goals.

Overall, the analysis highlighted the need for innovation, inclusive policies, and partnerships to achieve sustainable development goals. It shed light on the importance of integrating advanced technology responsibly, prioritizing country-specific priorities, and ensuring inclusivity in decision-making processes. The speakers’ perspectives provide valuable insights into various aspects of development, governance, and technology, contributing to the ongoing discourse on achieving sustainable development.

Martin Koyabe

The analysis provided covers several key points related to cyber capacity building and cybersecurity in Africa.

The first point discussed is the AU-GFCE collaboration project, which aimed to build resilience and ensure cyber capacity building within the continent. This project focused on three key areas: assessment of priorities for African countries regarding cyber issues, sustainability through investment in expertise, and establishment of institutional memory. The analysis highlights that significant investment has been made in digital infrastructure due to the increased demand for these services during the COVID-19 pandemic.

The next area of focus is the need to enhance security in Africa through investing in training and developing cyber skills. It is mentioned that protecting infrastructure is a high priority for African countries. The GFCE has established the Africa Cyber Experts Community, which consists of over 80 experts from 37 countries. Additionally, there is a call to facilitate opportunities and development of cyber skills for individuals in marginalized areas. The GFCE and AU have also established the network of African women in cyber.

The importance of political will and funding in boosting cybersecurity is emphasized in the analysis. It is noted that many projects in Africa lack sustainable funding or resources, leading to their discontinuation or inadequate sustainment after the primary funding ends. The analysis argues that countries need to internally invest in cybersecurity to ensure the sustainability of projects. Furthermore, there is a critical need for sensitization at various political and decision-making levels to enhance cybersecurity efforts.

The analysis also mentions an upcoming meeting in Ghana, where cybersecurity experts and capacity building development partners will discuss cyber-related issues. It is highlighted that this meeting is of significant importance as it is the first of its kind.

The situation of AFRINIC, an organization facing challenges and undergoing issues, is also addressed. The analysis mentions that AFRINIC is currently under litigation and requires the resolution of its problems. However, it is recommended to reserve extended comments on this situation and let the process take its due course.

Finally, the importance of sustaining mechanisms for auditing authentic organizations is emphasized. This is seen as crucial in ensuring the effectiveness and credibility of these organizations.

Overall, the analysis focuses on the need for cyber capacity building and cybersecurity in Africa, highlighting the importance of various factors such as collaboration, investment, political will, funding, and sustainability. It also provides insights into specific initiatives and challenges, contributing to a comprehensive understanding of the topic.

Chidi

The African IGF (AIGF) emphasises the importance of a multi-stakeholder approach to ensure its success. This approach involves collaboration from various stakeholders including government, civil society, academia, and the private sector. The AIGF recognises that for effective addressing of the challenges and opportunities of the digital landscape, involvement of all these stakeholders in decision-making is necessary.

Creating an enabling environment is a key factor for the success of the AIGF. This refers to the need for policies and regulations that support the growth of the digital economy and ensure equal access to digital technologies for all. It is also crucial to enforce instant cyber laws to protect individuals and organisations from cyber threats and ensure the security of digital systems.

In addition to an enabling environment and instant cyber laws, political will is essential for shaping the digital landscape. The AIGF highlights the importance of political leaders showing commitment to promoting digital inclusion and embracing technology for development. This includes providing necessary resources and support for digital initiatives.

Another important aspect discussed is the need for inclusivity and ethical AI principles. The AIGF argues that an inclusive digital environment should be created to ensure everyone benefits from technological advancements. This includes addressing the issues of digital divide and ensuring no one is left behind. The AIGF also highlights the importance of a legislative framework to promote ethical AI principles and prioritize inclusivity.

Nigeria is recognised as a country playing a pivotal role in shaping the trajectory of technological advancement. The country has put in place strategic objectives, initiatives, regulatory instruments, and platforms to foster the growth of the digital economy. Nigeria has also taken major steps towards harmonising rights of way, which are crucial for the development of ICT infrastructure.

However, Africa still faces challenges such as inadequate visibility of individual countries’ activities and insufficient collaborations within the African region. It is imperative for African countries to share information in real-time and work together to achieve their technological goals.

Investment in research and development for emerging technologies is seen as a fundamental step towards technological advancement. The AIGF urges stakeholders to seize the opportunity and increase research capacity to drive innovation and stay at the forefront of emerging technologies.

AFRINIC, responsible for managing internet resources in Africa, is mentioned to be in a state of crisis or dysfunctionality. This raises concerns about its impact on internet security and sustainability in Africa. The source of the internet, considered a commodity, lies with the IP networks.

Another key argument made is the importance of Africa taking charge of its internet infrastructure to maintain cybersecurity. The AIGF highlights the need for African countries to have control over their internet infrastructure to effectively combat cybersecurity issues. This requires strengthening internet governance and building strong institutions to ensure the security and stability of the internet.

In conclusion, the African IGF advocates for a multi-stakeholder approach, an enabling environment, instant cyber laws, and political will to shape the digital landscape. Inclusivity and ethical AI principles, backed by a legislative framework, are also considered essential. Nigeria plays a crucial role in technological advancement, but challenges such as inadequate visibility and insufficient collaborations persist. Investment in research and development is necessary, and concerns are raised about the crisis within AFRINIC and its impact on internet security in Africa. Taking charge of internet infrastructure is crucial for cybersecurity on the continent.

Moctar Seck

Africa is facing significant digital challenges that hinder its progress in the digital age. One of the main obstacles is the deficit of connectivity, with infrastructure issues preventing 6% of the African population from accessing the internet. This lack of connectivity acts as a barrier to economic development and social inclusion. To overcome this challenge, Africa needs to ensure that broadband is accessible to everyone on the continent by 2030, which would require substantial investment from the private sector.

Another crucial challenge is the gender digital divide. Presently, only 45% of females in Africa are connected to the internet, compared to 85% of males. Bridging this divide is essential for achieving gender equality in the digital era. It is worth noting that the internet market in Africa has the potential to reach $180 million by 2025, further highlighting the economic opportunities that can be unlocked by addressing the gender digital divide.

Furthermore, the lack of legal identity for 500 million Africans poses a significant obstacle to digital transformation. Without legal identification, individuals are unable to fully participate in the digital economy and access essential services. Resolving this issue is crucial to ensure that every African can benefit from the opportunities presented by the digital age.

Cybersecurity challenges are also prevalent in Africa, with the cost of cybersecurity issues amounting to 10% of the continent’s GDP. Additionally, terrorists are increasingly exploiting digital avenues, underscoring the need for robust cybersecurity measures to protect individuals and institutions in Africa.

While artificial intelligence (AI) presents opportunities for growth and innovation, it also brings challenges that require regulation. Africa’s young population, projected to constitute 70% of the continent’s population by 2050, needs to be prepared for advancements in AI. Implementing regulations around AI is necessary to harness its potential benefits while mitigating associated risks.

The Global Digital Compact, which will shape the future of digital development globally, necessitates African input to ensure equitable sharing of digital technology benefits. Active participation from Africa in shaping this compact is essential.

Resolving the AFIRINIC issue, considered of utmost importance, requires a meeting between the African Economic Community (AEC), the Economic Commission for Africa (ECA), and Smart Africa. The resolution of this issue is crucial for the development of the continent’s digital infrastructure.

Network access and control are vital for digital transformation, particularly for Africa’s large youth population, accounting for 42% of global youth. Lack of access and control stifles progress, hindering the continent from fully harnessing the potential of digital technologies.

The reliance on AI and its perpetual usage of network data raise concerns about privacy and security. Establishing a regulatory framework is important to address these issues and ensure responsible and ethical use of AI.

Capacity building for regulators is essential to keep up with rapid technological advancements, such as AI, blockchain, IoT, and nanotechnology. Regulators need to stay ahead of these developments and understand their implications to effectively safeguard users’ rights and interests.

The African Internet Governance Forum (IGF) is a growing multi-stakeholder forum where key issues related to digital technology are discussed. It distinctively differs from the World Summit on the Information Society (WSIS) forum, where government decisions are made. Increased participation from government, the private sector, and civil society in the African IGF is necessary for a more inclusive and comprehensive discussion on digital technology.

The organization of the IGF in Africa depends on the renewal of its mandate. The successful hosting of the IGF in Ethiopia highlights the potential for its further expansion in Africa. However, the renewal of its mandate by 2030 is crucial to ensure its continuity and effective contribution to digital governance in the region.

In conclusion, Africa faces significant digital challenges that need to be addressed for the continent to fully participate in the digital age. These challenges include the deficit of connectivity, the gender digital divide, the lack of legal identity, cybersecurity issues, opportunities and challenges posed by AI, and the need for capacity building for regulators. Active participation by Africa in shaping the Global Digital Compact is crucial, while the resolution of the AFIRINIC issue is of utmost importance. Furthermore, the African IGF provides a platform for important discussions on digital technology, and its expansion and inclusive participation are necessary for effective digital governance in Africa.

Sam George

The analysis focuses on discussions among speakers regarding various topics related to data policies and digital infrastructure in Africa. One key point highlighted is the important role played by parliamentarians in bridging the gaps between civil society, the technical community, and the government. By attending events such as the African School on Internet Governance (AfriSEEG) and the Internet Governance Forum (IGF), parliamentarians gain insights into the challenges and opportunities in the digital realm. They can then initiate or support the government in developing legislation to implement data policies.

Another crucial aspect emphasized is the need for harmonisation of data policies across African countries. The case of the Nigerian company Jumia operating in multiple African nations illustrates how challenges can arise without proper data flow across borders. Without harmonisation, these challenges can hinder the growth and development of businesses operating across countries. Therefore, speakers argue for the adoption of consistent and coordinated data policies across the continent to promote a conducive environment for cross-border data flow.

The importance of prioritising funding for digital infrastructure also emerged as a key point. In upcoming budgeting cycles, it is recommended to improve funding for digital public infrastructure. This infrastructure would serve as a secure space to house data and support the stability and growth of digital services in Africa. Given the increasing importance of digital technology in various sectors, adequate funding for digital infrastructure is seen as crucial for the continent’s socioeconomic development.

Regarding the intersection of state security and digital rights, a neutral stance is taken. While it is recognised that the state has the right to secure data, it should not infringe upon the digital rights of citizens. Striking a balance between these two aspects is necessary to ensure the protection and privacy of individuals’ data while maintaining an environment of national security.

Another noteworthy point is the significance of building the capacity of parliament members through civil society engagement. Deepening knowledge on legislative subjects and engaging with parliamentary portfolio committees are seen as important steps in empowering parliamentarians to effectively address the complex challenges of data policies and digital infrastructure.

Lastly, the analysis also highlights specific stances taken by some speakers. One speaker supports the implementation of the Automated Union (AU) data policy framework and emphasises the need for legislation to support its implementation. Additionally, the speaker suggests the importance of data policy harmonisation across the African continent.

Another speaker advocates for increased funding towards digital public infrastructure. The Parliamentary Network on Internet Governance aims to improve funding allocation, and it is noted that most parliaments will be resuming work in a few weeks, providing an opportunity to further push for increased funding.

In conclusion, the analysis highlights the key points and arguments made by speakers on various aspects of data policies and digital infrastructure in Africa. These include the vital role of parliamentarians, the need for harmonised data policies, prioritisation of funding for digital infrastructure, and the balance between state security and digital rights. Civil society engagement and capacity building for parliament members are also seen as crucial. The implementation of the AU data policy framework and increased funding towards digital public infrastructure are supported. Overall, the analysis provides valuable insights into the discussions surrounding data policies and digital infrastructure in Africa.

Moses Bayingana

Digital transformation is considered critical for Africa’s development and plays a significant role in achieving Agenda 2063 and the UN Sustainable Development Goals. The African Union Commission has developed strategies to drive digital transformation and boost Africa’s digital economy. Over the past decade, Africa’s contribution to GDP has increased from 1.5% to over 3% due to advancements in the digital sector. This growth highlights the potential for further economic development through increased digitisation.

To facilitate the digitisation process, the AU has adopted the AU Data Policy Framework to ensure the smooth flow of data. Moreover, support has been extended to Internet Governance Forum organisations, demonstrating the commitment to fostering a conducive environment for digital transformation.

Investing in Africa’s youth is crucial as they hold the potential to drive Africa’s digital economy. With approximately 60% of the continent’s population below the age of 25, Africa’s youth play a significant role in shaping its future. Additionally, it is projected that Africa’s population will reach 2.5 billion by 2050, further emphasising the importance of youth empowerment to harness their potential in the digital sector.

The need to bridge the digital divide is also addressed. Efforts are being made by the African Union Commission to develop strategies and frameworks to regulate digital transformation and ensure the continent’s digital future. The adoption of the AU Convention on Cyber Security and Personal Data Protection is a notable step in safeguarding Africa from cybercrime, as it is identified as a prime target due to its low awareness rate.

In terms of implementation, an institutional architecture and information framework have been devised to monitor the progress of the digital transformation strategy. Member states have nominated focal points for digital transformation, ensuring a collective and coordinated approach towards achieving the set goals. Engagement with all actors across the continent is planned to foster collaboration and support in the implementation process. Furthermore, a comprehensive evaluation is scheduled for 2025, which will provide insights into the progress made and identify areas that require further attention.

Finally, a consultative approach is being employed to grow the African Internet Governance Forum. Recognising the importance of partnerships, strategies are developed through a consultative process, and collaboration is maintained with the European Commission and other stakeholders.

In conclusion, Africa recognises the significance of digital transformation for driving development and achieving its strategic goals. With a focus on youth empowerment, bridging the digital divide, regulating digital transformation, and monitoring implementation, Africa is positioning itself for a prosperous digital future. The efforts of the African Union Commission, coupled with collaboration from key stakeholders, demonstrate the commitment to harnessing the power of digitisation for the benefit of the continent and its people.

Moderator

The African Internet Governance Forum (IGF) discussed various key topics related to internet governance in Africa, including the role of parliament members as a bridge between civil society, the technical community, and the government. It was emphasized that parliament members play a critical role in initiating or supporting government efforts to implement data policy frameworks. Harmonized data policies across African countries were also identified as necessary for seamless data management and operations of companies like Jumia. Furthermore, the forum highlighted the need to increase funding for digital public infrastructure, and the importance of civil society’s engagement with parliamentary portfolio committees for effective legislation. The pivotal role of youth in shaping the digital future was emphasized, as well as the need for advocacy for changes in the digital landscape involving all stakeholders. The forum also stressed the importance of improving internet access and digital literacy from grassroots levels, and the need for safe spaces to report cyber crimes and ethical frameworks tailored to the African context. Investment in Africa’s young generation for driving the digital economy and sustained funding for cybersecurity projects were identified as crucial. The African Network Information Centre (AFRINIC) dysfunctionality was acknowledged, while Nigeria’s readiness to host the global IGF was welcomed. The forum also highlighted the need to address the gap in internet penetration in Africa. Overall, the African IGF provided a platform for valuable discussions and emphasized collaboration, policy development, and investment in various areas of internet governance. The active monitoring of the digital transformation strategy for Africa was also highlighted as a positive step.

Lillian Nalwoga

The African Internet Governance Forum (IGF) has seen increasing interest and participation from African stakeholders. The effectiveness of the multi-stakeholder approach in the African IGF has become evident, with support from parliamentarians, ministers, and the private sector. The presence of the parliamentarian network and African Parliamentarian Symposium highlights the importance of collaboration in shaping internet governance in Africa. Additionally, governments and the private sector at the regional and national levels have shown interest in engaging with the IGF.

One key argument is the need to implement recommendations and discussions from the IGF at the regional and national levels. These recommendations, discussed in Choto and Abuja, should be applied to enhance internet governance practices across Africa.

There has been an increase in interest and participation of African stakeholders in the Internet Governance (IG) processes. Statistics from the host country show that 3,100 people registered for the IGF, both onsite and online. The eagerness of countries to host future forums indicates the growing importance of these conversations in Africa. This increasing interest reflects the recognition of effective internet governance’s impact on achieving Industry, Innovation, and Infrastructure (SDG 9) and Partnerships for the Goals (SDG 17) in Africa.

It is concluded that the IGF should continue to support and partner with African stakeholders, with significant support received from global partners through the UN IGF Secretariat. This support acknowledges Africa’s potential in the digital future and encourages collaboration and learning opportunities for the continent’s development.

In summary, the multi-stakeholder approach in the African IGF is effective and relevant. Implementing recommendations at regional and national levels is vital, considering the increasing interest and participation of African stakeholders in IG processes. Continued support and partnership between the IGF and African stakeholders are essential for the digital future of Africa.

Session transcript

Moderator:
Good afternoon, everyone. We just give ourselves 30 seconds. We should be on. As we wait, I want to confirm that our online panelists are there. Are you there? Yes, I am. Thank you. Perfect. Thanks. Moses, are you there? Moses from African Union Commission, are you there? Yes, I am. I’m connected. Thank you, Moses. So good afternoon again, and welcome to the African Union Open Forum 2023. We are delighted that you are all able to join us, and considering we have already taken a few minutes, we’ll just go into the program. And to start with, I will have Dr. Chidi Diogo, who will be giving us an overview or a highlight of what happened in the Africa IGF that was held in Abuja. And he is the head, New Media and Information Security, Nigeria Communication Commission. Dr. Chidi, the floor is yours.

Chidi:
Thank you very much. This report refers to the African Internet Governance Forum that was held between the 19th and the 24th. of September 2023 in Nigeria. To do my presentation, I prepared an outline that is written as follows. Distinguished guests, honorable delegates, ladies and gentlemen, we express our appreciation for the cooperation displayed during the recently concluded African Internet Governance Forum held in Abuja, Nigeria, between the 19th and the 21st of September 2023. Now the details of the forum are as shown on the board. The theme of the forum was transforming Africa’s digital landscape, empowering inclusion, security and innovation, followed by the facilitators of the program, which included the government of Nigeria and specifically the National Assembly, through the efforts of the various committees led by Senator Shuaibu Afolabi-Salusi and Honorable Adedjiji Stanley Olajide, and the Ministry of Communications and Innovations, the Nigerian Communications Commission and other relevant stakeholders. In general, the forum had an impressive turnout, recording nearly 3,105 participants, and then 700 of those were in person, while about 1,600 were virtually participated. Throughout the forum, there have been various activities leading up to the main celebration. First, there was African School of Internet Governance between the 13th and the 18th September 2023. The trust of the school was to build Internet governance in Africa, focusing on the African Union data policy framework. This was closely followed by African Parliamentary Symposium, and the trust for that was the contribution of the parliamentarians to shape digital trust on the African continent. Lastly, there was an African Youth Internet Governance Forum on the 18th of September. The trust was basically emerging technologies, leveraging innovation for sustainable development and youth empowerment. There were various sub-themes, numbering up to 40, but then the major caps had to do with cybercrime, human rights and freedom, universal access and meaningful connectivity, cyber security, digital device and inclusion, artificial intelligence and emerging technologies. So we had a very fruitful time in Nigeria, and then very meaningful deliberations we are undertaking, the summary of which I’m presenting as follows. Multi-stakeholder approach is key and required for the AIGF. The need for enabling environment cannot be overemphasized. Enforcement of instant cyber laws are very necessary, and the display of political will to shape the digital landscape is required. Not to mention the legislative framework, which in essence would promote ethical artificial intelligence principles and make inclusivity a data priority. And there’s a need to develop a strong foundation of digital identities across the continent. And lastly, the adoption of an African payment and settlement system. These five pillars were entirely agreed upon by the forum. And now to localize our efforts in Nigeria, the federal government of Nigeria, and just as we believe that so many other countries are doing, is playing a pivotal role of multi-stakeholderism in shaping the trajectory of technological advancement. And in doing so, it has put in place strategic objectives, initiatives, regulatory instruments and platforms for all stakeholders to come together from time to time to assess where we are, and then to most importantly determine where we’re going. And in overall, this ensures inclusivity, security, and innovation. Nigeria, like most countries that we have heard about, has also taken major steps towards the harmonization of rights of way across the 36 states of the federation, which means that the barrier to entry to play in our industry has been lowered. It also ensures that there’s a thorough connectivity and that there’s fair competition amongst the players, which also translates to ease of licensing intended operators. And of course, the price regime has been regulated, is open access and non-discriminatory. And finally, all these have contributed to what you might call the universal access and service obligations of the commission. As a continent, our work is just beginning in a very simple way, and we need to continue to work together to make Africa a shining example of digital progress. Together we’ll overcome the challenges and insist the opportunities that are emerging in our markets. While some countries have made significant progress, other countries cannot be said to have made similar progress. We identify some challenges that we must overcome as a continent, and these include inadequate visibility of individual countries’ activities, which can create problems in terms of sharing information. While our different countries are working tirelessly to ensure that our digital footprints are all over, it’s very important that we come together to share information in a way and a manner that will be real-time and achievable. We also fear that there’s insufficient collaborations within the African region. This second point somehow points to the first one, which means there’s need for continuous handshake amongst the various continental stakeholders. It did appear that research and development are inadequate across the favourite units. We all know how disruptive the emerging technologies can be, and then the speed with which we are eroding our everyday life, and therefore the need for collaboration, for research efforts to be made, cannot be overemphasized. What we don’t want to do is to continue to dwell in crying about the disruption of the OTTs and the other emerging technologies, where we can literally seize the opportunity to increase our research capacity and get funded. And lastly, there have been concerns raised about the inadequate platforms for capacity development, especially for the digital grassroots. So with the appreciation that I rendered at the beginning of this, I would like to conclude by saying that the African Internet’s Governance Forum held in Abuja in September attracted good participation across the entire Africa. As the host country, we are grateful for all those that attended. And we are even more grateful to those who took out the time to write to us to express their profound gratitude and to say to us how beautiful our country is. And so having talked about the prospects that we all stand to benefit, especially in research and collaborations, and then also identify the few challenges, it is very important that, as Africans, we’ll do what is needed for us to be able to compete effectively in the digital world. Thank you very much.

Moderator:
Thank you, Dr. Chidi, for that elaborate report about the Africa IGF. I’m now going to ask Honorable Sam George just to give key highlights from the parliamentarian symposium. Thank you.

Sam George:
Thank you very much, Madam Chair, and to the honorable members in the house, Big Mommy, Madam Mary, and everyone gathered here. For us as members of parliament, it was a very long IGF because we started with the AfriSEEG, the African School on Internet Governance, which was not a school. It was a boot camp. It was a boot camp that really stretched members of parliament. We started at 8 AM, closed at 8 PM, and had to submit our assignments by 5 AM the following day, thanks to Henriette. She’s not a very big friend of members. of Parliament. But we put out a very important document from that session that led us into the parliamentary track and and what that did for us was to highlight the opportunities that exist for members of Parliament to begin to act as the bridges between civil society, technical community and the executive or government in ensuring that we don’t leave anyone behind and we close the digital gap. Part of the key things that we discussed was the role of and highlighted in our parliamentary sessions was the role that members of Parliament have to play in ensuring that we either initiate or support executive government to bring legislation that will help with the implementation of the AU data policy framework because for us we realize that it’s important that as a continent we need to have harmonization of our data policies across the board. And one of the things that we realized was that data policies do not necessarily end, data policies do not necessarily just end with data protection legislation but also with the necessary harmonization and synchronization and one of the big examples we used was a Nigerian big tech company called Jumia. Jumia works as a Nigerian company but works across about 16 African countries including Ghana. So if we do not have proper data flows across African continents across the African continent we’re going to have challenges. So the issues of data sovereignty and data cross-border data flows came up highly and how as members of Parliament we need to ensure that even as we look at protecting critical and sensitive national data under the precepts of data sovereignty we also need to realize that we are increasingly connected and that we cannot survive on our own without cross-border data flows. Another key thing that we also looked at was the need for us to prioritize funding. Funding for digital infrastructure, digital public infrastructure is a very key thing that we need to look at in our country. So we’re looking to see how well we can improve the funding that goes to digital public infrastructure. We’re looking to see how well we can improve the funding that goes to digital public infrastructure. And that’s a key thing that African Parliamentary Network and Internet Governance is looking to do in terms of the budgeting cycle that’s going to happen in our various countries when parliaments resume the next few weeks. Most parliaments are resuming in about a week or two. So we’re looking to see how well we can improve the funding that goes to digital public infrastructure and the free flow of data. If you don’t have the infrastructure in your country in the first place to house the data, that houses it in a secure manner. Another key thing we discussed was the issue of that fine line between state security and digital rights of citizens. We recognize as members of parliament that the state has a right to have access to the data, to have access to the information, to have access to the data. However, the state must do so in a manner that does not infringe on the digital rights of citizens. So these are some of the things that as members of parliament we left Abuja with. And we’re very confident that as a network of members of parliament, we can pat ourselves on the back that most times they say MPs don’t sit in a room for very long. They don’t sit in a room for a very long time. They don’t sit in a room for a very long time. And we’re more passionate about internet governance than the tech community themselves. And you’ve got champions. You’ve got champions for you. But one of the big takeaways, which I’ll end on, if you leave me as a politician, we’ll talk beyond, until tomorrow, one of the key takeaways we left Abuja with was the fact that civil society and technical skills are very important to the people of the world, and that they don’t help build the capacity of members of parliament. And we’re very confident that as members of parliament, we can pat ourselves on the back that most times they say MPs don’t sit in a room for very long time. And we’re more passionate about internet governance than the tech community themselves. And you’ve got champions for you. But one of the key takeaways we left Abuja with was the fact that civil society and technical skills are very important to of parliament. You can only push legislation based on how deep your knowledge of a subject is. And so if civil society does not engage with parliamentary portfolio committees and members on those committees, and if we wouldn’t do a sample here, many people from civil society and ask them to mention five members of the portfolio committees in their national parliaments, many of them cannot. If you don’t build relationships with these members of this portfolio committees, you can only continue to cry outside but you won’t have the change that you want to see. And it was refreshing for us that we had members of parliament like Honorable Stanley Adediji from the Nigerian House of Reps and the chairman of the Senate committee. We’re told Nigerian senators don’t like to sit in meetings, but I mean they’ve shown us that they have first competence and secondly they’re willing to work if they are engaged. So civil society, you have champions of internet governance in parliamentarians, work with us to get what you need from government. Thank you very much.

Moderator:
Thank you Honorable Sam for those insights and we challenge you next year, same time we want to see what has been done in those areas. Now I give the floor to Mariam Jobe to give highlights of what happened in the youth session.

Mariam Jobe:
Hi, good afternoon. I’m Mariam Jobe as she already introduced and I will just highlight some of the key takeaways that we had from the Africa Youth Internet Governance Forum held a day before the main Africa IGF and it brought together a very diverse set of new voices in the youth perspective and you know we address critical issues related to internet governance, youth empowerment and emerging technology and we highly emphasize the pivotal role that young people play in shaping digital future and they’re important in their policy development and enforcement in this regard. We urge everyone, you know policymakers, relevant stakeholders including the members of Parliament, civil society and government, irrespective of their positions honestly, to advocate for changes in the digital landscape. We also address a very concerning issue which is the lack of knowledge among young people about issues around internet governance, particularly cyber security laws, data privacy, digital inclusion and you know continuous outreach efforts to educate and empower youth who are unaware of internet governance issues and how it affects them in their daily lives and their daily usage of the internet and participants call for initiatives to integrate internet governance and technology into the education system in our various African countries and especially the underserved communities and the rural communities. We also highlighted the importance of improving internet access and digital literacy from the grassroots level. Another key highlight that was made during the session was that we delved into discussions around artificial intelligence and the need for safe spaces to report problems and cases that are cyber crimes for instance and the importance of ethical framework works that are tailored to the African context, and the lack of comprehensive data laws in some countries. I know that Nigeria has, you know, made a progress in that, but there’s some countries, many African countries, that still lack comprehensive data laws that require a lot of attention. And we also, you know, concluded the event with an intergenerational session between the youth and the MPs, where we had an open dialogue where we talked with the MPs. The MPs heard what the youth want, what they want, what we want them to consider, and we talked about how they can support youth and their visions. While specifics were not fully detailed, you know, we talked about fostering collaboration between the youth and the government, and representatives emerged as a crucial step in addressing digital challenges. In conclusion, overall, I think we talked most, the key highlights was that we need to have increased education and awareness, inclusivity, ethical considerations, and citizen participation in order to build a sustainable digital future for Africa. Yes, thank you.

Moderator:
Thank you, Maria, that we need collaboration, we need innovative views of engaging our youth, and we need to ensure that we are all holistically moving together in capacity building, safety online, and the like, so we all have to work together. Now, my next speaker is online, Mr. Moses Biyangana, the acting head, Information Society Division, African Union Commission. Moses, can you take the floor? Yeah, thank you, Mumbula.

Moses Bayingana:
Distinguished participants, ladies and gentlemen, on behalf of the African Union Commission, I welcome you all to this AAU Open Forum. Let me use this opportunity to thank the government in Nigeria and the African IDMACC for the successful organization of 23 editions in Abuja, Nigeria. Our leaders have recognized digital transformation as a driver for development and critical to the attainment of Gender 2063 and UN Sustainable Development Goals, developing the digital transformation for Africa as a master plan that will drive our digital agenda up to 2030. Across Africa, the digital economy is on the rise. In the past decade, its contribution to GDP has brought in many economies from 1.5 percent to more than 3 percent. While there is progress, there is still a lot to be done. Connectivity currently has its usage in Europe, and Africa’s race for cybercrime remains low, making it a prime target for cybermen. At the continental level, the AECU has made progress in the digital environment to facilitate and implement partners across the continent to build on these common strategies and frameworks to regulate Africa’s digital transformation. These strategies and frameworks will also facilitate harmonization across the continent. This includes the development and adoption of the Digital Transformation Strategy for Africa that sets out a vision to build an inclusive digital society and economy in Africa. Sectoral digital strategies in the critical sectors of education, agriculture, health, and e-commerce have also developed to facilitate and scale up access to smart digital technologies and associated data-driven services across all sectors. Furthermore, the AU Data Policy Framework has been adopted to facilitate the flow of data across sectors and borders. The Interoperability Framework for Digital Union has also been adopted to facilitate the development of digital solutions that are inclusive, trusted, and interoperable. The African Union has also developed a broader policy and empowerment policy and conducted a study on the cyber security in Africa. develop a continual cyber security strategy. I am pleased to inform you that King AU must adopt the AU concept on cyber security and optimal data protection, which anchored its entry in court. This item gives impetus to our endeavors to promote cyber security while advancing the digital. With regards to internet governance, through the first phase of WIDA, support has been extended to the organization of the Internet Governance Forum, national, regional, and NGOs. And together with the European Union, we are working on the second phase of WIDA. Distinguished participants, ladies and gentlemen, moving forward, the continent’s useful population factor is the demographic dividend that could be a game-changer in accelerating access to digital platforms and boost economic development, create jobs, and improve lives. Recent statistics show that 60% of Africa’s population is below 25 years old. By 2050, Africa’s population is expected to grow to 2.5 billion people. And this will undercut of the world’s youth. We’ll be in Africa. Africa’s youth are there for an opportunity to drive Africa’s digital economy. Hence, the need to invest in them. a key development to relate innovation growth on the continent. In conclusion, I would like to thank everyone who contributed to the organization of the AU Open Forum, and invite all stakeholders to work together to bridge the Afro digital divide, have the Indiagra close, and secure Africa’s digital future.

Moderator:
Thank you, Moses, for those insightful highlights. And for the documents that Moses mentioned, they are in the AUC websites, and we can consult them. Our next speaker is Dr. Mark Tasek. He’s the Chief of Section Innovation and Technology, United Nation Economic Commission of Africa. Dr. Tasek, can you take the floor?

Moctar Seck:
Thank you. Good afternoon, and good morning, wherever you are. I think we have some people connected online, and it’s still morning there. Just as a beginning, I would like to thank all of you to attend this Open Forum organized by African Union. And also, I would like to thank the federal government of Nigeria for the successful organization for the African Internet Governance Forum. In the first presentation, the two first presentation, they highlight the key funding of this very interesting. the world. We have a very strong African community, and we have a very interesting forum organized in Africa. But let me try to highlight the outcome on the work we are doing now at the United Nations Economic Commission for Africa. As you know, one of our key missions is to support African development. And when we support African development, it’s not only about Africa, it’s also about Africa as a whole. So we have a very strong health sector, as well as statistics, and as a sector. And we try in our sector, in digital technology, how we can leverage all this sector through digital technology. Why it is very important to listen to the presentation done by the government of Nigeria, and Ms. On this, why it is important for Africa? Well, let me get start on the first point. As you know, there is a deficit of connectivity in Africa. We have 6 percent of our population offline. This is due to several problems. The first one is infrastructure. We need to make sure that we have the infrastructure to provide broadband to everybody by 2030. We need to make sure all people will be connected by 2030. And for this, we need to involve the private sector and also have a sound regulation to attract investment in the development of the infrastructure in the continent. We need to make sure that we have the infrastructure to make sure that we have the infrastructure to work with our regulator to look at the way we regulate now the new system, because there is an advance of digital technology around the world. Second, the continent, the digital divide, I’m going to focus only on the gender digital divide. As you know, we have a connectivity. Foreigners who come to Jackie it is a market that is rich in technology, it 85% of connected compared to people at 45% 40% it is a gap of 11% and it is very important to involve the women and youth in the technology sector. Why? It is very important to have the women and youth in the technology sector. It is $1.5 trillion. It is something very important if you want to take benefits of the digital technology. Because we have several studies to highlight estimated the cost of the internet market in Africa will be $180 million in 2025. It is very important to have the women and youth in the technology sector. It is a very important amount. It is important to put in place activity and policy to make sure we get all people included in this digital era. Third point, we talk a lot about cyber security. As you know, cyber security is very important. We have a lot of work to do. We have a lot of work to do on cyber security. We have a lot of work to do to make sure our continent is secure. And the cyber security remains a big challenge because it costs now 10% of our GDP. We have 10% of African GDP, how many schools you can build, how many hospital you can build, how many people can be moved from this poverty. So we have to be very careful. We have to be very careful. We have to be very careful to fight this cyber security as well as we have this issue of terrorism using this cyber security to kill people. is something very key in this continent. The fourth point, it is the issue of people offline. We have 500 million people in the continent without any legal form of identity. These people doesn’t exist anywhere. And we can’t do any planification without these people. We need to take into consideration the issue of digital ID, to provide digital ID to all, to see how we can interoperate with our system. Since it’s a service system, digital ID system, health system, license ID system, passport system. We need to work on this digital ID to make sure these people have identity and can participate to this digital information. Last but not least, it is emerging technology. I will focus only on this artificial intelligence. It is a big opportunity, but we need to be very careful. We need to build the capacity of our new generation. How we build the capacity? Why we are building the capacity of our new generation? Because we have this demographic dividend. By 2050, 70% of our population will be under 35 years. If you would like to participate in this digital era, we need to build their capacity to be ready to participate to this for industrial revolution. And also, we need also to look at the regulation of this artificial intelligence. Because otherwise, we can miss some sector. The sector on reading, booking, or developing, booking, we have to look at this sector carefully with this artificial intelligence. The artificial intelligence can offer a lot of opportunity, but we have also a lot of challenge with this artificial intelligence. We need to look at this carefully. And what we do in ECA? to overcome all these challenges and to support African countries. I’m going to highlight some of our key activities. In 2018, we have set up a Center of Excellence on Digital ID, Digital Threats, and Digital Economy to support African countries to use digital technology for their sustainable development. And now we are supporting African countries to implement the African Digital Transformation Strategy developed by the African Union in collaboration with UNECA and other partners. And this strategy is a blueprint for the African digital sector from 2020 to 2030. Now a lot of countries have benefited from the support of ECA. On cybersecurity, we organized two years ago the first African Summit on Cybersecurity in Togo. One outcome of the cybersecurity summit was the declaration of Lomé. And since the declaration of Lomé, we have seen a lot of progress. Now we have 15 countries who ratify this African Union Convention on Cybersecurity called Malibu Convention. And also we are establishing now a Center of Cybersecurity in Togo. On digital ID also, we are supporting a lot of countries to develop their digital ID program. We can give an example of Nigeria in the region of Kaduna, Gambia, and other countries also benefit from this support on digital ID. On capacity building, we talk about we need to build the capacity of the young generation. Why? We already established an African Center on Artificial Intelligence in Congo, Congo-Brazzaville. This center is functional since last year. And last week, this week, they have their academic, I think, for 2023 starting. I think we can build to be more relevant, more performant in the sector of artificial intelligence, applicable to the sector of health, environment, climate change, and industry and economy. Another point also, we support African countries, it is to promote the youth generation. We have several initiatives for youth generation, one, it is an STI forum we are going to organize, we organize every year, and to have a lot of young innovators to promote some innovation ideas in the continent. We have also this foremost program for girls, African Girl Recruiting Camp, and focus on the girl age from 12 to 25 years, and we provide them skills in several areas, artificial intelligence, web gaming, and now the program has trained around 35,000 girls across the continent. Another issue also, for the Parliamentary also, we have a program for the Parliamentary, an important program focused to build the capacity of the Parliamentary on the making decision also, it is very important for them to understand the issue of this digital technology, because at the end, it is them to adopt the rule and the regulation for the digital technology, and this training focus also on FinTech, we have one program with Alibaba, and also in cyber security, we have the Global Forum of Cyber Security Experts. We need also to promote the voice of Africa, why we have several forums, the one forum, several UN-led forums, one it is WSIS Outcomes for Africa, World Summit Information Society Forum for Africa, we organize every year in Africa, and to focus on the 11 action lines of the WSIS, to see the progress made by African countries in the implementation of this 11 action line, going to the role of the government, the cyber security development infrastructure issue. Africa, but also, we are focused on the African digital compact as a service and as an activity. And also, we support the organization of the African Internet Governance Forum every year, and now we are focused on this global digital compact. I’m going to stay to focus one minute before I conclude on this global digital compact, because it is, it will be one of the key framework for the world. So, we are focused on the African digital compact. We are focused on the African digital compact. How we can participate is not just to attend the meeting. Now, we are in the consultation period, and we need to provide input. I think all sectors we discuss, we can provide input based on the need of Africa. We talk about this digital public infrastructure, this access affordability. We talk about capacity building, emerging technologies, and we need Africa to be a part of this. We need Africa to be a part of this. We need Africa to provide, and this global digital compact is open to everyone. Private sector, government, civil society, academia, everybody should be involved, because this global digital compact will define the future we want on the development of technology for the world, and we need Africa to be part. We need Africa to be a part of this global digital compact. We need Africa to be a part of this global digital compact. We already have one African input developed at the meeting organized by ECA in July 2023 in the Cap Town, South Africa, and the document is available on our website, but you can still provide your input before final submission. We need Africa to be a part of this global digital compact, and we need Africa to be a part of this global digital compact. We need Africa to be a part of this global digital compact, and to think about also on the WSIS plus 20. We are going to organize the WSIS plus 20. We start the reflection to organize WSIS plus 20 in November, early December in Victoria and then continue to sort through Africa, but that’s something also to investigate. I understand that that has been the 1946 proposal for Africa for the continuation of WSIS beyond 2025 and to look at also what is, the benefit, the impact that it will have on the world and exports. I think it is something I would like to highlight to share with you. And thank you very much for involving ECA on this important forum. And we invite also to attend the several side event. We start organizing since Saturday, but we have some side event tomorrow and after tomorrow, and I would like to invite you all to attend. And I would like to invite you all to attend this side event. Thank you.

Moderator:
And I would like to invite all of you to attend this side event. Thank you. Thank you. Thank you, Dr. Seck, for reminding us that we need to focus on women among the many things you have said, and for us to contextualize the issue, we need to analyze and get the opportunity cost of this issue, and we need to get the opportunity cost of this issue. And we need to be intentional and innovative to be able to address this issue. And this can only be supported if we have the facts and statistics, and we are able to, again, link it with the livelihoods opportunities. My next speaker is online as well, that is Dr. Martin Koyabe, senior manager, African Union, Global Forum on Cybersecurity Project.

Martin Koyabe:
And first of all, thank you so much for inviting me, and also for giving the GFCE an opportunity to share some of the aspects of this intervention. Secondly, I want also to pay tribute to the speakers who have just come before me in some of the issues that they have articulated going forward. As I said before, my name is Martin Koyabe. I actually lead the GFCE on cyber security. And I also coordinate the activities between the AU and the GFCE. Some of the issues that I’ll highlight on have also been contributed by our partners. And that is the AU development agency, that’s Aouda Nepad, and also colleagues within the GFCE ecosystem. If you allow me, let me just give you the context of what we’ve been doing with the AU. And this refers to the AU-GFCE collaboration project. This project has actually come to an end, and we’re now moving to the next phase of this particular project, where we were looking at how do we build resilience and ensure that African countries have the capacity to sustain what we call cyber capacity building within the continent. There were three areas that we were looking at. And these areas were very, very pertinent. One was the issue around taking an assessment to really look carefully and look at what are the priorities of African countries when it comes to issues of cyber. Remember, COVID really interfered with many of the plans of many of the African countries, and therefore, there was a shift from cyber capacity building to cyber capacity building. And we were looking at how do we build resilience and ensure that African countries have the capacity to sustain what we call the areas where they had planned, for example, in the digital infrastructure and other areas which actually saw a massive investment since there was a need, and also with other ministries such as health, which saw a massive increase in terms of funding. So, therefore, the priorities of African countries really shifted along the path. The other aspect was to look at how do you sustain capacity within the continent. And as I said earlier, it is true that by 2050, the continent will have roughly about 2.5 billion people, and out of that, actually a good chunk will be young people. So, therefore, there was a need to make sure that there was an investment, especially in looking at the expertise that exists within the continent. So, therefore, the issues around sustainability through the resources, especially the expertise of the government, was very critical. And then, thirdly, was the issue around institutional memory. How do we make sure that we establish the issues around knowledge so that we can be able to have many institutions, citizens, and also participants to be able to learn about cyber in the future? So, therefore, the development of what we call knowledge modules was very critical. So, these are what you call best practice or what you call good practice type of platforms that enable people who are in cyber to either learn about experiences in different parts of the continent, but more importantly, to be able to share their expertise and also new ideas on specific areas. When it comes to the areas of interventions and also the lessons learned, Madam Chair, if you allow me, I’ll just go very briefly, very quickly. There were several areas that came up and several interventions that we saw. It is important that the area of especially the need to sustain and to be able to protect the infrastructure became very, very, very high in the priorities of many countries. So, therefore, establishment of SADS, enhancement of CSADS and SADS became very, very high in the agenda of many of the African countries. And in this aspect, it was an issue around how do you ensure that we actually have the knowledge and the expertise to ensure that we actually beef up the capacities of SADS in many of the African countries? Because some of the people who get trained move on to other jobs and, therefore, many African countries struggle to maintain the capacity, to maintain the skill sets that are required. So, therefore, an issue around SADS and the critical national infrastructure was very high in the agenda. Some of the areas that were proposed as the way forward is to ensure that we have what we call an identification of critical infrastructures in these countries. These countries require that people or rather the institutions or the agencies to identify what is critical. It is also important to conduct what we call the risk assessments of the critical infrastructure so that we know how much investment we need in those particular areas. But more importantly, it was to develop what we call the critical infrastructure. So that countries can understand what they need to do going forward in terms of protection of the critical infrastructure. This was also exacerbated by the fact that many countries depend on the digital infrastructure for most of the services that we see today and that is why that was very, very critical. The second dimension was the issue around development of skills, and I really support some of the sentiments that have been expressed by some of the countries. The GFCE through the project actually with the AU has established what we call the Africa cyber experts community. This is a community that comprises of over 80 experts and some of them, I can see them in the room. They are from roughly about over 37 countries. And they have a lot of experience in the field of cybersecurity and they have a lot of experience in the field of cyber security and they have a lot of experience in the field of cyber security and they have a lot of experience in the field of cyber security and they have done a lot. They traveled to zoos and they And we call them the continue from going from strength to strength in order to establish what we call the Southern to Southern expertise, which can actually be able to help many of the countries to be able to converge and be able to address some of the issues that they have. So for example, if you have an expert who is good in certs, in Malawi and the country that probably requires that expertise in North Africa, surely there’s a, there’s no need of going to the North to seek such an expert. Therefore, if we have experts within the continent that can be established and known for their, for their development, it makes a lot of sense to build that particular capacity in order to support future need in those particular areas of need, especially when it comes to cyber capacity building. So therefore the development of skills is important. There’s a need to provide the opportunities, especially for individuals in marginalized areas. I think this is something that came out within the project and also strengthening cyber diplomacy and the understanding of normality and that process. And this is something that we’ve discussed in detail. There is a need for African countries to understand what is the process for being involved in the discussion of cyber diplomacy. What are the tenets of the understanding of the basics that are required within this area? And then thirdly, there’s a need to also promote what we call diversity and inclusivity. And I like the presentation that was given earlier by the member of parliament and also by the Nigerian minister to make sure that we can build this diversity. And these are sentiments that were expressed at the IGF Africa for those of us who are there, that we need to make sure that all communities, especially the diverse communities are built within this. So therefore we need to encourage the need for the young people and both old and other people who are actually vulnerable within the community to be able to be involved in these areas. Within the GFCE and in the collaboration project, we’ve established the network of African women in cyber. This network has grown from strength to strength. Thank you, Madam Chair. Since you are the co-founder for this particular organization, you have moved it from where it is to the next level. And I think that is one area where we’ve seen a lot of effort being put in a good edge, that more effort is put in, as Markta said, that when you have more than 50% of the population is women, then it’s obvious that we need to have women and girls in cyber, you know, taking their role and being able to support the efforts. As I come to wind up, there were areas of concern, especially when it comes to resources and funding. And this is something that is not new. Many of the projects that we’ve seen in the continent do not necessarily have what we call the sustainment built into those particular projects. So therefore, after the funding is over, these projects normally either end or these projects are never sustained to the level that is expected. There is a need for the African countries to also invest more in terms of funding. So therefore when you develop cybersecurity strategies, or when you develop these particular interventions, it’s important to factor in how these countries can be able to sustain some of these particular projects. We know there are some good examples in the continent of countries that have been able to sustain their certs, or they’ve been able to sustain specific projects internally without necessarily seeking external funding. So therefore in terms of budgeting, especially for parliamentarians and other decision makers who are in the room, it’s important that we think about funding as a critical component when it comes to issues of cyber capacity building. And then finally, Madam Chair, the issue around the political will cannot be emphasized any further. And I really want to emphasize what the representative from parliament from Ghana just said just a few minutes ago, that the political will is important. And the reason why this is important is because many of the political leaders, many of the legislators do make decisions that affect you and I, especially when it comes to the continent. So therefore the issue around sensitizing the executive, sensitizing members of parliament, sensitizing people who make decisions that might not necessarily be the decisions that can actually have an impact now, but might have an impact in future. It’s very important that we really sensitize those levels or those echelons in the society so that they can understand what are the critical issues when it comes to cyber. As I finalize Madam Chair, there are some interventions that the GFCE continues to make, and we really want to thank some of the partners that are in the room that we’ve worked together. I know we’ve been able to actually support some of the IGF regional capacity development, especially when it comes to issues of the school on internet governance and other areas. We’ve also been able to work in tandem with some of the organizations in order to push specific areas of cyber capacity building forward. In summary, and as I come to a conclusion, Madam Chair.

Moderator:
Sorry, Dr. Koyabe, you only have 30 seconds.

Martin Koyabe:
Okay. So the last bit here is the upcoming meeting that is coming up in Ghana. And I know many of you are looking forward to it. There will be, for the first time, we shall have the cyber security experts and cyber capacity building development partners coming together in Ghana on the 29th to the 30th of November to talk about an issue of cyber. But thank you very much, Madam Chair.

Moderator:
Thank you, Dr. Koyabe. We are now going to the Q&A session and I want to ask all the participants, if you ask a question, please state who should answer the question so that we are able to align ourselves. We have about 15 minutes for that. Kindly only one question per person and don’t make it as if it’s another presentation so that we save on time. And to start with, I’ll ask Dr. Chidi next to me to start.

Chidi:
Yes. It’s actually a question. Thank you, Dr. Martins Koyabe, for your beautiful and intelligent presentation. Okay. We talked about critical resources or infrastructure that are required to undertake all the massive projects that you mentioned. But for us as regulators in Nigeria, we’ve received inquiries from a good number of stakeholders, which had to do with AFRINIC. the internet and the internet in Africa, to which we have not been able to give substantive answer to. And in all your presentation, I have not heard you mention the crisis or the problem or the dysfunctionality within the AFRINIC. The reason being that to sustain the internet and to fight cyber security, it is very important that the continent takes charge of the internet and the internet is the source of the internet. So, the internet is the commodity, the bandwidth, which is the IP networks. Thank you very much. » Thank you, Dr. Chidi. We’ll take two, three more questions before the panel start answering them. » Thank you very much, James. I would like to really use the opportunity to commend AU for starting African IGF 2011. I was in the room that day and it was quite tough, but the dividend is for us to see today. And also to appreciate NCC for the opportunity to host the global IGF. And to actually say that Nigeria is ripe to host the global IGF. Do you agree? Yes. Yes. Yes. So, now to the question. Dr. MacTarr provided us that data, which we know that we are really behind the global average with regard to internet penetration. I want to ask, how about the use of the data and the use of the data to actually, I mean, how can we use that to really reach the under south, the tools, the technical know-how, it’s available. I hear people say we don’t have the capability, the technical capability. We have the technical capability, we can deploy a lot of infrastructural tools. My company, we have data centers, we So digital wide spaces from the digital dividend is just there for us to use. With a bandwidth of 100 megabits per second, we can reach the underserved. So what is holding us? Thank you.

Moderator:
Thank you. Any other question? As we wait for more question, I give the floor to Dr. Koyabin to answer the first question and Dr. Marksek for the second one.

Martin Koyabe:
Thank you very much. I don’t know whether I’m on the chopping board here, but let me try and be very, very, very careful in how I respond to this issue of AFRINIC. But more importantly, I think we all agree that the continent requires consistency, it requires organizations that can be able to deliver in some of these aspects that we are discussing if we are to make a difference. It’s very, very unfortunate, especially from what I understand, and I want to be very, very careful here on the situation around AFRINIC because I think the challenge has been the litigation that has been launched in terms of the challenge of the problem that AFRINIC has. I really don’t want to go into the details of that because I know it’s in the public domain. And if you allow me another go, I want to make sure that we really assist where we can to make sure that the organization comes back to what it’s meant to be because the continent requires that organization. But more importantly, let’s build some, what we call sustainment in terms of how these organizations can function in future so that we know how we build mechanisms for auditing, mechanisms that can be able to create what we call an authentic organization that can be able to serve the people and the continent. So for now, I’ll want to reserve my extended comments if you allow, but to let what we call the process take its due course as AFRINIC tries to unsolve its issues as we all know it. Thank you very much.

Moctar Seck:
Thank you, Martin. I think I’m going to start with the AFRINIC problem. It is a big issue for the continent now. We have seen two days ago the resolution of the court and we need to take seriously into consideration this resolution. Now we can’t say anything more, but we are going to call a meeting between AEC, ECA and Smart Africa to see how we can do to sort it out with this problem. Because when you talk about this digital transformation, job creation, FinTech opportunity, e-commerce, if you don’t have your IP address, what are you going to do? Nothing. Yeah, it is a problem. And the CCTL, TCTL, This is a problem in several African countries. There is no digital sovereignty in several continents. We have our young generation, digital dividends, 70% of youth, to represent 42% of the youth in the world. But if you don’t have the access, if you don’t control your network, anything will happen. It is something we have to take into consideration. Second, regulation, it is a big problem. It is not something easy now. Before, it was easy when you have only the telecommunication sector, we have this mobile and some added value service, it was easy to regulate. But now you have this artificial intelligence. We don’t know where we are going with this artificial intelligence. It is very clear. Even for the development of all the world, they don’t know where we are going with this artificial intelligence. You want to write a book, just you ask a GPT to write for you the book, you can write. Yeah, what is the problem? Everything you can do, you can ask this artificial intelligence. It is something like, what call it in French, a lavage fall. A lavage fall, because there is no electricity, there is no electricity, there is no electricity, there is no electricity. Lavage fall, because this cow eat other cow. But now artificial intelligence use the data from all network and the use of the data of the artificial intelligence, of the service provided by artificial intelligence. We don’t need what happened now. We are not safe in this. We are not safe in this. We are not safe in this. The issue is cyber security. Now, when you use this cryptography, all this software, to save your network, the issue is you are this quantum computer now. You can’t block anything with this, it’s clear. We need to work closely with the African government, with the African government, and we can find any code you put in your system. And with this artificial intelligence, it will continue. We need to work closely while we have this working group, AUC, on artificial intelligence to see how, what we can do in Africa, what kind of framework we can put in place, what kind of measure we can put in place, what we can do in Africa, what we can do in Africa. So, even for the spectrum, it is very important to look at the issue of spectrum with the development of the 5G. Maybe later 6G will come, and we are not ready for 5G. We are not ready. Because some people use this, what call it, this 4G. But, we have to increase the 4G to make sure, to make it is like 5G. It’s not 5G. Yeah. It is generally what the operator did. We have to see this allocation of this bandwidth, also this spectrum. Regulator now has a big role to play, and it is important also all regulators to start building their capacity on this emerging technology. We have artificial intelligence. We have this blockchain. We have all this Internet of thing. Tomorrow, nanotechnology will be there. We have this quantum computer. We don’t know what’s happening in the world. Yeah. I’m going to stop there. Thank you.

Moderator:
APPLAUSE . Sorry. Thank you, Dr. Seck. I think we have time for one more question, and then we’re going to have more questions.

Audience:
Thank you, madam. My name is Katia Sarajeva. I come from Spider at Stockholm University. I would slightly disagree with the previous speaker. Just a comment. Do not get distracted by AI or blockchain. Spider has been working for seven years and has been working engaging individuals, individuals with COVID, it is not isolation. It is still spectrum. It is still a living disease, and that’s why the being of isolation is so important. That’s, you’re right, one of the key parts of my point, because if the second component of isolation is a biological respirator that’s infrastructure sharing and all of these things are done by African engineers, economists and software engineers that are locally in Africa and are constantly working on this. It is complicated and it is hard but everybody’s doing it so please remember your regulators and also your judiciary because everything rests on the rule of law so it’s not AI. It’s a lot of interesting work and good work that’s being done on national level but also in regional harmonization and working together on the basic stuff. Everybody’s talking about AI and how blingy it is. It’s just a dream. A lot of the work and a lot of the progress is being done is done right now by really highly skilled experts on the African continent and by supporting those people who are working on the bolts and nuts that are not glamorous. That is the everyday work of the telecom regulators. You are actually spreading both connectivity and use and empowering a lot of people as we speak and a lot of it is done in meetings like this. Just everybody’s struggling. It’s not just Africa. In Sweden, the north of Sweden didn’t get connectivity on their own. It’s the people who had to make it happen. So the problems are everywhere and Africa is no different and you’re doing really good because I work with these people. Sorry.

Moderator:
Thank you for the comment. I will give Honorable Stanley then two more questions and answers and the finalists. Probably to save on time, can you kind of just stand behind the mics because we have them.

Audience:
challenges. Thank you, everybody. Permit me to stand on existing protocol. I am Honorable Adedeji Stanley Olagide, House Committee Chairman for ICT and cybersecurity, also representative Nigeria. I think we’ve been having, I’ll cite an example. When we started this whole world of human genomics, where we have to do a lot of analytics around DNA for precision medicine. A lot of doctors were agitating, will this thing take away medical doctors role? What is it going to do? Is computer simulation or analytics going to take away all of our jobs? It is not true. I’ve been around technology for almost 40 years. I want to say this, let’s not get distracted with what AI is just another technology. As far as I’m concerned, I’m a technologist and now a lawmaker. Let’s not get distracted. Let’s focus and keep our eye on the ball. And the ball is how we are going to make this integrated into the future. Either we take it and run with it, or you just keep running and we’ll be behind. So as legislators in the House, let’s not get distracted. We will skin this cat. There’s so many ways we are going to skin this cat. We are going to unravel it. It’s just the reality. But the question now is, how quickly are we going to train ourselves to catch up with the rest of the world? Let me stop right there. Thank you.

Moderator:
Thank you. I give you the floor. I give Onika and then the last one.

Audience:
I have three interventions and I’m going to do it in two minutes. One is at national level, one is at continental level, and one is at global level. On the national level, we were in Abuja, had a very beautiful report that had been presented by Dr. Chidi, complimented by Honorable Sam and Marian. The first thing I believe we need to do as legislators and participants, when we go back, let’s put this together as a report and share it with the leadership of each respective National Assembly. Otherwise, this report will be circulating amongst us. No one else will know that a wonderful job took place in Abuja and has been complimented Don’t let us turn this participation into a holiday or a jamboree. The only way we can get meaningful things out of this is in our view. Let’s put this report together, let Nigerian senators share with the president of the Nigerian Senate, let the Ghana parliamentarians do so with the Ghana parliamentarians, let us also do so with representatives of our various countries in inter-parliamentary unions like ECOWAS parliament and African Union. This is one way to ensure that what we are discussing here gets traction among others. That’s number one. Number two, I listened to a number of initiatives being done at global level, at continental level. The truth of the matter is that I am hearing some of them for the first time. You cannot be a champion or an advocate of something you are not aware of. Can we have a directory of ongoing initiatives at the continental level to be shared by all parliamentarians? That is the only way you can mainstream it into your national agenda. If there’s something that has taken place in Malawi that needed to be domesticated, it cannot happen until the parliamentarians are aware of it. And to not be aware of it, unless we have a directory of it. So the gentleman who spoke about, from the CE, yeah, he spoke about some initiative going on. The man from, he spoke about something going on. Do we have a directory? Of saying, these are the ongoing initiatives at African level, at African Union level, and let’s share with the parliamentarians, and let us become the evangelists and champions in our various countries. The last one. The last one, we must be grateful to international development partners. GIZ, yeah, a number of them are here. They have been supportive of initiatives in Africa. But that’s a caveat. Sometimes those supports do not address our priorities. Those supports come in a generic manner. Malawi requires supports for education. That will become a template for the rest of Africa. I want to beg of you, let each country determine its own priorities. And let us approach those development partners with our priorities. Let the funding, let the support be tailored to the priorities of each country. That’s where we make the maximum of it. And lastly, I thank you for coming to Nigeria. We are open again.

Moderator:
Thank you for the, thank you for the comments. We have Onika Denzanyo.

Audience:
Good evening, Onika Makwakwa. I am asking a question more specifically around a concern of what is the vision and the strategic plan for growing Africa IGF. I think it’s really troubling that we’ve got, what, 54 nations? And less than 20, if I’m not mistaken, that are actually active and hosting IGF. If we have a vision of hosting more global IGFs in Africa, in Nigeria or wherever is going to take us actually showing up en masse and holding each other accountable. I think what’s been missing with IGF is an accountability framework amongst the multi-stakeholders that are involved, but also amongst us as countries within the continent. So I’d like either ECA or AU to speak to what is the vision and the strategic plan for growing and strengthening IGF within the continent. Thank you.

Moderator:
Thank you, Onika. Thank you. Thank you.

Audience:
Good afternoon, everyone, and evening, morning to everyone around the world. My name is Zanyu Ntatisiasare, CEO of Digitally Legal, and standing on existing protocols, and also I think the questions and comments that were articulated. I’m really excited to hear that. But mine is more around, there are a lot of statistics given today and a lot of vulnerable groups that were mentioned. But I think as Africa, we need to be honest about one thing. When we have meetings of this kind, we really neglect our disabled communities. And the reason why I’m saying this is if you look at use cases, and I won’t even mention where, I’ll actually give you the homework to do that yourselves, the potential of effectively using assistive technologies in your country has the immense potential of one, not just obviously fighting equality and all the really good stuff that we all say we’re here for, but actually injecting to your GDP. So I think with the leaders that are sitting here, that’s one of the questions that I would like you to sort of research for yourself and ask yourself in the context of your country, your communities, even your own hometown, what has been done from that perspective? That’s the first one. The second one is more of a comment, I think, Honorable, just bootstrapping on what you said. We’re Africans, and we’ve got our own norms, our own cultures. There’s an English saying that says you can take a horse to water, but you can’t make it drink. We are Africans. If your brother, your cousin, your sister, your child, whoever cannot drink that water themselves, you make sure that water goes into that body. And I’d like to leave that as an actionable item for each one of us here to know that if whatever is required for us to achieve our goals from an IGF perspective, as Africans, is that water conundrum of water drinking horses, what, what, we make our horses drink in Africa. Thank you very much. Thank you. Thank you.

Moderator:
Thank you very much to all of us. Most of these have been comments. We are going to continue with these discussions going forward. I’ll give now the floor to Lillian, who will attempt probably to answer some of the questions, but then also give our vote of thanks. Okay, Mata, 45 seconds. Okay. I’m counting.

Moctar Seck:
I think IGF, African IGF, is growing in the continent. As you have to know exactly what is IGF, it is a multi-stakeholder forum. It’s different to the WSIS forum, where government come and make decision. Here, it is just to discuss on the key of the issue, key issue on the digital technology around the world. It is very important to forum, one of the outcome of the WSIS 25, and now I think everybody can discuss about issue related to Africa, to the world. You have seen several opinion when we talk about artificial intelligence. People coming from the north have their own idea, but we have our own idea on what’s happen in the continent, because we know very well what’s happen on this continent. And we’ll work with African Union to try to make up more successful African IGF, and to involve more government, more private sector, more civil society. I think we have seen this very good participation in. I think next year we will get more participant to the next African IGF, but in the meantime we discuss between all key actors to see how we can make better African IGF functional in the benefits of the continent. Internet governance, the global one, we already have one last year in Ethiopia. It was very successful, but we can’t organise it, because the IGF will end in 2025. By 2025, it will not organise in Africa. Maybe if the mandate is renewed by 2030, we can see this African country, when African country can organise. But it is a competition to organise also this Internet Governance Forum. Thank you.

Moderator:
Thank you, Makta. I realise we didn’t give Moses time to, chance to answer any questions, so Moses, I give you one minute.

Moses Bayingana:
Yeah, thank you. I just wanted to make a quick comment. I will be brief. One is on the issue of initiatives on the continent, and I want to thank the distinguished speaker for raising it up. Yes, as part of monitoring the implementation of the digital transformation strategy for Africa, we have put in an institutional architecture and information framework, where we have identified who is doing what to implement into a plan. As part of the monitoring evaluation framework, we have also appointed, we requested member states and member states have nominated focal points for digital transformation. So we will be collecting initiatives from member states, from all actors across the continent who are supporting us to implement the digital transformation strategy. And in 2025, we will do a comprehensive, the monitoring of course will be early, but as a mid-term comprehensive evaluation, we will also do one in 2025. Regarding the strategy to grow African IGF, I think my colleague, Makta, is right, we have to be very, very careful, and I thank my colleague, Makta, with respect to that. You know, strategies are always a consultative process. At least your inputs are also welcome, but we will continue working with the EC and other stakeholders so that we can continue to grow the African IGF. There is always room for improvement, but it will be a consultative process involving everyone in the spirit of the multi-agenda process. Thank you.

Moderator:
Thank you, Moses. And finally, we have run over time, but I’ll ask Lilian to give our final remarks.

Lillian Nalwoga:
Thank you so much, Madam Moderator. It’s only kind of ironic that I have just one minute to say something, but it’s also wonderful and a good opportunity, being the MAG Chair for the African IGF, to hear all these kind of nice recommendations and deliberations from the actors from the region, very far, far away from our continent. I think to me, when I was listening in, and part of the things that I was seeing coming out of the multi-stakeholder approach, where we are seeing some of the recommendations or some of the resolutions that we got from Abuja already happening, you know, the issue of exercising political power all to shape the digital economy. to a future for Africa. We already have the parliamentarian network. We had the Africa Parliamentarian Symposium. We had quite a number of ministers participating in the regional, in the continental forum. But also when we do a quick kind of lens, we zoom into the regional, the sub-regional forums and the national forums, we already seeing that governments are taking interest in these conversations and so is private sector. So for me, I’m seeing that the multi-stakeholder approach is already there. So I think one of the recommendations that we got from Abuja was to further strengthen that. The role of the MAG and from Onika, we’ve had the vision is there, the plan is there. When the Africa IGF was launched in 2021, 2011, oh God, sorry, 2011, we started off with fewer countries participating, but it has grown. And if we go back to the statistics that were given from the host country, from the four days, we had about 3,100 people who registered. On site, we had 1,414 participants and online we had 1,683. This is interest, that people are interested in this, partners are interested in this and we have some of our key partners who have been with us for the past years and they’re still continuing to see that we are strengthening and growing our continent and conversation. So these are good things. These are good things that we are seeing and we are hoping that, like Honorable mentioned, that we don’t just stop at having conversations in Choto or in Abuja. We need to be able to take the recommendations and take back whatever has been discussed and see how we can implement this at the regional and national level. So the vision we’ve had for Africa is there. Of course, our role as the MAG is to see how can we strengthen the coordination of the forum, but also to increase participation of African stakeholders in the IG processes, whether it’s a national, regional or subcontinental level. So this is what we are working on and listening on to all the conversations that have come through. These are things that we are going to take on. You see that next year is even bigger than Abuja. I’m glad that Nigeria is already expressing for us to go back there, but in the spirit of my stakeholderism, we need to go to another country. Unless all factors are being held constant and there’s not any other host, then we can come back to Nigeria. Already we’ve had conversations from, interest from South Africa to host from Benin. So there’s already interest and we are seeing that the community is increasingly interested in hosting and taking these conversations to these countries. So through you, moderator, I would like to thank members, our partners at the global level, at the UN through the UN IGF Secretariat, who have been able to join us today, bring our conversations all the way from Abuja to Tshoto to be able to listen on to the outcomes. But we also encourage you to continue supporting us to see how we can grow, but also learn from what you’re doing and help, not help, but partner with us in making the digital future for Africa more successful and more wonderful for the development of the continent. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.

Moderator:
So thank you very much to our panelists, to our participants, and we look forward to working with you and making our next open forum to show output, outcome, and show how we have improved as it has been said. Many thanks and bye from all of us. Bye. Thank you. Thank you. Thank you. Thank you very much. Apologies for the next session. So let’s kindly move out or we join them to listen to them.

Audience

Speech speed

167 words per minute

Speech length

1740 words

Speech time

625 secs

Chidi

Speech speed

120 words per minute

Speech length

1559 words

Speech time

780 secs

Lillian Nalwoga

Speech speed

169 words per minute

Speech length

782 words

Speech time

278 secs

Mariam Jobe

Speech speed

157 words per minute

Speech length

491 words

Speech time

188 secs

Martin Koyabe

Speech speed

221 words per minute

Speech length

2498 words

Speech time

679 secs

Moctar Seck

Speech speed

189 words per minute

Speech length

3247 words

Speech time

1032 secs

Moderator

Speech speed

144 words per minute

Speech length

984 words

Speech time

409 secs

Moses Bayingana

Speech speed

102 words per minute

Speech length

893 words

Speech time

523 secs

Sam George

Speech speed

201 words per minute

Speech length

1140 words

Speech time

341 secs

Protecting children online with emerging technologies | IGF 2023 Open Forum #15

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator – Shenrui LI

During the discussion on protecting children online, the speakers placed great emphasis on the importance of safeguarding children in the digital space. Li Shenrui, a Child Protection Officer from UNICEF China Council Office, highlighted the need for collective responsibility among various stakeholders, including governments, industries, and civil society, in order to effectively protect children from online harms. Shenrui stressed that it is not enough to rely solely on policies; education and awareness are also crucial elements in ensuring children’s safety online.

China is dedicated to leading the way in creating a safe digital environment for children globally. The Chinese government has introduced provisions to protect children’s personal information in the cyberspace. Additionally, the country has organised forums on children’s online protection for consecutive years, demonstrating their commitment to addressing this issue.

Xianliang Ren further contributed to the discussion by highlighting the importance of adaptability in laws and regulations for addressing emerging technologies. Ren recommended regulating these technologies in accordance with the law and suggested that platforms should establish mechanisms such as ‘kid mode’ to protect children from inappropriate content. This highlights the need for clear roles and responsibilities in the digital space.

Improving children’s digital literacy was also identified as a crucial aspect in protecting them online. The importance of education in equipping children with the necessary skills to navigate the digital world effectively was acknowledged.

The discussion also highlighted the significance of international cooperation in addressing the issue of children’s online safety. China has partnered with UNICEF for activities related to children’s online safety, demonstrating their commitment to working together on a global scale to protect children.

In conclusion, the discussion on protecting children online emphasised the need for collective responsibility, adaptable laws and regulations, improved digital literacy, and international cooperation. These recommendations and efforts aim to create a safe and secure digital environment for children, ensuring their well-being in the increasingly connected world.

Patrick Burton

Emerging technologies offer both opportunities and risks for child online protection. These technologies, such as BORN’s child sexual abuse material classifier, the Finnish and Swedish somebody initiative, and machine learning-based redirection programs for potential offenders, have proved valuable in combating online child exploitation. However, their implementation also raises concerns about privacy and security. Potential risks include threats to children’s autonomy of consent and the lack of accountability, transparency, and explainability.

To address these concerns, it is crucial to prioritize the collective rights of children in the design, regulation, and legislation of these technologies. Any policies or regulations should ensure the protection and promotion of children’s rights. States have a responsibility to enforce these principles and ensure that businesses comply. This approach aims to create a safe online environment for children while harnessing the benefits of emerging technologies.

The implementation of age verification systems also requires careful consideration. While age verification can play a role in protecting children online, it is essential to ensure that no populations are excluded from accessing online services due to these systems. Legislation should prevent the exacerbation of existing biases or the introduction of new ones. Recent trends indicate an increasing inclination towards the adoption of age verification systems, but fairness and inclusivity should guide their implementation.

Additionally, it is important to question whether certain technologies, particularly AI, should be built at all. Relying solely on AI to solve problems often perpetuated by AI itself raises concerns. The potential consequences and limitations of AI in addressing these issues must be carefully assessed. While AI can offer valuable solutions, alternative approaches may be more effective in some situations.

In summary, emerging technologies present both opportunities and challenges for child online protection. Prioritizing the collective rights of children through thoughtful design, regulation, and legislation is crucial to leverage the benefits of technology while mitigating risks. Age verification systems should be implemented in a way that considers biases and ensures inclusivity. Moreover, a critical evaluation of whether certain technologies should be developed is necessary to effectively address the issues at hand.

Xianliang Ren

There is a global consensus on the need to strengthen online protection for children. Studies have revealed that in China alone, there are almost 200 million minors who have access to the internet, and 52% of minors start using it before the age of 10. This highlights the importance of safeguarding children’s online experiences and ensuring their safety in the digital world.

In response to this concern, the Chinese government has introduced provisions for the cyber protection of children’s personal information. Special rules and user agreements have been put in place, and interim measures have been implemented for the administration of generative artificial intelligence services. These efforts are aimed at protecting the privacy and security of children when they engage with various online platforms and services.

There is a growing belief that platforms should take social responsibility for protecting children online. It is suggested that they should implement features like kid mode, which can help create a safer online environment for young users. By providing child-friendly settings and content filters, platforms can mitigate potential risks and ensure age-appropriate online experiences for children.

Additionally, it is argued that the development and regulation of science and technologies should be done in accordance with the law. This calls for ethical considerations and responsible practices within the industry. By adhering to regulations, technological innovations can be harnessed for the greater good while avoiding potential harm or misuse.

Improving children’s digital literacy through education and awareness is seen as crucial in tackling online risks. Schools, families, and society as a whole need to work together to raise awareness among minors about the internet and equip them with the knowledge and skills to recognize risks and protect themselves. This can be achieved by integrating digital literacy education into school curricula and empowering parents and caregivers to guide children’s online experiences.

Furthermore, it is important for the internet community to strengthen dialogue and cooperation based on mutual respect and trust. By fostering a collaborative approach, stakeholders can work together to address the challenges of online protection for children. This includes engaging in constructive discussions, sharing best practices, and developing collective strategies to create a safer digital environment for children.

In conclusion, there is a consensus that online protection for children needs to be strengthened. The Chinese government has introduced provisions for the cyber protection of children’s personal information, and there is a call for platforms to implement features like kid mode and take social responsibility. It is crucial to develop and regulate science and technologies in accordance with the law, improve children’s digital literacy through education, and promote dialogue and cooperation within the internet community. By taking these steps, we can create a safer and more secure online environment for children worldwide.

Mengyin Wang

Tencent, a prominent technology company, is leveraging technology to ensure the safety of minors and promote education. With a positive sentiment, Tencent places a strong emphasis on delivering high-quality content and advocating for the well-being of minor internet users. In line with their mission and vision, the company has initiated several key initiatives.

In 2019, Tencent launched the T-mode, a platform that consolidates and promotes high-quality content related to AI, digital learning, and positive content. This initiative aligns with Goal 4 (Quality Education) and Goal 9 (Industry, Innovation, and Infrastructure) of the Sustainable Development Goals (SDGs). The T-mode platform aims to provide a safe and valuable online experience for minors by curating content that meets strict quality standards.

To promote education and inspire learning, Tencent has taken significant steps. They released an AI and programming lesson series, offering a free introductory course to young users. This initiative aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. The course is designed to cater to schools with limited teaching resources and aims to reduce educational inequalities.

Tencent has also partnered with Tsinghua University to organize the Tencent Young Science Fair, an annual popular science event. This event aims to engage and inspire young minds in science and aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. Through interactive exhibits and demonstrations, the fair encourages the next generation to explore the wonders of science and fosters a love for learning.

In addressing the protection and development of minors in the digital age, Tencent has harnessed the power of AI technology. They compiled guidelines for constructing internet applications specifically designed for minors based on AI technology. This shows Tencent’s commitment to creating safe and age-appropriate digital environments for young users. Additionally, Tencent offered the Real Action initiative technology for free to improve the user experience, including children with cochlear implants. This initiative aligns with Goal 3 (Good Health and Well-being) and Goal 9 (Industry, Innovation, and Infrastructure) of the SDGs.

In conclusion, Tencent’s initiatives in ensuring minor safety online and promoting education demonstrate their commitment to making a positive impact. Their focus on providing high-quality content, offering free AI and programming lessons, organizing the Tencent Young Science Fair, compiling guidelines for internet applications, and enhancing accessibility for individuals with cochlear implants showcases their dedication to the protection and development of minors in the digital age. Through these initiatives, Tencent is paving the way for a safer and more inclusive online environment for the younger generation.

DORA GIUSTI

The rapidly evolving digital landscape poses potential risks to children’s safety, with statistics showing that one in three internet users are children. This alarming figure highlights the vulnerability of children in the online world. Additionally, the US-based National Center for Missing and Exploited Children reported 32 million cases of suspected child sexual exploitation and abuse in 2022, further emphasizing the urgent need for action.

To protect child rights in the digital realm, there is a pressing need for increased cooperation and multidisciplinary efforts. The emerging risks presented by immersive digital spaces and AI-facilitated environments necessitate a collective approach to address these challenges. The UN Committee on the Rights of the Child has provided principles to guide efforts in safeguarding child rights in the ever-changing digital environment. By adhering to these principles, stakeholders can ensure the protection of children and the upholding of their rights online.

In addition to cooperation and multistakeholder efforts, raising awareness and promoting digital literacy are crucial in creating a safer digital ecosystem for children. Educating children about the potential risks they may encounter online empowers them to make informed decisions and stay safe. Responsible design principles that prioritize the safety, privacy and inclusion of child users should also be implemented. By adhering to these principles, developers can create platforms and technologies that provide a secure and positive digital experience for children.

The analysis highlights the urgent need for action to address the risks children face in the digital landscape. It underscores the importance of collaboration, guided by the principles set forth by the UN Committee on the Rights of the Child, to protect child rights in the digital world. Furthermore, it emphasizes the significance of raising awareness, promoting digital literacy, and implementing responsible design principles to ensure the safety and well-being of children online. Integrating these strategies will support the creation of a safer and more inclusive digital environment for children.

ZENGRUI LI

The Communication University of China (CUC) has made a significant move by incorporating Artificial Intelligence (AI) as a major, recognizing the transformative potential of this emerging technology. This integration showcases the university’s commitment to preparing students for the future and aligns with the United Nations’ Sustainable Development Goals (SDGs) of Quality Education and Industry, Innovation, and Infrastructure.

In addition to integrating AI into its programs, CUC has also established research centers focused on exploring and advancing emerging technologies. This demonstrates the university’s dedication to technological progress and interdisciplinary construction related to Internet technology.

CUC has also recognized the importance of protecting children online and the need for guidelines to safeguard their well-being in the face of emerging technologies. It is suggested that collaboration among government departments, scientific research institutions, social organizations, and relevant enterprises is crucial in establishing these guidelines. CUC’s scientific research teams have actively participated in the AI for Children project group, playing key roles in formulating guidelines for Internet applications for minors based on AI technology.

The comprehensive integration of AI as a major and the establishment of research centers at CUC reflect the university’s commitment to technological advancement. It highlights the importance of recognizing both the benefits and risks of emerging technologies and equipping students with the necessary skills and knowledge to navigate the digital landscape responsibly.

Overall, CUC’s initiative to integrate AI as a major and its involvement in protecting children online demonstrate a proactive approach towards technology, education, and social responsibility. The university’s collaboration with various stakeholders signifies the importance of interdisciplinary cooperation in addressing complex challenges in the digital age.

Sun Yi

The discussion revolves around concerns and initiatives related to online safety for children in Japan. It is noted that a staggering 98.5% of young people in Japan use the internet, with a high rate of usage starting as early as elementary school. In response, the Ministry of Internal Affairs and Communications has implemented an information security program aimed at educating children on safe internet practices. The program addresses the increasing need for online safety and provides children with the necessary knowledge and skills to navigate the online world securely.

Additionally, the NPO Information Security Forum plays a crucial role in co-hosting internet safety education initiatives with local authorities. These collaborative efforts highlight the significance placed on educating children about online safety and promoting responsible internet usage.

However, the discussions also highlight challenges associated with current online safety measures in Japan. Specifically, concerns arise regarding the need to keep filter application databases up-to-date to effectively protect children from harmful content. Moreover, the ability of children to disable parental controls poses a significant challenge in ensuring their online safety. Efforts must be made to address these issues and develop robust safety measures that effectively protect children from potential online threats.

On a positive note, there is recognition of the potential of artificial intelligence (AI) and big data in ensuring online safety for children. The National Institute of Advanced Industrial Science and Technology (AIST) provides real-time AI analysis for assessing the risk of child abuse. This highlights the use of advanced technology in identifying and preventing potential dangers that children may encounter online.

Furthermore, discussions highlight the use of collected student activity data to understand learning behaviors and identify potential distractions. This demonstrates how big data can be leveraged to create a safer online environment for children by identifying and mitigating potential risks and challenges related to online learning platforms.

To create supportive systems and enhance online safety efforts, collaboration with large platform providers is essential. However, challenges exist in collecting detailed data on student use, particularly on major e-learning platforms such as Google and Microsoft. Addressing these challenges is crucial to developing effective strategies and implementing measures to ensure the safety of children using these platforms.

In summary, the discussions on online safety for children in Japan emphasize the importance of addressing concerns and implementing initiatives to protect children in the digital space. Progress has been made through information security programs and collaborative efforts, but challenges remain in keeping filter applications up-to-date, configuring parental controls, and collecting detailed data from major e-learning platforms. The potential of AI and big data in enhancing online safety is recognized, and future collaborations with platform providers are necessary to create safer online environments for children.

Session transcript

Moderator – Shenrui LI:
Okay, hello everyone, excellence, ladies and gentlemen, and also young, young, young people Friends, because I saw there are some children also joining us for this session Welcome all to the Internet Governance Forum 2023 Open Forum No. 15 Protecting Children Online with Emerging Technologies My name is Li Shenrui, I’m from UNICEF China Council Office as a Child Protection Officer It’s my honor to welcome you as the moderator of this session on behalf of the China Federation of Internet Societies, UNICEF China, and Communication University of China to convey the warm greetings to all of you together to this important forum And a big thank you for being here today And today in this session we will discuss the most trendy topics around protecting children with emerging technologies As many of you may know that two years ago ago, the unions have released a policy guidance 2.0 on AI for children globally. It’s a global policy guidance for governance and industry. So the conversations kept going on in the last two years on how to protect children online and how to adjust our policy actions, practices, not only from the government side, but also from the industry and from the social civil society side to engage and leverage resources to protect our children. So taking this opportunity, we have guests and guest speakers with various backgrounds, and we will share more on their insights around this topic. So without further ado, let’s welcome our honored guest, Mr. Ren Xianliang, the Secretary General of the World Internet Conference and the President of China Federation of Internet Societies, to give us opening remarks. Please, welcome. REN XIANLIANG, SECRETARY GENERAL, WORLD WIDE INTERNET CONFERENCE

Xianliang Ren:
Ladies and gentlemen, I am pleased to attend the UN Internet Governance Forum in 2023, which is a forum on the protection of children’s Internet security with new technologies. On behalf of the organizers, I want to congratulate everyone for putting together an amazing event and a warm welcome to all our guests. Ladies and gentlemen and friends, it’s great to be here at the IGF 2023 Open Forum, Protecting Children Online with Emerging Technologies. On behalf of the organizers, I want to congratulate everyone for putting together an amazing event and a warm welcome to all our guests. On behalf of the organizers, I want to congratulate everyone for putting together an amazing event and a warm welcome to all our guests. It’s great to be here at the IGF 2023 Open Forum, Protecting Children Online with Emerging Technologies. In today’s world, technologies like AI, big data, the Internet of Things are everywhere. They have a huge impact on our lives and raise new issues for Internet governance, especially when it comes to protecting children online. On one hand, the Internet is an important tool for children to learn and communicate. On the other hand, it brings risks like harmful content, addiction, fraud, and privacy breaches. There is a global consensus that we need to strengthen online protection for children. Studies show that in China alone, there are almost 200 million minors who have access to the Internet. The age of the first exposure is getting younger too, with 52% of the minors Internet users before the age of 10. That’s why the Chinese government needs to strengthen online protection for children. The Chinese government and society have taken this issue seriously. The government has introduced provision on the cyber protection of personal information of children, which require operators set up special rules and user agreement for the protection, and interim manners for the administration of generative artificial intelligence services, which make sure that generative AI, like how it works and what data it uses, is regulated. After regulation on the online protection of manners, and a dedicated chapter to cyber protection in the new law on protection of manners, make sure kids are protected when they are online. Special efforts have been made to clean up the online environment. Platforms have taken social responsibility by implementing features like kid mode and As a social organization, World Internet Conference and China Federation of Internet Societies are actively involved in children’s online protection too. WIC Wuzhen Summit has held forums on children’s online protection for consecutive years, and CFIS has collected with UNICEF to host or participate in activities related to children’s online safety at IGF, collecting cases of AI for children and promoting them globally. These efforts have yielded positive results. To protect children’s online security with emerging technologies, we need to communicate more, build consensus, and take collective action. Here, I’d like to share three suggestions. First, we should regulate emerging technologies in accordance with the law. 建立完善相关的法律法规,发挥法制对新技术应用、新业台发展的引领、规范、保障作用,依法规范新型技术应用场景。 It is important to establish and improve laws and regulations related to the application and development of emerging technologies. This will ensure that these technologies are used responsibly and in a way that safeguards everyone’s interests. 建议政府部门强化监管、持续开展各类整治行动,纠正各种网络乱象,行使儿童网络安全防护墙。 I recommended that government departments enhance insuperation, continue to take collective measures and crack down on bad online behaviors, to strengthen the role of production for children’s online security. 二是,推动科技向上、向善。 Second, we should make sure science and technologies are developed to do good. 网站平台作为各类应用服务的提供者,要强化主体责任,建立健全青少年模式,防沉迷机制,举报处置机制等,防范打击侵害儿童合法权益的内容和行为。 鼓励倡导企业加强儿童网络保护的技术研发,以技术对技术提高儿童网络保护的防疫能力。 三是,提升数字素养。 Third, it’s crucial to improve children’s digital literacy. 推进学校、家庭和全社会共同参与加强未成年人网络素养的宣传教育,提升儿童防范风险意识和自我保护能力,提升学校、家长以引导规范儿童安全上网用网的能力素养。 Schools, families and society as a whole should work together to raise awareness and educate minors about the Internet. equipped with knowledge and skills to recognize risk and protect themselves. In addition, schools and parents should be more prepared to guide children through internet use. Social organizations and research institutions should utilize social and industrial resources and work on ethical governance of emerging technologies. These include establishing machines for ethical review and certification. We should also develop cross-regional and cross-platform cooperation to study and solve the problems of black market and hidden network disruptions of children. We should jointly create a network space for children to grow healthily. Last but not least, I suggest that the internet community strengthen dialogue and cooperation based on mutual respect and trust. We could not tackle difficult issues such as illegal industry targeting children and hidden cyber threats without cooperation across regions and platforms. Together, we can build a community with a shared future in cyberspace that fosters the healthy growth of children. We will continue to make dedicated efforts towards this goal and contribute to a better and safer cyber world for children. I wish this forum great success. Thank you.

Moderator – Shenrui LI:
Okay, thanks to Mr. Ren for during the opening remarks. It’s always thrilled to see that China dedicates to be a pioneer, to explore and lead the positive pathways towards enabling and safe digital environment for children globally, while emphasizing, as Mr. Ren mentioned, the adaptability of laws and regulations, and also the clear roles and responsibilities of different sectors, including industrial and social science sectors, and also improving the children’s digital literacy. While we’re glad to see that China keeps seeking opportunities on international cooperation among this important topic, and we hope to unpack those suggestions later in our discussion today. Next, let’s welcome Mr. Li Zengrui, the Deputy Director of the Council of the Communication University of China. Let’s welcome.

ZENGRUI LI:
Distinguished Mr. Ren Xianliang, Ms. Dora, ladies and gentlemen from around the world, good afternoon, good evening, good morning. I’m very pleased to participate in this open forum with the theme, Protecting Children Online with a Major Technology. First of all, please allow me to represent Communication University of China, or CUC, one of the organizers of this forum, to warmly welcome all experts and scholars for your attendance. Thank you for your attention to the topic of children online protection. With rapid development of Internet, the wave of digital technology and information networking has swept the world. By June 2023, netizens in China had outnumbered 1 billion, about 20 percent of which are adolescents and students. Taking the largest proportion, the popularity of the Internet has enabled children more access to reach out emerging technologies and further use them. The major technologies not only bring great convenience to children’s education, health and entertainment, but also arise people’s concern for policy protection and fairness. CUC has always valued the integration of disciplinary construction related to Internet technology, technological progress and social responsibility, and has deepened academic accumulation in the intelligence media network. A number of emerging technology-related research centers have also been established, including State Key Laboratory of Media Convergence and Communication, Key Laboratory of Intelligent Media of the Ministry of Education, Key Laboratory of Audiovisual Technology and the Intelligent Control System of the Ministry of Culture and Tourism. In addition, the School of Information and Communication Engineering has set up AI as a major to cultivate compound senior talent for AI-related scientific research, design, and development, and integrated applications in fields such as information, culture, radio, and television, and the media industry. With the strengthening of the inheritance of academic and social research and the vantage of amazing Internet technology, and the invitation of CFIS and UNICEF, one of the scientific research teams from the CUC joined the AI for Children project group. As a key member, our team conducted in-depth research on the application of AI for children and participated in the formulation of guidelines for the construction of Internet applications for minors based on AI technology. Different from traditional Internet applications, the Internet applications driven by emerging technologies introduce intelligent technologies such as machine learning, deep learning, natural language processing, and knowledge graphs. The use of these technologies helps to provide more well-being for children, such as health monitoring of children, recommendation of quality content, company of special group. However, emerging technologies also bring many risks to children. such as unfairness, data policy security, and internet education. Therefore, stakeholders such as government departments, scientific research institutions, social organizations, and relevant enterprises should deepen exchanges, enhance consensus, and strengthen cooperation, and found guidelines and rules of global common development of protecting children online with emerging technologies, so as to promote the health development of emerging technologies, and better benefit people around the world. I hope that through exchange of this open forum, we can all get inspiration from the application of emerging technologies for children, and contribute to the development and application of emerging technologies in the children-related field. At the end, I hope this open forum will be success, and promote global awareness of children online protection. Thank you very much. Thank you.

Moderator – Shenrui LI:
Okay, thank you Mr. Li for sharing, and also for expressing the commitment of the CUC on generating more evidence on child online protection. It was good cooperation between CUC and UNICEF China on working on the documentation of AI for children cases. We definitely hope to see more of those collaboration joined. Please let us welcome Mr. Patrick Burden, the Child Online Protection Consultant to share about the key considerations in. in regulating emerging technologies for protection of children. So the floor is yours, Patrick.

Patrick Burton:
Thank you very much. Can I just check that everybody can see my screen? Sound and clear, please. Perfect. Thank you. Sorry. Give me a second. I just need to turn translation off, but I’ve got an echo. There we go. Hopefully that will be better. So thank you very much, Chairperson, Secretary-General, colleagues, fellow speakers, experts, participants in the room, friends that I know are there. Thank you so much for the opportunity to speak to you and for convening this forum in the first place. So it’s difficult to watch or to read the news these days, obviously, without hearing about AI, the impact of artificial intelligence or digital technology on children’s lives. Often this is phrased in negative terms, for example, the impact of screen time, as problematic as that phrase is, unless it’s impact on children’s concentration and well-being or on the escalating reports of child sexual abuse material or exposure to explicit images by children or sometimes the tragic results of cyberbullying that children are experiencing. And I think this is only surpassed perhaps by the growing attention on the impact of AI and emerging technologies specifically, not least in feeding these risks and in exacerbating and catalyzing harmful outcomes for children. Yet, as the title of this forum suggests, at the same time, that same technology can certainly offer a wealth of opportunities, many of which have already been alluded to by the previous speakers in the right context with the appropriate oversight, regulation and design to mitigate some of the potential for harms that the underlying fabric and construction algorithms and machine learning introduced for escalating into children’s everyday use of digital technology. These range from the use of predictive analytics and behavioral models for prevention, deterrence and response to cyber bullying, child sexual offending and other risks and to the use of machine learning and deep neural networks for scanning and hashing of child sexual abuse material. And each of these offers exciting and important guardrails to emerging adaptation of risks that exponentially changing technology and this rapidly changing speed of technology introduced into children’s lives. Now, I’ll just touch on a couple of examples of how digital technology, of how emerging technology using AI in different forms are being used to keep children safe online. Many of you, I’m sure, will have heard of some of these. BORN’s child sexual abuse material classifier is a machine learning based tool that can find new or unknown child sexual abuse material in both images and videos. When potential CSAM is flagged for review and the moderator confirms the decision, the classifier learns it. Now, it continually improves from those decisions and those moderator reviews in a feedback loop and it’s significant in that it uses AI to generate a departure from existing child sexual abuse material mechanisms which depend on existing reports, existing in databases, existing databases using hashing and matching technology. Rather, it detects new and unknown or unclassified child sexual abuse materials. That’s just one example. Another example, which is somewhat different but so important and often overlooked is the use of AI to support children in responding, dealing with issues they encounter online. The example I’ve got here is somebody, a Finnish and Swedish example, which has been developed to support children and adolescents who have potentially experienced online harassment. And often the chatbot, through which cases are analyzed and what it calls the first aid kit, are offered to children with step-by-step guidance on how to deal with each situation on a case-by-case basis. Importantly, it also has a mechanism to review by legal experts, ensuring that the safety and the child-friendliness of the system is ensured through constant human oversight, something which I touch on again later. The third example that I’d like to give is somewhat different from the previous examples, and something which I think we are only starting to pay enough attention to, and that’s looking at deterrence and behavior change for potential offenders. The redirection programs, and there’s a similar initiative out of the UK, uses machine learning to offer self-help programs to prevent child sexual offending, specifically through focusing on deterrence to use child sexual abuse material. It constantly and iteratively learns from information and data shared by users, and importantly is transparent in the collection and use of this data. Like the previous example, the somebody initiative, it’s also subject to oversight and training from human operators. Similar initiatives use predictive analytics to promote behavior change and help seeking among child sexual abuse offenders. Those are just three out of a multitude of examples of how emerging technology is being used, practical examples to keep children safe online. Yet, as much as these technologies in keeping children safe offer immense opportunities, so do these technologies themselves introduce risks to children. They’re not necessarily new risks, but rather new or exacerbated manifestations of existing risks that digital technologies present in children’s lives. These risks pose important questions for how the tech is designed, how it’s regulated, and how it’s legislated. For example, a couple of key questions that you need to take into account when thinking about this. What data is used for machine learning? How is it collected? What biases might it introduce into operations? How are these biases mitigated? How is data collected? Where is it stored? Who has access to it? intentionally and unintentionally. And what’s the purpose of that access to the data? Predictive models, machine learning required immense amounts of children’s data, the collection of storage, which might introduce new risks into children’s lives, might introduce new privacy and security risks for children. There are a number of ethical dilemmas around this. To what degrees are approaches such as predictive analytics and nudge techniques, when applied using AI, allowing for the personal freedom of choice, the autonomy of decision-making, rather than manipulating users. Particularly if those users, those children are not aware of the facts or fully understand the facts, how that technology is being used, how the data is being used, or how an intervention is being applied. And somewhat related to this is the ring fencing of data that is collected and used to inform these models for purposes of the minimization of purpose and use. Now, just, I think the moderator introduced or made reference to a couple of documents that UNICEF has produced, both the model legislation policy for AI and also UNICEF Innocenti have produced a number of papers that highlight some of these challenges. Just to carry on. Risks to children’s autonomy of consent. Technology deployed to detect new child sexual abuse material or grooming, for example, using classifiers, such as those provided in the example before, would not necessarily be able to differentiate between consensual sexual conversations or image sharing between two adolescents of a legal age in that jurisdiction, on the one hand, and otherwise unknown and unhashed child sexual abuse material, potentially introducing risks and biases to those children. Related to this, what are the underlying assumptions that underpin those algorithms? or the machine learning or what is age appropriate, contextually appropriate, culturally appropriate, consensual behavior, and how are those differences by context, by region, by location taken into account. What about the lack of accountability, transparency, and explainability? Machine learning systems are making decisions related to data and algorithmic determination. How and when are these decisions and explained to children in a way that they understand or to their parents as well? And do they detract from individual decision making? There are many more perceptual hashing potential for false positives. Some of these risks are more applicable to some forms of emerging technology than others and in particular uses compared to others, but most are common to some degree across the different forms of technology that use machine learning and deep learning. I don’t have five days, so I’m just going to draw attention to some of the key issues around regulation and particularly around addressing some of these challenges that the use of emerging technologies pose. I say I don’t have five days because this is a challenge that countries throughout the world and regions throughout the world are battling with and I think while we have some really good promising examples and some good examples relating to some of the challenges in legislation, it is an evolving conversation and something where I think, you know, it’s going to take us a while to get this framing and the regulatory and policy environment really sound in order to protect the collective rights of children. And I’m starting with the protective rights of children because underlying any legislational policy has to be an assurance that all technology and regulation are used in the mandate to protect and ensure that collective, equal, indivisible, and inseparable rights of children rather than prioritizing one right over the other. That means anticipating many of the potential unintended consequences that that technology might have down the road on children’s rights, collective rights. It ranges from the obligations of due diligence by industry, designing and implementing that technology to anticipate and address adverse effects on the rights of the child, to the responsibility of states to ensure that businesses adopt and adhere to these principles and are held accountable, and also to ensuring that states themselves respect and adhere to these principles and its mandates. Now, these are enshrined in the Convention on the Rights of the Child. They’re enshrined or they are certainly contained in the general comment number 25 and emerging sort of global guidance and treaties and instruments that have been designed to look at the protection of children’s rights. And a couple of more recent pieces of legislation and policy frameworks are starting to incorporate these effectively, and the Australian Online Safety Act, the UK Online Safety Bills, which is addressing this to some degree, and I say to some degree, the EU DSA, and the recent draft directive regulations that explicitly address the need to anticipate and detect online harms before they occur. What’s interesting, the recent EU directive calls for relevant judicial bodies to ensure that technology companies objectively and diligently assess, identify and weigh on a case-by-case basis, which is critical, not only the likelihood and seriousness of the potential consequences of services being misused for the types of online child sexual abuse at issue, but also the likelihood and seriousness of any potential negative consequences for other parties affected. One of the things I don’t have on the slide here that is also critical, that is contained in EU legislation as well as Australian legislation, at least, I’m sure it’s in others, require the importance of requiring third party independent and public annual audits to assess the impact on child rights as detailed in the CRC and general comment number 25. Moving on, some more examples. If age verification is to be adopted and most recent pieces of legislation are pointing to that contained in various EU documents contained in the draft UK Online Safety Bill, in Australian legislation, if age verification is to be adopted, and I’m saying if because we can’t say that age verification is perfect yet, it is not where it should be in order to function effectively, it is very likely to get there, then significant steps will need to be taken prior to its implementation. To ensure that the child population is equitably equipped with required identification or whatever is required in order to be able to verify the age and that certain populations are not excluded from that. So we need to make sure that age verifications do not reinforce existing biases or introduce new biases or exclusionary practices. Okay, Patrick, sorry to interrupt. I had to interrupt you but we are running out of time, so probably you could wrap up within one minute, please. I will wrap up within one minute. I’ve already spoken, almost there. I’ve already spoken about AI oversight bodies. Importantly, with attached mechanisms for redress and that’s just something I’ve got to say, we know from speaking to children throughout the world, one of their major concerns is that when they make reports or when AI is used or when automated report systems are used that there’s no response. We need to make sure there’s accountability for those responses. And then we need to make sure that regulation is designed in a way or policies are designed in a way that are not limited to existing emerging technologies that will provide but rather provide scope for future developments and definitions. The very last point I’d like to make, this is a. quote from recent paper by Amanda Lennard and Coddy Goins on common myths and evidence, and she makes a point that sometimes some technology cannot be fixed by more design. We cannot necessarily design our way out of problems. Sometimes those technologies should not be built at all. And I guess my final comment is, do we and can we and will we rely on emerging technologies and AI to fix the problems that often result from AI in the first place? Do we rely on AI to create the internet that we want? And that’s perhaps a question more than an answer. Thank you and apologies for going over time.

Moderator – Shenrui LI:
Okay, thank you, Patrick, for your thoughtful sharing. And we all know that is never an easy question to answer on this topic. And we’re all devoting to find the fine balance between the trade-off and against on-child online protection. We definitely want to hear from you more in future. But next, let’s welcome Professor Sun Yi from Kobe Institute of Computing Graduate School of Information Technology to share his thoughts on this topic. Please.

Sun Yi:
Okay. Good afternoon, everyone. Thank you for the Unicef China and CFS and CFC giving me the opportunity to share my experience and here. My name is Sun Yi. I’m Chinese, but I live in Japan more than 20 years. And now I’m Associate Professor of Graduate School Information Technology at the Kobe Institute of Computing. Today I want to share some of my personal experience of the internet safety technology for children in Japan. Next slide, okay. And yes, and first I want to share the internet use rate of youngs in Japan. About actual internet use by Japanese youngs, we find the date published by Cabinet Office Government of Japan in 2022. In this date, 98.5% of the youngs people response they are using the internet. The most used device is smartphone. And there’s a high rate, you can see the graph in the right, there is a high rate of internet use starting in the elementary school. Yes, and my daughter is In Japan, they also have a smartphone. Okay. In the digital age, ensure the safety of the children as online is a parliament’s concern. In Japan, constant efforts are underway to address this issue. For the government side, the Ministry of Internal Affairs and Communications runs a program called information security side for the citizens. They have a key vision to educating the children on safety internet practice. For the NPO initiative, the NPO information security forum co-hosts the internet safety education program with local authorities and organizations, extending the reach of the internet safety education to various communities. These efforts help make the internet safe for our kids, let them enjoy its benefits while protecting them from its dangers. From the technology side, various technologies are also offered. Field dust technology stands as a popular measure in safeguarding children’s internet use. Developed with smartphones application, all set in the network device at the school and homes, and some network service provider also provide the service. Moreover, smartphones parents’ controls help limited use time and accessible application. However, there are big challenges. When you use filter, it’s important to keep the filter applications database always up to date in order to provide the most effective production. Moreover, if you are using a network side filter, it’s very simple to change, switch to another network will disable the filter. For parentless controls, maybe fine even for me. and it’s the set-up of the controls, a smartphone. It’s very complicated. And often, the parents cannot to configure it correctly. And believe me, the kids is very smarter than we imagine. They always can find the way to disable the parent’s controls. More than one, I heard from some young boys, proud to tell me how he removed the restriction on his school’s PC. Okay, next slide. Use of big data and AI technology to protect the students safe use of internet is a new technology trend. Some, the AIST, the National Institute of Research, institute provides real-time AI analysis for children, abuse risk assignment, and support decision-making. Use this system, yes, they can provide the separate of abuse, a potential recurrent threat to help the kids. Okay, the next slide. Oh, this one, it’s okay. Sorry, okay. So at the same time, another research, in our research group is working on a value study about e-learning. So the open-source learning management system, we collect all the activity students while they are interacting with the system. All the click and what they watched, how long they watched some page. All the data collected to utilize, to patternize the students’ learning behaviors. Enables real-time personalized feedback, significantly improving the learning experience. Interestingly, we also developed a method to identify student, why they struggle with learning. Upon the investigation, we discovered that the struggle is all. . So we often not with learning materials, but distractions like online game. Next slide, please. So our research group is working and so we realise that is some support system, the same support system can be help ensure kids use internet safety. So we can use internet safety without need external set-up. It’s more easy to use. But the challenge is many school use learning platform like Google and Microsoft. This platform is very easy to create the learning materials, but even you haven’t IT skill, but don’t need us to connect the detail date and how the students use it. So if we want to enhance internet safety, we need to create the learning materials. So this is why we team up with the big platform providers, very important. In addition, there are many issues related to personal privacy when you state, there’s a trend off between protecting privacy and improving the date available, which will be a big challenge. Okay. That’s all my presentation. Thank you.

Moderator – Shenrui LI:
Thank you, everyone, for joining us today. We have a lot of questions about how to employ something we have already have to inform our practices. Next, please join us to welcome Ms. Wang Mengying, senior director of culture and content division of Tencent to share with us.

Mengyin Wang:
Thank you. Welcome. APPLAUSE I’m Wang Mengying, senior director of culture and content division of Tencent to share with us how to use the emerging technologies to keep children safe online as we are all aware, emerging digital technologies such as AI and large language models and rapid development and enables internet applications to scale and expand substantially, offering children a much richer digital world for learning, living, and engaging with the world. There are nearly 200 million netizens in China, with the internet adoption rate among the population reaching almost 100%. Children now access the internet as younger ages, with an evident rural and urban information gap, as well as a lack of risk awareness going online, given the large number of children under the age of 12. The digital world is changing rapidly and the development is always at the top of the agenda to protect the rest and interest in the digital world. Just now, Professor Sun Yi shared with us his research and thoughts on the children online protection in Japan, which is tremendously enlightening. And now I’m going to offer an industrial perspective on how the digital world is changing. Tencent firmly committed to the mission and vision, which is value for users and tech for good. Actually, we actively explore and improve our online safety solutions for minors, making full use of the company’s experience in information and digital technologies, and also mobilizing resources in societies at large. Tencent is committed to providing high-quality content for the young users, and this is what Tencent is working on at this moment. For status, we bring together quality contents and provide netizens with a sense to use the internet positively. In 2019, Tencent kicked off the T-mode in a handful of its products, consolidating high-quality content, so it’s not just for young users, but also for the general public. Tencent is also working with China’s foundation in the initiative of the master class for young, which top-class scientists, experts, and educators were invited to teach our young audience their lesson one in various areas. Nobel Prize-winning physicist Professor Yang Zhening, Mr. Chief designer of China’s spacecraft, and also Mr. President of the Chinese Academy of Sciences, and also the president of the Chinese Academy of Sciences, and also the president of the Society of Cultural Relics. The master classes were then turned into featured video lectures in 4K resolutions for circulation in the hope that these great materials can truly benefit more children, offer fascinating learning content, and inspire their future professional pursuits. Secondly, Tencent provides professional education and help young people in the digital age. Today’s young people need to keep a finger on the pulse of the emerging technologies so as to prepare for the future. On September 1st this year, Tencent released AI and programming lesson one, a pro bono project offering young users a free introductory course at home on AI and programming through a lightweight package on WeChat, notably for schools in rural areas with low-income children. Despite suffering limited teaching resources at school in March, Tencent’s mission prompted the advocates to adjust Blackboard’s and equipment. Our course can also take place in computer-free mode, allowing students to learn AI as their urban counterparts do, such as role-playing. This program deputed at already 21 palisades in 14 primary schools in around four cities, including Beijing, Shanghai, Shenzhen, and Guangzhou. Most students found it captivating to let the machine identify objects through simple labeling, and many teachers said it was very important to build up children’s creative mindset through such programs, enable them to spot potential questions, and also try to troubleshoot using contributing thinking. Certainly, as an advocate for scientific thinking, Tencent strives to guide minors to understand the internet and their own development in a positive manner. Curiosity of the young mind is very much treasured. They need diversified channels to explore the real world, and the proper education to experience the pervasive world beyond the screens. Starting from 2019, Tencent and Tsinghua University jointly carried out an annual popular science event named Tencent Young Science Fire. More than 2,000 young scientists and enthusiasts met face-to-face with top international scientists at the fair, and 40 million online views were impressed by the charms of the science. More and more youngsters in China now are taking scientists as new models and new adults, and also scientific explorations is becoming a new fashion. Helping the minors growing up healthily is a vision shared by the international community. In 2022, Tencent teamed up with a number of companies and organizations to compel and release guidelines for construction and internet applications for minors based on AI technology, bringing the synergies of the industry to promote online safety for children while developing digital technologies. Tencent is also exploring AI technology to improve the growing environment for minors. For example, the Israel Action Initiative by Tencent in 2020 offered charities, groups, and equipment manufacturers the Israel audio technology for free by improving user experience for those with cochlear implants, including the children. Children is the future and the hope of the mankind, and minors protection is by all means a common cause, as wonderful and daunting as it is. I’m pleased to share with you that in September, the AI program lesson one was rolled out in primary schools all around the country, so in the seas of AI and in the hearts of the many children in rural areas. The master classes for young now total 139 episodes, already reached 10 million young people with more than 100 million views so far. And at last, Tencent is looking forward to join hands with you all and to build in building a clean internet and a safe digital world for our children. Thank you for all.

Moderator – Shenrui LI:
Okay, thank you. sharing good practices from Tencent. And last but not least, to conclude this session let’s welcome Ms. Dora Giusti, the Chief Child Protection of UNICEF China country office to deliver the closing remarks. Please welcome.

DORA GIUSTI:
Distinguished experts and participants, as we bring this forum to a closure, allow me to thank you for your insightful ideas and also for the participation in this important forum on emerging technologies and child online protection. We live in an era driven by technologies such as artificial intelligence, blockchain, newer technologies that are poised to reshape our society. Globally, a child goes online for the first time every half a second. One in three internet users are children. We’ve heard today how this has positive connotations and impact in terms of learning and accessing information, but we’ve also heard that there are potential risks. Children may be exposed to harms like illegal content, privacy breaches, cyber bullying, and most seriously sexual abuse and exploitation through the use of technology. In 2022, the US-based National Center for Missing and Exploited Children received 32 million reports from around the world of suspected child sexual exploitation and abuse cases, an increase by 9% from 2021. Europol identified that the increase had been going on year by year, but during COVID, due to increased activity related to the lockdowns, this increase, this rise was particularly significant. As today we talked about emerging technologies, we need to consider that the use of immersive digital spaces, which are virtual environments that create a sense of presence or immersion for users and are facilitated by AI, may expose children to environments that are not designed for them, amplifying the risks of sexual grooming and exploitation, for instance, through the use by potential abusers of virtual rooms or personas that groom them. As technology evolves, immersive digital spaces will become more widespread in all fields and therefore the risk will also increase. We need therefore to understand in depth the implications and impact of the risks for children. On a positive note, we’ve heard today how AI technologies can offer help to address child sexual exploitation and abuse online. For instance, there exists an array of techniques based on AI that can be designed to detect different elements of the spectrum of illegal materials, behaviors, and practices linked to child sexual exploitation and abuse online. In addition to identifying preventing abuse, AI can also be used to support children who have experienced abuse, as we saw in Patrick’s presentation. this is positive for prevention, detection, and investigation of cases of child sexual abuse and exploitation online, the use of AI may also impact data protection, safeguards, and users’ privacy. Therefore, protecting child rights in the digital world and ensuring safety relies on striking a balance between the right to protection from harm as well as to privacy. This is one of the guiding principle of the UN Committee on the Rights of the Child, general comment number 25 on children’s rights in relation to the digital environment. This document has provided us with important principles to address the issue of child rights in a rapidly changing technology environment with the objective of preventing risks from becoming harms and to ensure children’s rights to be informed and at the same time become digital citizens. We know much more today than a decade ago. We heard today, echoing also the Secretary General’s words, Patrick’s and all the other speakers, that we need to cooperate. We need to work together. We need to look at different dimensions. We need to coordinate efforts at the legal and policy level, criminal justice, victim support, society and culture, the technology industry, investing in research and data. Before I conclude, allow me to emphasize some key actions to ensure that we have a safe digital environment for children, echoing also the words of the Secretary General and other speakers. First of all, we need to enhance our understanding on child safety within this evolving landscape. Increase evidence generation on trends, patterns and risks for children to be engaged in this evolving digital environment, but also to bring forward solutions that are effective. Secondly, we need to strengthen and develop laws, policies and standards that can evolve as rapidly as the changing environment and that can also assess the critical benefits and risks. We need harmonization of these legislation standards across the globe because this is a global problem and we need to involve experts from different disciplines. Third, we need tech companies to embrace responsible design principles and standards, prioritizing safety, privacy and inclusion of child users and conducting frequently child rights reviews for their products and services. And we’ve heard a few example during this forum. Fourth, we need to continue raising awareness on safety and digital literacy for children, parents, caregivers, society as a whole. We rally for a collective action by governments, private sector, civil society organization, international organization, academia, families and children themselves. Together, we must ensure emerging technologies create a safer, more accessible digital world for children. Thank you very much.

Moderator – Shenrui LI:
Okay, thank you, Dora, for the very comprehensive and encouraging closing remarks. As you mentioned, they’re all essential building blocks for enabling a safe digital environment for all children. And we hope today’s session has brought some enlightening insights to all of you, and thank you for your attention and participation. We are looking forward to seeing you in our session next year at IGF 2024. Okay, thank you all.

DORA GIUSTI

Speech speed

136 words per minute

Speech length

898 words

Speech time

397 secs

Mengyin Wang

Speech speed

153 words per minute

Speech length

1071 words

Speech time

420 secs

Moderator – Shenrui LI

Speech speed

141 words per minute

Speech length

797 words

Speech time

340 secs

Patrick Burton

Speech speed

167 words per minute

Speech length

2464 words

Speech time

884 secs

Sun Yi

Speech speed

147 words per minute

Speech length

904 words

Speech time

369 secs

Xianliang Ren

Speech speed

82 words per minute

Speech length

864 words

Speech time

635 secs

ZENGRUI LI

Speech speed

97 words per minute

Speech length

606 words

Speech time

375 secs

Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sophie

The importance of children’s digital rights in the digital world is underscored by the United Nations. These rights encompass provision, protection, and participation, which are essential for children’s empowerment and safety in online spaces. General Commendation 25 by the UN specifically emphasises the significance of children’s digital rights. It is crucial to ensure that children have access to digital resources, that they are protected from harm and exploitation, and that they have the opportunity to actively engage and participate in the digital world.

Young children often seek support from their parents and teachers when faced with online risks. They rely on them as safety contact persons for any issues they encounter on the internet. As they grow older, children develop their own coping strategies by employing technical measures to mitigate online risks. This highlights the importance of parental and teacher support in assisting children in navigating the digital landscape and promoting their online safety.

Furthermore, the design of online spaces needs to be tailored to cater to the diverse needs of different age groups. Children, as active users, should have digital platforms that are user-friendly and age-appropriate. Children are critical of long processing times for reports on platforms, advocating for more efficient and responsive mechanisms. It is important to consider children’s perspectives and ensure that their voices are heard when designing and developing online spaces.

Human resources play a significant role in fostering safe interactions online. Children are more likely to use reporting tools that establish a human connection, thereby enhancing their sense of safety and anonymity. The THORN study conducted in the United States supports this viewpoint and suggests that human involvement positively affects children’s willingness to report online incidents.

The introduction of the Digital Services Act in the European Union is seen as a critical tool for protecting children’s data. This legislation is set to come into force next year and aims to enhance data protection measures for individuals, including children, in the digital sphere. The act aims to address issues related to privacy, security, and the responsible use of digital services to safeguard children’s personal information.

Children’s rights by design and their active participation in decision-making processes regarding the digital environment should be prioritised. The United Nations’ General Comment 25 highlights the importance of young people’s participation in decisions about the digital space. The German Children’s Fund has also conducted research that emphasises the need for quality criteria for children’s participation in digital regulations. By involving children in decision-making, their perspectives and experiences can inform policies and ensure that their rights are respected and protected.

Creating safe socio-digital spaces for children and adolescents is of paramount importance. These spaces should not be primarily influenced by product guidelines or market-driven interests but rather should prioritise the well-being and safety of children and young people. Civil society and educational organisations are seen as key stakeholders in shaping and creating these safe social spaces for children to engage in the digital world.

In conclusion, a holistic approach is necessary to advocate for children’s rights in the digital world. This entails promoting children’s digital rights, providing support and guidance from parents and teachers, adapting the design of online spaces to meet the needs of different age groups, harnessing the potential of human resources for safe interactions, and enacting legislation such as the Digital Services Act for protecting children’s data. Children and young people should be actively involved in their rights advocacy and be included in decision-making processes in the digital environment. The involvement of all stakeholders, including governments, organisations, and communities, is essential in advancing and safeguarding children’s rights in the digital world.

Steve Del Bianco

In the United States, the states of Arkansas and California faced legal action for implementing a controversial rule that required legal consent from a parent or guardian for individuals under the age of 18 to use social media sites. Steve Del Bianco, representing an organization, sued the states and deemed this measure to be aggressive.

The sentiment expressed towards this rule was negative, as it was seen as a potential infringement upon the rights of children and young individuals. The argument presented was that broad child protection laws have the potential to restrict a child’s access to information and their ability to freely express themselves. Judges who presided over the case acknowledged the importance of striking a balance between child rights and the need for protection from harm.

Steve Del Bianco, in the course of the proceedings, emphasized the significance of considering the best interest of the child. He argued that the state’s laws should undergo a test that balances the rights of the child with their protection from potential harm. According to Del Bianco, these laws should not excessively limit a child’s access to information or their ability to express their beliefs.

Moreover, it became evident that lawmakers lacked an understanding of the broader implications of their laws. This led to legal challenges and raised concerns about the effectiveness of these policies. Del Bianco’s organization obtained an injunction that effectively blocked the states from enforcing these laws. It was suggested that lawmakers should be educated and gain a better understanding of the potential consequences of their legislative decisions to avoid such legal challenges.

To summarize, the implementation of a rule requiring verifiable consent for underage individuals to use social media sites in certain US states sparked controversy and legal disputes. The negative sentiment towards this rule arose from concerns about potential limitations on the rights of children to access information and express themselves freely. The need to strike a balance between child rights and protection from harm was highlighted. Additionally, the lack of understanding by lawmakers about the broader implications of their laws was emphasized, underscoring the importance of better education and consideration in the legislative process.

B. Adharsan Baksha

AI adoption among children can pose significant risks, particularly in terms of data privacy. The presence of chatbots such as Synapse and MyAI has raised concerns as these tools have the capability to rapidly extract and process vast amounts of personal information. This raises the potential for exposing children to various cyber threats, targeted advertising, and inappropriate content.

The ability of chatbots to collect personal data is alarming as it puts children at risk of having their sensitive information compromised. Cyber threats, such as hacking or identity theft, can have devastating consequences for individuals, and children are especially vulnerable in this regard. Moreover, the information gathered by chatbots can be used by marketers to target children with ads, leading to potential exploitation and manipulation in the digital realm.

Inappropriate content is another concerning aspect of AI adoption among children. Without proper safeguards, chatbots may inadvertently expose children to age-inappropriate material, which can have a negative impact on their emotional and psychological well-being. Children need a secure and regulated online environment that protects them from exposure to harmful content.

It is crucial to recognise the need to ensure a secure cyberspace for children. This includes focusing on the development and implementation of effective measures related to artificial intelligence, children, and cybersecurity. Governments, organisations, and parents must work together to mitigate the risks associated with AI adoption among children.

In conclusion, AI adoption among children brings forth various risks, with data privacy issues at the forefront. Chatbots that possess the ability to collect personal data may expose children to cyber threats, targeted advertising, and inappropriate content. To safeguard children’s well-being and protect their privacy, it is essential to establish a secure online environment that addresses the potential risks posed by AI technology. The responsibility lies with all stakeholders involved in ensuring a safe and regulated cyberspace for children.

Katz

Child rights are considered fundamental and should be promoted. Katz’s child-focused agency actively advocates for the promotion of child rights. However, conflicts between child rights and freedom of expression can arise. Survey results revealed such conflicts, underscoring the need for balance between these two important aspects.

Misunderstandings or misinterpretations of child rights are common and must be addressed. Some people mistakenly believe that virtual child sexual abuse material (CSAM/SEM) can prevent real crime, indicating a lack of understanding or misinterpretation of child rights. Efforts should be made to educate and provide correct information regarding child rights to combat these misunderstandings.

Regulating AI in the context of child protection is a topic under discussion. Many respondents believe that AI should be regulated to ensure child protection, particularly in relation to CSAM/SEM. However, opinions on this matter are mixed, highlighting the need for further dialogue and research to determine the most appropriate approach.

Public awareness of the risks and opportunities of AI needs to be raised. Approximately 20% of respondents admitted to having limited knowledge about AI matters and associated risks. This signifies the need for increased education and awareness programs to ensure the public understands the potential benefits and dangers of AI technology.

Japan currently lacks regulations and policies concerning AI-generated imagery. Katz’s observation reveals a gap in the legal framework, emphasizing the necessity of establishing guidelines and regulations to effectively address this issue.

There is also a need for greater awareness and information dissemination about AI developments. Katz suggests that the media should take more responsibility in informing the public about advancements and implications of AI. Currently, people in Japan are not adequately informed about ongoing AI developments, highlighting the need for improved communication and awareness campaigns.

Katz recommends that the public should gather information from social networking services (SNS) about AI developments. This highlights the importance of utilizing various platforms to stay updated and informed about the latest developments in the field of AI.

A rights-based approach is crucial in designing regulation policies. It is essential to ensure that the rights of children and humans are protected in the digital world. Advocating for the enhancement of child and human rights in the digital sphere is a vital aspect of creating an inclusive and safe environment.

In conclusion, promoting child rights is essential, although conflicts with freedom of expression may arise. Addressing misunderstandings and misinterpretations of child rights is crucial. The regulation of AI in the context of child protection requires further examination and consideration. Public awareness about the risks and opportunities of AI needs to be improved. Japan lacks regulations for AI-generated imagery, and greater awareness about AI developments is necessary. Gathering information from SNS can help individuals stay informed about AI happenings. A rights-based approach is needed when designing regulation policies, and enhancing child and human rights in the digital world is vital.

Amy Crocker

During the event, the speakers highlighted the significant importance of children’s digital rights in creating a safe and secure online environment. They stressed that children’s rights should be protected online, just as they are in the offline world. General Comment Number 25 to the UN Convention on the Rights of the Child was mentioned as a recognition of the importance of children’s digital rights, with state parties being obligated to protect children from all forms of online exploitation and abuse.

In terms of internet governance, the speakers advocated for a proactive and preventive approach, rather than a reactive one. They argued that governments often find themselves playing catch-up with digital issues, reacting to problems after they have already occurred. A shift towards a preventive model of online safety was deemed necessary, which involves designing for safety before potential issues arise.

Effective implementation was seen as the key to turning digital policies into practice. The speakers emphasized the need to understand how to implement policies in specific local contexts to realize the full benefits. They argued that implementation is crucial in ensuring that children’s rights are protected and upheld online.

The need for public understanding of technology and its risks and opportunities was also highlighted. It was mentioned that improving public understanding is necessary for individuals to make informed decisions about their online activities. Empowering parents to understand technology and facilitate their children’s rights was seen as an important aspect of ensuring a safe online environment for children.

Trust was identified as a crucial element in the digital age, particularly with the growing reliance on technology. The speakers discussed the importance of trust against the backdrop of emerging risks related to data breaches, data privacy problems, and unethical practices. Building and maintaining trust were seen as essential for a secure online environment.

Safeguarding the younger generations online was viewed as a collective responsibility. The speakers stressed that parents and guardians cannot solely shoulder this responsibility and must have a certain level of knowledge of online safety. The importance of all stakeholders, including businesses, industries, and governments, working together to protect children’s rights online was emphasized.

Regulation was seen as an important tool for keeping children safe online. However, it was noted that regulation alone is not a solution for the challenges posed by emerging technologies. The speakers argued that both regulation and prevention through education and awareness are crucial in effectively addressing these challenges.

Differentiated regulation based on context was advocated for. The speakers highlighted that different online services offer different opportunities for children to learn and be creative. They also emphasized that children’s evolving capacities are influenced by various factors, such as their geographical and household contexts. Understanding the link between online and offline contexts was seen as essential in developing effective regulation.

Transparency, a culture of child rights, and collaborative efforts were identified as crucial for the protection of children’s rights online. All stakeholders, including businesses, industries, and governments, were urged to work together and have a shared understanding of child rights. The need for transparency in their commitment to protecting child rights was emphasized.

The challenges faced by developing countries in terms of technology and capacity building were acknowledged. The speakers discussed the specific challenges faced by countries like Bangladesh and Afghanistan in terms of accessing technology and building the necessary capacity. Opportunities for codes of conduct that can be adapted to different contexts were also explored.

Consulting children and young people was highlighted as an important approach to addressing online safety issues. The speakers emphasized the need to understand how children and young people feel about these issues and to learn from approaches to regulation that have been successful.

Amy Crocker, one of the speakers, encouraged people interested in children’s rights issues to join the Dynamic Coalition and continue similar conversations. Flyers and a QR code were mentioned as ways to sign up for the mailing list. The importance of creating more space within the IGF for discussing children’s rights issues was also emphasized.

In conclusion, the event highlighted the significant importance of protecting children’s digital rights and creating a safe and secure online environment for them. It emphasized the need for proactive and preventive internet governance, effective implementation of digital policies, public understanding of technology, empowering parents, trust, collective responsibility, regulation along with education and awareness, differentiated regulation based on context, transparency, and collaborative efforts. The challenges faced by developing countries were acknowledged, and the involvement of children and young people was seen as essential in addressing online safety issues.

Ahmad Karim

In a discussion concerning the design of advancing technology, Ahmad Karim, representing the UN Women Regional Office for Asia and the Pacific, stressed the importance of carefully considering the needs of girls, young adults, females, and marginalized and fragile groups. It was noted that, in such discussions, there is often a tendency to overlook gender-related issues, which indicates a gender-blind approach.

Another argument put forth during the discussion underscored the significance of making the design of the metaverse and technologies more considerate towards marginalized and fragile groups, especially girls and women. The rapid advancements in technology were acknowledged as having disproportionate effects on females and marginalized sectors of society. It was highlighted that national laws frequently do not adequately account for the specific needs and challenges faced by these groups.

The supporting evidence provided includes the fact that girls, young adults, and women are often underrepresented and encounter barriers in accessing and benefiting from technological advancements. Additionally, marginalized and fragile groups, such as those from low-income backgrounds or with disabilities, are particularly vulnerable to exclusion and discrimination in the design and implementation of technology.

The conclusion drawn from the discussion is that there is an urgent need for greater attention and inclusivity in the design of advancing technology. Consideration must be given to the unique needs and challenges faced by girls, young adults, females, and marginalized and fragile groups. It is imperative that national laws and policies reflect these considerations and ensure that these groups are not left behind in the technological progress.

This discussion highlights the significance of addressing gender inequality and reducing inequalities in the design and implementation of technology. It sheds light on the potential pitfalls and repercussions of disregarding the needs of marginalized and fragile groups, and calls for a more inclusive and equitable approach to technological advancements.

Tasneet Choudhury

During the discussion, the speakers highlighted the importance of ensuring the protection and promotion of child rights within AI strategies, policies, and ethical guidelines. They particularly emphasized the significance of these efforts in developing countries, such as Bangladesh. Both speakers stressed the need to include provisions that safeguard child rights in AI policies, especially in nations that are still in the process of development.

The speakers also connected their arguments to the Sustainable Development Goals (SDGs), specifically SDG 4: Quality Education and SDG 16: Peace, Justice, and Strong Institutions. They proposed that by embedding measures to protect child rights in AI strategies and policies, countries can contribute to the achievement of these SDGs. This link between AI development and the attainment of global goals highlights AI’s potential role in promoting inclusive and sustainable development.

Although no specific supporting facts were mentioned during the discussion, the speakers expressed a neutral sentiment towards the topic. This indicates their desire for a balanced and equitable approach to integrating child rights into AI strategies and policies. By addressing this issue neutrally, the speakers emphasized the need for a comprehensive and ethical framework that protects the rights and well-being of children in the context of AI development.

One notable observation from the analysis is the focus on child rights in the discussion of AI policies. This underscores the growing recognition of the potential risks and ethical implications that AI may pose for children, particularly in countries with limited resources and regulations. The emphasis on child rights serves as a reminder that as AI continues to advance, it is crucial to ensure that these technologies are developed with the best interests of children in mind.

In conclusion, the discussion underscored the importance of protecting and upholding child rights within AI strategies, policies, and ethical guidelines. The speakers highlighted the specific significance of this endeavor in developing countries like Bangladesh. The incorporation of child rights in AI policies aligns with the Sustainable Development Goals of Quality Education and Peace, Justice, and Strong Institutions. The neutral sentiment expressed by both speakers indicates the need for a balanced approach to addressing this issue. Overall, the discussion shed light on the need for a comprehensive and ethical framework that safeguards the rights of children amidst the development of AI technologies.

Jenna

Children today are immersed in the online world from a very young age, practically being born with access to the internet and technology. This exposure to the digital age has led to an increased need for trust in this new environment. Trust is seen as a cornerstone of the digital age, particularly as we rely on technology for almost every aspect of our lives. Without trust, our reliance on technology becomes more precarious.

Creating a reliable and ethical digital environment for younger generations requires imparting fundamental digital knowledge and nurturing trust. Building trust and instilling digital literacy are essential steps in safeguarding children online. Parents play a crucial role in this process, but it is also a shared responsibility that extends to all stakeholders. Informed parents are key as they are often the first line of defense for children facing challenges online. However, they cannot do it alone, and it is important for all stakeholders to be aware of their responsibility in protecting younger generations.

The challenges faced by teenagers today in the online world are more multifaceted and harmful than ever before. Cyberbullying has evolved from early stages of internet flaming and harassment via emails to more advanced forms like cyberstalking and doxing. The rise of generative AI has made creating hate image-based abuse relatively easier, contributing to a growing concern for online safety. It is important to address these issues effectively and efficiently to ensure the well-being of young people online.

The approach to online safety varies across different jurisdictions, with each adopting their own strategies and measures. For example, Australia has an industry code in place, while Singapore employs a government-driven approach. This diversity highlights the need for clear definitions and standards regarding online safety threats. A cohesive understanding of these threats is imperative to effectively combat them and ensure consistency across different regions.

Capacity building is essential for addressing the challenges of the digital age. Empowering young people and ensuring their voices are heard can lead to a better understanding of their needs and concerns. Additionally, understanding the technical aspects of internet governance is vital in developing effective solutions to address issues of online safety and security.

Inclusion and diversity are crucial in creating a safe online space. It is important to include the voices of different stakeholders and ensure that everyone has a seat at the table. Language can be a barrier, causing loss in translation, so efforts must be made to overcome this and make conversations more inclusive.

The perspective and insights of young people are valued in discussions on gender and technology. Gaining fresh and unique insights from the younger generation can contribute to the development of more inclusive and gender-responsive approaches. Jenna, a participant in the discussion, highlighted the need to engage young people in discussions related to explicit content and self-expression, as well as providing safe spaces for their voices to be heard.

Modernizing existing legal frameworks is seen as a more effective approach to addressing the impacts of AI and other technological advancements. Rather than a single legislative solution, updating legislation such as the Broadcasting Act, Consumer Protection Act, and Competition Act is seen as crucial in integrating present issues and adapting to the digital age.

Collaboration among stakeholders is essential for success. Capacity building requires research support, and the cooperation of multiple stakeholders is crucial in terms of legislation and regulations. By working together and leveraging each other’s strengths, stakeholders can more effectively address the challenges faced in the digital world.

Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical community possesses valuable knowledge and insights that can contribute to the development of effective policies. However, it is acknowledged that their involvement may not always be the best fit for all policy-making decisions. Striking a balance between technical expertise and broader considerations is key to ensuring policies are robust and comprehensive.

In conclusion, children today are growing up in a digital age where they are exposed to the internet and technology from a young age. Building a reliable and ethical digital environment requires imparting digital knowledge and nurturing trust. Safeguarding younger generations online is a shared responsibility, requiring the involvement of all stakeholders. The challenges faced by teenagers today, such as cyberbullying and hate speech, are advanced and harmful. Different jurisdictions have varying approaches to online safety, emphasizing the need for clear definitions and standards. Capacity building and the inclusion of diverse voices are crucial in creating a safe online space. The perspective and insights of young people are valuable in discussions on gender and technology. Modernizing existing legal frameworks is advocated, and engaging young people in discussions on explicit content and self-expression is important. Collaboration among stakeholders and the inclusion of the technical community in policy-making processes are considered essential for success in addressing the impacts of the digital age.

Larry Magid

In the analysis, the speakers engage in a discussion regarding the delicate balance between protecting children and upholding their rights. Larry argues that protection and children’s rights are sometimes in conflict. He cites examples of proposed US laws that could suppress children’s rights in the guise of protection. Larry also highlights the UN Convention, which guarantees children’s rights to freedom of expression, participation, and more.

On the other side of the debate, another speaker opposes legislation that infringes upon children’s rights. They point out instances where such legislation may limit children’s rights, such as requiring parental permission for individuals under 18 to access the internet. Their sentiment towards these laws is negative.

Lastly, a speaker emphasises the need for a balanced approach to regulation, one that can protect and ensure children’s rights while acknowledging the inherent risks involved in being active in the world. They argue for a fair equilibrium between rights and protection. Their sentiment remains neutral.

Throughout the analysis, the speakers recognize the challenge in finding the proper balance between protecting children and preserving their rights. The discussion highlights the complexities and potential conflicts that arise in this area, and stresses the importance of striking a balance that safeguards children’s well-being while still allowing them to exercise their rights and freedoms.

Katarzyna Staciewa

In a recent discussion focusing on the relationship between the metaverse and various sectors such as criminology and child safety, Katarzyna Staciewa, a representative from the National Research Institute in Poland, shared her insights and emphasized the need for further discussions and research in criminology and other problematic sectors. Staciewa drew upon her experiences in law enforcement and criminology to support her argument.

Staciewa discussed her research on the metaverse, highlighting its significance in guiding the development of developing countries. The metaverse, an immersive virtual reality space, has the potential to shape the future of these countries by offering new opportunities and addressing socio-economic challenges. Staciewa’s positive sentiment towards the metaverse underscored its potential as a tool for fostering quality education and promoting peace, justice, and strong institutions, as outlined in the relevant Sustainable Development Goals (SDGs).

However, concerns were raised during the discussion regarding the potential misuse of the metaverse and AI technology, particularly in relation to child safety. Staciewa analyzed the darknet and shed light on potentially sexually interested groups involving children, revealing alarming trends. The risks associated with the metaverse lie in the possibility of AI-generated child sexual abuse material (CSAM) and the potential for existing CSAM to be transformed into virtual reality or metaverse frames. The negative sentiment expressed by Staciewa and others reflected the urgency to address these risks and prevent harm to vulnerable children.

The speakers placed strong emphasis on the importance of research in taking appropriate actions to ensure child safety. Staciewa’s research findings highlighted the constant revictimization faced by child victims, further underscoring the need for comprehensive measures to protect them. By conducting further research in the field of child safety and child rights, stakeholders can gain a deeper understanding of the challenges posed by the metaverse and AI technology and develop effective strategies to mitigate these risks.

In conclusion, the discussion on the metaverse and its impact on various sectors, including criminology and child safety, highlighted the need for more research and discussions to harness the potential of the metaverse while safeguarding vulnerable populations. While acknowledging the metaverse’s ability to guide the development of developing countries and the positive impact it can have on education and institutions, concerns were expressed about the possibility of misuse, particularly with regards to child safety. The importance of research in understanding and addressing these risks was strongly emphasized, particularly in the context of the continuous victimization of child victims.

Patrick

During the discussion on child safety and online policies, the speakers emphasised the importance of taking a balanced approach. While regulation was acknowledged as a crucial tool in ensuring child safety, the speakers also highlighted the significance of prevention, education, and awareness.

It was noted that regulation often receives more attention due to its visibility as a commitment to child safety. However, the lack of proportional investment in prevention aspects, such as awareness-raising and education, was seen as a gap.

Addressing the specific needs of children in relation to their evolving capacities and contexts was deemed crucial. A differentiated approach to regulation was recommended, taking into consideration the diverse services and opportunities available for children to learn digital skills. The household environment, geographical context, and access to non-digital services were identified as factors that influence children’s evolving capacities.

A unified understanding and commitment to child rights were highlighted as prerequisites for effective regulation. The speakers pointed out that there is often a significant variation in how child rights are interpreted or emphasised in different regional, cultural, or religious contexts. It was stressed that a transparent commitment and culture of child rights are necessary from industries, businesses, and governments for any successful regulation to be established.

The tendency of developing countries to adopt policies and legislation from key countries without critically analysing the unique challenges they face was criticised. The speakers observed this trend in policy-making from Southern Africa to North Africa and the Asia Pacific region. The need for developing countries to contextualise policies and legislation according to their own specific circumstances was emphasised.

An issue of concern raised during the discussion was the reluctance of countries to update their legislation dealing with sexual violence. The process for legislation update was noted to be lengthy, often taking up to five to ten years. This delay was seen as a significant barrier to effectively addressing the issue and protecting children from sexual violence.

The role of industries and companies in ensuring child safety was also highlighted. It was advocated that industries should act as frontrunners in adopting definitions and staying updated on technologically enhanced crimes, such as AI-generated child sexual abuse material (CSAM). The speakers argued that industries should not wait for national policies to change but should instead take initiative in adhering to certain definitions and guidelines.

The importance of engaging with children and listening to their experiences and voices in different contexts was emphasised. The speakers stressed that children should have a critical say in the internet space, and adults should be open to challenging their own thinking and assumptions. Meaningful engagement with children was seen as essential to understanding their needs and desires in using the internet safely.

In addition, the speakers highlighted the need for cross-sector participation in discussing internet safety. They recommended involving experts from various fields, such as criminologists, educators, social workers, public health specialists, violence prevention experts, and child rights legal experts. A holistic and interdisciplinary approach was deemed necessary to address the complex issue of internet safety effectively.

Overall, the discussion on child safety and online policies emphasised the need for a balanced approach, taking into account regulation, prevention, education, and awareness. The importance of considering the evolving capacities and contexts of children, a unified understanding and commitment to child rights, and the role of industries and companies in taking initiative were also highlighted. Additionally, the speakers stressed the significance of engaging with children and adopting a cross-sector approach to ensure internet safety.

Andrew Campling

The discussions revolve around the significant impact that algorithms have on child safety in the digital realm. One particularly tragic incident occurred in the UK, where a child took their own life after being exposed to suicide-relevant content recommended by an algorithm. This heartbreaking event highlights the dangerous potential of algorithms to make malicious content more accessible, leading to harmful consequences for children.

One key argument suggests that restrictions should be placed on surveillance capitalism as it applies to children. The aim is to prevent the exposure of children to malicious content by prohibiting the gathering of data from known child users on platforms. These restrictions aim to protect children from potential harms caused by algorithmic recommendations of harmful content.

Another concerning issue raised during these discussions is the use of AI models to generate Child Sexual Abuse Material (CSAM). It is alarming that in some countries, this AI-generated CSAM is not yet considered illegal. The argument is that both the AI models used in generating CSAM and the circulation of prompts to create such content should be made illegal. There is a clear need for legal measures to address this concerning loophole and protect children from the creation and circulation of CSAM.

Furthermore, it is argued that platforms have a responsibility towards their users, particularly in light of the rapid pace of technological change. It is suggested that platforms should impose a duty of care on themselves to ensure the safety and well-being of their users. This duty of care would help manage the risks associated with algorithmic recommendations and the potential harms they could cause to vulnerable individuals, especially children. Importantly, the argument highlights the difficulty regulators face in keeping up with the ever-evolving technology, making it crucial for platforms to step up and take responsibility.

In conclusion, the discussions surrounding the impact of algorithms on child safety in the digital realm reveal significant concerns and arguments. The tragic incident of a child’s suicide underscores the urgency of addressing the issue. Suggestions include imposing restrictions on surveillance capitalism as it applies to children, making AI-generated CSAM illegal, and holding platforms accountable for their users’ safety. These measures aim to protect children and ensure a safer digital environment for their well-being.

Amyana

The analysis addresses several concerns regarding child protection and the legal framework surrounding it. Firstly, there is concern about the unequal application of international standards for child protection, particularly between children from the Global South and the Global North. This suggests that children in developing countries may not receive the same level of protection as those in more developed regions. Factors such as resource distribution, economic disparities, and varying levels of political commitment contribute to this discrepancy in child protection standards.

Another notable concern highlighted in the analysis is the inadequacy of current legislation in dealing with images of child abuse created by artificial intelligence (AI). As technology advances, AI is increasingly being used to generate explicit and harmful content involving children. However, existing laws appear ineffective in addressing the complexities associated with such content, raising questions about the efficacy of the legal framework in the face of rapidly evolving technology.

On a positive note, there is support for taking proactive measures and demanding better protection measures from online platforms. Efforts are being made to provide guidelines and recommendations to agencies working with children and adolescents, aimed at enhancing child protection in the digital space and promoting the well-being of young individuals online. This demonstrates an awareness of the need to keep pace with technological advancements and adapt legal frameworks accordingly.

Overall, the analysis underscores the importance of addressing the unequal application of international standards for child protection and the challenges posed by AI-generated images of child abuse. It emphasizes the need for updated legislation that aligns with emerging technologies, while also advocating for proactive measures to enhance protection on online platforms. These insights provide valuable considerations for policymakers, child protection agencies, and stakeholders working towards establishing robust and inclusive frameworks for child protection globally.

Jim

The discussion emphasized the importance of regulating and supporting internet technology in developing countries, as evidenced by the interest and concern of participants from regions such as Bangladesh and Kabul University. This real-world engagement highlights the relevance and urgency of the issue in developing regions.

Jim, during the discussion, summarised and acknowledged the questions raised by participants from developing nations, demonstrating his support for addressing the challenges and needs specific to these countries. He stressed the need to consider these perspectives when dealing with the issues surrounding internet technology in developing countries. This recognition of diverse needs and experiences reflects a commitment to inclusivity and ensuring that solutions are tailored to the circumstances of each country.

The overall sentiment observed in the discussion was neutral to positive. This indicates a recognition of the importance of regulating and supporting internet technology in developing countries, and a willingness to address the challenges and concerns associated with it. The positive sentiment suggests support for efforts to enhance access to, and the effectiveness of, internet technology in these regions, contributing to the United Nations Sustainable Development Goals of Industry, Innovation and Infrastructure (SDG 9) and Reduced Inequalities (SDG 10).

In conclusion, the discussion highlights the crucial role of regulation and support for internet technology in developing countries. The participation and engagement of individuals from these regions further validate the significance and necessity of addressing their specific needs and challenges. By considering the perspectives of those in developing nations and taking appropriate actions to bridge the digital divide, we can work towards achieving a more inclusive and equitable global digital landscape.

Liz

In a recent discussion on online safety, Microsoft emphasised its responsibility in protecting users, particularly children, from harmful content. They acknowledged that tailored safety measures, based on the type of service, are necessary for an effective approach. However, they also highlighted the importance of striking a balance between safety and considerations for privacy and freedom of expression.

One speaker raised an interesting point about the potential risks of a “one size fits all” approach to addressing online safety. They argued that different services, such as gaming or professional social networks, require context-specific interventions. Implementing broad-scoped regulation could inadvertently capture services that have unique safety requirements.

Both legislation and voluntary actions were deemed necessary to address children’s online safety. Microsoft highlighted their focus on building safety and privacy by design. By incorporating safety measures from the very beginning during product development, they aim to create a safer online environment for users.

However, concerns were also raised about the current state of legislation related to online safety and privacy. It was noted that legislative efforts often lack a holistic approach and can sometimes contradict each other. Some safety and privacy legislations contain concepts that may not optimise online safety measures.

Microsoft also recognised the risks posed by AI-generated child sexual abuse material (CSAM) and emphasised the need for responsible AI practices. They are actively considering these risks in their approach to ensure the responsible use of AI technologies.

The discussion strongly advocated for the importance of regulation in addressing online harms. Microsoft believes that effective regulation and a whole society approach are crucial in tackling the various challenges posed by online safety. They emphasised the need for ongoing collaboration with experts and stakeholders to continuously improve online child safety measures and access controls.

Another key aspect discussed was the need for a better understanding of the gendered impacts of technology. It was highlighted that current research lacks a comprehensive understanding of youth experiences, particularly for females and different cultures. Additional research, empowerment, and capacity building were suggested as ways to better understand the gendered implications of technology.

In conclusion, the discussion stressed the importance of collaboration, open-mindedness, and continuous learning in addressing online safety. Microsoft’s commitment to protecting users, especially children, from harmful content was evident in their approach to building safety and privacy by design. The speakers highlighted the complexities of the topic and emphasised the need for context-specific interventions and effective regulation to ensure a safer online environment for all users.

Session transcript

Amy Crocker:
Thank you very much. Sorry for the short delay, but it was a good opportunity to bring more people into the room. So thank you very much for being here for the 2023 session of the Dynamic Coalition on Children’s Rights in the Digital Environment. I know you can go and navigate many paths in the agenda, the impressive agenda of the IDF, and so we’re really happy that you are here. There are also, as we speak, some similar child rights-focused sessions going on, so thank you for choosing this, and I hope that you’ll have the opportunity to perhaps watch online some of the other sessions and engage with the speakers in those sessions as well. So as we all know, the theme for this year’s IDF is the internet we want empowering all people, and the Dynamic Coalition, which I will explain a little bit and we can talk about throughout this session, has a clear starting point that for us as children’s rights advocates, there can be no empowerment on or through the internet without a foundation of safety, and the internet we want and the internet we need is one where children’s rights are guaranteed, and that includes speaking to them about their views about their digital lives and the online world. And of course that’s not just me or our coalition or my fellow panelists saying this. We can also refer, and for those of you coming from the previous session on digital rights in different regions around the world, we have now something called the General Comment Number 25 to the UN Convention on the Rights of the Child that recognizes children’s rights in relation to the digital environment, and that was adopted two years ago. And this obliges state parties to protect children in digital environments from all forms of exploitation and abuse. So what this means is the rights that children have in the offline world, if we can call it that, are also guaranteed online, and I think this is crucial for the context in which we are meeting today. So in that context, when we talk about the AI, the metaverse, new technologies, frontier technologies, as we’ve seen at this IDF, it’s clearly at the forefront of discussion. It’s across the agenda very heavily. There are a lot of sessions talking about regulation, frameworks, guidance, opportunities, risks of these kind of new technologies, and we know they are increasingly embedded in the lives of digital platform users worldwide. So we see that legislation, digital policy, safety policy, design practice, digital experiences are at a critical moment of transition, and innovation is not new. It’s core to our human societies. It does actually define us. There is a pace of change, perhaps, that we’re seeing right now that requires us to really stop and pay attention, and consider what these implications may be, and how we can harness the positive opportunities for the next generations. Yes, indeed, I think we all agree, and a starting point for this panel, we will be balancing this conversation about the transformative power of technologies, but also looking at how we mitigate the risks, and address harms, some of which we can talk about very directly and concretely today, some of which we can probably predict, and some which we cannot predict. This is the nature of the evolving environment. We do know that governments often find themselves playing catch-up. There is a huge regulatory debate right now, but in many ways, in too many ways, it’s responsive to the problem after it’s happened. We’ll be talking a little bit about moving to a more preventive upstream model of safety by design. How do we prevent things happening before they take place, and how can we build, at the same time, those environments and communities online for children and everyone to thrive, and be well, and progress. We’ve also seen that some online companies, technology providers, are not equal in that understanding your commitment to design. I think that’s something that’s crucial for us to address. How can we all work with companies of different sizes to actually scale, and share best practice and knowledge in these areas. The questions I think that we need to ask is how we move from talk to action, how we move from policy to practice. This is also something that has come up in many of the sessions I have attended. We need to act. We need to be smart about the policies and laws we develop, but really the proof is in the implementation. The proof is in how we actually use these for the benefit of society, and how we localize these and make them relevant to the specific context in which we are implementing these policies and practices. We also need to think very seriously about how we assess and mitigate the risks of new technologies, so that we can assure safety, but also champion opportunity that tech provides for millions, billions of people living on this planet. Some of the goals of the session are to identify the main impacts of AI and new technologies on children globally, understand, hear from one young panelist, but also I see some younger participants in the room. I’m really looking forward to hearing your views on this, and to raise awareness for a child rights-based approach to AI-based service development. Perhaps at the end I’ll take the opportunity to talk a little bit about the dynamic coalition on children’s rights as a vehicle within the IGF to really bring together organizations interested in ensuring children’s rights are mainstream within Internet governance policies worldwide, and we would love you to join us. We have some flyers and some QR codes, so you can’t escape. You don’t have to write anything down, and you can consider joining the coalition so we can actually move forward. I’m really pleased to introduce our speakers as well. We have two speakers online and three speakers sitting next to me. Perhaps I’ll start with the online participants, since they’ve joined us very early, so they get the special prize. I have Patrick Burton, who is the Executive Director of the Center for Justice and Crime Prevention in South Africa. Patrick, good morning. I have Sophie Poehler. She’s a media education consultant at the German Children’s Fund. Thank you very much for joining us, Sophie. Here in the room I have, to my right, Liz Thomas, who is the Director of Public Policy and Digital Safety at Microsoft. I have Jenna Fung, who is a youth advocate for youth-led initiatives online. She’s representing the Asia-Pacific IGF, Youth IGF, and she’s part of the Youth Track Organizing Team as well. And last, but very much not least, I have Kats Takeda, who is the Executive Director of Child Fund Japan, who can also give us a perspective from the wonderful country in which we are attending this event. So thank you very much. Before we go forward, I wanted to just take a show of hands, because this is a round table. The seating makes it a little bit harder to make it a round table. So, you know, a bit of audience participation. So perhaps you could raise your hand if you’re from civil society in the room. And from government? Raised hand. From private sector? Good to have you. And from any other? And from the different regions? We have some, I think, some colleagues from Asia-Pacific region and European. Yeah. From any colleagues from Middle East? Hello. Thank you for joining us. Latin America? No. And Europe? Some Europe? Well, I’m from Europe, so. Yeah. And the Americas. Yeah. Great. Great to have you. So we have, we can have a global conversation, I think. We are lacking some regions, but it’s really great to have you all here. Thank you. Thank you for being here. I should introduce myself. My name is Amy Crocker. I work for an organization, please, called ECPAT International. We are a global civil society network dedicated to ending the sexual exploitation of children. And I’m here as moderator today, as the chair or coordinator of the Dynamic Coalition on Children’s Rights in the digital environment here at the IGF. So this is a 90-minute roundtable. You’ve already, I’ve already taken up a lot of the time, so we will go on. We’re going to organize this in terms of three themes. And what we’d like is, you know, within each theme to hear your reflections, take your questions. So we make this as much of a conversation as we can. And the first theme is broad, but crucial. And it’s on safety and children’s rights being a cornerstone of the internet that we want to need. This is our proposition, but it’s also a challenge, I think, to the internet governance community and to governments and companies and society worldwide. So what I’m going to do is perhaps start with you, Sophie, online, if I may. And perhaps you could tell us a little bit about your views on why children’s rights are so, digital rights, are so fundamental to our construction of a safe, equitable and secure online world.

Sophie:
Yes, thanks, Amy. And hello, everybody from Germany. It’s very early here in the morning. But I hope I can give you some insights in the German perspective on children’s rights in the digital world. Maybe just a quick background. I work in the coordination office for children’s rights on the German Children’s Fund, and we accompany the strategies of the German Children’s Union and the Council of Europe on the Rights of the Child in Germany here. And among other things with a strong focus on children’s rights in the digital world. Yes, Amy, you’ve already mentioned it. The General Commend 25 published by the UN Committee on the Rights of the Child in 2021 sums up the importance of children’s digital rights and provides a very comprehensive framework for this context. And yeah, the rights are crucial really to protect children from harm, but also promote their access to information and empower them with digital skills. And also important is that the rights of provision, protection and participation must be given equal consideration and are really of fundamental importance for the digital world. So upholding these rights is not only an ethical imperative, but also an investment in the well-being of future generations and the society as a whole. And maybe a quick German perspective, which is quite concrete. We have, as German Children’s Fund, we have looked into needs and challenges voiced by children when it comes to risks arising from online interaction. We have analyzed the research field on this question on how children deal with interaction risks such as insults, bullying or harassment in online environments. And therefore we’ve conducted a meta-research and compiled an overview of relevant studies with a focus on German children, how they develop coping strategies and how we can promote this, focused on the age group from nine to 13. And we’ve gained some interesting findings from the reviewed studies when it comes to children’s perspectives on online safety. Just a quick disclaimer, this was not in the context of artificial intelligence, but we still consider the results relevant to our discussion today. And I’d like to pick some important points for the discussion today and later maybe. The younger the children, the more important it is for them to have a social safety net. In case of online risks, they particularly want support from parents, confidants or teachers. And especially, particularly parents are perceived as the most significant and desired safety contact persons for young children. As children grow older, they increasingly resort to technical strategies to deal with online risks, such as blocking, reporting, deleting comments, enabling comment function. And this points to the considerable importance of the safe design in online spaces, which must be adapted to the needs of each age group. The youngsters voiced that platform-related reporting functions are seen critically by them, because the platform side processing of reports takes too long in their eyes and sometimes even fails to occur altogether. They want more information on how to report people, how to block people, and how to protect themselves from uncomfortable or risky interactions, especially sexual interactions. And there’s any case that they need more education to make a more informed decision when coping. And last but not least, two points from a study from THORN, conducted two years ago in the US, so not from Germany, but they have some interesting findings when it comes to reporting. First, anonymity plays an important role for adolescents, especially for young girls. They report that they would be more likely to use technical tools if they could be sure their report would remain anonymous. And very interesting, at the same time, this study results also show that adolescents would welcome a human connection in the reporting process in addition to anonymity. So the big majority of the 9 to 12 year olds we’ve looked at said they would be more willing to use reporting tools that connect users with a human than with an automatic system. And yeah, just a quick insight, there are more findings, but those highlight the importance really of human resources, as well as safe design for children in coping with risks online. Thank you, Sophie. And you’ve touched upon, you know, the second

Amy Crocker:
and third thing that we would be talking about, which is on the one side regulation and policy for safety, and that can be, you know, government policy, platform policy, and then also the issue of safe design, and we’ll go into those. And I think, you know, it’s really interesting, you know, obviously drawing on research conducted with children, when we take a rights-based approach, you said, you know, you won’t be talking, this study wasn’t specifically talking about AI, but indeed, if we take a rights-based approach, it is about rights, perhaps about principles and values, and the technology itself should be responding to those needs rather than the other way around. So I think before we go on to sort of some of the other issues around regulation and policy, I also want to turn to you, Katz, if I may, to talk about, I mean, we’ve heard from Germany, to talk about in the Japanese perspective, your experience of doing your work based on children’s rights, and what that means in terms of creating safety nets, meeting the needs of children, understanding their thoughts, so that you can help advocate for them based on their rights.

Katz:
Thank you for inviting me to this IGF, and especially this dynamic coalition session. So let me share some of the facts from Japan. But before then, I have to say, child rights is a fundamental part of this, I would say, work and societies everywhere. And we are, as a child fund, as child-focused agencies, we are promoting child rights everywhere. But we face several challenges so far. So we conducted some kind of omnibus survey recently. It’s in August. So this is age from 15 to 75 years old. It’s a quite long, quite wide range. This is a kind of image of the public opinion. So we have a question about the definition of CSAM. and also including some of the questions about AI. So, let me share some of the challenges here. Then, the results said is, how say, is some of the internal conflict between the human rights, especially the child rights, and also freedom of expression. So, this is maybe never-ending conflict everywhere, maybe not only in Japan, some other countries. I want to know some other countries’ practices or situation later on. But we think is we need to kind of balance between the two conflicts. Otherwise, we cannot continue to never-ending discussion between the child rights and freedom of expression. And secondly, I have to say, I want to share this one, some kind of misunderstanding or misinterpretation of human rights. So, we ask this kind of a virtual see-sam, and also see-sem, to the responders. Is some of the respond and some comments, narrative comments said is a virtual see-sam, see-sem, will prevent real crime. So, this is kind of a misunderstanding or misinterpretation of human rights, especially the child rights. We need more awareness or education to the public for this one. And thirdly, so I want to share this one, is the one of the result of the public opinion is the question about AIs. Is many of them is how say, we should regulate AI under the context of the see-sam, see-sem. But still is a minority is disagree on this how say, regulation. But interestingly, is 20% of the how say, answer is respond is we don’t know or I don’t know about the AI matters or AI risk. This is quite interesting and also some kind of risk in the future. So probably we should more focus how say, awareness to the public about the risk of the AI and also the opportunity of AI in the future. So that is one of the, some of our results. So, I just how say, share these three points but maybe later on, so I want to hear from you about some other thoughts or insight or some kind of a result or research about the similar work on your country. So, yeah, that’s it. Thank you.

Amy Crocker:
Thank you. And I think you pick up on a really crucial point is how children’s rights are understood and sort of made real within societies, how they’re realized, which often will be dependent on a local context based on principles that we have agreed on globally. But also helping people understand technology and the risks and opportunities. And I think this is a challenge and maybe something Liz, you will speak to later, how people, how you make technology explainable enough that people understand the different sides of it when they’re using it. And indeed, I think we will talk a little bit later about parents and the empowerment of parents. And I think this is something that has come up many times in conversations I’ve been hearing this week. So, speaking about children’s rights, Jenna, I’ll turn to you to tell us that we’re all talking rubbish. No, I’d love to hear your sort of, your perspective on how your experience of how children’s rights can be used to advocate for youth and whether you think we’re doing that in the right way.

Jenna:
Sure, I will try my best. As I work so closely with the youth in my own region in Asia Pacific, most of the people who are involved in this YIGF, they kind of have some sort of knowledge about what we’re doing here. And the youth that is engaged in those conversations, they’re over 18. But then, as we talk about children, they’re very young. And so, today I will add some and bring out some points from those outcome that we have discussed in Asia Pacific, but try to, we’ll try to have some more representation and we definitely not represent like teenager, which I personally see that they are the one that face a lot of challenges online these days. But I would touch on it a little bit later as I prepare some notes here and hope that I won’t disappoint the audience here today. I believe youth, not youth, sorry, correct myself, kids today, they leave and breathe the online world. They practically born with internet and tech gadgets in the hand, which many of us don’t really get to experience or dream of back in the day, even myself as a Gen C-er. I don’t get to experience that. I only get to get introduced to a computer or internet when I was in kindergarten. But kids these days, they have their smartphones or iPads in hand. As soon as their parents play the baby shark, they stop crying, right? That’s what they’re dealing with these days. It may be a bit dramatic to frame it this way, but before they’re born, the photos, everything, are filling up the parents’ social media feed. That’s basically how I find out my high school buddies become parents, and probably because their parents are Gen Z and posting on social media a lot. These kids today, they don’t really get to choose because they’re not born yet, but they’re already online. So it complicates our conversation even more. It might not always be the case because there are people who choose to be online, but somehow it is happening a lot more because of how different generations will use internet or technologies. And I think with all this, we must talk about trust. This is one of the biggest thing we also touch on a lot in the Asia-Pacific Youth IGF and within our own youth community as well because this is basically the bedrock of digital age, we believe. In a world where tech, we rely on technology for almost everything, I guess we don’t have to explain too much after the pandemic. Without the internet or technologies, we can’t really live during that time. So trust is really become the glue that holds everything together. And digital age make trust really crucial against the backdrop of growing reliance on technologies and possible risks related to data breaches, data privacy problems and unethical practices. So building trust and imparting fundamental digital knowledge are essential steps in creating a reliable and ethically responsible digital environment for the younger generations. Our society has evolved a lot to embrace diversity in terms of backgrounds, culture, sexual orientations, and more. With the progress that we have accomplished, potential harms and risks multiply. And the challenges of teenager that face and encounter today probably way more multifaceted than those of the past. And I myself can’t even relate. And I really hope that we will have a mechanism to engage those teenager, technically they’re underage, to be in the conversation so I can hear from them. I can’t speak for them because I am not them. Naming some classic examples from that, cyberbullying. We’re still talking about it. In early age, I mean early stage of the internet, you know, flaming, trolling, harassment through emails. Now it’s different, it’s not more than emails. Where today, where younger generations facing more than just social media bullying, it’s now that they’re encompassing a wider range of challenges like hate speech, doxing, cyberstalking, or one of the most concerned ones, like hate image-based abuse, especially with the rise of generative AI. It just make everything relatively easier to do. And so that’s one of the things that I think there are, the encounters, like just complicated than before. And when underage face such challenge, it’s very natural for them to turn to parents. Because, I mean, it’s just natural. Talk to someone you trust. Sometimes it may not be their own parent, but someone they trust. But you know, to provide a safety net for underage, the guardian can’t do it alone, and they must know something. And not all the parents or guardian would have the same level of knowledge of anyone in this room. And so, you know, especially when there’s nuances on the wrist that young children and teenagers ask, both do, it’s a totally different thing. I think it’s like a responsibility, it’s shared by all stakeholders in terms of safeguarding the younger generations on this very topic. And I probably should stop here and save the rest of the point when we move on to team two, theme two and three. And I hope that I have already brought some new insights from the younger generations. Because as I observed, there’s like only a few youth that’s interested in these kind of topic. And I hope that I represent a small portion of it here today. Thank you so much.

Amy Crocker:
No, you absolutely did. And you’ve touched upon some really good points that set us up for the next topic. But of course, they’re all interrelated. And I really liked that you mentioned the word trust. I think this is a really important word in these times. Trust in algorithms when we talk about AI, trust in institutions, trust in companies, trust in parents. You know, some children, many children don’t have a trusted adult that they can rely upon to help them. So I think we have a lot of different issues we need to sort of unpack. Before we go on to talk about everyone’s favorite topic of regulations and policies, just after lunch. So we’re at risk of everyone falling asleep. No, I’d love to hear from the room if there are any perspectives on how you’ve found building your work upon a basis of children’s rights, useful, challenging, difficult. I don’t know if there are any perspectives. I could call out, I think we have some colleagues from Brazil who just did a wonderful session and you had some videos of children themselves speaking. I don’t know if you’d like to speak or anyone else in the room about how you’ve used children’s rights practically in your work to do the work that you do. Just use the microphone because we have online participants.

Larry Magid:
Yeah. Thank you, I’m Larry Magid from Connect Safely. So in previous IGFs, we’ve had some workshops that I would co-lead called children’s rights versus child protection and the tension between the two. We could protect everyone in this room by putting in bubble wrap and never letting you out of your bed, although you would probably die from some bed-related disease. But the point is that being active in the world automatically creates some risk and clearly being online creates some risk, everyone knows that. And so we want to protect children, but at the same time, we want to protect their rights. And sometimes those are in conflict. And where it becomes particularly critical is in the area of legislation. Because even the United States, which as you all know, has something we call the First Amendment, which if you read the First Amendment in the American Constitution, it says nothing about how old you have to be. It doesn’t say people over 18 have the right to free speech. Everyone has the right to free speech. Well, it doesn’t really say that, but that’s how it’s interpreted. But at the same time, there are laws being proposed in America which would, for example, prohibit children from under 18 to go online without parental permission. So that means a 17-year-old exploring their sexuality, their politics, their religion, or whatever, would have to go to their parents for the right to express themselves. As everybody here I’m sure is aware, the UN Convention on the Rights of the Child guarantees children the right of freedom of expression, participation, assembly, et cetera. So these are in conflict, which is not to say that we should allow five-year-olds to look at hardcore pornography. I mean, I’m not arguing that we completely enable, empower all children to do all things, but at the same time, how do we ensure their rights and protect them at the same time without suppressing their rights? And frankly, if you were to ask some legislators, at least in the United States, and I think it’s true in other countries, they would favor protection over rights and would take away their rights in the name of protection. And it becomes particularly of an issue when there are marginalized groups that are engaged in controversial activities, whether it’s politics or transsexual issues or other issues, where their rights are being suppressed by legislation in name, reportedly, to protect them. So I just think that’s an important backdrop. And even though that workshop is not on the agenda at this IGF, it’s probably more important today than it was even the last time we had that conversation two or three years ago, because again, I can only speak for my country, there is more and more legislation that would essentially deny children their rights for participation online. Thank you.

Amy Crocker:
Thanks, I don’t know if anyone wants to speak to that, but I think absolutely, and there may not be a session on the agenda, but it’s certainly something that has come up many times in the conversations we’ve all been having and at different sessions. And it is a huge challenge we face. I wish I had the answer. In some ways, I feel like we need to embrace those conflicts because we’re always gonna be navigating those conflicts. But I think when we go on to regulation and policy, we need to really critically assess what we’re trying to gain through different regulations and how those should be shaped. I’ll be speaking to that. So we have two questions online. Bangladesh, can you comment on that? Maybe you’d like to ask your question while we’re waiting, yeah.

Steve Del Bianco:
Well, thank you, and it’s a follow-up on what Larry pointed out, Larry Magid. I’m Steve Del Bianco with NetChoice, and two of the US states which have aggressively attempted to ostensibly protect children extended all the way up to the age of 18 is a requirement that any user of any social media site, even something like YouTube.com, would have to present two forms of government-issued ID to make sure that the services knew that was an adult. And if they were younger than 18, they would have had to show that a legal guardian or parent had given verifiable consent for them to use a site. It’s fine to protect a 13-year-old or a 12-year-old, but it was a little ridiculous applying to a 17-year-old. And my organization, NetChoice, sued two states that had these laws, the state of Arkansas, the state of California. And last month, just a few weeks ago, we obtained a preliminary injunction blocking the state from enforcing those laws, which looks terrible for the tech industry to be suggesting that a state was wrong to try to protect children. But in fact, the judges ruled that the state was wrong to do it the way they were doing it. And in that mix will be an argument about the rights of a 17-year-old to access the kind of content that Larry brought up. And since your question was specifically about the rights of the child, if you dive into the document that’s on every other chair, the best interest of the child is supposed to be a balancing test. Whenever I say that, I get heartburn thinking about GDPR, but it’s a balancing test about the rights of the child to access and express versus the need to protect the child from harm. So I think you bring up the right framing of the question. And I realize that other nations that run into the same problem, Larry and I are in the United States, they may not be able to rely upon a court system and a First Amendment and the Constitution toward a block. block a state from going that way. But we need to educate lawmakers, or they will write laws that are mainly messaging bills, where they get to claim they’re trying to protect children, when in fact the mechanisms to do it on age verification just don’t exist. Thank you.

Amy Crocker:
Thank you, yeah. I mean, and of course we could have a whole kind of week-long session about these topics. I’m gonna move now, because in the interest of time, to the Bangladesh Remote Hub. There seem to be many of you. Great, please, go away. Tell us your question. I’m from England, but my English is poor. I do apologize. Please, please give us your question.

Tasneet Choudhury:
Hello, all. I am Tasneet Choudhury, Joint Secretary of Women, IGF, Bangladesh, and Media Personality. Dear moderator and today’s event, greetings to all present. Thank you for giving me this opportunity to ask my question. How do we ensure that AI strategies, policies, and ethical guidelines protect and uphold child rights across the world, especially developing countries like Bangladesh?

Amy Crocker:
Thank you. Thank you so much. Oh, for the question.

B. Adharsan Baksha:
We have another question from Bangladesh Remote Hub. Can we speak? Yes, please. Okay. Thanks a lot to all of us. I’m B. Adharsan Baksha, Bangladesh. Hi, sir, Bangladesh IGF. My question is, AI adoption among children can present many real risks. Data privacy being cheap among them. How popular chatbots like Synapse and MyAI can quickly extract and process vast amounts of personal data, potentially exposing children to cyber threats, targeted advertising, and inappropriate content. How we ensure a secure cyberspace for the children? Thank you.

Amy Crocker:
Thank you very much for those questions. And I think that they are big questions, but I think it leads us very well to sort of the topic of regulation policies around some of these really challenging child rights and child protection issues. And I suppose I’m gonna put a question to you, Liz, from Microsoft about, you know, what are the risks? Are there risks in a kind of one size fits all approach to dealing with some of these issues? Because clearly we have a number of different harms. We have different, as our colleagues from Bangladesh have just said, different contexts in which we have to consider these issues.

Liz:
Fantastic. Thanks so much, Amy. And thank you for the great questions online. It’s awesome to see the remote hub. I didn’t know folks were gathering in different spaces, but that’s brilliant. I mean, so starting from our starting point is Microsoft. You know, we absolutely recognize that we have a responsibility to protect our users and particularly our youngest users and children from illegal and harmful online content and conduct. And part of the way in which we have to do that is through that incredibly necessary balancing of rights. So children’s rights in the round, thinking about it as holistically as possible. So advancing safety, but also thinking about privacy is the questions just raised around freedom of expression, around access to information and everything else. And I think in part answer to the question that was just raised as well, I think the way that that happens is gonna be a combination of an ongoing need for both regulations, but also voluntary activities as we look to take on, you know, and build in safety and privacy by design. But for us as Microsoft to really do that balancing effectively, one of the things we really have to think about is the differentiation. So thinking about the differences between the wide variety of online services that we have. I suspect most of you in the room will be familiar with one or more of the wide variety of Microsoft’s product suite. But I think, you know, what we have to really think about when we’re thinking about a gaming versus a professional social network versus productivity tools is how we really tailor our safety interventions to the nature of that service. And so when we think about this, that’s really at the heart of our approach is how we think about safety and rights in a way that’s proportionate and really, really tailored to the service and the harms in place. And that’s at the heart of our internal standards and the way we think about safety by design as a company. And that includes when we think about what’s appropriate in terms of parental controls, the guardrails that are in place, whether we’re thinking about what the business model looks like and the kind of platform architecture or what’s needed by the way the culture of the service and what we wanna try foster in terms of user behavior and the way that we educate users and parents on those services. And really, we have seen some challenges start to arise internationally where regulation has been really, really broadly scoped and creating that sort of risk of one size fits all requirements. And a really good example of that that we see a lot is a real enthusiasm and desire to address some of the well-known issues arising from some of the social media services. But the definitions that can come through here may actually inadvertently capture a range of other services with measures that might not be appropriate or proportionate on those services. And so again, we really wanna help think through what the right, what the appropriate safety measures are to really think about rights in a holistic way. And then I think that comes a little bit to the points that have just been made on thinking about privacy and safety and isolation as well. Because we, particularly in legislation, thinking about kids’ privacy and safety, we see some kids’ privacy bills. We see some safety bills. And again, these are not taking that holistic approach. Or actually, there are some laws as well we are coming through where there are concepts from safety legislation and concepts in privacy legislation in ways that may not entirely work together here. I mean, it’s a challenge for us all because I don’t think there is a perfect regulatory model for this yet. We are all still learning. One of the things that we are starting to see come through more is really a set of focus on outcomes-based codes. And so really thinking about what the flexibility is for different services in the scope of those codes to achieve the safety and privacy outcomes that are desired. That does start to create a bit more of a web of granular and complex secondary regulation. But I think it’s the starting point where we really come to a place where we can evolve our approaches, really think systematically about risks, about rights, about impact on kids, and really think about what that looks like for the products where children are most vulnerable but also where the opportunities arise. And so enabling us to think really holistically about risks and the mitigations for those going through design and other choices. And I think we are also still learning on the process of learning about what that looks like for some of those products. I know there are folks here at the IGF who are doing some amazing work in this space. I mean, I think one of the things we’ll talk about as we come through, too, is there is still a need, I think, to grow some of the evidence base, particularly on emerging tech, to think about how we do this best. And so I’ll come to that in the next part of the conversation. But I think the other piece I just wanna flag as well as we think about different legal regimes culturally is that there is a risk that globally we see existing economic and social disparities and other inequities really enhanced if there are regimes created where kids are unable to access technology. Thank you.

Amy Crocker:
And that really brings together the importance of elevating children’s rights and how we design and how those are reflected within policies. And indeed, I mean, Patrick, I’ll go to you now. I think it’s interesting. I’ve also been, there’s been some talk of fragmentation of regulatory policies. I’m also told that we shouldn’t be using the word fragmentation in this context. But I think it is interesting in the United States. I know that’s been a challenge that you have state-based laws that main conflict with federal laws and that will be in other countries as well where you have those kinds of structures. And I think there’s richness in diversity, perhaps in testing what goes wrong, but regulations take a long time to develop. So we can’t just pivot in one month and decide we’re gonna create something new. And I think this is a challenge. So Patrick, you’ve seen this issue from many perspectives, both from South Africa and your region, and of course globally. And I know you and I in the past have also spoken about prevention versus regulatory approaches. So I just wonder what your perspective is on sort of differentiated approaches to regulations in different digital spaces, but also the balance between these different kind of, not conflicting, but different factors.

Patrick:
Yeah, thanks, Amy. And it’s quite hard to come after these amazing speakers who kind of taken all your thoughts and put them far more coherently than you could have. So I think I’m just gonna start off by reiterating what almost every speaker has said, that I think while we speak quite glibly about child rights, what those means in different contexts, I’m not sure that we can all together count on how child rights, even as they are contained in the CRC and in general comment number 25, translate into practice in different cultural, religious, national, geographic contexts. It’s huge variation in how child rights are interpreted or where countries or states choose to place the emphasis. And inevitably we see that emphasis being placed on particular rights rather than equitable embracing of all child rights. And that really translates so much into the digital space. I apologize, it’s six o’clock, I’m still not altogether coherent. But I also just want to say that, I think start off by saying that I don’t think we can regulate our way out of the challenges that emerging technologies, immersive spaces, AI present us with. Regulation, we need to bear in mind, is just one of those tools, one of those arrows in our quiver. And I think we often place so much emphasis on regulation and states put place so much, and when I say states, I mean nation states or states with provinces, whatever they might be within national boundaries, place so much emphasis on regulation because they see it as not an easy win, but it’s a very visible commitment to what they are doing and to their commitment to making sure that children stay safe online without investing, without putting the sort of proportionate investment into, as you say, the prevention side of things, the education, the awareness raising, building capacity of parents, building capacity of children, to deal with building children’s resilience, the one thing that we haven’t spoken about. And so regulation is critical. We can’t do away with regulation, but it really is just one component of what we need in order to make sure that children’s rights are realized online. Now, what does that mean for regulation? Liz mentioned this increasing focus on looking at secondary regulation, which is often quite messy. And I think there is a lot to be said for that approach because ultimately, platforms operate in different ways, services operate in different ways. There are some global standards, how data is managed, how data is protected, how data is collected, how data is used, for example, relating to children’s privacy online and the right to protection. Those are standard, but at the same time, different services offer different opportunities for children to learn digital skills, to be creative online. We need to recognize that children have different evolving capacities at different ages, in different contexts, and those evolving capacities are largely influenced by different contexts, geographical contexts in which they live. Those evolving capacities are influenced by the households they live in, by the access to non-digital services they have access. We know the link between what happens online and what happens offline. And so I think having sort of differentiated approach makes sense. I think it is a logical approach, but we can’t wait for that sort of regulatory environment to concretize. I think, Amy, you just summed it up perfectly. Regulations take a long time to implement, and we need to learn from the failures of regulation, and we need to see what’s working, what isn’t working. The same with legislation. You started off the session talking about the gap between legislation and implementation. Well, from the time we start formulating policy to the implementation and the evaluation of the implementation, you’re talking 10 years, by which point we are in a whole different universe in terms of emerging technology. And so we need to look at what individual services can do, platforms can do. And I can’t think about this without thinking that, in order to achieve that, we need to make sure that we are all singing of the same hymn sheet when it comes to what child rights are, and the transparent commitment and culture of child rights that any business, industry, government needs to be working from, and the transparency around that. Am I making sense? I mean, hopefully you can kind of bring all that together. I’m gonna stop, otherwise I’ll just keep talking.

Amy Crocker:
No, thank you, and I hope you have some coffee, coffee or tea by your side. But no, absolutely, it does make sense. And indeed, I mean, I’ll ask, you know, Jenna for your input on this, but I think absolutely the, I’m losing my train of thought, the need for, and we will talk now about a design approach and a child rights-based design approach, that we can’t wait for regulation. I think there is a strong role for it to provide a framework, to provide a legal basis on which we can have conversations and decide how to act. But I mean, each one of us in this room has probably five, 10 stories about the uptake of AI models or AI products. We won’t name any in particular. And some of those are good, some of those are bad, but it’s happening faster than we have the ability to take action. So we need to think very critically about where we go and actually build those into decision-making processes earlier on in the design of products, the building of products, and that’s what we will go on to. But Jenna, I just wanted, before we do that, and then if we can take any reflections or questions from other participants in the room or online. Jenna, what is your view on, from engaging with youth and through the youth sort of IGF perspective, what is your view on regulation as a, not as a solution, but as a part of the solution to some of the challenges we face? How do young people see that? What are the priorities for building a safe and empowering environment?

Jenna:
I’ve prepared some notes around it, of course. But before I respond to your questions or theme two overall, I wanna quickly respond to what Patrick mentioned earlier about how cultural factors and just culture in general, let’s frame it that way, will be so different. Because earlier this year, I partnered with a group of amateur, we just do it voluntarily, all this policy research actually, we had from Bangladesh local hub, actually we worked together to make a study and try to see how different jurisdiction in Asia-Pacific deal with online safety. And from part of our study is that Australia adopted industry code to mitigate this issue, where Singapore use a more government driven way. So it’s kind of reflected some cultural influence in how we approach things. And I just find that it’s really a fact that we have to admit, because especially in Asia-Pacific is really different. Myself and East Asian, there are things that I can’t understand completely from those who are from Southeast Asian and South Asian. And sometimes we will be unconsciously biased and people sometimes from Western world do not think that Indians are Asian as well. I find it quite interesting when I hear from some people sometimes. But anyway, that’s my quick respond to it. And I will try to touch on the question that you asked with the notes I prepared. But most of these are part of the outcome that we have from the discussion we had last month in Brisbane in our annual meeting. I think to deal with this very topic that we are trying to address today, the youth think that we need to have a clear definition and scope about all this online safety threat. Because sometimes different people of different background will have different definition and it’s important to have international standard of course, but also to have some localization to adapt into it. And so it’s relevant to their environment. I think the other day I was attending a workshop. and then they were doing capacity building even at a municipal level, because that might be even more effective, because I personally work so closely with youth as a project manager for the Asia Pacific or IGF. I figure out that we have to empower them at many level in order to get their voice heard, especially when we talk about internet governance, child rights online. If they don’t really know about the technical aspect, sometimes they will suggest something that is not really relevant. Putting my other hat on, I actually work for the top level domain registry as well. Sometimes we think that we understand how the technology of internet work, but then when I talk to those engineer, they were like, all this details that entered a head and they were like, that’s not exactly what it is, but sure. So we need to have more stakeholder get into the conversation, because there’s no way for everyone to understand everything. So we need to put all of them together. And if we are circling back to here, I’m going too far. If we are trying to bring in the younger voice, I really want to shout out to Bangladesh actually. They started way far ahead, because I know that they have this kids IGF happening in the past two years, which is very progressive. It’s hard to get a five years old into our conversation here, because there’s like different levels, but at a kid’s level, it’s really a good way to start early for them to start engaging them. There’s no way for my mom to understand what we are here talking about. Been here for a long time, she still have no idea what I’m doing. But what we really want to stress is that we need to have a multi-stakeholder approach, but in order to achieve that, we must have capacity building along, try to make information accessible, use more accessible languages as well. So people with different level of knowledge can understand, and sometimes, and myself included, don’t really speak English as the mother tongue, and so there’s like loss in translation sometimes. That’s also one of the barrier. And so if we really want to regulate and stuff, I think we really need to bring different voices into the process, and democratizing the process eventually.

Amy Crocker:
Thank you so much. I mean, you’ve hit on so many important points, and I think I love this. I often think of it as the regulation is being top-down, but your point about the bottom-up approach, not only among children themselves, but in communities, and actually building solutions through that. And then when we go on to sort of the safety dimension that helps support that, I think that will be crucial. I know we have, Jim, some collective comments or questions.

Jim:
Yeah. I’ll just summarize it, but just to pick up on the Bangladesh point, the second question was actually from the vice chair of the Bangladesh Youth IGF, so they’re actively engaged there. But between those questions, and we have a question here as well from Mohammed, who’s an instructor at Kabul University in Afghanistan. I think as you’re addressing these issues going forward, what about the perspective, and what can be done to help in developing countries like Bangladesh and Afghanistan address these problems? I think we all know the history of the challenges that these countries have with technology, and access, and capacity building. So as we’re discussing this forward, maybe think about that as part of your comments.

Amy Crocker:
Absolutely. I mean, would anyone on the panel like to talk about how we can address some of those issues? I mean, I think, Jenna, even you spoke a little bit about, looking at different opportunities for codes of conduct that can be not copied exactly, but that are based on some values, principles, some guidelines possibly, that can be translated into your own context for the participant from Afghanistan. I think learning from approaches to regulation that can work possibly, but obviously understanding the context there. And I suppose also back to Jenna’s point, making sure that the children and young people are consulted, find out what do they think, how do they feel about these issues, and trying to drive that. But again, I don’t know if anyone, even in the room would like to comment on that. Yeah, otherwise, we’ll take a question.

Andrew Campling:
Okay, thank you. Andrew Campling, I run a public policy, public affairs consultancy, but also a trustee of the Internet Watch Foundation, so probably more with that hat. It’s a very big topic, so I’m trying to give two fairly narrow points that are at least linked in token ways to AI. So first one, so first one, algorithms quite obviously make malicious content much more accessible through their recommendations. So for example, in the UK, we’ve seen a child who unfortunately was shown suicide-relevant content, committed suicide. It’s highly improbable she would have found that content had the algorithm not shown it to her. So first question, should there be restrictions on the application of surveillance capitalism to children? A blanket prohibition of doing the data gathering of known child users on platforms in the first place to try and prevent that from happening. Secondly, AI models are already being used to generate CSAM. So should AI-generated CSAM be illegal? It is in some countries, but it’s a loophole in others. And should prompts that are deliberately intended to generate CSAM, should the circulation of those be made illegal? Because there’s an active trade, that’s the right phrase, and the best prompts to use to get the images. And then more generally, so given the pace of technology change, and you said how difficult it is to create regulation, it’s easily been outpaced by the changes in the tech. Dare I say it, learning from the UK experience, should we try and avoid being caught out by the pace of change simply by imposing a duty of care on platforms of their users? Because otherwise it’s pretty much impossible for regulators to keep up with the changes. So just give the blanket duty of care and put the problem on the platform operators to do that responsibly. Thank you.

Amy Crocker:
Thank you. Big questions. I know that Patrick wants to come in. Oh, do you wanna quickly speak to that and then we’ll bring Patrick in? Patrick, go ahead.

Patrick:
Thanks, Amy. Just two very quick responses. The first to the question from Afghanistan, and it’s just a general observation. In so many of the countries in which I work, where governments are trying to catch up on policy, they’re trying to catch up on legislation, they’re looking to key countries for model legislation. They’re desperate to look at best practice. And so what tends to happen is there are three or four countries that come to mind, and they look at those countries and try to model their own legislation based on that, without recognizing some of the challenges and the dilemmas that those pieces of legislation face, or where they haven’t got it right. And so there’s a real danger in developing countries saying, okay, well, this is what country A has done, we’re gonna follow that model without any critical engagement as to what some of the challenges in implementation might be. So that’s just an observation. I think there’s a real danger of doing that. And I do see that a lot in many of the countries that I work in, Southern Africa, North Africa, some of the Asian Pacific, smaller island countries and territories. And then if I can just use my position, my mic, just in response to the question or the observation from the IWF colleague. The other thing that I’ve seen so many of the developing countries that I work is this issue around definitions. And now you raised the example of AI-generated CSAM. What tends to happen is countries are loathe to update their sexual violence. Whatever legislation, their child sexual abuse, exploitation, crimes, offenses are contained because it takes so long. And that’s why I think it’s also up to individual industries and companies to say, we are going to adhere to these definitions of CSAM and that includes AI-generated CSAM. So that it’s actually a step ahead of changing national policy because that is going to take five to 10 years for that policy to update because it’s such a process for legislation to be updated. Thanks.

Amy Crocker:
Thanks, Patrick. Go ahead, Liz, and then I’ve got many follow-ups to give to people in the room. Great.

Liz:
Well, I will try to be brief. I mean, I think a couple of great questions from the IWF here in the room and things that are really top of mind for us. And actually, I think this goes to some of the points I was hoping to raise anyway. So excellent segue. I mean, on the topic of AI-generated CSAM, I think certainly for us in industry, thinking about these risks has absolutely been at the core of our responsible AI approach at Microsoft, but also how we’re thinking about applying safety by design across the services where that’s being deployed and the features in that. On the question of legality, I think this really goes to some of the conversation we’ve just had around A, the criticality of regulation, but also B, regulation not being the only tool in the toolkit. And I think it goes to, again, we have to have the whole of society approach to addressing these problems. And part of that will be us taking responsibility to make sure that this particular horrific harm type is not being disseminated or created on our services. But secondly, that need for urgency in some regulation. I know in some jurisdictions, there have already been statements around the legality of CSAM, but I think it speaks to some of the great work by the We Protect Global Alliance and others as well with the model national response to really help support harmonization on legal regimes in this, so there are not spaces where this crime is permitted. On the question of whether children should be able to access some services or not, two quick points in response. And I think part of this goes to the references to safety by design across diverse services that I made before. And part of that is really thinking about where there are recommendation systems or other features, what impact that has on the risks to young people on the service and understanding the potential mitigations for that. But more broadly, I think you’ve kind of raised one of the major topics under discussion in child rights and child safety conversations at the moment, which is obviously age assurance and the ability to identify whether users are indeed actually children. And there are multiple strands of work, I think, that are needed here really to A, help us find the right tech solutions, noting that there are a range of trade-offs between sort of getting the right degree of accuracy around the age of a child versus privacy, security, and other factors. But then B, once we do know the age of a child, what are the choices that we make around the safety interventions and indeed access to services on that? And I think this is where we are really, certainly as Microsoft, very keen to continue the conversations with the experts and grow our evidence on these topics.

Amy Crocker:
Thanks, I know we have some questions. I just wanna, Sophie, I don’t know, because I know you’re waiting there with us online. I wonder if, picking up on the point I think made about the use of children’s data, for example, I wonder if you have anything you’d like to say about, for example, the Digital Services Act and what that may kind of mean for sort of protecting children’s data within the EU. Is that something you’d like to speak to or speak about the European context?

Sophie:
Yes, I can give a short insight. So we have the Digital Services Act and the European Union, which is going to be in point next year. And we also have regulations following the DSA in Germany. Right now, we are discussing it a lot. And from a child rights perspective, we consider it a really important point to make and a good way to protect the data of children, especially when it comes to advertising, but also when it comes to the responsibility of very large online platforms to protect children and young people from certain risks. I’d also like to add something to the idea of children’s rights by design and also children’s participation in regulation, because I think this is a crucial aspect if we really want to think children’s rights in a holistic way, not only to focus on the protection point all the time, but to do it in a holistic way and also looking for how can we empower children, how can regulation support empowerment of children and also how can regulation support the participation of children. And because how digital media are regulated and designed has a really direct influence on the lives of children and young people, but they, if we are honest, rarely have a say in these issues. The GC25 also addresses this right of young people to participate in questions and decisions about the digital environment. And here in Germany, we’ve already seen some efforts to involve children and young people in the design and implementation of legal youth and media protection. And as a German children’s fund, we’ve conducted also an exploratory research and concluded in this context that we need quality criteria for participation in this point. And we’ve encountered already a wide variety of participation oriented formats, such as consultations or comment processes where children are included and involved in regulation processes. We have youth juries, editorial boards, and also young people who design products and even design and conduct events on their own, get involved in peer-to-peer networks or consultations. And I’d be very interested in experiences from other countries. And this also leads me to the point of safety by design, child rights by design. Children and adolescents need social spaces where they can really implement their own ideas without being primarily affected by product guidelines or market-driven interests, allowing them to exercise their right to open creative processes. And this likely clashes a bit with a metaverse concept whose hosts also target young audiences. So safe social spaces are more likely created by civil society and educational organizations. That’s what we’ve seen so far. And the approach of children’s rights by design offers providers the opportunity to place children and adolescents’ self-realization and participation really at the forefront and develop ideas on how to involve them as informants and full-fledged design partners. And this is also, I think Patrick already mentioned it, an opportunity to bring in the aspect of the evolving capacities. And to really look how to develop age-appropriate social online spaces. And yeah.

Amy Crocker:
Thank you. That may be on this part. Yeah, thank you so much, Sophie. And I’m sorry to cut you off because we have a queue of questions in the room. So we’ll take some questions. Please go ahead.

Amyana:
Hello, Aymana. I’m from Brazil. And right now in the National Council for Children’s Rights we are preparing a document with some guidelines and recommendations for the prosecutors, the public ministry, and all the services that work with children and adolescents. And what these agencies should do and require from platforms to protect children. because how can platforms manage to remove content from films, for example, and not remove violent or dangerous content for children? So how can we focus on protecting by design, like you were saying, because yes, there are standards, international standards, but they are not applied equally. Children, especially from the global south, have a much lower level from protection than those from the north, and we already have data to affirm that. And another question is about how we can legally framework, for example, images of child abuse created by AI, because we are thinking about this now, because our legislation doesn’t fit for these actions. So how have you been dealing with this in your countries, like apology for crime or incitations? So that’s it. Thank you. I will quickly just

Amy Crocker:
see if, and then we’ll take, Kasia, your question. Kat, I don’t know if you would like to respond, because on this point of how you can think about legislating for this, because this is the point you raised earlier in Japan, and how you can sort of build awareness about the need to criminalize these types of content. Thank you for raising the issues. It’s quite important, but for

Katz:
Japan, so we don’t have any regulation and policies to regulate that kind of AI generated image so far. As quite recently, the BBC, how they focus on some kind of AI generated, some kind of a system, and but we couldn’t, how say, we couldn’t know about that kind of news from the Japanese media. I think kind of a more responsibility of media in Japan, so they have to inform us that kind of a situation right now. Otherwise, the normal people, we don’t know about what’s going on, the AIs. So I think we need to know more about that kind of a new information. Maybe not only the media, we can, how say, collect information from SNS, whatever, so. Thank you. I’m going to declare that we’ll all stay here for

Amy Crocker:
three more hours, so I hope you all have time. Unfortunately, we cannot, so Kasia,

Katarzyna Staciewa:
please. Thank you very much. Hello, everyone. My name is Katarzyna Staciewa, and I represent the National Research Institute in Poland, but I would like to link my intervention to my previous experience in law enforcement and also in research based on my education in criminology. So actually, it’s so live discussion that it only proves that we need more room in the future for these sort of discussions, and I wanted to thank you, Katsuhiko, if my Japanese pronunciation is well, and Liz for all the comments related to research and child rights in the so dynamically developing space. I have recently conducted research on the metaverse, and I believe research is a key, and research can also guide the developing countries, because there is a chance to benefit from what has been found out already, and the research can also guide our future actions. So in this research, I analyzed the darknet, and I analyzed the teams of conversations of people that are potentially sexually interested in children, and I found out that there are three teams that are absolutely worrying. The first one is that it’s an environment in which people like that can meet a child or can move conversation from publicly available spaces. The second is that they can create something that has already been said, that they can create AI-generated CSAM. So imagine that someone uses a picture or a video of a real child and transfer it into that sort of material. It will be constant re-victimization of a child that was absolutely innocent. And the third one is even more scary, because it was about updating, upgrading the existing CSAM into the VR or metaverse-oriented frame. So it means for the victims, the past ones and the future ones, a constant re-victimization, and we should be definitely looking at this perspective, and the call for more robust research has never been more valid. So I would just like to finalize this intervention with a focus on research as a potential gateway to more tailor-made, oriented actions for the safety of

Amy Crocker:
children. Thank you. Thank you so much, and actually, I mean, it points to a really interesting point, Sophie, that you made about safe spaces being created by civil society organizations, communities, families offline, and what should that look like in the metaverse? What can that look like? And are we ready, really, for that? And of course, we haven’t, I mean, we are short on time. We could speak about safety boat design for a long time, but I think these are crucial issues we have to grapple with as we allow children to operate as they want to. Young people want to be engaged in these environments, and also picking up on the point about what that means in different contexts, because a tool or an environment designed by a company in one country, one region, will not necessarily be meeting the needs of children in other environments, or children of diverse identities. So please. Hi, thank

Ahmad Karim:
you so much for all the intervention. My name is Ahmad Karim. I’m from UN Women Regional Office for Asia and Pacific, and I come from that angle of the discussion where whenever we have those kind of big topics, we tend to be gender-blind in the conversation, and I wonder if there is some specificities related to gender design that would relate more or give more attention to girls and young adults and females, and those who could be affected more by the advancement of technology and where national loss is not considerate, where we put all the children in one basket, but then there are some marginalized and fragile groups that deserve more attention, especially from the design of

Liz:
technology itself. Thank you. I can jump in briefly on that. I mean, fundamentally the lens we’re certainly coming at this from is we want to unlock the economic, social, and educational power of technology, but really find a way to do that. They’re using it mindfully and safely, and you can’t do that without being alive to the gender element. So I think absolutely. Where I think we are still, again, in need of a better understanding, we’ve done consumer research for a long time now. There’s a lot of good work underway, but I still don’t think we necessarily have the right level of understanding of some of those gendered impacts, and I think one of the only ways to do that actually goes back to some of the first conversation we had around youth participation, because as a millennial who got a device in high school rather than in kindergarten, I know that I don’t have an understanding of what it looks for a teenage girl online, let alone in a diverse range of cultures. I’m a New Zealander. I come with that particular lens. There are a whole range of lenses I don’t bring. So finding those ways to do that research and to get those perspectives, and we know that as a company, you know, we don’t always have the right ways of doing that either, and doing that mindfully in a way that is really asking questions of kids at the right age in the right places, and doing that safely as well so that they’re feeling really empowered to share, and I think it goes a little bit to some of the capacity building you talked about as well. Maybe I can jump in a little bit to quickly respond to

Jenna:
Emmett’s points about gender and youth participation. Actually, my colleagues right here, they are going to talk about something about gender tomorrow morning, about like, they are even younger than me, let’s be real, and then they often give up points that I don’t even touch on. They design the workshop from that perspective because they think it’s very important. Their interpretations about gender is different from what we historically define, and that’s really important, and I got invited to a panel about how we leverage AI to ensure gender inclusivity, and suddenly when I prepared a session, I was like, why am I even invited to that? Because I am just one of those ordinary one heterosexual person with really ordinary point. Why am I even on there? And so I feel like, you know, with talking to more young people, you will get some new insights from how they think as much as we dedicate a time to talk about CSAM. I think it’s really important for us to address that, and I do think that we should, you know, instead of just create one big bill to deal with how AI influences all this matter, I think government and all stakeholders should modernize different existing legal framework, like modernizing like Broadcasting Act, Consumer Protection Act, Competition Act, to make sure all these matters are integrated into it, and so public interest or the younger generations ideas are considered while we are creating all these like policies. While we are talking so much about CSAM, last month when I was like in Brisbane talking with all these Asia-Pacific youth, their workshop designed something about explicit content. They have a totally different approach, because I think when it comes to CSAM, maybe as an adult, we care how we protect them, which is very important, but they actually want to explore how they and maybe we use explicit content to express themselves, and so they actually talk about like OnlyFans, a kind of platform, how we create a safe space for those who want to express themselves through those content, which sometimes we forget to talk about it. This is also their right to express themselves if they want to, and so that’s just one thing that actually surprised me a lot, because I never thought about it. Probably I’m too conservative in some way, but that’s something we should, that is why we must bring them, because we will always find something new. We as an adult think they need this, but maybe it’s actually not, so we should have them. We have a few minutes left for

Amy Crocker:
final reflections, but that’s a perfect place to bring us home, because ultimately this is about creating safe, empowering spaces where you need regulation to do certain things, you need design to be mindful and informed by a child participant, so by child consultation and participation. So in two minutes, but I’m maybe gonna take an extra minute if we can, I’d like to invite all our panellists to just give maybe a final reflection on what they’ve heard today, something that really stands out, what perhaps even just what would be your takeaway, but what thing you would do tomorrow in response to this session, and I’ll go first online, so Patrick. Thanks Amy, and

Patrick:
it’s really hard to follow up from Jenna, because as you say, I think that is the perfect way to wrap it up. I had two notes, the one was speak and engage, not speak to engage and hear from, meaningfully, children in different contexts, their understanding, their experiences, both positive and negative, and how they want to use the internet. That means we need to be open as adults to challenge our own thinking around this, because we need to let young people who are the core focus here, determine or dictate, feed into that space. The second is also, my second point I just wanted to conclude with, was it was great to hear a speaker, the speaker from Poland, in the audience, who’s a criminologist. The other point that I wanted to make when I was talking is, we need to have criminologists, violence prevention, public health, educators, social workers, all of those sectors and specialities around, and child rights legal experts, in this conversation. It cannot come down to industry, to government, to regulation. We need to make sure that we have all of those pieces fitting together, in order to make this work. Thank you Amy, and thanks to speakers for a great

Amy Crocker:
conversation. Thank you, Sophie. Very, very short, if possible, just your main

Sophie:
reflection. Yeah, thanks to everyone for your inputs, to the speakers, to the audience. I think my learning from today is that, to advocate children’s rights in the digital world, in terms of a holistic approach, we need so many stakeholders, and it’s important really to grab them all and go this way, and especially to go this way with children and young people themselves, as a really important participant group in this context. Thank you. Kat, I’ll go to you for a final

Katz:
reflection, if I may. Yeah, thank you so much for your brainstorming sessions, so I really appreciate your input and encouragement everywhere. So, I think whatever the design, whatever the house of regulation policies, all the time we should move on to the rights-based approach, that is most important, whatever the human rights or child rights, very important, significant approach. Then also, in the past, probably we made an effort to more approach to public or people, but in the future, maybe to approach to AI in the future, so we need to more have an approach, the target will be increased in

Amy Crocker:
the future, I think. Yeah, thank you. Thank you. Very briefly, Jenna and then Liz.

Jenna:
I will be really brief, because I think I’ve had taken enough air time. I think one last takeaway is collaborations, I would say, because as someone who works on capacity building, I need research to back up all the things that I do, and then we, you know, all the stakeholders work together, is that, you know, in terms of legislation regulations, we need government, private sector, and everyone to work together to give a safe environment, and of course, don’t miss out the technical community, please, because they are very important, they have all the knowledge, and sometimes they might not be the best involved in the policymaking process. So yeah, that’s just my final words. Thank you.

Liz:
I’ll be really brief. I think my takeaway today is to continue to try to approach this in the spirit of learning, learning from others, learning to try to keep the holistic approach in mind, and we need to grapple with different harms, but we need to find a way to do that while also thinking about rights, and I think it’s a complex area, and we will have to keep learning together.

Amy Crocker:
Hello? Yeah, sorry, I won’t summarize this, we are over time, but it’s been a really fascinating conversation, and I genuinely wish we had more time, but as someone commented, we need to continue this conversation. If anyone is interested to join the Dynamic Coalition and to continue these types of conversations, we have some flyers, there is a QR code, you can go to the website, you can also go on the IJF website, find us, and sign up to the mailing list. We want to help create a space within the IJF, a bigger space, a renewed space for children’s rights issues to be discussed. I will end it now. Thank you so much for being here, and thank you to all our speakers, and thank you to Jim as our online moderator, and thank you to Bangladesh Remote Hub, it was so lovely to have you here, and all participants online. Thank you.

Ahmad Karim

Speech speed

172 words per minute

Speech length

130 words

Speech time

45 secs

Amy Crocker

Speech speed

178 words per minute

Speech length

4308 words

Speech time

1452 secs

Amyana

Speech speed

120 words per minute

Speech length

206 words

Speech time

103 secs

Andrew Campling

Speech speed

162 words per minute

Speech length

355 words

Speech time

131 secs

B. Adharsan Baksha

Speech speed

173 words per minute

Speech length

104 words

Speech time

36 secs

Jenna

Speech speed

169 words per minute

Speech length

2471 words

Speech time

877 secs

Jim

Speech speed

202 words per minute

Speech length

140 words

Speech time

42 secs

Katarzyna Staciewa

Speech speed

141 words per minute

Speech length

375 words

Speech time

159 secs

Katz

Speech speed

123 words per minute

Speech length

791 words

Speech time

386 secs

Larry Magid

Speech speed

203 words per minute

Speech length

547 words

Speech time

162 secs

Liz

Speech speed

228 words per minute

Speech length

2029 words

Speech time

534 secs

Patrick

Speech speed

169 words per minute

Speech length

1483 words

Speech time

526 secs

Sophie

Speech speed

137 words per minute

Speech length

1442 words

Speech time

631 secs

Steve Del Bianco

Speech speed

216 words per minute

Speech length

455 words

Speech time

126 secs

Tasneet Choudhury

Speech speed

159 words per minute

Speech length

69 words

Speech time

26 secs

Policy Network on Internet Fragmentation | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Bruna Marlins dos Santos

During the session, a comprehensive presentation will be given on the Policy Network’s discussion paper. The paper examines various aspects outlined in the Policy Network framework, and debates will be held to delve further into these topics. The aim of the session is to foster a thorough understanding of the discussion paper and encourage insightful discussions among participants.

The presentation and subsequent debates are of significant importance to the Policy Network as they provide an opportunity to seek feedback, gather perspectives, and refine the framework. The Policy Network values the contribution of its volunteers and acknowledges their role in shaping the document. Bruna, in particular, expresses profound gratitude to all the volunteers who helped shape the document with their time and effort. It is heartening to note that some of these volunteers are present during the session, indicating their continued commitment to the Policy Network’s values and goals.

The discussions and presentations align with two Sustainable Development Goals (SDGs): SDG 16 and SDG 17. SDG 16 focuses on promoting peaceful and inclusive societies, providing access to justice for all, and building effective, accountable, and inclusive institutions. The Policy Network’s efforts to facilitate debates and discussions on the various aspects outlined in the framework contribute to these goals. Furthermore, SDG 17 emphasizes the importance of partnerships and collaboration to achieve the SDGs. The Policy Network recognizes the significance of collaboration and appreciates the volunteers who have worked alongside them, highlighting the importance of partnership for the goals.

In conclusion, the upcoming session will involve a detailed presentation of the Policy Network’s discussion paper, as well as debates on the various aspects outlined in the framework. The volunteers of the Policy Network are greatly appreciated and thanked for their invaluable contribution in shaping the document. The discussions and presentations align with SDG 16 and SDG 17, incorporating elements of peace, justice, strong institutions, and partnerships for the goals. By engaging in these activities, the Policy Network aims to further progress towards achieving the SDGs and creating positive change.

Olaf Kolkman

The discussion revolves around the topic of internet fragmentation and its implications on connectivity and global inclusivity. One aspect highlighted is the lack of a clear and operationalized definition for technical fragmentation, resulting in different frameworks for understanding the concept. While fragmentation is often seen as a negative phenomenon, certain types of fragmentation, such as decentralisation, lack of connectivity by choice, or temporary network glitches, are considered to be non-problematic.

However, the evolving nature of the internet and its changing routing behaviour may lead to a different kind of fragmentation, potentially increasing the digital divide. This digital divide could be more prominent in lesser connected parts of the world and could result in a disparity in user experience. Therefore, it is important to address and mitigate these effects to ensure global connectivity.

A key argument presented is the need to protect the critical properties of the internet for global connectivity. Fragmentation in the technical infrastructure is likely to be reflected in the user space, affecting the overall user experience. It is crucial to continually evolve the internet and avoid ossifying it in its current state.

Furthermore, a multi-stakeholder approach is deemed necessary to ensure global connectivity and prevent fragmentation. Stakeholders include the private sector, technical communities, civil society, and governments. By involving various stakeholders, it is believed that a collaborative effort can be made to address global connectivity issues effectively.

One notable observation is the call for a more nuanced understanding of the issues surrounding internet fragmentation. It is suggested that a broader perspective is required to fully comprehend the implications and consequences of different forms of fragmentation.

Another important point raised is the protection of an open internet architecture. This open architecture should be safeguarded to promote common protocols and interoperability. It is argued that an open internet architecture allows for the evolution of the internet and ensures its continued effectiveness and accessibility.

Additionally, the affordability and accessibility of the internet are highlighted as crucial factors in preventing the creation of a digital divide. Issues such as the concept of the “death of transit” and pricing disparities are mentioned, which can hinder individuals’ ability to access the internet. To prevent exclusion, it is important to address these affordability and accessibility challenges, ensuring that everyone who wants to connect can do so.

In conclusion, the analysis emphasises the need for a clear definition of internet fragmentation and a comprehensive understanding of its various forms. Protecting the critical properties of the internet, adopting a multi-stakeholder approach, preserving an open internet architecture, and addressing affordability and accessibility issues are crucial steps towards ensuring global connectivity and preventing the creation of a digital divide. The ultimate goal is to provide equitable access to the internet, ensuring that everyone who desires to connect can do so.

Rosalind Kenny Birch

Fragmentation at the governance layer of internet governance can have negative consequences, such as duplicative discussions and excluding certain groups from the decision-making process. This fragmentation occurs when global internet governance and standards bodies fail to coordinate inclusively. The lack of coordination can lead to redundant conversations and the marginalisation of specific stakeholders.

Furthermore, this fragmentation at the governance layer does not just impact that particular level; it can also have knock-on effects on other layers of the internet user experience and the technical layer. The issues arising from governance fragmentation can trickle down to affect the overall user experience and technical functionalities of the internet. This highlights the interconnectedness of the different layers and the need for holistic approaches to address fragmentation.

To combat fragmentation, inclusivity is considered a central approach. When multi-stakeholder community participation is limited or not fully empowered, fragmentation tends to occur. Therefore, promoting inclusivity becomes crucial in combating governance fragmentation.

Instead of introducing new bodies into the internet governance landscape, it is recommended that existing internet governance bodies focus on improving coordination. Introducing additional bodies may further complicate the already complex governance landscape. Therefore, enhancing coordination among existing bodies is seen as a preferable solution to address fragmentation.

Moreover, it is important to ensure regional nuances and cultural contexts are considered in global internet governance bodies. Internet governance bodies should strive to accommodate the perspectives and voices of all stakeholders, regardless of their cultural or regional background. This can be achieved through better coordination and utilising platforms like National and Regional Initiatives (NRIs) or the Internet Governance Forum (IGF). These platforms provide opportunities to discuss local nuances, regional contexts, and ensure diverse perspectives are heard. For instance, the Africa IGF was identified as a fruitful opportunity to learn about regional perspectives and the importance of cultural and regional inclusions.

In conclusion, fragmentation at the governance layer of internet governance has negative implications, including duplicative discussions and exclusion of certain groups. Inclusivity is crucial to address this fragmentation, and existing internet governance bodies should focus on improving coordination rather than introducing new bodies. Additionally, considering regional nuances and cultural contexts in global internet governance is vital for inclusive decision-making processes. Platforms like NRIs and IGF can play a significant role in fostering regional and cultural inclusivity.

Suresh Krishnan

The internet is a decentralised set of networks that lacks a single point of control. It is a collaborative effort involving multiple individuals who have built this expansive network. This characteristic of decentralisation is a fundamental aspect of the internet, allowing for its widespread connectivity and accessibility.

Technology plays a crucial role in the internet’s functioning by enabling interoperability between these networks. It provides the means to bind different networks together, allowing seamless communication and data exchange. This interoperability is essential for the smooth operation of the internet and facilitates the flow of information across various platforms and devices.

Openness and incremental deployability are critical properties of the internet. The internet constantly evolves with the deployment of new technologies. This adaptability and openness enable the integration of innovative technologies onto the internet, keeping it up to date and capable of supporting new applications and services.

Content filtering is an important consideration in the context of the internet. It is argued that content filtering should occur at higher layers, taking into account the differences in laws across countries, states, and localities worldwide. This approach acknowledges the diverse legal frameworks and ensures that filtering is done in a way that respects local regulations whilst maintaining the internet as an open and inclusive platform.

The multi-stakeholder approach has played a significant role in the development and governance of the internet. This collaborative approach involves stakeholders from various sectors working together to shape policies and decisions regarding the internet’s management. The internet has thrived and evolved due to this inclusive approach, allowing for diverse perspectives and expertise to contribute to its growth and stability.

Efforts in internet measurement are critical for understanding and improving the internet’s performance. There is a need for more measurement points across the globe and a platform for individuals to conduct their own experiments and assessments. By increasing the focus on internet measurement, we can gain valuable insights into the network’s strengths, weaknesses, and overall quality, leading to targeted improvements and advancements.

However, a noteworthy critique is the lack of references in the document. It is important to provide credible sources and citations to support the arguments and claims made. For example, referencing RFC 1958, which discusses the architecture of the internet, would add credibility and depth to the document’s assertions.

In conclusion, the internet’s decentralised nature, enabled by technology’s interoperability, openness, and incremental deployability, has shaped its development. Content filtering should be approached in a way that considers the differences in laws worldwide whilst maintaining the internet’s accessibility. The multi-stakeholder approach has been instrumental in managing and evolving the internet. Finally, efforts in internet measurement are necessary for ongoing improvement, but it is crucial to provide proper references to support the document’s claims and arguments.

Sheetal Kumar

The Policy Network on Internet Fragmentation has spent the year exploring the complexities of Internet fragmentation. They have developed a comprehensive framework that allows them to understand and address fragmentation from different perspectives. The network aims to unpack the elements of the framework, identify priorities, and formulate recommendations for action. They advocate for a multi-stakeholder approach, recognizing the involvement of diverse stakeholders in fragmentation. Seeking feedback from the community, the network wants to align their priorities with the international community and ensure comprehensive recommendations. Their ultimate goal is to provide clarity to the complex and contentious issue of Internet fragmentation, foster ongoing dialogue and engagement, and contribute towards a more connected digital landscape.

Marielza Oliveira

User experience fragmentation refers to the division or segregation of users into different information environments or platforms, resulting in varying levels of access to content and features. This issue has both positive and negative aspects.

On the positive side, user experience fragmentation can include features and content that are specifically designed to benefit the user. For example, certain platforms may tailor recommendations based on the user’s preferences, resulting in a more personalised experience. Additionally, some users may appreciate being able to navigate through smaller, more specialised content ecosystems that align with their interests or values.

However, on the negative side, user experience fragmentation can restrict users’ access to certain content and limit their exposure to diverse perspectives. This can create information bubbles or echo chambers, where users are only exposed to information that supports their existing beliefs or biases. As a result, users may be deprived of opportunities to engage with differing opinions and challenge their own viewpoints. Moreover, this kind of fragmentation can lead to the reinforcement of social, political, or cultural divides, as it inhibits the free flow of information and impedes dialogue and understanding among different groups.

Negative user experience fragmentation affects all users and is a cause for concern. It has significant implications for the rights to access information and freedom of expression. When users are unable to access certain content or are forced into specific information environments, their right to freely seek and impart information is restricted. Additionally, non-targeted users, who may have diverse perspectives, are hindered in their ability to associate with those who are isolated in different information spaces. This ultimately curtails the richness of public discourse and limits the potential for fostering inclusive and diverse dialogue.

Furthermore, user experience fragmentation can be classified as either good or bad. Good fragmentation describes situations where fragmentation is achieved through a multi-stakeholder process and upholds principles of openness and accessibility. On the other hand, bad fragmentation tends to be the result of unilateral decision-making processes, disregarding the interests of users and reducing openness and accessibility.

It is argued that principles regarding user experience should be rooted in human rights standards. Human rights standards are globally accepted and provide a solid jurisprudence foundation for assessing the legitimacy of interfering with the freedom of expression. Adhering to these principles ensures that user experience is guided by ethical considerations and serves the broader goal of promoting peace, justice, and strong institutions.

To mitigate the negative effects of fragmentation, it is suggested that enforcing platform interoperability, data portability, and enhancing users’ media and information literacy can be effective strategies. Platform interoperability allows users to seamlessly navigate between different information environments, fostering exposure to diverse sources and perspectives. Data portability enables users to retain control over their personal information and move it between platforms, preserving their agency and reducing reliance on a single platform. Strengthening users’ media and information literacy empowers individuals to critically evaluate information and navigate the vast amount of content available on the internet in a safe and informed manner. These measures can counteract the negative consequences of fragmentation, such as echo chambers and the spread of misinformation.

In conclusion, user experience fragmentation has both positive and negative dimensions, with its impact extending beyond individual users to society as a whole. While it can provide tailored experiences and niche content, it also limits access to diverse perspectives and contributes to societal divisions. Adhering to human rights standards and implementing measures to mitigate the negative effects are essential in ensuring that user experiences are inclusive, ethical, and conducive to fostering an informed and democratic society.

Jordan Carter

In the analysis of internet governance, several key points were highlighted. Firstly, there was a strong argument for the need for broad-based participation in standards bodies and global internet governance organisations. The analysis acknowledged the Western bias in participation that currently exists and stressed the importance of greater inclusivity to ensure a more equitable representation.

Another critical issue discussed was the definition of governance fragmentation in internet governance. The analysis criticised the current definition, stating that it is too narrow. This suggests that a more comprehensive understanding of fragmentation is required to effectively address the challenges.

Further examination revealed that the narrow mandates of many technical internet governance organisations contribute to governance fragmentation. While these mandates serve important purposes, they can restrict organisations from adopting a systemic view of the internet. This limitation hinders their ability to address the complex governance challenges faced in the digital age.

The analysis also emphasised the need for better coordination between internet governance bodies. It highlighted the potential for meaningful collaboration among the individuals involved in global internet governance bodies, stressing that improved coordination would enhance effectiveness and outcomes.

Lastly, the analysis touched upon the relationship between the multi-stakeholder-driven internet governance system and the multilateral or state-based regulatory and legal system. It argued that these two systems should work together and influence each other positively. By shaping policies and practices collaboratively, a more effective and balanced internet governance framework could be achieved.

Overall, the analysis underscored the importance of broad-based participation, the need for a broader definition of governance fragmentation, and the significance of coordination and collaboration between internet governance bodies. It also highlighted the potential benefits of aligning the multi-stakeholder-driven system with the multilateral or state-based system. These insights bring attention to key areas where improvements are necessary to ensure a more inclusive, effective, and cohesive approach to internet governance.

Roswitharu

The issue of user experience level fragmentation is a complex one, with perspectives depending on one’s geographic and socio-economic context. People in Silicon Valley and the US West Coast express major complaints about actions taken by governments in authoritarian countries or the privacy laws of the European Union. Conversely, Europeans primarily complain about the actions of Silicon Valley platforms.

Maintaining a balance between the global nature of the internet and the preservation of local sovereignty is vital. The original vision of the internet was to unite the planet by enabling unrestricted communication. However, disparities in values, economic systems, and languages have caused tension and division.

Efforts to address these issues should focus on pragmatism and determining the existence of a problem rather than getting caught up in semantics. Rather than engaging in unproductive debates over definitions, it is more constructive to seek agreement on the existence of a problem. This pragmatic approach allows for practical solutions and avoids getting stuck in semantic disputes that do not lead to meaningful progress.

In conclusion, addressing user experience level fragmentation requires considering different perspectives based on geographic and socio-economic contexts. Acknowledging concerns raised by individuals in Silicon Valley and the US West Coast about governments in authoritarian countries or EU privacy laws, as well as addressing European concerns about the actions of Silicon Valley platforms, is essential for improving overall user experience. Striking a balance between the global nature of the internet and the preservation of local sovereignty is crucial. Taking a pragmatic approach that focuses on assessing the existence of a problem rather than getting caught up in semantics will drive progress towards resolving these challenges.

Wim Degezelle

Internet fragmentation is a complex concept without a clear definition, as there are different views on the subject. However, three categories or “baskets” of fragmentation have been identified: fragmentation of Internet user experience, fragmentation of Internet governance and coordination, and fragmentation of the technical layer. The complexity of the topic led to the abandonment of creating a precise definition for Internet fragmentation.

To facilitate discussions and understanding of Internet fragmentation, a framework was developed. This framework aims to provide a structure for discussing the various aspects of Internet fragmentation rather than providing a strict definition. It outlines the three aforementioned categories or “baskets” of fragmentation: fragmentation of Internet user experience, fragmentation of Internet governance and coordination, and fragmentation of the technical layer.

Multi-stakeholder discussions are crucial when addressing Internet fragmentation. These discussions involve various stakeholders, who may differ depending on the specific category of fragmentation being discussed. This highlights the importance of different groups coming together to discuss Internet fragmentation, with each category attracting different stakeholders.

To effectively address Internet fragmentation, it is necessary to have discussions that span across all categories. This is because guidelines for avoiding or addressing fragmentation may not be fully complementary between different categories. By having discussions across the “baskets” or categories, a cross-category approach can be developed to better tackle Internet fragmentation.

In conclusion, Internet fragmentation is a complex issue without a definitive definition. However, through the identification of three categories of fragmentation and the development of a framework for discussions, progress can be made in understanding and addressing this issue. Multi-stakeholder discussions that encompass all categories are essential to effectively navigate the challenges posed by Internet fragmentation.

Audience

The analysis delves into the topic of internet fragmentation and its various implications. It highlights the negative effects of technical fragmentation on the internet’s ability to evolve, innovate, and adapt. The argument is made that when the internet is split into different networks, its potential for growth and development is hindered. The analysis underscores the importance of maintaining the unity and interconnectivity of the internet to enable progress and positive outcomes.

The need for a uniform and unharmful user experience on the internet is also explored. It is noted that elements representing the user experience should be safeguarded to ensure a consistent and positive online environment. Additionally, the significance of interoperability is underscored. It is stated that interoperability is crucial for the smooth functioning of the internet, allowing different systems and devices to communicate effectively with each other.

The harmful effects of fragmentation are examined, particularly in relation to blocking user access to certain sites or content. This type of harmful fragmentation is seen as a significant problem, as it restricts users’ freedom and limits their ability to fully utilize the internet.

The analysis further delves into the impact of fragmentation on democracy and the digital space. It is argued that the integrity of the digital space is crucial for the defense of democracy. The risks associated with fragmenting the digital space are highlighted, bringing attention to the potential negative consequences.

Additional topics discussed include the ownership of IP addresses and the importance of decoupling IP addresses from networks. The analysis suggests that everyone should own their own IP address, allowing for more control and autonomy in the online space.

The involvement of regional or cultural leaders in internet policy formation is explored as a way to mitigate the impact of internet shutdowns and address the needs of specific communities. Engaging these leaders can lead to more inclusive and effective initiatives.

The potential widening of the digital divide due to the availability of satellite internet is also discussed. The rise of satellite and private corporate satellite internet is seen as a concern, as it could lead to the exclusion of certain populations and affect the quality of the online experience for many.

The challenges of implementing recommendations for internet fragmentation and the importance of internet governance are also addressed. The analysis acknowledges the difficulty in implementing recommendations due to the evolving and decentralized nature of the internet. It is concluded that there is a need to create governance to prevent internet fragmentation and ensure a cohesive and inclusive online environment.

Overall, the analysis offers a comprehensive examination of the topic of internet fragmentation, highlighting its negative effects and the importance of maintaining a unified and interconnected internet. It emphasizes the need for a uniform and unharmful user experience, interoperability, and inclusive internet policies.

Session transcript

Bruna Marlins dos Santos:
th th th th th th th th th th th th th th th that we, the Policy Network, just put out. So the session today will be a little bit of a presentation of the discussion paper and also some debates between both the pen holders of the documents and some community commentators on the three aspects that we described in the Policy Network framework. So, and before we move on to that, I just wanted to start with a very big thank to every single volunteer of the Policy Network that helped us shape this document. Some of them are on the stage with us and some of them are also here in this room, so thanks a lot for joining the conversation and helping us construct this debate. I’m gonna hand the floor to you, Wim, right? And then we can move on

Wim Degezelle:
with the agenda. Thank you, and this is on, and as you see we have a presentation. I think the first, Bruna, thank you, you already gave the overview of the agenda, so I will give the brief introduction. My name is Wim de Gezelle. I’m part of the Policy Networks. They are an intersessional activity by the IGF. That means that they also receive support by the IGF Secretariat, and I’m with the Secretariat as consultant to help this Policy Network to start. So, a brief introduction on the Policy Network on Internet Fragmentation. It is an intersessional activity. That means we not only work at this IGF meeting, but we start way earlier. We started working in May, and even before that, to prepare and work to this session and to the IGF. So, it is the Policy Network Internet Fragmentation. There are other Policy Networks also on the agenda, but this one on Internet Fragmentation has the So, what are the objectives of the policy network? The policy network is an open and interconnected network. It’s a network that wants to further the discussion and raise awareness on fragmentation on technical, policy, legal, and regulatory measures that may, and actions that may pose a risk to the open and interconnected, interoperable Internet. So, what are the objectives of the policy network? The first objective is to understand what is actually meant with Internet fragmentation. So, come up with a comprehensive framework and overview of what Internet fragmentation is. We look at case studies, what actually is happening, try to come up with examples or look for examples. And then the third question is what to do about it, how to address or how to address the issue. So, we try to avoid fragmentation. This, looking back to what we did last year, we actually dove into those questions, and as often is the case, you want to find the definition and try to define what you’re working on. Through the webinars, like you see, we had webinars during the year, asking specifically that question, what is the definition? What is the definition of Internet fragmentation? What does it actually mean to people when they talk about Internet fragmentation, and what should and what can be done about it, and who should be doing what? Very quickly, through those webinars and those discussions we had, it became clear that trying to come up with a definition is not really something that is still possible. It might have been possible earlier on, but at this how the people are discussing and trying to squeeze that all in one clear definition is not helpful. What we did instead, or through the work, it became clear there are different views on what fragmentation is, and that’s how last year, as the outcome of last year’s discussion, we came up with a framework, saying, actually, what is the definition of Internet fragmentation? What is the definition of Internet fragmentation? What is the definition of Internet fragmentation? What is the definition of Internet fragmentation? Actually, if we listen to the people, if we listen to the comments we get, we kind of can form three baskets of what people see and understand as fragmentation. What’s in those baskets, we will further discuss and hear from the panelists today. But that, and I think that was the main output of our work last year, that allowed us to come up with a framework. The framework you see in small, and this, I think, is a larger version. So a framework that says, well, we found that when people are talking about fragmentation, we can either form a basket that we can label as fragmentation of Internet user experience, fragmentation of Internet governance and coordination, or people really refer to fragmentation of the technical layer, technical architecture of the Internet. That were the baskets that we could form. With the important comment we got, those baskets are not alone, are not completely separate. There are interactions, there are overlaps between them, and that’s, they shouldn’t be considered as specific, as separate silos. One comment before I hand over to Sheetal to discuss what we actually did this year, is we labeled the framework as it is a framework for discussing Internet fragmentation. We don’t want to come up with a framework to define what it is, but we, from the beginning, say, well, we, this framework should help to discuss and further the discussion. Because I think that’s one of the main evolutions we saw in the work and in our discussions, is that people started to move from, well, we need to define something and then we need to discuss, in kind of an understanding. that it is important to discuss with stakeholders and have these multi-stakeholder discussions on Internet fragmentation. But these stakeholders are not necessarily always the same. It is possible in those three layers of our framework that you need to sit together with different stakeholders or different types of people, different organizations. And I think that’s one of the major or one of the main findings we had last year in our work. Together with, and that probably will become clear out of today’s discussion, the second point is that what those different groups are in those different layers, they come up with actually guidance or guidelines or ideas on how to avoid or address fragmentation, will not necessarily or does not necessarily, is completely complementary with each other. So at the end of the discussion, it will still be necessary to, across those baskets, have discussions on how to actually address ND. So that’s what we did last year. I hope that was clear. So I end here. This was a framework, and that was also the start of our discussions this year. So I hand over to you, Gita.

Sheetal Kumar:
Hi, everyone. Thanks, Wim. It’s great to be here and to be presenting our output for this year. And as Bruna said, we co-facilitate this policy network, and it’s really nice, I think, to now not be doing so much of the work, but to be hearing from you. Once we’ve heard from the drafters and the commentators who will be responding to the drafters of this year’s output, we really want to hear from you. So the work is going to be in the room, and then, as you can see from the agenda as well, we will also be looking for feedback after this session. So what have we done this year? As Wim said, we have been building on the work of last year, where we developed a framework to conceptualize what Internet fragmentation is understood as, as we have just discussed, in many different ways. And so this framework we developed is to support, it’s really a tool to support better understanding and clarification of what Internet fragmentation is. And in that sense, what we were able to do this year is further unpack what the framework is, and those three areas which Wim outlined, and those are the fragmentation of the technical layer, the user experience, and governance and coordination. And what we wanted to do in unpacking these areas was better understand what the priorities should be in each area, so what is actually harmful and negative, and from that, assess what can be done. So develop some recommendations for action, and where we are really, I think, looking forward to hearing from you all and from those who have been so involved already is… Well, really, whether or not you think that these recommendations are helpful, whether anything is missing, and whether you think the way that the different elements of the framework have been unpacked and what has been prioritized, it aligns with your view of what we should be focusing on as an international community when it comes to this issue. So, what we are going to do is take each element of the framework one by one, and I also invite you to go to the PNIF’s webpage and look at the discussion paper as we’re discussing it here, and consider also in the second part of this session how you may want to react to what is being presented. So, we’re going to do, first of all, a presentation of each track or each element, and so we have the very hardworking drafters of the output document here, and we’re going to go one by one, hear from them. They’re going to just present the top-level findings or the top-level points, so what priorities they found need to be addressed, and then some of the recommendations, and then we’ll have a commentator to respond. And so, we’ll do that for each, and then we will open up. So, without further ado, I’d like to hand over to Rosalind Kenny-Birch, who is with the UK government at the Department for Science, Innovation, and Technology. And, Ros, you worked with others to develop the chapter in our document focused on internet governance and coordination and the fragmentation of that. So, in the next three or four minutes, would you be able to just provide an overview of what that chapter says and the recommendations that you have for addressing this element of fragmentation? Thank you.

Rosalind Kenny Birch:
Thanks very much, Sheetal, and great to see everyone. here today. I think one of the points of this panel discussion too is to really provoke a conversation. Our multi-stakeholder working group that worked on this chapter had quite a few different perspectives because fragmentation is such a complex topic it can be to discuss. So it will be really interesting to hear your insights here today from a wider group of perspectives and so I would really invite you to engage in the discussion, offer some of your own insights and challenge afterwards as well. But just to present on what we’ve written up in the preliminary draft chapter on fragmentation at the governance layer of the Internet, I’d first like to lay out a little bit of context. So our multi-stakeholder working group draft wrote that fragmentation at the governance layer primarily relates to the interactions between global Internet governance and standards bodies. When these bodies do not coordinate inclusively, it can and does result in fragmentation. This fragmentation can manifest in siloed or duplicative discussions, an exclusion of specific groups from participation, resulting in decisions being taken without consensus from the global multi-stakeholder community. And it’s important to note too that fragmentation at the governance layer can also create knock-on effects at the other layers of the Internet user experience and the technical layer. So there were a couple of different components to our analysis about where fragmentation can sort of emerge or come from at the governance layer. And one was duplicative mandates. So if part of specific Internet governance bodies mandate is unclear or may have overlapping elements with a different body, this could foster a competition for legitimacy or create confusion between bodies. And therefore that can make it difficult for stakeholders to know where and when they need to engage in a specific conversation. Another point we observed was when mandates are exclusive or don’t fully empower all elements of the multistakeholder community to participate. So we see inclusion as central to combating that so that people can participate on an equal footing. And then finally, taking actions at the right level. So individual governments’ actions can sometimes lead to divergence in the rules applied to the Internet and its management. And in that sense, it’s really important that national governments and Internet governance global bodies are closely conversing about issues, specifically when they’re being developed or discussed through multistakeholder processes already. So with some of those analytical points, we proposed a couple of different recommendations. And again, very eager to get a wide range of perspectives and feedback on this today. But one was not to introduce further bodies into the Internet governance landscape. The Internet governance landscape is already complex. And as we all well know, through all our travels, there are a lot of different conferences, events taking place that we engage with across bodies already. And people only have so much time and only so much financial resource to be able to engage in these. So further perpetuating that complex landscape could end up excluding people from discussions if they don’t have the resources to fully participate in more and more emerging bodies and spaces. However, that being said, another recommendation we made was, therefore, it is important to improve coordination between existing Internet governance bodies to help address perceived or real gaps in these spaces. So coordination between existing Internet governance bodies. governance bodies is needed to help address that as well. Thirdly, and in order to avoid siloed public policy discussions regarding internet governance, all internet governance bodies must be fully inclusive to stakeholders and enable meaningful multi-stakeholder participation on an equal footing. We also believed that that would help address instances of fragmentation at the governance layer. And then finally, we recommend that existing global internet governance bodies must engage more closely with national governments. So this goes back to our point of analysis before. There’s actually a two-way street here. National governments, when looking at proposed legislation, can actually really benefit from talking to global internet governance bodies about their plans and therefore receive important information and feedback. But equally, global internet governance bodies should be on the front foot about engaging with governments and ensure that governments know what activities are going on in the global space to help potentially avoid duplicative measures. So I’ll stop there. And again, an exciting part of this panel is we’ll now receive some challenge and other perspectives on this work. So with that, I hand over to Jordan Carter. Great to have you here.

Jordan Carter:
Thank you, Roz, and good morning, everyone. My name is Jordan Carter. I work for the AU Domain Administration, the ccTLD manager for .au. And it’s a pleasure to offer a few not very provocative provocations to the group to help the conversation happen. I am making some personal remarks. I’m not advancing an outer position here. Overall, I think this is a good start to the discussion around fragmentation. And my congratulations to the volunteers. I should disclose that aside from joining the email list a couple of months ago, I have not been involved in any way in this paper. I was reading it fresh to prepare for this session. And I agree with the analysis so far as it goes. So in the end, my provocation is relatively brief. of broad-based participation is vital, particularly in the standards bodies and in some of the global internet governance organizations like ICANN. The Western bias in participation is undeniable, and meaningful participation from around the globe and from the groups that are not participating is absolutely essential to within whatever framework that we have. When I read the very first box, the definition here, fragmentation of internet governance primarily relates to the interactions between global internet governance and standards bodies. My core thesis might be that that’s too narrow a definition of governance fragmentation, because one of the key agents of governance are governments, and if to not deal with government-driven, policy-driven fragmentation in this section, I think maybe complicates the picture, though I’m sure I can in turn be challenged about that. You know, and part of the challenge there is that the definition of internet governance itself is under challenge. You know, do we think that it’s just about the governance of the internet, which is a distinction that has been made, or is it the governance on the internet, or is it these broader questions of digital governance that get often tacked on to those infrastructure-level discussions today? Another challenge I think it would be worth taking into account in the governance fragmentation is that caused by the narrow mandates of a lot of the technical internet governance organizations. Those narrow mandates are there for good reasons, but sometimes they make it difficult for those organizations to actually deal with a systemic view of what’s going on in the internet. So you can have a situation where each silo is dealing with its narrow mandate, and none of them are prepared to take a view about the system as a whole, and so I think there are some institutional drivers there at the global internet governance level towards fragmentation. The paper talks about the need for better coordination, and I agree, and it suggests further research, and I agree, but quite a lot of the people who are involved in these global internet governance bodies could undertake meaningful coordination together without further research. They just need to start doing it. Some of it is being done, but the challenge not to this paper but to those organizations is get coordinating. Get coordinating in the face of the challenges that the internet is throwing up, and in the challenges to the governance model that we see today, and I really did appreciate the paper calling out the duplication and the risks with some of the proposals in the Secretary-General’s policy brief for a digital cooperation forum, for example. The last thing that we need is duplicative institutions being established with new resources going to fund them instead of the resources that the IGF, for example, is crying out for and could make good use of. And the last point I want to make, I guess, having argued that the governance discussion could use maybe a broader look, is the multistakeholder-driven Internet governance system and the multilateral or state-based regulatory and legal system, I think, need to be much better at working effectively together. The two can and should shape each other, and the multistakeholder dialogues in organizations like the IGF could usefully inform policy if more of the people doing public policy related to the Internet were aware of their work. So I’ll probably wrap it up there. I don’t know if that was provocative enough, but thank you for the chance to comment.

Sheetal Kumar:
Thank you so much, Jordan. And we will be going through each of the elements of the framework first before we open up. And I also wanted to let you know that we had some written feedback from the community when we published the paper and wanted to weave in some of that into this discussion as well. So there was one point of feedback relevant to the Internet governance and coordination chapter. It was really about providing concrete examples of how governance fragmentation causes Internet fragmentation, and just it was checking that what our understanding in the paper that you put out in the paper is of Internet governance and coordination fragmentation is essentially that the existence of multiple uncoordinated international processes is a source of fragmentation. So if so, why is that treated differently than governmental and corporate-sourced fragmentation which are both addressed under user experience, which we’ll come on to. So I think there’s a question there about what is the focus of this chapter? Is it on the existence of multiple uncoordinated processes, which I think you have addressed, and that is the focus. And then, Jordan, you mentioned the importance of ensuring coherence or at least engagement and coordination. And it might be interesting. to hear from you later, but also from everyone here and online, whether you have any ideas for concrete mechanisms or examples that are already existing for how that coordination can effectively take place. So without further ado, we’ll move on then before we open up to the second chapter, and we have here Vittorio Bertola, who was one of the co-drafters of this chapter within the group, and I know Vittorio wear many hats, so I don’t know how you prefer to be introduced, but please do provide your, well, please do choose your hat, and then an overview of the work that you’ve done in this year to assess the priorities in the user experience fragmentation that we we had outlined last year, and then also the recommendations that you put forward. It’s a very hefty chapter of the discussion document, so good luck with summarizing it in three or four minutes. Yes, it’s pretty hard. Well, I don’t know,

Roswitharu:
maybe my head is like having gray hair and having been for too many years in this kind of discussions, almost 25 now, but I work for Open Exchange, which is a German open-source software company, and so, I mean, I was one of the people that tried to tackle this problem of user experience fragmentation, which is, I think, the hardest and most vague one. It’s because the entire discussion of fragmentation started from the technical level, and then multiple stakeholders tried to add more things into it, and user experience things are mostly coming from this kind of approach. So we tried to go for a definition which is completely open and pretty broad, basically by saying that anything that makes two different users of the Internet see different things when they try to access the same service, website, whatever, or do the same thing over the Internet is a form of user level fragmentation. And of course, if you take this very broad approach, then there’s the need to tell between the positive cases and the negative cases, because there are many situations in which which actually this difference in experience is a good thing. It’s made to help the user, to customize content for them, or it’s made to protect the user, to give them rights, for example, through privacy laws in specific countries. Or it’s done, for example, to prevent them from accessing unhealthy, like malware websites or whatever. So you have to then define what is a negative case of fragmentation. There could be another approach, and some people have argued for it, of just finding a definition that covers only negative cases, but then we found this becomes harder and harder. So we’d rather take, then, a case-by-case approach. So by starting from this very broad definition, we identified several priorities in different cases, and then we want to work on them one by one, because they all have a different need and a different view to be taken into account. So we identified the two major sources of this kind of fragmentation, and it’s never the user. Usually, it’s either a government that, for some reason, wants to exert sovereignty and modify the experience for their own citizens only, or it’s a company, usually the global platforms, that wants to build like this kind of ecosystem, or world gardens, how you want to call them, that basically prevents users from going somewhere else, because they want, of course, to exploit them for business reasons. And so through these two opposite pushes, a number of phenomenon emerge. So we identified six priorities, and the three top ones, the ones we would start with, are, well, first of all, internet shutdowns. These are the principles, anyway. The internet shutdowns, we discussed a bit whether it’s a user experience level thing or a technical thing, but in the end, we decided we could discuss it at this level, and we think they are a negative thing. So we already received a comment of someone in the community saying that there’s actually something like a positive internet shutdown. I don’t know what it is, but it will be up for discussion. And then we We, the second priority we identified is the case in which national blocking or law enforcement orders have global effects. So spilling over to other jurisdictions and creating, let’s say, issues, I mean, problems to other countries and other citizens. And then the third case was the walled gardens I mentioned. So basically the building of barriers and the restriction of user choice and competition, both by governments when they have like laws that favor, for example, national problems over the global ones, but also by the global internet platforms. And then there’s more because we also would like to discuss national level censorship when content gets blocked for political reasons. We would like to discuss the violations of network neutrality, which are another issue. And the last one is the geo-blocking for intellectual property reasons. So as you see, there’s a long list of things to do and I encourage people in the community to participate even on specific issues. So we tried, I mean, we don’t think we can make suggestions for everything at the same time together, but we tried to identify five principles that are summarized in the slide. Basically the idea we would like to start with is that there should be a principle of equality, meaning the default should be that everybody should be able to access everything in the same way. And then the second principle is a partial correction to this, it’s a principle of enhancement. So when the differentiation, the customization is done in the interest of the user or asked for by the user, then it’s a good thing. And so we don’t need to worry about it. The problem is when this gets imposed onto users by a third party against their wish, and then in this case, you could have negative effects. So the first suggestion is that we should have an impact assessment whenever you do something that creates a deviation from the global internet, whether it’s a national regulation, national law, or even a business decision. Then there should be harmonization. So the idea is that, especially in regulatory terms, we should rely as far as possible on global agreements on how to tackle the same problem in the same way everywhere. And only go to national regulation when either the harmonization is missing or doesn’t take into account any national needs. But then the last, and maybe. the most important principle is that in the end, there should always be free choice. So the users should be free to choose how they use the internet and where to go. And unless there are very important reasons to make that, I mean, to prevent that from happening, in the end, the user should always be trusted to be able to do the good thing. So thank you. I think we have Mariela as a commentator and I give the floor.

Marielza Oliveira:
Very much. Thank you. I really liked your presentation. Well, let me start by saying Konichiwa, and my name is Mariela Oliveira. I’m the Director for Digital Inclusion, Policies and Transformation in the Communications and Information Sector of UNESCO. And this work for the policy network on fragmentation is particularly important to us because what my team and I do is essentially defend freedom of expression, access to information and privacy. And these are the rights that are most directly impacted by fragmentation. First, I want to say also a big congrats to Bruno, Chih-Tao and Wim who have been steering this work since the last year and it’s shaping up super well. So, well, let me say that to me, the user experience fragmentation is maybe the most interesting type just because it has this positive side when users are served with custom features or content that is set and the negative side when users are actually prevented from accessing certain features and service and content. And the discussion paper is actually concerned very much primarily with the negative side, which is essentially about how these features, these mechanisms actually impose barriers that isolate or trap users into an information environment from which they can’t really escape. And a consequence of isolation and a major source of the harms that happen as a consequence of this type of fragmentation is essentially that it enables serving trap users different world views than are served from other internet users. And that brings a really important point that maybe it’s not quite explicit in the paper yet, but I like that it was mentioned, alluded to in the presentation just made is that negative user experience fragmentation actually affects all users, not just the ones immediately deprived of access to the internet or to a specific content and services. Some of the users that are excluded are prevented from enjoying their human rights to access to information or their freedom of expression and other rights, but, and they may end up being driven to echo chambers and elements like that. But it’s also true that the non-targeted users who are deprived of their rights to freely associate with those who are isolated to seek information from them and impart information to them. And therefore, the consequences that these two groups end up kind of driven apart. There’s an increasing gap in the information and knowledge between them. And that separates people. And many times, especially when it’s done for political purposes, the likely consequences, polarization, which then spews beyond the internet and into the real world and actually may. affect even non-internet users. So I think that this is a particularly important topic. In UNESCO, we work with what we call the Rome Principles for Internet Development, in which the internet should be human rights-based, open to all, accessible by all, most stakeholder-led. And the user experience fragmentation is very much about the explicit decisions that reduce openness and accessibility, which then has consequences for human rights. And when we talk about bad fragmentation, it’s essentially not done by a most stakeholder process. It tends to be a very, you know, kind of a unilateral decision process. One of the things that I really liked about the paper is that it actually laid out principles specifically for fragmentation that were mentioned in the presentation, and particularly this issue of free choice and equality of access and enhancement of experience in others. These are very much in line with the existing principles and particularly with the human rights framework. And the paper actually received a number of comments already, including a suggestion that these principles regarding user experience be explicitly based in human rights standards and processes, which are already, you know, globally acceptable, accepted, and there is like a solid jurisprudence foundation around them. And particularly, it just said that we need to consider the three-part test on legitimacy of interferences with the freedom of expression. And so this is an element that I think it would be important to add to the paper as well. Some of the points that have been already made through comments is that there’s some content that actually is relevant to block, that is legitimate to block, because there’s a law that prescribes their blocking, they pursue a legitimate aim, and they are in line with a democratic society. And content like that, for example, has to do with child pornography, terrorism, incitement of violence, and things like that. And this has not yet been reflected in the paper on how we’re going to be kind of disambiguating between these different types. And the next type in the next draft, I think, would be, you know, should be including some of that, and maybe even making reference to the speech, you know, and debates around what is awful versus what’s lawful. And, you know, maybe just to finalize, I think that the paper would also benefit from bringing up some of the potential mitigation measures, including, for example, talking about enforcing platform interoperability, data portability, strengthen users, media and information literacy that can counteract the effects of the echo chambers and the disinformation and other, you know, that are created by fragmentation. And so, I mean, I’m going to end up here, because I know that you would love to hear the comments from our participants as well. Thank you very much for the chance to comment.

Sheetal Kumar:
Thank you, Marielsa, that was great. And you were very positive about the chapter. and I think you also very helpfully reacted though to some of the feedback that we got online, the written feedback which I have to say was really helpful and constructive, so you can also access it on the web page, but quite a lot of it focused on the need to be more explicit about the use of different terms, the connection between the human rights standards and negative user experience fragmentation and explaining the difference between what is called negative and harmful fragmentation in terms of user experience and as I said being more explicit about that, so it was great to hear you respond to that as well because I think when we come to you on the floor and online, please do sort of pick up some of those points or add your own, but certainly a lot of really helpful feedback already from you Mariel, so thank you for that. So we are going to move now to the chapter that looked at technical layer fragmentation and Olaf Kolbmann is here with us and to present the chapter and really looking forward to hearing from you Olaf and then you’re going to be joined afterwards or we are going to be joined by Suresh Krishnan from the Internet Architecture Board who will respond and then we’ll open up, so please do get ready with your reflections and questions. Without further ado, over to

Olaf Kolkman:
you Olaf. Thank you very much, my name is Olaf Kolbmann, I work with the Internet Society, I’m principal there. Chapter on technical infrastructure. When we speak about technical infrastructure of the Internet, that is the network of networks. that are internetworking to provide global connectivity, 80,000 networks that interconnect to provide global connectivity, and the supporting infrastructure that makes that happen. That’s for us the sort of internet technical infrastructure. Now a few ideas that we had in constructing this chapter, I want to highlight those without going to the details of the chapter itself. But first I want to urge people to review this. This is a work in progress and it becomes stronger when stakeholders engage with the document and provide comments. At this moment I feel that there have been too little eyes on this chapter and we can use help. Anyway, the chapter starts with saying that the technical fragmentation is not something that is clearly defined. There is an operationalized definition of fragmentation around. It’s a work by Baltra and Heidemann, but they have a criterion that says if 50% of the public IP addresses cannot reach the other 50%, then you have a fragmented internet. That’s a very, very fragmented internet. That means that half of the population cannot reach the other half of the population. I think we don’t want to be there. It’s like you’re losing your hair and at some point you’re bald and at that 50% point, that’s true baldness, I would say. So how to prevent getting bald? That’s sort of the question. What we also said is fragmentation is is not necessarily everything where people choose to not interoperate and not internetwork. And there are cases like that. Like my home network, my home automation network, my own home automation network does not need to be on the internet directly. That’s a choice. That’s a choice you can make. Yesterday in a session on fragmentation, somebody said you have good fragmentation and bad fragmentation. I sort of like that idea. Decentralization is not fragmentation. Lack of connectivity because you choose not to connect is not fragmentation. Temporarily having to reroute your traffic because of a network problem, so to speak, not fragmentation. But what is fragmentation? How do we define it then? Well, again, that’s very difficult. But the approach that we took is using the critical properties as one of the frameworks. There are multiple frameworks that we point to the critical properties that the framework that the Internet Society developed that basically defines the critical properties of the internet in non-technical terms. They’re inspired by the network architecture, and I won’t go into the details of them. But that’s one of the frameworks where you can say if you lose these critical properties, if you’re sliding down the scale away from these properties, then you run into the risk of fragmentation. So this is the approach that we took. Another framework you can look at and approach is that of the public core. The public core is a framework that was developed by a think tank in the Netherlands and later further analyzed and defined by the Global Commission on the Stability of Cyberspace. That’s another framework and lens through which you can look at the internet and say, OK, we’re impacting elements of the public core, and that might lead to fragmentation. I think one One of the things that we’ve done in this document is also by doing that, by using this type of non-technical frameworks, frameworks that do not specify exactly the technology that’s being used, we allow for evolution. Because the Internet really is still evolving, and I think that’s important, that we don’t ossify as we usually say, the Internet in its current state. We need to continuously be able to evolve it. Another aspect of fragmentation that we looked at was basically what I would call the evolution of the edge, whereby what we see is that there is a lot of changing in routing behavior. transit or building their own network compared to using transit to get close to the user. That might cause a fragmentation of a different sort, basically the digital divide, increasing the digital divide of users that are close to that type of infrastructure and users that are not. That has impact on the application layer. There might be users that have a very good user experience, and there might be users that do not have a good user experience, and that is due by the way that the Internet evolves in more richer parts of the world versus less connected parts of the world. Hard to catch within those critical frameworks that I just mentioned, but it is a point that we point out in the document. Going to the recommendations. So the recommendations are basically look at these frameworks. Use those frameworks, these critical properties or the public core, and make sure that together we protect these properties. Make sure that we can continue to network and provide a global network to everybody, that brings the opportunities to actually do all this user stuff. If we fragment on the user layer but still have a global network that connects us all, we have a chance to defragment on that user level. But once we have fragmented the internet technical infrastructure, that fragmentation will also be reflected in the user space. So it’s much more important, it’s not much more important, it’s very important to take care that those properties are protected and we have to do that together. There are very few ways to actually understand how that fragmentation is happening, there are very few measurements around that look into, on a longitudinal scale, on what the evolution is that impact fragmentation and how it’s caused and how it evolves. This is really a call for people to set up measurements and think creatively on how you would assess this fragmentation on a technical layer. Once proposals are introduced either on the policy or on the technical layer in standardisation efforts for instance, do assess them against these critical properties, do assess them against frameworks and see if we lose interoperability. See if we lose the ability to connect. If that is the case, perhaps it’s not such a good idea. Of course, we’re into this together and the multi-stakeholder approach is a good thing. in order to make sure that what is being delivered, both by the private sector developing these technologies and the technical communities working on these technologies, as well as by the civil society and the governments, to make sure that we stay globally connected and don’t split up this network of networks. I think that’s the summary.

Sheetal Kumar:
That’s great. Thank you, Olaf. And we have Suresh online. So let me check, actually, do we?

Suresh Krishnan:
Yeah, I do. I’m here. Please go ahead. Thank you, Sheetal. Thanks a lot. Like, thank you for that excellent summary. And there’s like very little fault in there. So I’m just going to go over a few things that I think are important and then kind of give you a little, some minor hints to improve. But I think the key part that this thing got right is that the internet is a decentralized set of networks. There’s no single point of choke point of control over this. There’s like multiple people who, I would say, collaboratively got together and built this large network. And I think that’s a key thing to protect. And that does not mean fragmentation. That’s by design that these networks are like independent and decentralized. And what really holds them together is the technology that offers the interoperability. I think that’s like something that you got like really like well done in the first piece of this, where we talk about the technology being the thing that holds stuff together and not really the administration of them. And I think that’s like a key point to emphasize. And the second thing is like on the critical properties of the internet, I think openness is one of them. And also the incremental deployability of stuff. And it kind of ties into your, I would say, lack of ossification, right? Like new technologies keep getting deployed on the internet. So for example, we had like IPv6 come in, you know, like at some point we ran into a situation where there’s like more than 4 billion people on the internet. And then we had like ways to kind of get around it. And it takes time, but we are able to build newer things on top. And we’ve had technologies on the internet now that the internet pioneers couldn’t have imagined, right? Everything depends on it. So the way in which like, you know, we can put like newer things on the internet and still expect them to work with people around the world is really because of the openness and the connectivity that’s there. So it’s something that we should strive to preserve, like you said. And so the other kind of key thing in there is that the layering principles of the internet as well. So like the internet kind of holds together at like, you know, the layer three and four kind of of the OSI model, like in a very high level. And there’s also applications that like we have a rich variety of applications, but as long as we keep the kind of technologies and the lower layers to like, I would say a globally interoperable minimum, I think like things are going to be good. And that’s what we should also look for. And also try not to push in, I would say. So I think like Mariel talked a little bit about the content in there, right? So the question is, should the content filtering happen in the lower layers or the higher layers? And I think like I would posit it should happen at the higher layers because it’s kind of, we are talking about like transporting, staying connected while enforcing like, you know, millions of laws, like, you know, state laws, like country laws and local laws are like very different around the world. So like, you know, instead of trying to like do this at like a lower layer, which the whole world shares, like, you know, we should kind of try to keep it at the higher layers where that belong. And that’s also alluded to in the document. One of the things is like the messaging was given as an example, Olaf, right? And we have like, you know, something very positive happening recently in the space with the multi-stakeholders architecture is that the Europe came up with this like Digital Markets Act, right? Like which opened us the gatekeepers to open up the communications and the IETF, we started work on something called MIMI, which allows like interoperating at the message layer. So like, you know, this is like a really good, I would say, blueprint to follow where like, you know, the governments and the policy organizations and the technical community, we all work together to have this common goals of increasing the openness of the Internet and people being able to connect. And for the measurement, I think that’s a critical piece, Olaf, and I think we need to put a lot more effort into it. Right. And we need to have a lot more. measurement points across the globe and like kind of be able to have a platform where people can use like you know it’s not just for us to do stuff but also build a platform such as like the right path less is a platform like that that exists today where people can run their own experiments on this platform with the probes that exist. So maybe we should like let other people with ideas to measure things could use the same kind of platform to build their own metrics on how they see the fragmentation as like instead of like us prescribing some metrics. So that’s something that’s actually really good as well. And I’m totally with you on like the multi-stakeholder approach. I think it has worked really well to bring the Internet to this level. And I think we should really continue going down that approach to like work collaboratively and make sure that we learn from the past lessons we’ve had. And that brings me to like my last 20 seconds to critique it. And the critic is really like we kind of need a little bit more references out of this document. And so like you know talk about like when so let’s say like RFC 1958 which talks about the architecture of the Internet and principles and so on. I think it’s like very interesting reading for a lot of people who are coming in from the policy sphere to look at like you know what are the technical things that led to the Internet being the way it is and why it’s like very good for growth. I think that’s probably going to be my only critique on this.

Sheetal Kumar:
Okay. Thank you so much Suresh and thanks for joining us online. That was really really useful to get your feedback on that chapter. But also you made connections to the other chapters as including the user experience one and that’s also key. We do see these different elements of the framework as intersecting of course. The point is to help to provide a lens by which to have this discussion. And so if if you all have comments on that please do do of course share and you also I think you made a point about referencing about clarity of terms but definitions which we also got in written feedback. So that is something we can certainly incorporate. But I’ll turn over now to Bruna and Bruna will be facilitating this part of the discussion which is really to hear from you. And so please do please do get engaged. We’ll also be looking at the online participants for any questions and reflections there. Thanks Bruna.

Bruna Marlins dos Santos:
Thanks so much. Yes as we said this is the feedback moment of the session right. So any questions or comments you might have are very much welcome. We have some microphones in the room so if you want to add some thoughts or just ask questions to the panelists you can come to them. But I guess I’ll start with one remote question who is from Foley Hebert from Togo. And his question is I would like to know how we can reach every how can we reach every citizen in the world. So, I think that’s the first one. The second one is a question about how we can protect the internet in a more secure world, especially how can we overcome language barriers if content can be translated into our local languages, that would be very good. So, and also, he made a comment about the more people are aware of splinter net’s damages and danger, the more they will be ready and prepared to fight against splinter net and to protect the internet in a more secure world. So, I think that’s the first one. So, I will take three rounds of questions, like three questions in one round, and then I will divert back to the panelists, so we can start there.

Audience:
Hello, my name is Mia Kuliv and I’m also a member of the Internet Architecture Board. It’s more a comment than a question. I would like to comment on the technical fragmentation part. Olof talked a lot about interconnectivity. And, you know, I think and I think we are on the way to getting there. So, to get to the point, if you have less than 50 per cent, it means you have the internet and you have another network which is not the internet which is just not connected, right? But at 50 per cent, you actually have two internets, you don’t know which one is real internet anymore, and there’s nothing like two internets, there’s only one internet. So, this is very mathematical, and that’s the point where it actually breaks, where there’s no way to get back to one internet, and we really want to have a more open internet, because if you have two internets and you have a lot of connectivity, it’s not easy to get back to one internet, right? So, yes, then you’re, then it’s too late. But what I wanted to say is that it’s not only about interoperability or interconnectivity, it’s also about the ability to innovate and evolve the internet, right? So, if we put barriers into place where we cannot evolve the internet anymore, we cannot introduce new protocols, because if we put barriers into place where we need to back up our ideas as a vertical group, we can’t do this. So, that’s the challenge here, and the reason why we are doing this is because internet bill expansion isn’t the only reason we are romanticising it, and it I think it’s a very good question. I think the problem is that we are still interconnected, and this is like all Internet protocols are designed this way. You always have to have a way to evolve, to go on, and if you put barriers in the way where we cannot evolve anymore, that’s, I think, where we lead to, which leads to whatever fragmentation or to a very negative outcome, because that means not only that we cannot change the technology anymore, we cannot adapt, we cannot make it more secure, we cannot make it more flexible, we cannot adapt to the Internet. And so, I think that whatever we do on top of the Internet will be limited, because we cannot adapt to it anymore, and then we’re stuck, and then all the benefits we get from the Internet, where we see this positive impact on the society, on our economy, and so on, doesn’t happen anymore. And that’s like, that’s the point where we’re still connected, but the Internet wouldn’t be as useful as it is today.

Bruna Marlins dos Santos:
Thank you. I think that’s a very good point. Right here in the middle, can we get a second question or comment? Thank you.

Audience:
First, I want to thank the policy network that they put the Internet fragmentation into perspective, and we understood today what is meant by Internet fragmentation. Obviously, we have three dimensions, the policies and procedures, we have the user experience, and we have the data, and we have the data, and we have the data. And so, if we think about the policy and procedure, we can understand the subject much better. And the policy and procedure level, first, what gives us comfort is that there is a general consensus and agreement that we don’t want the Internet to be fragmented. So, all our effort is toward not fragmenting the Internet, and this gives us a comfort in this matter. And, so, in the last three decades, we have seen, we thought that the internet on national level, there were, let’s say, treaties or commitments that represents the interest of that regions or represent the national interest and in terms of social and economic. And I wouldn’t say all of them, but whatever represents the interest of the into there. of the socio-economics of this region or national, and it is there. And so these commitments or these frameworks or these agreements or treaties represent the interests of these regions or these people, and there should be a thin line between saying that this represents a fragmentation or represents the interests or the benefit of that group. Maybe this is something that we need or something that needs to be addressed. At least in terms of that, if there are any regional or national arrangements, there is a certain level that they should not conflict with the overall of the unity or the unification of the Internet. Going back to the user experience and Vittorio as an advocate of user experience, he gave us a kind of trust that the indicators or the elements that has been identified, the five elements, represents really truly the user experience, at least principles that we don’t need to be harmed. And actually when it comes to user experience, there is nothing regional or national. Internet users should be all equal. So in that terms, we need to have a global understanding of that this is the minimum of what is known or what should be a user experience. Going to the technical side, thank you that you limited this to the interoperability and thank you that you clarified that decentralization or lack of connectivity or choice is not considered a fragmentation. What gives us assurances is that the industry or even the technical community built all its work toward interoperability. And this is something that at least we feel trusted that it will continue. But again, bringing the matter to digital divide, it means returning this again to the user experience. The third thing is that the third thing is that the third thing is that the third thing is that there is a problem with the user experience, which is now a wide open issue, and this may have implications. Why it has implications? In some parts of the world, this may be a controlled user experience means, let’s say, a negative aspect on the social status of that user, and the social status of that user, and the social status of that user, and so on. So, from all of that, really, while we have some arrangements on policies and procedures, and we have some arrangements on the technical side, we are wide open on the user experience so far, and maybe this makes the start of the dimension of user experience more important than going first to the policies or the technical side. Thank you. Everyone has delta data? Someone was waiting for the next speaker to speak. Farid? , OLAF referred to a comment I made the other day about harmful fragmentation versus the fragmentation that is part of the internet, the way the internet is intended to work, and what I think about it is that it’s a very complex problem. It’s a very complex problem. I’ve come up to comment about this sort of grey area. We think of harmful fragmentation as something where, let’s say, a service provider blocks its competitors, access to its competitors, or a country blocks certain websites at the IP level or DNS resolution level and that kind of thing. I’ve had a conversation with a Facebook user who has a Facebook account, and he has a Facebook page blocking off the content that Facebook users can see from people outside Facebook who can’t get to that content, and, of course, my Meta colleague thinks that’s not fragmentation. That’s just the way an application looks. layered on top of the internet works. And one thing that he says is Facebook is not the internet. The World Wide Web is not the internet. These are application layers that are put on top of the internet. But from the user’s point of view, that often is the internet. So this kind of gets to Vittorio’s area of fragmenting the user experience. So I just sort of want us all to think about that a bit more. These gray areas that change the user experience in other ways that we don’t normally think of as fragmentation. And maybe we should start wondering whether it is and whether it’s good or bad. Just more to think about, I think.

Bruna Marlins dos Santos:
Thank you very much. Next up.

Audience:
Hi there. Thank you. I’m Christopher Tay from Connect Free Corporation and Internet3. We think that the future of the internet is really having everyone own their own IP address. I think that up until now, there’s been huge amounts of cost involved in creating infrastructure that has led to ISPs and others owning blocks of IP addresses and having the difficulty of really getting these IP addresses out on the end. So by allowing everyone to generate their own IP address through cryptographic public key pairs, we can give everyone an internet IP address. And so we think that there’s something kind of really cool going on here in Japan. Because the government has implemented a law against NTT in the 1990s so that they had the entity on the network, but they weren’t able to become an ISP, they have fundamentally created a countrywide layer 2 switching network where all ISPs can enter onto the network. And so what that has allowed us to do is become a ISP of individuals. And what that means is that every computer on the entity network using our software can have an IP address and connect and build a presence on the network. I think there’s kind of an interesting thing about decoupling IP addresses from networks. Obviously it’s very hard to have individuals create networks, but we think that there should be a decoupling between the infrastructure, the actual hardware physical layer, and the layer 3 IP layer. We’ve proven that this is a possible thing. I think that there’s a lot of discussions to be had and we hope to join these discussions, so thank you for your time.

Bruna Marlins dos Santos:
Thanks so much for your comment. I didn’t see a fourth line there, so I’m very sorry. Please go ahead, Laura.

Audience:
Hello. First of all, I’d like to thank you for the panel and the report and the work of the network in general. My name is Laura Pereira. I’m a delegate from the Brazilian Youth Fellowship. We know that the defense of democracy, integrity, and information integrity is one of the main fields to actually adopt a more protective view of the digital space currently, and in that sense, sometimes to cause fragmentation and to risk the integrity of the digital space in general. Actually in the Brazilian chapter of Internet Society, we actually made an experimental application of the proposed concept of user experience fragmentation to collaborate on a public consultation by Brazilian Internet Steering Committee on platform regulation alerting to the unadvertised risks of platform regulation when it does not consider these kinds of harms to the critical properties of the network. However, as mentioned by your presentation, it’s not easy to balance democracy, integrity, defense overall in the general sense and harmful fragmentations. Is it possible to reach this sort of balance by using the concept of user experience fragmentation? Do you intend to advance on this perspective? Is it a goal of the network? how do you see this issue in a more detailed way? Thank you for your presentation.

Bruna Marlins dos Santos:
Thanks a lot, Laura. Just reflect that I’m closing the queue, but we’re going to take the last three comments. So please go ahead.

Audience:
Thank you. Thank you for the panel. And I appreciate the discourse on internet fragmentation, but also just the difficulties surrounding understanding it. And so I can keep it very pointed to the discussion points that were listed. I am curious, as we progress with initiatives like this, do we continue to do so without engaging regional or cultural leaders in areas that experience shutdowns? Or, at the very least, massive hindrances to their freedom of access to open information? There was a point where it was national governments are what we are hoping to interact with, and no new stakeholders to involve in governance. However, there does seem to be valuable parallels between looking at the way that communities who have been oppressed in the past have also taken a stand and helped create legislation and international policy to curtail that from happening to any other group. And furthermore, I would like to raise a discussion point of meaningful connectivity, as is eluded to by the UN development goals. With the rise of satellite availability and private corporate satellite availability internet, and LLM sophistication, do we recognize the potential for not only fragmentation, but quality of online experience? Is this something where we see fragmentation leading from billions of people being priced out of meaningful connectivity? And does this appear to be a perfect storm for exasperating the digital divide, not necessarily closing it? So how does internet fragmentation policy design, like how does it design itself to effectively account for rapid development on these emerging fronts, taking into account their incredible potential to create disparity of access to meaningful connectivity? Thank you.

Bruna Marlins dos Santos:
Thanks a lot. Next comment.

Audience:
Thank you. My name is Michel Lambert, I’m coming from Montreal. I work with an organization called Equality, which is doing technology to support freedom online. This is my first participation to the policy network. I’m particularly interested by it. Hopefully, we will manage to create some governance that will prevent fragmentation. But I come from a background where we tend to believe that these discussions are difficult and sometimes they take more time. And we need to develop alternative technologies. So I’d like to use this floor now just to invite people to join us in Montreal. We are organizing a conference called the SplinterCon. And the idea is really to bring together the people developing new technologies, which are going to allow us eventually to, you know, build bridges or make holes into walls so that people can continue to enjoy the Internet. So if you are interested to be part of that process, please go to splintercon.net and join us in Montreal in December to develop those technologies. Thank you.

Bruna Marlins dos Santos:
Thank you very much. Next comment right here.

Audience:
Thank you. First of all, thank you for the excellent discussion on Internet fragmentation. I have a general question. We want the Internet to be inclusive and open, borderless, global network, which gives equal opportunities to everybody who’s connected and those who are to be connected. But because of a geopolitical situation, trade concerns, and other factors, the response of nation-states, some of the nation-states, is that they take certain actions and enact certain laws that may come under the ambit of digital sovereignty or digital protectionism, and which may result in either technical, commercial, or governmental fragmentation of the Internet. So my question from you would be that how do you… I mean, you’ve given some very good recommendations, but given the governance structure of the Internet, how do you see that how easy or difficult it would be to address this challenge of implementing those recommendations? And especially we see the Internet evolving, I mean, it’s becoming more decentralized, the Web 3.0, so how do you see addressing that particular challenge? And we talked about the five principles, the DFI principles, so if there are certain laws enacted which may compromise any of those principles, so how do you see addressing that challenge? Thank you.

Bruna Marlins dos Santos:
Thanks a lot. Last but not least, Raul.

Audience:
Good morning. My name is Raul Echeverria. I’m from Latin American Internet Association. I think that we are, sometimes we are in a loop trying to define what is Internet fragmentation or not, and it makes me remember when we discussed in the past about network neutrality and somebody introduced the expression, the concept, but we never had an agreed definition on that. So we lost a lot of energy discussing about what is network neutrality instead of discussing what we want to avoid. And if we look at the topic of this event, it’s the Internet we want. So instead of trying to define what is Internet fragmentation, we have to focus on what things we want not to happen. And so I think that the work that the Policy Network is doing is impressive, and it’s very good, and congratulations for that, and I have been part sometimes in the discussions, but we should focus also on more clear recommendations, whole things that, for example, for governments, don’t block apps, don’t adopt policies that create different experiences in Internet for users in the same country, that is, or in the globe. This is the kind of things that we have to recommend. Of course, I heard what the colleague said about when we don’t participate in a platform or in a given space, we don’t have access to the information that is there, but the point is that if I want to be I can’t do it. I can’t do it because I don’t want to be part of that, I can. But in some places, or due to some policies, even if I want to be a TikTok user or buy something on Amazon or whatever, I can’t do it. So this is fragmentation, but we will be in a loop if we try to say, oh, this is fragmentation, this is not. But there are things that we clearly don’t want to happen, because that’s the Internet we don’t want. Thank you.

Bruna Marlins dos Santos:
Thank you very much. I think we have a lot of time, but I would like to ask the panelists to leave the room at 10.30, because we’ve closed the line, and we have a deadline to leave this room at 10.30, right? And at the same time, we also have the process for bringing inputs to the discussion open until the 20th of October, but we really want also the panelists to be able to comment on that. So Olaf, Jordan, Ross, Vittorio, do you guys would like to add anything, since it’s the talks of the day and a time we have that Kevin have already started, Jasper, and then Jason?

Olaf Kolkman:
Yes. Well, not a lot. I think that a number of points were made that are relevant and critical. One of the points that was made was by Mary asking for more nuance, and I think that’s a fair comment. I think that’s a good point, and I think that’s a good point. I think that’s a good point, and I think that’s a good point. Myriam made a good point, the ability to innovate and evolve is one that we should protect. That is, indeed, the idea. I made reference to the critical properties, and one of the critical properties that we have defined, and that we sort of introduced also in this paper, is having an open architecture that consists out of building blocks, and protecting that open architecture whereby we can evolve, I think, is important. That sort of speaks to the general idea that we need to protect that open architecture, and I think that’s a good point. The gentleman that I forgot his name, but I know from the metro advertisement, he invented something new. I don’t know if that works. I don’t know how that will scale across the Internet. And as Mirja also pointed out, we did this transition from v4 to v6. That could have failed. There is technical fragmentation between v4 and v6. And the onus has been on the people who developed and are implementing v6 and give everybody their own IP address, because that was the intention of the v6 address space, to make sure that that interoperability with the v4 Internet continued to exist. And that has been 20 years of hard engineering work. Introducing something new will mean that the onus is on the entities that are introducing something new to make sure that that interoperability exists. The critical properties says there are common protocols. They don’t say it’s IPv4, IPv6, yet another protocol. The Internet should be able to continue to evolve. But we have to agree on something to keep that interoperability going. Finally, the comment on meaningful connectivity. When I talked about that evolution of the edge, this is a point that we’re making in the paper under the name of death of transit. The idea is indeed about having meaningful connectivity. If the Internet evolves in haves and have-nots, then there will be fragmentation, too. And being priced out of the market. is indeed a way to be fragmented. And, mind you, we have a fragmented user experience nowadays. There are many people who cannot afford being on the Internet. I think that’s something we all have to work on, making sure that people who want to connect can connect.

Bruna Marlins dos Santos:
Thank you very much, Olaf. Roz?

Rosalind Kenny Birch:
Yeah, thanks so much. And that’s actually a great transition line, Olaf, because I wanted to come in on some of the first comments about local languages, for example, and whatnot. I think this goes back to the broader thematic point we tried to capture in our chapter, talking about the importance of inclusion in global Internet governance bodies. And I think local languages, making sure people can participate despite their cultural or regional background is so important. So I really wanted to pick up on that point in particular. And there were further points about particular regional contexts and whatnot. Absolutely. I think I really wanted to highlight the role of the IGF’s national and regional initiatives in this regard. I think these are great multi-stakeholder spaces where people can come and talk about those local nuances, regional contexts, absolutely. And I think better coordination between Internet governance bodies, as we’ve been talking about, can hopefully help capture those and bring those different voices together as well. So not only just having these regional spaces, the NRIs, I was lucky enough to attend the Africa IGF two weeks ago, which is an absolutely fantastic opportunity to hear about some of these perspectives, but also to make sure that these are captured in the broader global discussions within these global Internet governance bodies themselves as well. So thank you so much. Just to say in general, a big thanks to the audience for the participation here. And please do, if you think of anything else, feel free to grab me on the sidelines throughout the rest of this week. Thank you.

Bruna Marlins dos Santos:
Thanks so much, Roswitharu.

Roswitharu:
Yeah, very quickly, a couple of points. There’s no time for everything. So please join the discussion on the mailing list in the call. Well, first of all, I think some of the comments pointed out what is the problem that we had to deal with when discussing the user experience level, which is that the user experience, as mentioned, fragmentation is really a big elephant as big as the planet. And people only see a very tiny bit of it and believe that that is fragmentation. And so if you talk to people from Silicon Valley, from the US West Coast, mostly they complain about what governments are doing in authoritarian countries, or even in the EU with the privacy laws and whatever. And if you talk to my friends in Europe, they complain about what the Silicon Valley platforms do. And everybody thinks that’s the big problem in terms of user fragmentation. So the first step is agreeing on whether something is a problem and why, and then starting to work together on that in a very pragmatic way. Because if we focus on definitions, we will not go anywhere. And the other thing I wanted to say is that, in the end, what we are facing now is the tension between the original dream of a united planet, borderless, and everybody talking with each other freely, and the reality of differences of values, interests, economies, and whatever, and languages throughout the planet. And so to a certain extent, you do need to preserve the local level, and even the national sovereignty, because that’s also a way to preserve the independence of peoples, something that was often hard fought. And to give them a way, to give each citizen of the world a way to have influence over the network, and not just give it to the people that manage the network globally and have more influence on it. But on the other hand, you have to avoid breaking the globalness of the internet. And so this is what we have to be concerned about, finding a balance. Thank you.

Sheetal Kumar:
Thank you. And I’m sorry that we don’t have time to provide the commentators from the earlier part of the session with the opportunity to respond. But the good news is that there is still time to respond after this session via email. Or indeed, you can come and talk. to us, and we are giving a deadline of the 20th of October, and you of course have time to look in detail at the paper online, and the slides will also be available. I think they really nicely summarize the in-depth work that has been done. So what we wanted to do, what the original mandate and intention of this policy network was to provide some clarity to an incredibly complex and indeed controversial topic. I hope that you agree that we have to some extent done that, but it is not over. It is an evolving, just as the internet is, an evolving framework and an evolving piece of work. Please do join us in continuing that work, and I think that is it apart from thanking you all for being here, for your contributions to the panellists, to the drafters, to the very active members of the network who gave their time to put this paper together. Thank you. And please do continue to be engaged. We will be here during the IGF, but you can also email us. Bruno, is there anything I missed?

Wim Degezelle:
No, thank you. Maybe we can just have the slide, because there was the link to the web page. And there you can see, because on the web page of the PNF there is a link to the discussion paper and there is also explained how you can react. So looking forward to your comments. And the only thing I want to add is thank you to everyone. Thank you.

Audience

Speech speed

192 words per minute

Speech length

3174 words

Speech time

989 secs

Bruna Marlins dos Santos

Speech speed

174 words per minute

Speech length

661 words

Speech time

227 secs

Jordan Carter

Speech speed

152 words per minute

Speech length

782 words

Speech time

308 secs

Marielza Oliveira

Speech speed

168 words per minute

Speech length

1049 words

Speech time

374 secs

Olaf Kolkman

Speech speed

136 words per minute

Speech length

1741 words

Speech time

769 secs

Rosalind Kenny Birch

Speech speed

151 words per minute

Speech length

1162 words

Speech time

462 secs

Roswitharu

Speech speed

215 words per minute

Speech length

1540 words

Speech time

429 secs

Sheetal Kumar

Speech speed

144 words per minute

Speech length

1756 words

Speech time

731 secs

Suresh Krishnan

Speech speed

217 words per minute

Speech length

1148 words

Speech time

317 secs

Wim Degezelle

Speech speed

169 words per minute

Speech length

1150 words

Speech time

408 secs

Protect people and elections, not Big Tech! | IGF 2023 Town Hall #117

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Daniel Arnaudo

In 2024, several countries, including Bangladesh, Indonesia, India, Pakistan, and Taiwan, are set to hold elections, making it a significant year for democracy. However, smaller countries often do not receive the same level of attention and support when it comes to content moderation, policies, research tools, and data access. This raises concerns about unfair treatment and limited resources for these nations.

Daniel highlights the need for improved data access for third-party researchers and civil society, particularly in smaller countries. Currently, there is a disinvestment in civic integrity, trust, and safety, which further exacerbates the challenges faced by these nations. Platforms are increasingly reducing third-party access to APIs and other forms of data, making it harder for researchers and civil society to gather valuable insights. Large countries often control access systems, resulting in high barriers for smaller nations to access data.

Another pressing issue raised is the insufficient addressing of threats faced by women involved in politics on social media platforms. Research shows that women in politics experience higher levels of online violence and threats. Daniel suggests that platforms establish mechanisms to support women and better comprehend and tackle these threats. Gender equality should be prioritised to ensure that women can participate in politics without fear of harassment or intimidation.

To effectively navigate critical democratic moments, such as elections or protests, social media platforms should collaborate with organisations that possess expertise in these areas. Daniel mentions the retreat from programs like the Trusted Partners at Meta and highlights the potential impacts on elections, democratic institutions, and the bottom lines of these companies. By working alongside knowledgeable organisations, platforms can better understand and respond to the needs and challenges of democratic events.

Algorithmic transparency is a desired outcome, but it proves to be a complex issue. While it has the potential to improve accountability and fairness, there are risks of manipulation or gaming the system. Striking the right balance between transparency and safeguarding against misuse is a delicate task that requires careful consideration.

Smaller political candidates seeking access to reliable and accurate political information need better protections. In order to level the playing field, it is crucial to provide resources and support to candidates who may not have the same resources as their larger counterparts.

The data access revolution is transforming how companies provide access to their systems. This shift enables greater innovation and collaboration, particularly in industries like infrastructure and industry. Companies should embrace this transformation and strive to make their systems more accessible, promoting inclusivity and reducing inequalities.

Deploying company employees in authoritarian contexts poses challenges. Under certain regulations, these employees might become bargaining chips, compromising the companies’ integrity and principles. It is essential to consider the potential risks and implications before making such decisions.

Furthermore, companies should invest in staffing and enhancing their understanding of local languages and contexts. This investment ensures a better response to users’ needs and fosters better cultural understanding, leading to more effective and inclusive collaborations.

In conclusion, 2024 holds significant democratic milestones, but there are concerns about the attention given to smaller countries. Improving data access for researchers and civil society, addressing threats faced by women in politics, working with organisations during critical democratic moments, and promoting algorithmic transparency are crucial steps forward. Protecting smaller political candidates, embracing the data access revolution, considering the risks of deploying employees in authoritarian contexts, and investing in local understanding are additional factors that warrant attention for a more inclusive and balanced democratic landscape.

Audience

The analysis raises a number of concerns regarding digital election systems, global media platforms, data access for research, and the integrity of Russia’s electronic voting systems. It argues that digital election systems are susceptible to cyber threats, citing a disruption in Russian elections caused by a denial of service attack from Ukraine. This highlights the need for improved cybersecurity measures to safeguard the accuracy and integrity of digital voting systems.

Concerns are also raised about the neutrality and transparency of global media platforms. It is alleged that these platforms may show bias by taking sides in conflicts, potentially undermining their neutrality. Secret recommendation algorithms used by these platforms can influence users’ news feeds, and this lack of transparency raises questions about the information users are exposed to and the influence these algorithms can have on public perception. The analysis also notes that in certain African countries, platforms like Facebook serve as the primary source of internet access for many individuals, highlighting the importance of ensuring fair and unbiased information dissemination.

Transparency in global media platforms’ recommendation algorithms is deemed necessary. The analysis argues that platforms like Facebook have the power to ignite revolutions and shape public discourse through these algorithms. However, the lack of understanding about how these algorithms work raises concerns about their impact on democratic processes and the formation of public opinion.

The analysis also highlights the challenges of accessing data for academic and civil society research, without specifying the nature or extent of these challenges. It takes the position that measures need to be taken to fight against data access restrictions in order to promote open access and support research efforts in these fields.

The integrity of Russia’s electronic voting systems is called into question, despite the Russian Central Election Commission not acknowledging any issues. These systems, developed by big tech companies Kaspersky and Rostelecom, lacked transparency and did not comply with the recommendations of the Russian Commission, raising doubts about their reliability and potential for manipulation.

The use of social media platforms, particularly Facebook, for political campaigning in restrictive political climates is also deemed ineffective. The analysis argues that these platforms may not effectively facilitate individual political campaigns. Supporting facts are provided, such as limited reach and targeting capabilities of Facebook’s advertising algorithms and the inability to use traditional media advertisements in restrictive regimes. An audience member with experience managing a political candidate page on Facebook shares their negative experience, further supporting the argument that social media platforms may not be as effective as traditional methods in certain political contexts.

In conclusion, the analysis presents a range of concerns regarding the vulnerabilities of digital election systems, the neutrality and transparency of global media platforms, challenges in data access for research, and the integrity of Russia’s electronic voting systems. It emphasizes the need for enhanced cybersecurity measures, transparency in recommendation algorithms, increased support for data access in research, and scrutiny of electronic voting systems. These issues have significant implications for democracy, public opinion, academic progress, and political campaigning in an increasingly digital and interconnected world.

Ashnah Kalemera

Social media platforms and the internet have the potential to play a significant role in electoral processes. They can support various aspects such as voter registration, remote voting, campaigns, voter awareness, results transmission, and monitoring. These platforms are critical in ensuring that voter registration is complete and accurate, enabling remote voting for excluded communities and remotely based voters, supporting campaigns and canvassing, as well as voter awareness and education, facilitating results transmission and tallying, and monitoring malpractice.

However, technology also poses threats to electoral processes, especially in Africa. Authoritarian governments leverage the power of technology for their self-serving interests. They actively use disinformation and hate speech to manipulate narratives and public opinion during elections. Various actors, including users, governments, platforms themselves, private companies, and PR firms, contribute to this manipulation by spreading disinformation and hate speech.

The thriving of disinformation and hate speech in Africa can be attributed to the increasing penetration of technology on the continent. This provides a platform for spreading false information and inciting hatred. Additionally, the growing youth population, combined with characteristic ethnic, religious, and geopolitical conflicts, creates an environment where disinformation and hate speech can flourish.

To combat the spread of disinformation, it is crucial for big tech companies to collaborate with media and civil society. However, limited collaboration exists between these actors in Africa, and concerns arise regarding the slow processing and response times to reports and complaints, as well as the lack of transparency in moderation measures.

Research, consultation, skill-building, and strategic litigation are identified as potential solutions to address the challenges posed by big tech’s involvement in elections and the spread of disinformation. Evidence-driven advocacy is important, and leveraging norm-setting mechanisms can help raise the visibility of these challenges. Challenging the private sector to uphold responsibilities and ethics, as outlined by the UN guiding principles on business and human rights, is also essential.

Addressing the complex issues surrounding big tech, elections, and disinformation requires a multifaceted approach. While holding big tech accountable is crucial, it is important to recognize that the manifestations of the problem vary from one context to another. Therefore, stakeholder conversations must acknowledge and address the different challenges posed by disinformation.

Data accessibility plays a critical role in addressing these issues. Organizations like CIPESA have leveraged data APIs for sentiment analysis and monitoring elections. However, the lack of access to data limits the ability to highlight challenges related to big tech involvement in elections.

Furthermore, it is important to engage with lesser-known actors, such as electoral bodies and regional economic blocs, to effectively address these issues. Broader conversations that include these stakeholders can lead to a better understanding of the challenges and potential solutions.

In conclusion, social media platforms and the internet offer significant potential to support electoral processes but also pose threats through the spread of disinformation and hate speech. Collaboration between big tech, media, and civil society, as well as research, skill-building, and strategic litigation, are necessary elements in addressing these challenges. Holding big tech accountable and engaging with lesser-known actors are also crucial for effective solutions.

Moderator – Bruna Martins Dos Santos

Digital Action is a global coalition for tech justice that aims to ensure the accountability of big tech companies and safeguard the integrity of elections. Headquartered in Brazil, the coalition has been gaining support from various organizations and academics, indicating a growing momentum for their cause.

Founded in 2019, Digital Action focuses on addressing the impact of social media on democracies and works towards holding tech giants accountable for their actions. Their primary objective is to prevent any negative consequences on elections and foster collaboration by involving social media companies in the conversation.

Moreover, Digital Action seeks to empower individuals who have been adversely affected by tech harms. They prioritize amplifying the voices of those impacted and ensuring that their concerns are heard. Through catalyzing collective action, bridge-building, and facilitating meaningful dialogue, they aim to make a positive difference.

On a different note, the summary also highlights the criticism faced by social media companies for their lack of investment in improving day-to-day lives. This negative sentiment suggests that these companies may not be prioritizing initiatives that directly impact people’s well-being and societal conditions.

In conclusion, Digital Action’s global coalition for tech justice is committed to holding big tech accountable, protecting election integrity, and empowering those affected by tech harms. By involving social media companies and gaining support from diverse stakeholders, they aspire to create a more just and inclusive digital landscape. Additionally, the need for social media companies to invest in initiatives that enhance people’s daily lives is emphasized.

Yasmin Curzi

The legislative scenario in Brazil concerning platform responsibilities is governed by two main legislations. The Brazilian Civil Rights Framework, established in 2014, sets out fundamental principles for internet governance. According to Article 19 of this framework, platforms are only held responsible for illegal user-generated content if they fail to comply with a judicial order. The Code of Consumers Defense also recognises users as being vulnerable in their interactions with businesses.

However, the impact of measures to combat false information remains uncertain. Although platforms have committed to creating reporting channels and labelling content related to elections, there is a lack of detailed metrics to fully understand the effectiveness of these measures. There are concerns about whether content is being removed quickly enough to prevent it from reaching a wide audience. One concerning example is the case of Jovem Pão, which disseminated a fake audio during election day that had already been viewed 1.7 million times before removal.

The analysis indicates that social media and platforms’ content moderation have limited influence on democratic elections. Insufficient data and information exist about platforms’ actions and their effectiveness in combating false information. Content shared through official sources often reaches a wide audience before it is taken down. Despite partnerships with fact-checking agencies, it remains uncertain how effective platform efforts are in combating falsehood.

There is a pressing need for specific legislation and regulation of platforms to establish real accountability. Platforms currently fail to provide fundamental information such as their investment in content moderation. However, there is hope as the Data, Consumer Protection, and Regulation (DCPR) initiative has developed a framework for meaningful and interoperable transparency. This framework could guide lawmakers and regulators in addressing the issue.

Furthermore, platforms should improve their content moderation practices. Journalists in Brazil have requested information from Facebook and YouTube regarding their investment in content moderation but have received no response. Without the ability to assess the harmful content recommended by platforms, it becomes difficult to formulate appropriate public policies.

In conclusion, the legislative framework in Brazil regarding platform responsibilities comprises two main legislations. However, the impact of measures to combat false information remains uncertain, and the influence of social media and platform content moderation on democratic elections is limited. Specific legislation and regulation are needed to establish accountability, and platforms need to enhance their content moderation practices. Providing meaningful transparency information will facilitate accurate assessment and policymaking.

Alexandra Robinson

The vulnerability of online spaces and the ease with which domestic or foreign actors can manipulate and spread falsehoods is a growing concern, especially in terms of the manipulation of democratic processes. The use of new technologies like generative AI further complicates the issue, making it easier for malicious actors to deceive and mislead the public. This highlights the urgent need for stronger protections against online harms.

One significant observation is the glaring inequality between different regions in terms of protections from online harms. The disparity is particularly alarming, emphasizing the need for a more balanced and comprehensive approach to safeguarding online spaces. It is crucial to ensure that individuals worldwide have equitable protection against manipulation and disinformation.

Social media companies play a pivotal role in creating safe online environments for all users. This is particularly important with the upcoming 2024 elections, as these companies must fulfill their responsibilities to protect the integrity of democratic processes. However, concerns arise when examining the allocation of resources by these companies. Despite investing $13 billion in platform safety since 2016, Facebook’s use of its global budget for combating false information appears disproportionately focused on the US market, where only a fraction of its users reside. This skewed allocation raises questions regarding the equal treatment of users globally and the effectiveness of combating disinformation on a worldwide scale.

Furthermore, non-English languages pose a significant challenge for automated content moderation on various platforms, including Facebook, YouTube, and TikTok. Difficulties in moderating content in languages other than English can lead to a substantial gap in combating false information and harmful content in diverse linguistic contexts. Efforts must be made to bridge this gap and ensure that content moderation is effective in all languages, promoting a safer online environment for users regardless of their language.

In conclusion, the vulnerability of online spaces and the potential manipulation of democratic processes through the spread of falsehoods raise concerns that require urgent attention. Social media companies have a responsibility to create safe platforms for users worldwide, with specific emphasis on the upcoming elections. Addressing the inequities in protections against online harms, including the allocation of resources and challenges posed by non-English languages, is crucial for maintaining the integrity of online information and promoting a more secure digital environment.

Lia Hernandez

The speakers engaged in a comprehensive discussion regarding the role of digital platforms in promoting democracy and facilitating access to information. They emphasized the importance of independent tech work to advance digital rights across all Central American countries. Additionally, they highlighted the collaboration between big tech companies and electoral public entities, as the former provide tools to ensure the preservation of fundamental rights during election processes.

The argument put forth was that digital platforms should serve as valuable tools for promoting democracy and facilitating access to information. This aligns with the related United Nations Sustainable Development Goals, including Goal 10: Reduced Inequalities and Goal 16: Peace, Justice, and Strong Institutions.

However, concerns were raised about limitations on freedom of the press, information, and expression. Journalists in Panama faced obstacles and restrictions when attempting to communicate information of public interest. Of particular concern was the fact that the former President, Ricardo Martinelli, known for violating privacy, is a candidate for the next elections. This situation has the potential to lead to cases of corruption.

Furthermore, the speakers emphasized the necessity of empowering citizens, civil society organizations, human rights defenders, and activists. They argued that it is not only important to strengthen the electoral authority but also crucial to empower the aforementioned groups to ensure a robust and accountable democratic system. The positive sentiment surrounding this argument reflects the speakers’ belief in the need for a participatory and inclusive democracy.

However, contrasting viewpoints were also presented. Some argued that digital platforms do not make tools widely available to civil society but instead focus on providing them to the government. This negative sentiment highlights concerns about the control and accessibility of these tools, potentially limiting their efficacy in promoting democracy and access to information.

Additionally, the quality and standardisation of data used for monitoring digital violence were subject to criticism. The negative sentiment regarding this issue suggests that the data being utilised is unclean and lacks adherence to open data standards. Ensuring clean and standardised data is paramount to effectively monitor and address digital violence.

In conclusion, the expanded summary highlights the various perspectives and arguments surrounding the role of digital platforms in promoting democracy and access to information. It underscores the importance of independent tech work, collaboration between big tech companies and electoral entities, and empowering citizens and civil society organisations. However, limitations on freedom of the press, potential corruption, restricted access to tools, and data quality issues represent significant challenges that need to be addressed for the effective promotion of democracy and access to information.

Session transcript

Moderator – Bruna Martins Dos Santos:
So, I’m going to start off with a little bit of background on what is happening at the moment, and then I’m going to turn it over to my colleague, Mariana, to talk a little bit about what’s going on. Good afternoon, everybody. We’re just starting out one last issue with Zoom, but I’m going to start off with this session. Welcome to the town hall that’s called Protect People and Elections, Not Big Tech. We’re here to talk about the global coalition for tech justice, which is a group of people who are working on big tech accountability, how to safeguard elections, and trying to bring in a new conversation or improve the current ones about why should we care about elections and why should we make this conversation even closer to social media companies, right? The global coalition for tech justice is a global organization that is based in Brazil, and we’re here to talk about the global coalition for tech justice, which is a group of people who are with me at this panel, but we do have more and more organizations and academics joining this space to discuss some of the things that we are planning for today. And as for those of you that don’t know digital action, we were founded in 2019 and we have been working for a number of years on digital action. So, we have been working on a number of issues, right, about how social media affects democracies and how the other way around works as well, but our work has been evolving some work, some catalization of collective action, building bridges, and also ensuring those directly impacted by tech harms are those that are actually in power, are those the ones that we are listening to. So, during these four days that I’ve been here, it has been a catalyst for to stop the digital sick seal, to stop the 10 micro SDGs, to stop the data hijacking, stopctioning of justice, Social media companies invest less or much less in in in the day-to-day lives So that’s a little bit of what we want to do. I want to first Bring in Alexandra Pardal. She’s the global campaigns director at digital action and then she’s gonna open this panel for us and Explain a little bit more about the year of democracy campaign and what we’re all about Alex. I think you’re in the room, right?

Alexandra Robinson:
Yes, I am. Thank you Bruna and Wonderful to be with you here, so welcome to all our panelists and participants in Kyoto to get today and Those joining us from from elsewhere remotely This is a global Conversation on how to protect people and elections not big tech So I’m Alexandra Pardal from digital action a globally connected movement building organization With a mission to protect democracy and rights from digital threats in 2024 the year of democracy more than 2 billion people will be entitled to vote as US presidential and European parliamentary elections converge with national polls in India, Indonesia, South Africa, Rwanda, Egypt, Mexico, and some 50 other countries. The largest mega cycle of elections. We’ve seen in our lifetimes, but our information spaces and the ability to maintain the Integrity of information and uphold the truth and a shared understanding of reality are more are more vulnerable than ever From foreign and malign influence in elections the use of new tech like generative AI Making it easier for domestic or foreign actors to manipulate and lie to financially motivated globally active disinfo industries, the threats have never been bigger nor more pervasive. Elections are flashpoints for online harms and their offline consequences. Now, over the past four years, Digital Action has collaborated with hundreds of organisations in every continent, supporting the monitoring of digital threats to elections in the EU and elsewhere, and led large civil society coalitions demanding a strong Digital Services Act in the EU and better policy against hate and extremism from social media companies globally. This experience has taught us that there’s startling inequity between world regions when it comes to protections from harms. From disinformation, hate and incitement to manipulation of democratic processes, online platforms just aren’t safe for most people. We know that the platforms run by the world’s social media giants, Meta, Google, X and TikTok, have the greatest global reach they’ve ever had and are at their most powerful, but safeguarding efforts have been weak to protect information integrity globally. For instance, Facebook says it’s invested $13 billion in its platform safety and security since 2016, but internal documents show that in 2020, the company ploughed 87% of its global budget for time spent on classifying false or misleading information into the US, even though 90% of its users live elsewhere. This means there’s a dearth of moderators with cultural and linguistic expertise, where Facebook has been unable to effectively tackle disinformation at all times and most consequentially during elections where when disinformation and other online harms peak. Similarly, non-English languages have been a stumbling block for automated content moderation on YouTube, Facebook, or TikTok. Algorithms struggle to detect harmful posts in a number of languages in countries at risk of real-world violence and in democratic decline or autocracy. What this means is that the risks on the horizon in 2024 are very serious indeed, at a time when social media companies are cutting costs, laying off staff, and pulling back from their responsibilities to stem the flow of disinformation and protect the information space from bad actors. If some of the world’s largest and most stable democracies, the United States, Brazil, have been rocked by bad actors mobilizing on social media platforms, spreading election disinfo, and organizing violent assaults on the heart of their democracies, imagine next year, where we’ll see democracies under threat, like India, Indonesia, Tunisia, alongside a whole swathe of countries that are unfree or at risk, where citizens hope to hold onto spaces to resist the manipulation of the truth for autocratic purposes. How can online platforms be made safe to uphold information and electoral integrity and protect people’s rights? So the challenge of 2024’s elections megacycle is a calling to all of us to show up, ideate, and innovate, bring our skills, talents, and any power we have to the table and collaborate. As an example of what’s in the works and background to the perspectives we’re going to hear today, together with over 160 organizations now, experts and practitioners from across the world, we’ve convened the Global Coalition for Tech Justice to launch the 2024 Year of Democracy campaign in order to foster collective action, collaborations and coordination across election countries next year. Together with our members, the Global Coalition for Tech Justice will campaign, research, investigate and tell the stories of tech harm in global media, supporting and amplifying the efforts of those on the front lines and building policy solutions to address the global impacts of social media companies. So we’re going to be actively collaborating with stakeholders and this conversation today is an opportunity to further these conversations and get collaborations off the ground with all those who share goals of safe online platforms for all. So I’m delighted to introduce this session for this important global conversation on how we protect 2024’s mega cycle of elections from tech harms and ensure social media companies fulfill their responsibilities to make their products and platforms safe for all. So I’m really happy to hand back to Bruna to introduce our panelists and the discussion this morning. Thank you.

Moderator – Bruna Martins Dos Santos:
Thank you so much, Alex and welcome to the session as well. And as she just brought up, this is really a global conversation, right, that we want to do. We want to spark a discussion on how can we collectively ensure that big tech plays its part in protecting democracy and human rights in 2024 elections. It’s not just one, it’s 60 elections as everybody has been talking about this week. So it’s a rather key year for everyone. So we have two provocative questions, kickoff questions for the panelists and I’m gonna bring you, Ashna, into the conversation first. Ashna is programs coordinator, right, for CIPESA. And the first question for you would be whether, like, if you consider that social media platforms and content moderation or the lack of it are shaping democratic elections, and if so, how?

Ashnah Kalemera:
Thank you, Bruna. Good evening, everyone, or good morning, like Alex said. I guess we’re all in very different time zones at the moment. It’s a pleasure to be here. Thank you for the invitation, Digital Action, and the opportunity to have this very important discussion. Once again, my name is Ashna Kalemira, and I work with CIPESA. CIPESA is the Collaboration on International ICT Policy for East and Southern Africa. We are based out of Kampala, Uganda, but work across Africa promoting effective but inclusive technology policy, but also its implementation as it intersects with governance, good governance, obviously, human rights, upholding human rights, as well as improved livelihoods. So I like to start off these conversations on very light notes. Very often, these panels are dense in terms of spelling doom and gloom. So first, I’d like to emphasize that technology, broadly, including social media platforms and the internet, have huge potential for electoral processes and systems. They are critical in ensuring that voter registration is complete and accurate, enabling remote voting for excluded communities or remotely based voters. They have been critical in supporting campaigns and canvassing, as well as voter awareness and education, results transmission and tallying, monitoring malpractice, all of them critical to electoral processes and lending themselves to promoting legitimacy and inclusion. of elections in states that have democratic deficits, which for most of Africa is many of the states. So I think that light note is very important to highlight as we then go on to the doom and gloom that this conversation will likely take. And now we start the doom and gloom. Unfortunately, despite those opportunities, there are immense threats that technology poses for electoral processes in Africa and I guess for much of the world. Increasingly, we’re seeing states, the authoritarian governments especially, leveraging the power of technology for self-serving interests. A critical example there is network disruptions or shutdowns. I see Kiputon coalition members in the room and they work to push back on that excess. On disinformation and hate speech, users, governments, the platforms themselves as well as private companies, PR firms, actively influencing narratives during elections, undermining all the good stuff that I mentioned in the beginning. And very often we ask ourselves at CIPESA and I imagine everybody in the room, why disinformation thrives, right? Because pretty much everybody’s aware of the challenge that it poses, but in Africa especially, it’s thriving and thriving to very worrying levels. One of them is again something positive. It’s because technology is penetrating and penetrating very well on the continent. Previously unconnected communities now have access to information at the click of a button literally, which again in the context of elections is great, but in the case of disinformation, it’s a significant challenge. Secondly is the youth population on the continent with many of them coming online via social media. There’s always jokes in sessions that I’ve attended where there’s African representation that for many Africans, the internet is social media. And that challenge is enabling disinfo and hate speech to thrive. Third is conflicts. The elections that we’re talking about are happening in very challenging contexts that are characterized by ethnic, religious, and geopolitical conflicts. Again, all the nice stuff I mentioned earlier on is then cast with a really dark shadow. Like Alex mentioned, that context that I’ve just described is going to be a very significant stress test come 2024 and beyond for the continent. And we’re likely to see responses that undermine the potential of the technology to uphold electoral legitimacy, but also for citizens to realize their human rights. One of those reactions we’re likely to see from a state perspective is weaponization of laws to undermine voice or critical opinion online, which again undermines electoral processes and integrity. And unfortunately, given the context around conflicts, we’re likely to see a lot of politically, sorry, fueling politically motivated violence, which restricts access to credible information and ultimately perpetuates divides and hate speech and can lead to offline harms. Now, bringing the conversation back to big tech, on the continent, unfortunately, we’re seeing very limited collaboration between tech actors and media and civil society in, for instance, identifying, debunking or pre-banking, depending on which side of the fence you sit, and moderating disinformation. Also, the processing and response times to reports and complaints are really slow, and this is discouraging reporting and ultimately maximizing, in some cases, circulation of disinformation and hate speech. There are also significant challenges around opaqueness in moderation measures. We’ve seen the case in Uganda during the previous elections where a huge number of. were taken down for otherwise not very clear reasons, and that led to a response from the state, i.e. shutting down access to Facebook, which remains inaccessible to date in Uganda. So, given those pros and cons, and either side of the coins that I’ve just described for the African continent, it’s important to have collaborative actions and movements just like what Digital Action is spearheading and we’re really honored to be a part of. And efforts in that regard should focus on showing up and participating in consultation processes just like this or others, where there are opportunities to challenge or provide feedback and comments. I think that’s really important. Such spaces are not many. We at CIPESA host the annual forum on internet freedom in Africa. We marked 10 years a couple of days ago, and for the second time, we were able to have the meta oversight board present and able to engage. They admitted that cases from the African continent are limited, but spaces like the forum on internet freedom in Africa that CIPESA hosts is providing that opportunity for users and other stakeholders to deliberate on these issues. I cannot not say that research and documentation remains important. Of course, we’re a research think tank and we’re always churning out a lot of pages and pages that are not necessarily always read, but I think it’s important because evidence-driven advocacy is critical to this cause. Skills building, again, digital literacy, fact-checking, and information verification, that remains critical, but also leveraging norm-setting mechanisms and raising the visibility of big tech challenges in new end processes, universal periodic review, the Africa Commission of Human Peoples’ Rights. These conversations are not filtering up as much as they should do, so there should be interventions that are focused on that, and interventions that, of course, promote and challenge private sector. to uphold responsibilities and ethics through application of the UN guiding principles on business and human rights. Lastly, is strategic litigation. I think that’s also an opportunity that’s before us in terms of challenging the excesses that big tech poses for elections in the challenging context that I’ve just described. Thank you. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks, Ashna. Thank you very much. Just speaking on two of the topics you spoke about, which is the weaponization of policymaking processes and politically motivated violence, I think that bridges very well with the recent scenario in Brazil, right? With, unfortunately, the repetition or yet another attack on a capital. And after a lot of discussions on a fake news draft bill and regulation for social media companies. Yasmin, I’m gonna bring you in now. Yasmin is from FGV Rio de Janeiro and also the co-coordinator of the DC on platform responsibility. Welcome.

Yasmin Curzi:
Thank you so much, Bruna. Could you please display the slides? Thank you so much. So addressing the first question that Bruna posed to us here, are social media and platforms content moderation shaping democratic elections? I’m sorry. To answer this question, I’d just like to give a brief context about the elections in Brazil, sorry, about the Brazilian legislative scenario regarding platform responsibilities. There are two main pieces of legislation that deal with content moderation issues. Specifically, since 2014, we have the Brazilian Civil Rights Framework, aka Marco Civil da Internet, probably known by many of you here. It establishes our basic principles for internet governance, such as free speech, net neutrality, protection of privacy and personal data. but also established liability regimes for platforms regarding UGC in its article 19 to 21. To sum up really quickly, article 19 created a general regime in which platforms are only liable for illegal UGC content if they not comply with judicial order asking for the removal of a specific content if it is within the platform’s capabilities to do so. There are only two exceptions to this rule, one for copyrights and one for non-authorized intimate imagery dissemination for which a mere notification of the user or their legal representative is surface. The second one is the Code of Consumers Defense, aka CDC, which considers users as hypo-sufficient and vulnerable in their relations with enterprises. In its article 14, CDC establishes an objective liability regime, a strict liability regime, in which enterprises or service providers are responsible regardless of the existence of fault for repairing damages caused to consumers due to defects or insufficient or inadequate information about their risks. So, in this sense, these two pieces of legislation can give users many protections online regarding harmful activities and illegal content. Nevertheless, users are still unprotected of the many online harms that are not clearly illegal, such as disinformation, or that are not even perceived as harm to them, like algorithmic gatekeeping, shadow banning, micro-targeting of problematic content. Regarding the first issue, given the non-existence of a legislation that deals specifically with coordinated disinformation, our Electoral Superior Court has been enacting resolutions to set standards for political campaigns and else. Also, the Electoral Superior Court established in the scope of its Fighting Disinformation Program partnerships with the main platforms in Brazil, such as Meta, Twitter, TikTok, Kuai, WhatsApp, and Google, that sign official agreements stating what their initiatives would be. In these documents, most of them committed with creating reporting channels, labeling content as electoral-related, and redirecting users to the Electoral Court official website and promoting official sources. Instagram and Facebook also developed cute stickers to support users to vote, in spite of voting being already mandatory in Brazil. Nevertheless, we don’t have enough data to see the real impacts of these measures, just the generic data on how much content was removed in a given platform, also generic data on how they are complying with the legislation. This sort of data is offered by the main platforms in Brazil since the establishment of partnership programs with fact-checking agencies in 2018. I’m not saying that they are not removing enough content. What I want to highlight here is that we don’t have data or metrics to understand what this generic number means, nor do we have knowledge on if the content is being removed fast enough to not reach enough users. Furthermore, in fact, some of these efforts to combat falsehood on YouTube, for example, were themselves a risk for democracy and elections in 2022. By the official sources program, this is the slide that is displayed right now, a hyper-partisan news media channel, Jovem Pão, was being actively recommended to YouTube users. To give an example, the election day, Jovem Pão was disseminating a fake audio. allegedly from a famous Brazilian drug dealer, Marcos Camacho, aka Marcola, in which he was supporting Lula’s election. Justice Alexandre de Moraes from the Brazilian Federal Supreme Court, which was presiding the Superior Electoral Court, ordered for the removal of the content, but not before it had already reached 1.7 million visualizations. Supporters also shared this video at least 38 WhatsApp groups and Telegram groups monitored by the fact-checking agency, Ausfatch. So to Bruna’s question, are social media and platforms content moderation shaping democratic elections, I tend to answer no, or at least not significantly, as either we have not significant data, or we don’t have enough information on their actions and results. That’s it. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks a lot, Jasmine. I’m going to bring it to Leah right now as well. Leah is representing Independent Tech, right, and also a fellow Latin American. Yet another region of the world that’s facing a lot of those discussions, right, in terms of proper resources, deployments, and also policymaking as well. So Leah, welcome to the panel.

Lia Hernandez:
Thank you so much, Bruna. Good afternoon. Well, my name is Leah Hernandez. I’m going to talk about mailing the recent and next electoral process in Central America, because politics is a big part of our conversation. I speak very loud, so no. OK, perfect. Well, because Independent Tech is a digital rights organization based in Panama City, but working in all Central America. So I’m going to refer mainly to the recent electoral process in Guatemala and the next electoral process in Panama that will take place in May 2024. And the first thing is that I want to send all my support to the Guatemalan people where they are mobilizing in the streets because they are demanding democracy in their past elections in the country. In Central America, digital platforms make tools available to our electoral public entities because they try to help them to verify the information and to avoid any violation of our digital rights, our fundamental rights as protests, freedom of expression, freedom of press, privacy. But currently in countries such as Panama, my country, a digital media platform and a journalist were ordered to remove information from their platform by the Tribunal Electoral, it’s the Panamanian electoral public entity, and they got a fine because they were posting information about Ricardo Martinelli Verrocall, I don’t know if you know about Ricardo Martinelli, he’s very famous, he’s so famous as Lula and Bolsonaro in Brazil, well, he was a former president of Panama and he’s a candidate for the next elections in Panama because he wants to be president again, and by the way, he’s the most violator of privacy in the country. So the electoral entity in Panama ordered this journalist to remove information about them because it’s against the democracy and it’s against their privacy, their own image. So the question is, if big techs are giving tools to our electoral public entities to promote democracy, to promote access to information, to promote fundamental rights. why electoral entities put barriers to the citizens, to journalists, and to communicators, who their main fulfill is legitimate the duty to inform, the duty to communicate to the citizen what is happening in the countries, and more in these cases of corruption, because this former president is very corrupt. So freedom of expression, freedom of information, and freedom press are limiting in Panama when journalists try to communicate based on the principle of public interest that we have in knowing the good, the bad, or the ugly of our candidates in our electoral process. Digital platform must match their words because with their actions, because even though they don’t have any autonomy in the country, in the decision of the electoral branch, they should not become like part of the problem, and limitate constitutional warranties, such as freedom of press. So mainly this is a very recent case that we are follow in Panama, and thank you so much, Bruna, for just facing this panel.

Moderator – Bruna Martins Dos Santos:
Thanks so much, Lia, very interesting that this kind of like, right, there is an ongoing line of major interferences with expression, with conversations online, and it’s not just like one or two countries, but it’s often the lack of, sometimes it’s the responsiveness, sometimes it’s the ongoing conversation, or the cooperation that social media platforms should have with authorities, and that should be interesting to be developing that, but there are also downsides to those partnerships when it goes towards the path of further requests for data, and access. or even like privacy violations, right? So it is definitely a hard and deep conversation. I’m gonna go now to Dan, Daniel Arnaldo from NDI. Dan, so welcome to the panel as well, and same question as the others.

Daniel Arnaudo:
Yes, thank you. Thanks for having me. Thanks for everyone for being here, and we’re really pleased to be a part of this coalition. For those who don’t know, I’m from the National Democratic Institute. We’re a non-profit, non-partisan, non-governmental organization that works in partnership with groups around the world to strengthen and safeguard democratic institutions, processes, and values to secure a better quality of life for all. We work globally to serve elections, strengthen elections processes, and my work particularly is to support a more democratic information space. And in this work, we engage with platforms around the world, both through coalitions like this or others, such as the Global Network Initiative, the Design for Democracy Coalition. We help highlight issues for platforms. We perform social media monitoring. We engage in consultations on various issues ranging from online violence against women in politics to data access and crisis coordination. I think as was mentioned, 2024 will be a massive year for democracy. And from our perspective, I think we’re particularly concerned about contexts we work in throughout the global majority and particularly small and medium-sized countries that do not receive the same attention in terms of content moderation, policies, research tools, and data access, and many other issues. This is all in the context of, I think, what is a serious disinvestment in civic integrity, trust and safety, and related teams within these organizations. So just in the region, I think you have Bangladesh, Indonesia, India, Pakistan, and Taiwan that will all hold elections in the coming year. I know there will be some resources devoted to larger countries, but on the other hand, they are massive user bases, and the smaller ones are going to receive very little attention at all. So, I think this is a consistent focus for our work and for considerations around these issues. I think one of the main kind of recommendations that I would have would focus around data access. In the context of this disinvestment, I think we’re seeing a serious pullback from access for third-party researchers. We are very concerned about changes in the APIs and in different forms of access to data on the platforms, as I think some of my other panelists have discussed, for research and other purposes, particularly meta and Twitter or X, and continued restrictions in other places. They are building mechanisms for access to traditional academics in certain cases, but not for researchers or broader civil society that live and work in these contexts. They’re often provisioned through mechanisms that are controlled within large countries in the United States or in Europe, and there aren’t really systems in place both for documentation or understanding those systems, and that there are, you know, huge barriers to that kind of access, even when it’s enabled in that sense. So that’s something that I would really urge companies in the private sector and groups such as ours to coordinate around in terms of figuring out ways of ensuring that access in future to shine a light within those contexts. Secondly, I think they’re ignoring major threats to those who make up half or more of their user base, namely women, and particularly those involved in politics, either as candidates, policymakers, or ordinary voters. Research has shown that they face many more threats online, and platforms need to institute mechanisms that can support them both to protect themselves, to understand threats, to report and escalate issues as necessary. We have conducted research that shows both the scale of the problem, but also look to introduce a series of interventions and suggestions for the companies and others that are working to respond to these issues. But I think this is really a global problem that we see in every context we work in globally. And I think many in the room will understand this threat and this issue. Finally, I think there’s a need to consider critical democratic moments and to work within those specific situations. How they can work with the broader community to manage them, not only elections, but major votes or referenda, and also more critical moments such as coups, authoritarian contexts, protests, really critical situations. If they cannot appropriately resource these contexts and situations that they may not have greater understanding of, they at least need to engage with organizations that understand them and help to react and effectively make decisions in these challenging situations. I think retreat from programs such as the Trusted Partners in the case of Meta and a consistent whittling down of their teams that are addressing these issues will have impacts on these places, on elections, on democratic institutions, and ultimately these companies’ bottom lines. The private sector should understand these are not only moral and political issues, but economic ones that will push people away from these spaces as they become hostile or toxic to them in different ways. We understand the trade-offs in terms of profit and organizing systems that are useful for the general public, but we would encourage companies to reflect that the democratic world is integral to the open and vibrant functioning of these platforms. As with 2016 and 2020, 2024 will be a major election year and also likely represent a concomitant paradigm shift in its moderation and information manipulation campaigns and regulation, which is another kind of threat that companies need to consider, and a host of related themes that will have big implications for their profits as well as democracy. I think they are going to ignore these realities at their peril.

Moderator – Bruna Martins Dos Santos:
Thanks a lot, Dan. And also, thanks for highlighting some of the things that are the year of democracy campaign. We issued a document that’s the campaign ask. So some things we would like to require from social media companies, such as streamlining human rights, or even bringing in more mechanisms to protect users, and addressing the problem at the real scale. We are not just saying, like, issue plans for elections. We are also saying, like, deploy the solutions. Invest the money. It’s not just Brazil that matters, but it’s also Brazil, India, Kenya, Tanzania. So that’s what’s really core and relevant about this conversation, for sure. So thanks a lot, everybody. I would like to ask if anyone has any questions for the panelists, or would like to add any thoughts to the conversation. There is a microphone in the middle of the room, so yes.

Audience:
Thank you for giving me some space and ability to express myself. So I’m from Russia. We have, like, a digital election system in Russia. And we are talking about, like, threats which are posed by global media platforms all around the world. Preferably, it’s meta. It’s, like, Facebook, Instagram, and Google, and et cetera, et cetera. But we didn’t talk about cyber threats to these digital election systems. For example, like, two months ago, we had elections all over Russia. And our digital election system was attacked by a denial of service attack by Ukrainian party to disrupt elections. And elections were disrupted for, like, three or four hours. And citizens were not able to actually vote. So this is not something about, like, harming Russia as a state. It is something about harming Russian citizens as citizens. That’s number one problem. The second problem is, I think you have mentioned it before, but I think it’s a little bit deeper. Because we have talked a lot about global media platform involvement in information manipulation, fakes, and disinformation spread, et cetera. But we didn’t talk about global media platform’s position, which is tend to be neutral, but is not always neutral in terms of conflict. Because there are two sides, and sometimes global media platforms choose sides. And what we see and what we talk about a lot is that global media platforms have some very closed, very secret recommendation algorithms, which basically forms the news feed for users. And the situation is that, for example, in some countries in Africa, Facebook, and I think you can approve me, Facebook actually, represent like internet for some people. And Facebook can do a revolution in a click, just altering users’ news feed with their recommendations algorithms. And nobody knows how these algorithms work. And I think internet society, and global international society, and IGF included, should put more pressure on global media platforms for making these algorithms more transparent. Because people should know why they’re seeing this or this content. That’s all. Thank you so much for giving me some time. Thanks a lot. Any other questions? Hello. Thank you for the panel. My name is Laura. I’m from Brazil. I’m here for the youth delegation, but I’m also a researcher at the School of Communication and Media Information at Getulio Vargas Foundation in Brazil as well. And I’d like to hear more about the issue of data access for academic research and civil society research. As a center specialized in monitoring the public debate in social media, we are very concerned with the recent changes mentioned by Arnaldo and mentioned by Yasmin as well, regarding the data access for us. And I’d like to hear more about what kind of tools and mechanisms can the academic community and the civil society community in general access to fight those restrictions and to face these issues, not only in the regulatory sphere where this debate is present, but also in a more broad way. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks so much, Laura. And the last question?

Audience:
Okay, two points. I’m Alexander from a country in spring of which next year 145 million will elect Vladimir Putin as president. And I have two points. First of all, I would like to thank Timothy about information on the DOS attacks because Russian Central Election Commission didn’t confirm any issues with electronic electoral systems. Unfortunately, such system in Russia was created by Russian big tech Kaspersky, created one system used in Moscow and Rostelecom, which could be considered as a big tech, created another one. Systems completely intransparent, does not comply to the Russian Commission recommendations and other kind of recommendations for digital system. And on my point, a few intended for just faking results. I hope. So if you are interested about such details, please ask me later. But I would like to ask maybe not panel, but everyone. Have somebody participated in elections last time? Thank you. Yeah. Okay. Have you tried to use platforms for your promotion? Okay. Nowadays, I also would like to inform Tim if Facebook is not possible, is not legal to be used in promotions. But before, I’ve created a political activist or political candidate page on Facebook and would like to advertise myself in a constituency with about 20,000 voters. So, I asked Facebook, please make a suggestion, and they suggested me two new contacts for 10 bucks. So, I think in some cases, either platforms don’t understand requirements for candidates, if it’s not presidents, something like, either we need to work with this, either they will want too much money for promotions. Because, okay, if I would create pret-a-porter cakes, maybe two contacts for 10 bucks is reasonable, but not for the one who wants to advertise himself in a constituency. So, I think such work with platforms and platforms helping candidates, especially in restrictive regimes where advertisements on the physical media is no longer possible, is also should

Moderator – Bruna Martins Dos Santos:
be done. Thank you very much. Thanks, Alexander. We have one extra question from the chat that I’m just going to hand out to you guys, and you don’t need to answer all of them, the ones that speak to you the most, I guess. The one that’s on the chat is, what should be done legally when some cross-border digital platforms like Meta refuse to cooperate to national competent authorities regarding cybercrime cases like incitement to violence and promoting pornography for children and private images, and even in serious crimes and refuse to establish official representatives in the country? Rather dense question as well, but… I will give it back to the floor to you guys, and as we’re moving to the very end of the session, you only have 12 more minutes, I would maybe also ask you to, in a tweet, if you could summarize what would be your main recommendation for addressing this so-called global equity crisis in big tech accountability. So I know it’s difficult to summarize that, but if you have a tip, an idea, a pitch for that, it’s very much welcome. I’ll start with you, Ashna.

Ashnah Kalemera:
Thank you, Bruna, and thank you for the very, very rich questions. I think they highlight that this conversation is not limited to elections and misinfo and disinfo or hate speech, but there are very many other aspects around it. The DOS attacks that you pointed out, which speak to tech and the resilience of not just civil society organizations, but even electoral bodies and commissions or entities that are state-owned or run and leverage technology as part of elections, as well as other conversations around accessibility and exclusion, because some of that technology around elections excludes key communities, which brings about apathy and low voter turnout, all of them critical to the conversation around elections. Similarly, the point around positions and the power of these tech companies to literally start revolutions, to borrow your word, I think that, too, is an area that is critical to deliberate more on. The answers are not very immediate. Some of the work that we’ve done in researching how disinfo manifests in varying contexts has highlighted that the agents, the pathways, and the effects vary from one context to another. like I mentioned in the beginning, in contexts where there’s conflicts, religious or border conflict or electoral conflict, the manifestations are always very different, the agents are always very different. So we’re not necessarily pointing a finger only at big tech but I think we are all mindful of the fact that this is a multi-stakeholder conversation that must be had and should be cognizant of all those challenges. There was an issue on research, I think that’s something that we’ve felt on the continent, the inaccessibility of data. Previously at CIPESA we’ve leveraged data APIs, I believe that’s the technical term, to document elections and monitor elections, social media, sentiment analysis and micro-targeting. That capacity is now significantly limited so we’re not able to highlight some of the challenges that emerged during elections around big tech. That’s not to say documentations through stories or humanization would not have the same effect if the access to data is limited. What else did I want to talk about? Now I forget because there were so heavy questions but yes, the conversation is much broader than just elections and big tech alone. We all have a role to play and engaging the least obvious actors like electoral bodies, regional economic blocs and other human rights monitoring or human rights norm-setting mechanisms is also critical to the conversation. Thank you.

Yasmin Curzi:
So, regarding recommendations I think it’s only possible actually to have really real accountability. If we have like specific legislation and regulation of platforms, it’s not possible to have like a multi-stakeholder conversation if we have like the sort of the power symmetries are just too big for us to sit on the same table and discuss with them and talk to them. They set all the rules that are on the table, so it’s not possible to talk to them without regulation. In Brazil, for example, during the elections again, the journalist Patrícia Campos Melo and Renata Gauf asked Facebook how much they were investing, not only Facebook, sorry, Facebook and YouTube, how much they were investing in content moderation in Brazil to see how much they were complying with their own memo agreements that they made with, that they signed with the Superior Electoral Court. And they did not answer, they just said that this was sensitive data. And this is like we are talking about aggregated data of how much they were investing financially to improve their content moderation in Portuguese. So if we don’t have this basic information, if we don’t have like how to assess how much harmful content is being recommended by their platforms, it is quite difficult for us to be able to make proper public policies to address these issues. So I’d just like to display the slides again just to do some propaganda. Sorry, can you display the slides again, just a minute? Just to make a brief propaganda, we have at the DCPR, our Dynamic Coalition on Platform Responsibilities. Our outcome last year was a framework. on meaningful transparency, meaningful and interoperable transparency, with some thoughts for policymakers and regulators worldwide if they want to implement, and also platforms if they are able and eager to improve their best practices, so they also can adopt this framework. And this year, our outcome we are going to release tomorrow also focusing on human rights, risk assessments, and else. So this is our title. It’s like a collaborative paper with best cases and also discussing legislation in India, GSA, GMA, the Brazilian legislation. So we are going to release it tomorrow. Our session is at 8.30. So thanks. I’m sorry for doing the propaganda. I just wanted to show the document. So this is what I would recommend for people to.

Daniel Arnaudo:
Yeah, thanks for the questions. I think certainly algorithmic transparency can be a good thing. You just have to be careful about how you do it. And to create systems to understand the algorithms, I think they can also be gamed in different ways if you have a perfect understanding of them. So it’s a tricky business. I think definitely on need for better protections and systems for smaller candidates in different contexts, it’s a part of the system. It’s not just the individual users and what they’re seeing and how these systems or these networks are being manipulated, but also how candidates can have access to information about political advertising or about even basic registration information. I think every country in the world should have access to the same systems that are used by Meta and by other major companies, Google and others, to promote good political information. And I mean. very basic political information about voting processes, about political campaigns, anywhere in the world. I think on data access, certainly, you’re seeing a revolution right now in terms of how the companies are providing access to their systems, and I think it’s focused on X and Twitter. That has changed the way that any sort of research is being done on those platforms. It’s much more expensive, it’s more difficult to get at. I think companies need to reconsider what they’re doing in terms of revising those systems and making them more difficult for different groups. Meta, in particular, I think will be really critical, so I think we need to work collectively to make sure that they make those kinds of systems like APIs available to as many kinds of people as possible. I think, you know, certainly, there are issues around placing company employees in certain countries around the world, and that can be problematic in certain ways because they could also be authoritarian contexts, and then the employees become bargaining chips, potentially, within certain kinds of regulations that they want to enforce, so you have to be careful about that, but I certainly understand the need to enforce regulations around privacy and content moderations and other issues, so I think it’s something that has to be designed carefully. I think, you know, certainly, there’s a huge crisis in terms of how companies are addressing different contexts, and they need, I think, ultimately, to better staff and resource these issues or these different contexts, so to have people that speak local languages, that understand these contexts, that can respond to issues in reporting, that know what they’re doing, but this is expensive, and I don’t think you’re gonna be able to work your way out of it through AI or something like that, as many have proposed, so I just think it’s something that they need to recognize that reality, or they’re gonna continue to suffer, as, unfortunately, we will all.

Lia Hernandez:
Just one minute, well I think that it’s necessary not just to empower the electoral authority, it’s most necessary to empower citizens, civil society organizations, human rights defenders, activists, because we are really working to promote and to conserve the democracy in countries. So this is my recommendation, and regarding your question about the data, for example in our case, we are working in monitoring digital violence based against candidates in the next election in Panama, and everything is very manual, because the digital platforms, they don’t make available the tools to the civil society, they make available the tools to the government. So we are trying to sign an agreement with the electoral authority to maybe have access to that tools, because it’s necessary to finish the work before the elections, and in another case, the data is not clean, they don’t use open data standards, so we have to find, sometimes guess the information that they have, not upgrading in their websites, so it’s a bit difficult for us to work with this kind of data.

Moderator – Bruna Martins Dos Santos:
Thanks a lot to the four of you, and Alex as well, that is following us directly from the UK, thanks everybody for sticking around as well. If any of this conversation stroke a note with you, go to the yearofdemocracy.org, that’s the website for the Global Coalition for Tech Justice campaign, and have a nice rest of the IGF, thanks a lot.

Alexandra Robinson

Speech speed

142 words per minute

Speech length

954 words

Speech time

402 secs

Ashnah Kalemera

Speech speed

162 words per minute

Speech length

1780 words

Speech time

658 secs

Audience

Speech speed

145 words per minute

Speech length

905 words

Speech time

375 secs

Daniel Arnaudo

Speech speed

172 words per minute

Speech length

1578 words

Speech time

549 secs

Lia Hernandez

Speech speed

132 words per minute

Speech length

744 words

Speech time

338 secs

Moderator – Bruna Martins Dos Santos

Speech speed

181 words per minute

Speech length

1397 words

Speech time

463 secs

Yasmin Curzi

Speech speed

141 words per minute

Speech length

1287 words

Speech time

548 secs

Promoting the Digital Emblem | IGF 2023 Open Forum #16

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Koichiro Komiyama

According to a report by the IISS, several Asian countries, including China, Australia, India, Indonesia, Iran, North Korea, and Vietnam, are significantly increasing their cybersecurity capabilities. This development has raised concerns about the escalation of cybersecurity capabilities in Asia.

Ransomware attacks have been on the rise, with damages increasing, and many of these attacks being driven by commercial profit. Over the past year, there have been successful breaches of critical infrastructure, such as hospitals. This highlights the vulnerability of essential services to cyber threats.

Japan, traditionally known for refraining from cyber offense due to its peace constitution, has changed its stance on cyber offense in light of national security concerns. This shift in policy indicates that Japan is recognising the need to enhance its cybersecurity capabilities.

To combat cybercriminal activities, the application of guidelines or emblems is suggested as a method to pressure criminal groups regarding their operations. Such guidelines can establish a framework for acceptable behaviour, discouraging criminal activities in cyberspace.

Koichiro Komiyama, a prominent individual in the field, has expressed concerns about cybersecurity threats specifically targeting hospital and medical systems. He emphasises the need for proactive measures to safeguard vital systems against evolving cyber threats.

Moreover, the implementation of local environment concepts for critical systems is considered crucial. Critical systems, whose offline or disconnected nature makes them less vulnerable to cyber attacks, do not use global IP address spaces or associate with any domain name. Implementing these concepts enhances the security of such systems.

Overall, the increasing cybersecurity capabilities of several Asian countries, coupled with the rise in ransomware attacks and successful breaches of critical infrastructure, highlight the urgent need for robust cybersecurity measures. It is essential to address cybersecurity threats to hospital and medical systems. Furthermore, the adoption of local environment concepts can enhance the security of critical systems.

Audience

During the discussion, concerns were raised about the offensive cyber capabilities that AI is reportedly enhancing. Automation and AI have increased the speed of cyber capabilities, leading to growing apprehension. The feasibility and effectiveness of the digital emblem solution were questioned, specifically regarding its ability to deal with the accelerated speed and wider reach of cyber capabilities. Doubts were expressed regarding whether cyber capabilities would take the time to verify the authenticity of digital emblems.

The discussion emphasized the need for strong interest from states and sub-state organizations in the digital emblem solution. The successful implementation and socialization of the solution require a strong appetite among these entities. Incentives were identified as necessary to encourage their engagement with the digital emblem solution. Additionally, the degree of interest among states and sub-state organizations was discussed, highlighting the importance of incentivizing their involvement.

The issue of incentivizing non-state actors and less organized groups to respect digital emblems was also raised. There was an example of activists in Russia and Ukraine pledging to reduce the scale of their cyber operations, indicating some willingness to comply. However, motivating these actors to fully respect and adhere to digital emblems remains a challenge.

Attribution problems and issues with incentivizing state actors were discussed. It was argued that problems with incentives and attribution could discourage state actors from respecting the digital emblem. This could potentially make emblem violations easier without clear attribution to a specific state.

The visibility of hospital targeting in the Asia-Pacific region was highlighted as evidence of the urgent need for the proposed emblem. Hospitals in this region are targeted by nation-states on a daily basis, underscoring the necessity of finding a solution to prevent such attacks.

The discussion also touched upon the self-regulation within the criminal community. It was mentioned that the criminal community regulates itself against targeting perceived “soft targets.” This suggests that there may be a deterrent effect that discourages criminals from attacking certain entities.

Finally, the potential role of Internet Service Providers (ISPs) in validating adherence to the digital emblem was suggested. ISPs possess the ability to identify operational nation-states and their infrastructure, which could provide insights into whether the emblem rules are being followed.

Overall, the discussions highlighted various challenges and concerns related to offensive cyber capabilities, the feasibility of the digital emblem solution, and the imperative of strong engagement from different actors. The importance of incentivizing compliance and addressing attribution issues was emphasized. The visibility of hospital targeting and the potential role of ISPs were also significant points of discussion.

Felix Linker

The ADEM (Authentic Digital Emblem) system, developed by Felix Linker and his team, is a technological solution designed to address the need for verifiable authenticity and accountability in the digital landscape. It was developed in response to a request from the International Committee of the Red Cross (ICRC) for a digital emblem. The purpose of ADEM is to provide a reliable and tamper-proof method of identification and endorsement for protected parties.

ADEM is designed to be a plug-in to the infrastructure of protected parties, such as the ICRC, allowing for the autonomous distribution of emblems. Prototyping is ongoing with the ICRC, and plans are in place to deploy ADEM within their network. This move is seen as a positive step towards enhancing cybersecurity and supporting the mission of protected parties.

One key aspect highlighted in the discussions is the role of nation-states in endorsing protected parties. ADEM allows nation-states to make sovereign decisions regarding the endorsement of protected parties, and emblems will be accompanied by multiple endorsements from nation-states. This approach empowers nation-states to exercise control and support protected missions according to their individual preferences and policies. It is considered a positive development in promoting digital sovereignty and aligning with the goals of SDG 16 (Peace and Justice) and SDG 9 (Industry, Innovation, and Infrastructure).

However, challenges arise when it comes to verifying endorsement requests. Felix Linker raises concerns about technical organizations that control parts of the internet naming system, such as ICANN. He believes that these organizations may struggle to authenticate requests for endorsement due to their technical nature. This argument carries a negative sentiment as it highlights a potential limitation in the current system.

In light of these challenges, Felix suggests that endorsement of protected parties could be undertaken by nation-states, supranational organizations, or entities with relevant experience and knowledge in the field, such as the ICRC. He emphasizes the importance of not burdening technical organizations with additional responsibilities that may not align with their expertise. This perspective is seen as positive as it suggests a more suitable and effective approach to securing endorsements for protected missions.

ADEM consists of two main components. The first component focuses on protecting entities identified using IP addresses and domain names. This aspect of ADEM aims to provide security and authenticity at the network level. The second component involves granting emblems through mechanisms such as TLS, UDP, and DNS. These mechanisms serve as a means to validate and authenticate the emblems, ensuring their authenticity and reliability. This dual aspect of ADEM showcases its comprehensive approach to safeguarding the integrity and authenticity of protected parties.

Felix’s team is also working on the development of local emblems, which aim to protect against threats at the device level. By addressing vulnerabilities such as malicious email attachments and network penetrations, this extension of ADEM provides an extra layer of security and ensures a holistic approach to safeguarding digital assets and missions.

Moreover, the discussions highlight the benefits of emblems in monitoring and reducing cyber attacks. Emblems serve as a mechanism for verifying the authenticity and legitimacy of actors engaging in cyber activities. By recognizing and respecting emblems, actors can be monitored more effectively to prevent and mitigate potential cyber threats. This observation carries a neutral sentiment as it reflects the potential of emblems in enhancing cybersecurity efforts.

Lastly, the proposition of Internet Service Providers (ISPs) taking on the responsibility of monitoring emblem distribution is viewed positively. Felix suggests that ISPs could play a crucial role in regularly checking whether emblems are being sent out as intended. This proposed role for ISPs aligns with SDG 16 and SDG 9 and potentially enhances the effectiveness of emblem distribution and validation.

In conclusion, the development of the ADEM system presents a promising solution for achieving authenticity and accountability in the digital realm. By allowing the autonomous distribution of emblems within the infrastructure of protected parties, ADEM promotes enhanced cybersecurity and supports protected missions. The involvement of nation-states and the consideration of various endorsement mechanisms further strengthen the system’s reliability and effectiveness. However, challenges exist in verifying endorsement requests, particularly concerning technical organizations’ ability to authenticate requests. The development of local emblems and the potential role of ISPs in monitoring emblem distribution offer additional layers of protection and monitoring. Overall, ADEM holds great potential for advancing digital security, ensuring authenticity, and supporting the goals of SDG 16 and SDG 9.

Moderator – Michael Karimian

The digital emblem is an innovation in humanitarian protection aimed at extending protections into the digital realm. Its purpose is to safeguard medical and humanitarian entities from cyber operations. This concept acknowledges the evolving nature of warfare and conflict, where cyber operations play an increasingly impactful role. By implementing the digital emblem, these entities can continue their work without fear of cyber operations.

Furthermore, the digital emblem represents a collective commitment to protecting the vulnerable from cyber threats. It highlights the intersection of technology, cybersecurity, and humanitarian protection, emphasizing the need for collaboration and advanced measures to ensure a secure digital future. This collective commitment signifies the importance of addressing cyber threats within the broader context of humanitarian efforts.

Applying multi-factor authentication and zero-trust principles can significantly enhance cybersecurity. Studies have shown that 99% of cyber-attacks can be prevented by adopting basic cybersecurity practices, including these two measures. By implementing multi-factor authentication, which requires multiple forms of verification for access, and following the zero-trust approach, which assumes no trust by default and verifies every action, organizations can greatly increase their cybersecurity resilience.

Keeping systems updated and employing data protection measures through encryption are also essential in minimizing the risks posed by cyber attacks. By ensuring that software and patches are up to date, organizations can protect themselves from known vulnerabilities. Additionally, encryption provides an added layer of security by securing sensitive data and making it unreadable to unauthorized parties.

To bolster cybersecurity efforts, it is encouraged for tech and telecommunications companies to join initiatives such as the Cyber Security Tech Accord and the Paris Call for Trust and Security in Cyberspace. The Cyber Security Tech Accord is a coalition of approximately 150 members committed to best practices and principles of responsible behavior in cyberspace. The Paris Call for Trust and Security in Cyberspace is the largest multi-stakeholder initiative aimed at advancing cyber resilience. By becoming part of these initiatives, companies can contribute to collective efforts in maintaining a secure cyber environment.

Engaging with the Cyber Peace Institute can also aid in improving cybersecurity. The Cyber Peace Institute focuses on promoting norms and advocating for responsible behavior in cyberspace. Collaborating with this institute can provide valuable insights and resources to enhance cybersecurity practices.

In the context of protecting medical facilities and humanitarian organizations, a multidimensional approach is required. This includes implementing technical solutions, fostering collaboration among various stakeholders, conducting research, and advocating for enhanced protection. The challenges and potential solutions in safeguarding these facilities and organizations were discussed, emphasizing the importance of research and advocacy in the process.

The significance of audience engagement and the contributions of the speakers were acknowledged in supporting the protection of medical facilities and humanitarian organizations. These discussions underline the critical importance of ensuring the safety of these entities, as the consequences of attacks can be just as devastating as physical assaults.

Overall, the digital emblem represents a critical innovation in humanitarian protection, offering safeguards against cyber operations for medical and humanitarian entities. By promoting the intersection of technology, cybersecurity, and humanitarian protection, advocating for best practices and responsible behavior, and implementing advanced cybersecurity measures, organizations can enhance their resilience against cyber threats. Collaboration, research, and advocacy are also essential in protecting medical facilities and humanitarian organizations. By joining together and adopting comprehensive strategies, we can create a more secure and resilient digital space.

Mauro Vignati

The International Committee of the Red Cross (ICRC) considers the digitalization of the emblem to be crucial and necessary. The digital emblem is used to identify medical personnel, units, and organizations, providing a means of recognition during armed conflicts. The ICRC argues for flexibility in the usage of the digital emblem, limiting its use to selected entities solely during times of armed conflict.

Initiated in response to the need for increased protection during armed conflicts and the COVID-19 pandemic, the ICRC began researching the digitalization of emblems. The digital emblem aims to provide security for medical facilities and Red Cross organizations.

Several technical requirements have been defined to ensure the effectiveness of the digital emblem. Ease of deployment, compatibility with different devices, and the ability to verify authenticity are among the key considerations. It is essential that the emblem can be utilized by both state and non-state actors.

Despite the benefits of the digital emblem, there are various challenges associated with its implementation. Such challenges include the lack of separate internet infrastructure for armed forces and civilians, difficulties in modifying medical devices, and the complex nature of the internet environment.

To develop the digital emblem, the ICRC consulted with 44 experts, initiating the project in 2020. This endeavor holds promise in reducing misuse through technological advancements. However, it is important to note that the authority to authorize the emblem’s use in physical space lies with the state, as stipulated by the Geneva Convention.

Both state and non-state actors are expected to comply with the conventions, including the digital emblem. The Red Cross actively appeals to non-state actors to adhere to International Humanitarian Law (IHL), as violation of IHL could be deemed a war crime.

In conclusion, the digitalization of the emblem is deemed vital in order to enhance protection in both physical and digital realms. The objective is to educate non-state actors on the significance of respecting IHL and the emblem to ensure the safeguarding of humanitarian efforts. Nevertheless, it is imperative to further assess the challenges and potential risks associated with the digital emblem.

Francesca Bosco

The Cyber Peace Institute was established with the goal of mitigating the adverse effects of cyber attacks on people’s lives worldwide. It plays a crucial role in aiding vulnerable communities to stay safe in cyberspace, conducting investigations and analysis on cyber attacks, advocating for improved cybersecurity standards and regulations, and addressing emerging technological challenges.

The healthcare sector is identified as a particularly vulnerable sector to cyber attacks, which often lead to the loss of data and disruption of services. The Cyber Peace Institute has a platform that documents cyber attacks on the health sector, highlighting the breach of over 21 million patient records and significant disruption to healthcare services. This demonstrates the urgent need for improved cybersecurity measures within the healthcare industry.

Cyber attacks during armed conflicts have a significant human impact as they threaten crucial services and spread disinformation. The borderless nature of cyberspace allows cyber operations to extend beyond belligerent countries, hitting critical infrastructures in third countries. This highlights the need for increased international cooperation and measures to protect critical services during armed conflicts.

Risks in the medical and humanitarian sectors include the increasing accessibility of sophisticated malware and ready-to-use cyber tools, as well as the blurring line between state and non-state actors. This presents a challenge as it lowers the barriers to entry for malicious actors and makes it difficult to attribute attacks to a specific entity. Thus, it is essential to develop strategies to effectively address these risks and protect vital infrastructures.

Education is identified as a vital component in understanding the importance of protecting healthcare and humanitarian organizations from cyber attacks. By educating different stakeholders, including professionals and the general public, they can better comprehend the potential consequences of not safeguarding these crucial infrastructures.

Francesca Bosco, an advocate in the field, emphasizes the need for analyzing the human impact of cyber attacks and the long-term consequences in order to underline the importance of protecting vital infrastructures. Efforts are being made to standardize a methodology to measure the societal harm from cyber attacks. The aim is to monitor responsible behavior in cyberspace and assess the societal costs of not adequately protecting vital infrastructure.

Basic cyber hygiene activities and information sharing are identified as critical elements in mitigating cyber attacks and improving cybersecurity. It has been found that 99% of cyber attacks can be stopped by implementing basic cyber hygiene practices. Additionally, full cooperation in terms of information sharing is needed to effectively trace and address cyber incidents, as seen in the case of the healthcare sector.

Civil society organizations are recognized for their close proximity to the people impacted by cyber attacks and their firsthand experiences. These organizations can play active roles in advancing knowledge and efforts in mitigating cyber attacks, working in collaboration with other stakeholders to address the challenges posed by cyber threats.

Sharing defense resources and enhancing cyber capacity building are recommended as important measures for protecting critical infrastructure. This can be achieved through initiatives such as the Global Cyber Capacity Building Conference, which focuses on the protection of critical infrastructure from cyber attacks.

In conclusion, the Cyber Peace Institute is at the forefront of efforts to mitigate the harmful effects of cyber attacks globally. Through its various activities, such as aiding vulnerable communities, investigating cyber attacks, advocating for better cybersecurity standards, and addressing emerging technological challenges, the Institute works to protect vital infrastructures, such as healthcare and humanitarian organizations. It is evident that education, cooperation, and capacity building are essential elements in effectively addressing cyber threats and safeguarding critical services. By understanding the human impact and long-term consequences of cyber attacks, there is a growing recognition of the need to protect vital infrastructure and develop strategies to mitigate cyber risks.

Tony

Tony highlights the necessity of a digital emblem in order to uphold International Humanitarian Law. This emblem should protect the end system data, its processing, and the communications involved. Moreover, it should be visible to those individuals who are committed to complying with international humanitarian law. Significantly, the digital emblem should not burden the operations of humanitarian organizations.

Tony suggests implementing the digital emblem by leveraging existing Internet infrastructure and technology. The internet has the capability to employ cryptographic methods to safeguard fundamental data. Critical data, such as naming and addressing required to operate the internet, can be protected through technology that is already established.

To implement the digital emblem, Tony proposes an implementation approach using secure DNS and secure routing. This approach involves inserting a special text record within the DNS record, which is signed by a trusted entity to validate the emblem. Additionally, visible blocks of address can be segregated to accommodate humanitarian traffic flows.

International cooperation is crucial for the successful implementation of the digital emblem. Nation-states have the responsibility to regulate the use of the emblem, and working through existing organizations like the ICRC can facilitate the process.

Tony argues that regional internet registries should take on more responsibility for verifying the authenticity of humanitarian missions, rather than relying solely on ICANN. This is particularly important because regional internet registries are better equipped to verify humanitarian organizations compared to ICANN, particularly in countries where there is a close coupling between the internet operator and the state, such as Egypt and China.

Coupling the verification of the humanitarian emblems with the operations of the internet can make the system more scalable. Tony suggests using DNS to propagate the emblem, rather than verify it, to make the process manageable. This can be achieved by having a local ISP or an organization like the American Red Cross sign the digital record within the DNS record.

The control of internet operations by the state is not universally applicable, and it varies among countries. In the United States, the government has little involvement in how names and numbers are allocated, whereas in countries like Egypt and China, the internet operator and the state have a close coupling.

There is a concern about the risk of unintended consequences and disruptions to humanitarian missions resulting from cyber attacks. Unintended denial of service attacks can occur if focus is only placed on the attacked entity, and nation-state attacks often focus on the infrastructure rather than individual users.

Protective measures should rely on internet infrastructure for third-party queries, instead of solely relying on potentially attacked endpoints. This proposed solution aims to mitigate the risks of cyber attacks by utilizing the infrastructure of the internet for third-party queries.

While basic cyber hygiene is essential, it is not a complete solution to cyber attacks. Existing technology can mitigate many damaging attacks, but sophisticated adversaries and high-value targets require more comprehensive defense strategies. To address this, authorities, whether legal or ethical, should promote and normalize cyber hygiene practices.

Transparency and collective action can help expose and deter malicious activity. Initiatives tied to scalable internet infrastructure can be repurposed for monitoring and responding to digital threats. Adversarial activities against sensitive institutions like hospitals and public utilities should be observable and provokable.

The current mechanisms and applications for protecting humanitarian operations in conflict zones should be expanded to other environments, even in peacetime. Ransomware attacks on peacetime institutions, such as hospitals, pose significant threats that current cybersecurity measures may not adequately address. Implementing existing security mechanisms sector by sector is challenging and impractical.

In conclusion, Tony emphasises the need for a digital emblem to respect International Humanitarian Law. Implementing this emblem by leveraging existing Internet infrastructure and technology, using secure DNS and secure routing, and ensuring international cooperation are vital for its success. Regional internet registries should play a larger role in verifying humanitarian missions, and coupling the verification process with internet operations can make the system more scalable. Cyberattacks pose a risk to humanitarian missions, and protective measures should rely on internet infrastructure. While basic cyber hygiene is important, more comprehensive defense strategies are needed for sophisticated adversaries. Transparency and collective action can help deter malicious activity, and mechanisms for protecting humanitarian operations should be expanded to other environments.

Session transcript

Moderator – Michael Karimian:
There we go. Hopefully everyone can hear me. So, distinguished guests and esteemed panelists, good morning, good afternoon, good evening, or good night, depending on where you are joining us from. Welcome to this important session on promoting the digital emblem. I am Michael Karamean, Director of Digital Diplomacy for Asia and the Pacific at Microsoft, and I have the privilege to serve as moderator today. In today’s digital age, the concept of the digital emblem represents a critical innovation in humanitarian protection. Much like the Red Cross, Red Crescent, and Red Crystal emblems have safeguarded lives during times of conflict in the physical world, the digital emblem aims to extend these protections into the digital realm. It is intended to be a symbol of hope and security, ensuring that medical and humanitarian entities can continue their life-saving work without the fear of malicious cyber operations. Importantly, the digital emblem concept is an acknowledgment of the evolving nature of warfare and conflict, where cyber operations play an increasingly impactful and harmful role. It emphasizes the criticality of upholding the principles of international humanitarian law in the digital space, where the consequences of attacks on hospitals and humanitarian organizations can be just as devastating as physical assaults. Our esteemed panel of experts today will delve deep into the technical, legal, and humanitarian aspects of the digital emblem. They will explore how it can be developed, deployed, and upheld, ensuring that it becomes a recognized symbol of protection in an increasingly digital yet vulnerable world. As we embark on this discussion, it is important to recognize that the digital emblem has profound importance. It not only signifies a collective commitment to safeguarding the vulnerable, but also highlights the intersection of technology, cybersecurity, and humanitarian protection. Through this dialogue, we aim to advance our understanding, share insights, and collectively work toward a more secure and resilient digital future. So, let us begin this exploration into the digital emblem concept, its significance, and the path forward. Together, we can hopefully promote digital peace and protect those who need it most. To help us achieve that goal, I am pleased to say that we are joined by Felix Linker, researcher at ETH Zurich, who joins us online. Dr. Antonio DeSimone, chief scientist at Johns Hopkins Applied Physics Laboratory, who also joins us online. Francesca Bosca, chief of strategy and partnerships at the Cyberpeace Institute, who is also joining us online. And in person, we are joined by Koichiro Komiyama, director of the Global Coordination Division at JPCERT, and also affiliated with APCERT, and Mauro Vignetti, advisor on digital technologies of warfare at the ICRC. So, to help set the scene, Mauro, please let’s begin with an overview of the digital emblem. Yeah, thank you very much,

Mauro Vignati:
Michael and everyone. So, I’m going to give an overview about the emblem, also the physical one, just to bring everybody at the same speed by discussing the digital emblem. So, the Red Cross, Red Crescent, and more recently, the Red Crystal have been symbols of protection. So, meaning that facilities, people, vehicles showing this emblem should not be attacked, they should be spared by the consequences of armed conflict. So, this is why the international military law requires part of the conflict to ensure the visibility of the emblem, so that combatants can identify the persons and the objects that they must protect and respect. And we’re going to see that this is a very important aspect, also in the digitalization of the emblem. So, the rules on the use of the distinctive emblems, or signals, are governing in the Annex I of the first additional protocol of the Geneva Conventions of 1977. So, and there is an article, it’s an article, it’s the article one of the Annex, that mandates the ICRC to see whether new systems of identification should be adopted. And that’s why we’re here to discuss the project of the digital emblem, because we think it’s fundamental to have a digital version of the emblem. So, the emblem marks medical personnel, medical unit, vehicle, and organization like the Red Cross and the Red Crescent organization. So, and there are two use of the emblem. So, there are, there is the distinctive use of the emblem. So, so to say it’s always on, in the way that organization like the International Committee of the Red Cross and the National Societies can use the emblem at all time. And then there is another use of the emblem that is the protective use. This means that the selected, dedicated entities can use the emblem only during armed conflict. This was a very important point because the emblem was in the digital space, must be flexible in this respect and in use only during armed conflict. So, that said, so it’s a general review about the emblem and, and we’re gonna go into the detail why we need to digitalize the

Moderator – Michael Karimian:
emblem to have a digital version of it. Thank you. Thank you, Mauro. So, today’s session will have three segments. For approximately 30 minutes, our speakers will frame the discussion from their perspectives. We’ll then spend approximately 20 minutes with the speakers having a conversation among themselves on the technical, legal, and humanitarian aspects. And we aim to dedicate 30 minutes for audience Q&A. So, please start to think of your questions now. In terms of framing the discussion, Francesca will turn to you first and it’ll be great to have your overview of the CPI’s role in protecting vulnerable entities in cyberspace. Overview of the trends in healthcare, sorry, cyber attacks against hospitals and medical facilities, including in times of conflict, and also importantly the role of neutral organizations in promoting digital peace. So, Francesca, over to

Francesca Bosco:
you. Thank you so much, Michael, and it’s a pleasure to be here with you all. Can you see my screen? We can, thank you. Great, thank you. So, thanks a lot, Mauro, for the excellent introduction in framing the discussion around the digital emblem. Let me take a step back, or better, to share some reflections on the work that we’ve been doing at the Cyber Peace Institute, specifically to understand the context. of the why it’s so important to protect civilian infrastructure like the healthcare sector and humanitarian organizations, both in peace time and during armed conflict. So let me share some also, some reflections on how the Cyber Peace Institute was created and is operating to try to understand some of the considerations that I hope will help the discussion further. So recognizing that our digitizing societies are particularly vulnerable to cyber attacks and often lack the resources to strengthen their cyber security. The Cyber Peace Institute was founded in 2019 in response to the escalating dangers posed by sophisticated cyber attacks. The mission, the overarching mission of the Institute is to mitigate the adverse effects of cyber attacks on people’s lives worldwide. This is extremely important because this will bring us to the focus of the Institute, which is to understand the human impact of cyber attacks. We accomplish this through key synergistic pillars that you can see here. So first, we aid vulnerable communities to stay safe in cyberspace, focusing especially on vital sectors as mentioned, like healthcare, non-profit and humanitarian organizations. Second, as you might see, we conduct investigation and analysis on cyber attacks. Our cyber threat analysis team has been focusing on cyber attacks against the healthcare since 2020 and since February, 2022, specifically on cyber attacks in the context of armed conflict. Now we are building the same capability to monitor attacks against NGOs, including humanitarian ones. Then we advocate for improved cybersecurity standards and regulations with evidence-based knowledge. And we complete, let’s say, the cycle by proactively addressing those emerging technological challenges and disruption to the work of humanitarian organizations caused, for example, by artificial intelligence or quantum computing. I wanted to explain this to understand also, I mean, how we came about, let’s say, the analysis that I’m going to offer some insight today for further discussion. All the information and specific data are available on our website and our different platform. As mentioned, I mean, when we think about the healthcare sector, what we did at the institute was that amid the pandemic, we focus on our work aimed by supporting the so-called the most vulnerable, specifically on the unique vulnerabilities of the healthcare sector and the real impact of the increasing numbers of cyber attacks against it. And you can see that we created a fairly unique platform that is called the Cyber Incident Tracer Health. And the platform serves to document cyber attacks. And not only to, you will find the, let’s say, the numbers in terms of like data collection, but also try to understand which are the criteria, which are the metrics that are relevant to understand the real impact that they have on people. So you will see how many attacks per week, so the total record breach, how many countries, but also you will find what it means in terms of, for example, how many days of disruption in hospital and medical facilities, how many people could not get the vaccines because a certain facility was attacked, how many people could not get the proper care, how many ambulances redirected. In total, I mean, and just to give an idea, this has led to the breach of over 21 million patient records, which has leaked or exposed in 69% of the incident. Again, the important aspect is that disruption to patient care endanger lives and create the stress and suffering for patients and medical professionals. And on the long term, it also erodes the trust in healthcare providers. We apply the same capability, we’re currently applying the same capability also to assess cyber attacks in terms of what is happening when civilian infrastructures are attacked during armed conflict. Again, no need to stress it again, but cyberspace is borderless, and so cyber operations go well beyond the belligerent countries to hit critical infrastructure and populations, also in third countries. We have to consider the anonymity of the digital world, so the actors involved in cyber warfare are numerous and diverse, and their true intention are even more complex, let’s say, to define and predict. And again, cyber operations have a significant human impact on population living in conflict. They are threatening crucial services, healthcare is a good example, and also other civilian infrastructure areas. And also there are, let’s say, kind of like a very peculiar dimension about the, let’s say, the digital space, and this is why the emblem is so important. For example, the spread of disinformation can make it harder to distinguish between fact and fiction, both inside and outside of countries in conflict. I would like to basically to stop here, maybe sharing these first insights, and we can possibly continue the discussion further. Thank you so much, Michael.

Moderator – Michael Karimian:
Francesca, thank you very much, and absolutely we can come back to more of these topics in the discussion later on. I think if anything, the pandemic showed in a perverse way that with the severe vulnerability of the healthcare sector there is a need for this sort of collective action together and hence the importance of the ICRC’s leadership in this space. Now moving on, Koichiro, it’ll be great to have a presentation from you or to hear your thoughts on the cybersecurity challenges in Asia and the Pacific, and the insights that you might have into the evolving threat landscape, and of course the importance of global coordination.

Koichiro Komiyama:
Thank you, Mike. And good morning, everyone. My name is Koichiro Sparky-Komiyama from Japan CERT and AP CERT. I think in this session I’d like to represent the technical community in this region, Asia-Pacific. I’ve been working for on-the-ground incident response for like dozens of years, and I’m also a scholar for international relation and related area. So. So from my perspective, I’d like to share with you a few things. First of all, in Asia, states are racing for expanding capacity and capability of offensive side of their cyber capabilities. And for instance, UK think tank IISS recently published a report on the cyber power of 20 major states. And quite a few, some of Asian countries are ranked as, for example, Australia, China. They are the tier two countries where we have only one tier one country, United States. So we have two major players in Asia. And for tier three, we have India, Indonesia, Iran, Malaysia, North Korea, Vietnam. They are all, well, by assessment from an independent think tank, they have well-established offensive cyber capabilities. So there’s an urgent need for a country like Japan to de-escalate the group of militarization of cyberspace. And then talking about Japan itself, we have been refraining to go offensive. Mainly due to our peace constitution prohibit us to use the force, except the case it is recognized as a part of collective defense. So historically, we do not have, and we did not try to equip offensive cyber capability. But that has been changed. That was changed December last year with new national security strategy. Japan also seeking to have an offensive. Well, in our wording, it is active cyber defense, not offensive. But, well, there’s a subtle difference. But anyway, it’s not something we haven’t even tried for the last 50 years. And my last point is we see many damages caused by ransomware attack. And most of those, they are mostly driven by a commercial profit. So they hack, they launch ransomware attack for profit. Now, for last 12 months, we see many successful breach to our hospitals, one of our very critical infrastructure. However, they are usually very strong in protecting their own network. And going back to the emblem, of course, I know it’s not for, you know, it doesn’t have any direct effect to criminals in a peacetime, of course. However, having this type of document and guideline, I expect there’s some pretty. can also put some pressures on criminal groups on what they cannot do for their operation. So that’s my initial contribution and I’m happy to discuss with you for further details. Thank you.

Moderator – Michael Karimian:
Koichiro, thank you very much. Interesting to hear you reference the intention for Japan to introduce active cyber defense as part of the new national security strategy. Of course, different actors always define active cyber defense in different ways. It’ll be interesting to see how Japan approaches it in line with responsible behavior and cyberspace norms and the pacifist constitution. Mara, returning to you, it’ll be helpful to hear more on the ICRC’s role in researching and developing the digital emblem, the importance of addressing the need for extending international humanitarian law into cyberspace, and the insights that you might have on the application of the digital emblem in practice. Thank you very much. So,

Mauro Vignati:
Michael, you and Francesca, you mentioned the pandemic. So this is exactly the point in 2020 when we start to think about the digitalization of the emblem by observing what was happening during 2019 in the pandemic time, but also observing what is happening during armed conflict. So that’s the period we start to research the possibility to digitalize the Red Cross and Red Crescent emblem to signal the protection against cyber operations for medical facilities and the Red Cross and Red Crystal organizations. So to start the project, we define some technical aspects that the emblem should have, a potential digital emblem should have. So these are the requirements that we define. So first one was it must be easy to deploy. So we know that during armed conflict, it’s always difficult to find, it’s already difficult the situation in armed conflict, but it’s also difficult to find IT personnel that is able to work in this domain. So the emblem must be very easy to deploy, like the physical one, also the digital one must be easy to deploy. So it must be able to be installed on a number of different devices. That’s a very important aspect because we know that, for instance, medical devices, they cannot be modified because of different reasons, the guarantee, the functioning of medical devices. So we have to find a way to put the emblem on those devices without touching them, without installing anything on those devices. So we do not have to generate costs for the entities that are showing the emblem. So if we think a medical unit, a doctor that has to show the emblem, he has not… to have a relative cost to deploy and show the emblem. And most importantly, he has to be seen and understood. So the logic of the emblem is from the perspective of the attacker. So when we have an operator running a cyber operation, they have to understand that they are confronted with an emblem. And they have to be able to recognize this is the emblem of the Red Cross Red Crescent. And they have to understand this emblem. And they have to be able to also check the authenticity of the emblem. Not that this is a fake emblem, but this is an original one. And another aspect is the emblem should be used by state and non-state actors. So we see many state actors who are involved in conflict. So not only thinking about states able to deploy the emblem, but also by non-state actors. So on that, we are seeing some challenges in deploying this. First of all, and I think it’s one of the most important challenges, that we don’t have an internet for armed forces. And we don’t have an internet only for civilians. So the infrastructure is mixed. The nature of internet is mixed. And that’s why we need a digital element that can go granular on identifying assets on network. Because networks are intermingled. And we cannot divide. So I’m thinking about cloud infrastructure, satellite infrastructure, and so on. So we can have a doctor that has a computer that should be protected with the emblem that is using a military network that is a target. So we have to think in those scenarios. And then, so the challenge is also the medical devices I mentioned before. And then the environment. So it’s very complex, fluid, dynamic field. So we have a very stressful situation in armed conflict. So we have to be aware of this. And that’s why the digital element must adapt to this kind of field. So that’s why we start to talk with John Hawkins University that we’re going to have later on in this panel, and with ATHZ and the University of Bonn Center for Cyber Trust. And we start to talk with them, and they start to develop. a potential way to digitalize the emblem. Then we consulted, during the last year, 44 experts from 16 countries. And we submitted the ideas that have been developed so far. And they identified benefits and risks in digitalizing the emblem of the Red Cross. So among the benefits, logically, the digital emblem will extend the existing protection from the physical space to the digital world. So this is a very positive aspect. And the emblem will make it easy for operators to avoid arming protected entities. So those are the main benefits resulting from the consultation, but also the risks. So we risk, based on the expert consultation, to increase visibility of sensitive and less protected entities, like hospitals. Knowing that all of the experts reflect on that, saying that nowadays there are already multiple, several possibilities to identify less protected entities, scanning the internet and finding out which IPs, which domain names, belongs to hospitals. So in their opinion, we are not aggravating the situation. We are not increasing, because there are already methods and means to identify those. But we have to keep in mind that putting an emblem on something, someone, an object, could be putting a target on a personal object if the parties do not respect the emblem. And then as a second big risk is the possible misuse. So we know in the physical world, there are several cases of misuse of the emblem. We’re going to see, with the presentation from the two universities, that we can reduce in the digital space the possible misuses through the technology that they are developing. So this is a positive development in this respect. So we published the first report in November last year. So if you are interested. on the website of the ICC, you’re going to find the report. So this is generally how the genesis of the project in this time. Thank you very much Mauro, and you mentioned the role, the issues

Moderator – Michael Karimian:
surrounding non-state actors. During the Q&A, perhaps we can discuss the ICC’s recent principles on non-state actors. I know a question has already been posed on the Zoom platform, I encourage more questions as well, and of course encourage the audience to think about their questions when we come to the Q&A portion later on. Felix, turning to you, ETH Zurich, it’ll be tremendous to hear your thoughts on the technical solution of the Center for Cybertrust to implement the digital emblem, your thoughts on the feasibility and design considerations, and any insights that you might have on the role of technology in protecting medical and humanitarian organizations. Felix,

Felix Linker:
over to you. Thank you for a great introduction Michael, and also thank you to the other speakers for setting the flow so well. So as Mauro said, we were contacted by the ICRC in 2020, and in response to their question of how a digital emblem could work, we developed a system that we call ADEM, which stands for an authentic digital emblem. And in the next minutes, I’d like to give you an overview of the key design concepts that went into ADEM. So first, Mauro mentioned it, an emblem must be verifiably authentic. We looked at this problem more generally and asked ourselves the question, when is the digital emblem trustworthy? And we identified three security requirements in response to that. So as I said, an emblem must be verifiably authentic. That means parties who observe an emblem can check that it is legitimate and develop trust in the emblem itself. Second, a digital emblem must provide accountability. As Mauro said, there can be misuse, but we designed our digital emblem in such a way that whenever parties misuse it, they commit to irrefutable evidence that could be admitted to court, for example, to prove that they misbehaved and to hold them accountable for that misbehavior. And finally, attackers must stay undetected when inspecting the emblem. I put attackers in quality because it’s a bit of a funny attacker model. We are thinking about parties here who are willing to engage in offensive cyber operations, but not when their target has a digital emblem on it. These people must feel safe in using the digital emblem and trust that it doesn’t harm the operations. For example, that it would reveal in other cases that they’re about to attack entities. Coming to ATEM itself, we envision our design to be used by three types of parties. First, nation-states who endorse protected parties, then protected parties who send out digital emblems to attackers. With ATEM, nation-states can make sovereign decisions as to who they do or not endorse. Protected parties can distribute emblems autonomously, and this touches on what Mauro said earlier. This is a means for protected parties to decide individually whether or not they want to show the emblem, whether or not they feel safe to showing it. ATEM was also designed as a plug-in to the protected parties infrastructure. You can just add a device into their networks and it will distribute emblems for you. And for attackers, these parties can verify an emblem as authentic while staying undetected. And critically, we designed ATEM so that it also fits the standard workflow of attackers. Looking more at the technical side of ATEM, we identify parties via domain names for countries, for example via their .gov address, and protected parties as well. For example, let’s say pp.org. Governments cryptographically endorse a protected party, and a protected party, for example, would cryptographically endorse a hospital that has some IP address. In practice, these hospitals have multiple protected digital assets, for example, a website, tablets of the medical staff, or general purpose medical devices that cannot be touched, as Mauro explained. With ATEM, you can deploy an emblem server additionally within the hospital that would signal protection via TLS, UDP, and DNS to aforementioned attackers. This emblem server would distribute emblems that have multiple parts. First, the emblem itself in the center that is a cryptographically signed statement of protection. And this emblem would be accompanied by multiple endorsements. Endorsements from all the nation states that endorse the protected party and an endorsement from the protected party itself. An attacker could learn from this emblem that multiple conflicting states endorse the emblem and thus deem it as trustworthy. This reasoning might be simpler for military units who are bound by AHL. For these military units, it might suffice that they see that a nation state they trust, for example, their own nation state or an ally endorse the emblem. In summary, our design, ADEM, provides three security requirements. It’s verifiably authentic, it provides accountability, and it lets attackers stay undetected. Our design is to appear in a top tier security conference and our publication is accompanied by formal mathematical proofs of security. Currently, we have prototyping ongoing with the ICRC and we hope to deploy ADEM within the ICRC’s network, as I just showed for hospitals soon. If you want to learn more about the digital emblem, I encourage you to follow the QR code on the right hand side or reach out to me via my contact details. And I look forward to the discussion later.

Moderator – Michael Karimian:
Felix, thank you very much. And it is important to note that Felix and Francesca are dialing in at approximately 4.30 AM their time. So real kudos to them and thank you for their generosity. Tony, I think has a slightly better time zone, but still up a little bit late. So turning to you, please, Tony, if we can hear your thoughts on similar aspects as Felix’s presentation, but from the perspective of Johns Hopkins APL. Thank you.

Tony:
Yes, happy to do that and happy to be here. Thank you very much for inviting us to this and also to participate in the larger effort. We, the Applied Physics Lab or division of the university, we have a variety of technical efforts, many focused on protecting critical infrastructure. The project we’re discussing here is actually part of a broader set of activities we have, recognizing that while we are a laboratory, major technology activities, if we expect to have significant impact have to be tied into a legal and even a legal policy and even a social framework to be successful. And so that’s what this is about. We’ve had a longstanding effort to look beyond the technology into the other policy, ethical norms based issues associated with critical infrastructure. And when we discussed with ICRC, some of their objectives for the digital emblem, there was a significant overlap, particularly because within the context of international humanitarian law, we had a fairly specific way of thinking about what needed to be done in order to provide that emblem to the parties that needed to be able to implement it and observe it and respect it. So I’ll tell you a little bit about what we envisioned for the technical solution, but I wanna back up a little bit to kind of our thoughts on what is it that a digital emblem has to do. And this is a recapitulating a little bit of what we’ve heard, but I think the important thing to think about here is twofold. Who is it that has to respect the emblem and who is it that has to observe that set of behaviors? And it’s important that we are looking at actors who would desire to comply with international humanitarian law. So there’s a large class of cyber actors, a large class of cyber attacks. There are hacktivists, cyber criminals, script kiddies who are doing it for fun. And then there are nation states or organized militaries or organized combatants who employ cyber in conjunction typically with other means of power. And those are the types of cyber operators we’re focused on. That’s the nature of the emblem for international humanitarian laws. It applies to those types of actors. And one thing we observe is that if you look at how nation states have employed cyber means in conflict, they typically have fairly broad capabilities and will do things like major disruptions to the internet in order to support whatever it is that they would like to do, suppressing activity within their state or limiting the ability of combatants to operate within their domain. So what that means is from a protection point of view, we can’t just think about protecting the end systems, the data, the processing. We also have to be able to protect the communication. Many of the operations that we look to protect rely not just on the ability to process locally, but the ability to reach back and communicate either for logistics purposes, to receive advice, receive supplies. So the emblem needs to protect both the end system, its data and processing and the communications. And it has to do that with a degree of assurance. It has to do that in a way that’s visible to operators. And then to some of Francesca’s points, it also has to be visible to third parties in a way that doesn’t disrupt the operations of the humanitarian mission. So we were looking for a solution that had those kinds of attributes. It needs to be scalable. It needs to be visible globally. And it can’t be a burden on the operations of the humanitarian organization beyond what they need to do in order to operate on the internet. And in order to do that, what we tried to do was look at how we would leverage the infrastructure that is in place in the internet, rather than looking at developing a new capability that would require new infrastructure. And what we were looking at was the way to leverage what is on the internet today in order to secure the internet. The internet technology has grown the capability to employ cryptographic methods to protect the fundamental data that you need to operate the internet. And that is the naming and the addressing that’s used in order to enable communications. So with that infrastructure in place, we have an asset that we can use that doesn’t require us to roll out new capability in support of the emblem. We leverage what’s out there that gives us the global reach and the scale that we think we need. And a lot of these technologies are well understood. What we have to understand is how to adapt it into this mission, into the mission of supporting a digital emblem. And the fundamental problem, you know, in our opinion, isn’t the technology to protect information on the internet or to indicate your presence on the internet, protecting IP addresses, protecting names as established technologies. What needs to be done is adapting it into the model for how international humanitarian law and the emblem are used. And there’s a very strong analogy with what’s done physically, and I think we’ve touched on some of this. The emblem is understood globally through the good work of the International Committee of the Red Cross and the National Societies, but the the emblem itself is regulated under the laws of each state, and so it’s different in each state. And what has to be done then is to tie the assurance that the emblem is valid to that authority that the state has to determine how to regulate the use of the emblem, which is different in different states. In some places, there’s a very close coupling to the National Societies. In other places, there are state agencies that are responsible for regulating the emblem. But that’s the new connection that has to be made from a technology point of view, and that is all about the ability to use the same cryptographic techniques that are used to protect the Internet, but to protect the emblem. Now that’s the premise for what we’re doing. Let me talk specifically about what we think would be a valid implementation of the emblem that has these properties of global visibility and scalability. What we’ve looked at doing is simply leveraging what’s already in place for secure naming, secure DNS, and for routing for securing the BGP system used for global routing. And what that means is that we have cryptographic protection for that information, for names and addresses. How do we now layer on top of that the cryptographic protection for the emblem? Well, to do that, we can leverage what’s available already within DNS, and we have a prototype running where what we have done is taken part of our DNS namespace at JHU, and as part of our demonstration said that that subset of the namespace is for humanitarian missions. Now the name itself isn’t the emblem because the name is not something that can easily be assured, but what we do in addition to assuring the name, which shows that the name is legitimate, we insert within the DNS record a special text record that is signed by a different entity that is trusted to verify that the emblem is being used properly. And that’s what then has to be tied back to the way international humanitarian law is regulated in the different states and in the different jurisdictions. So that’s the first part of what we’ve suggested, that we use the DNS in order to propagate this information, make that available within the DNS record using standard technology, thereby inheriting the scalability and global reach. But it’s not enough to have names. In order to see what’s happening on the internet, you actually have to focus on addresses and you get an address from the namespace. But if you just relied on that, you’d run into the problem of being able to do that at scale. If you are Francesca’s organization, you don’t wanna have to look for each individual name and collect each individual address. What you’d like to do is operate in a way where the addresses used for these protected missions are part of a distinguished part of the IP address space. And again, that’s something that can be done. It is used all the time in order to segregate some of the traffic for the normal users of the internet. Commercial internet operators, nation states that operate the internet, will distinguish how they handle traffic based on what they know about the meaning of that address, but they do that based on local considerations. What we’re seeking to do is make that context by which you determine how to handle an address global and global tied to international humanitarian law. So the suggestion then is to have designated blocks of addresses that are associated with the humanitarian missions and assigned through the normal process to provision internet services tied to the infrastructure in place for secure routing. What that means is, an entity that would like to have a service supporting a humanitarian mission would number that out of the address space that is designated for humanitarian missions and register that within the RPKI, the Resource Public Key Index that exists for routing and thereby gain the global scaling and visibility for the address so that if an entity like the Cyber Peace Institute would like to see if internet traffic disruptions are affecting humanitarian traffic flows, that is done based on aggregated blocks of addresses so that it’s quickly visible to a third party observer that a state action has in fact affected a humanitarian mission. So those are the core technical concepts. Adopt a naming technology and the means to do secure naming in order to provide a distinguished record that serves as the namespace address and rely on blocks of addresses in order to have traffic flows that can be monitored that are associated with humanitarian missions. All of that secured by standard cryptographic techniques that then need to be tied to essentially a route of trust associated with the way that international humanitarian law is implemented. That last piece really is where we see, excuse me a second. That last piece is where we see a great opportunity to work with international organizations on how that would be done. If it’s done country by country, we again have a scalability problem. Every country would have to be able to, every country, not just every country. Everyone interested in participating would have to essentially touch every country. Better would be to work through existing organizations, national societies and the ICRC or the IFRC or perhaps regional associations that countries might use in order to coordinate how they would implement the regulation of the entity that they do under their domestic laws. That piece again is at the intersection of the technical solution that I’ve sketched out here and the legal policy frameworks that are in place to allow cooperation among nations and then a cooperation with third party entities. So that’s where we are. As I mentioned, what we’re doing now is prototyping focused not on showing you can do this. Like I said, most of this is very well established technologies, but showing that if you do it on the operational internet, it will behave the way you expect. It will have the scaling properties, the global visibility. We will have the ability to bring up or take down an emblem. We have to understand what those time constants are given the way the internet works. And that’s an experiment that we hope to do over the next few months with some technical partners. And in parallel to that, as I say, we should be doing some work with the appropriate bodies that would look at how the nations that are responsible for putting in place regulation of the use of the emblem would cooperate in order to make the assurance of the emblem something that also scales globally. And that’s what I have. Thank you.

Moderator – Michael Karimian:
Tony, thank you very much. I think both yourself and Felix, your remarks have highlighted the technical feasibility of the emblem. And of course, that in itself demonstrates the innovative nature of the emblem itself. And also I think speaks to the credit of the ICRC for taking so much time to go through the due diligence to identify and design how this could be rolled out in practice. In the next 15, 20 minutes, we have. the privilege of engaging in what I hope to be a dynamic conversation among the speakers and that will delve into the technical policy, cybersecurity and humanitarian aspects surrounding the digital emblem. This is intended to be a conversation among the speakers so that they all have a chance to react to and build upon each other’s thoughts. If I can please request for the AV team to have Antonio, Felix and Francesca on the screen at the same time so we can see them simultaneously, that’ll be very helpful. Thank you. So let’s start by discussing the mix of technological and policy dimensions of the digital emblem. I think it’s crucial to consider the involvement of international organizations such as ICANN and the ITU in this endeavor. I wonder if any speakers have any thoughts on how these organizations can play a role in the development and implementation of the emblem and what collaborative efforts we can envision on this front. Felix, I think maybe you have some thoughts on this topic. Yeah, this touches a bit on what

Felix Linker:
Tony said last time. So we, in our design of ADEM, we feature a notion of authorities as well and we are deliberately vague in what these authorities are supposed to be because we don’t know which authorities like the world in the end will agree upon which are the good ones to be endorsed by. So one of these authorities could be the ICRC that endorses protected parties to run humanitarian missions. It could also be organizations like ICANN. But what we thought is that organizations that, for example, control parts of the naming system of the internet are not particularly well suited to verify whether someone that reaches out to them and tells them, hey, I run a protected mission, can you please endorse me? Organizations that are more of technical nature would have a hard time verifying these requests as genuine, is what we feared. So we didn’t want to put any legal burdens on technical organizations, so to speak, and rather focused on nation-states or maybe supranational organizations like the Arab League or organizations that know what they’re doing in the space anyways, like the ICRC. Thank you, Felix. Do any other speakers have

Tony:
thoughts on this? I agree with Felix that it’s really the regional registries more than ICANN. They are responsible for operations, but their role is the validity of the information used to run the internet. They are not in general in a position to verify humanitarian organizations. but that’s not true as a blanket statement. And the difference is it is a state responsibility as the ICRC has written to regulate the use of the emblem. And in many states, there is a very close coupling between the internet operator and the state. And so in that world, under the ICANN and the regional registries, there is a state authority that controls names and numbers. And if that’s the case, then there’s a natural place for that to be the authority that controls the use of the emblem, not as the numbering authority, but as the state authority for the use of the internet. Now that’s not global. In the United States, that’s not the way the internet operates. In the United States, the government has very little involvement in how names and numbers are allocated. But in other countries, Egypt, for example, China, the coupling is very close. So the answer, Michael, to your question is not simple. In some places, you’d expect a close coupling. In other places, it really needs to be distinct, but it does need to be tied into the way that the internet itself is operated, or you have to overlay another global scalable system, for example. So we envision using DNS, not to use DNS to verify that the emblem is correct, but to use DNS to propagate the emblem, regardless of who has signed the digital record within the DNS record that says the emblem is valid. That can be an ISP, as I say, in certain countries. In the United States, it almost certainly would not be. It could be the American Red Cross, or it could be the US as part of the supranational organization. But the general technical solution does have to maintain that separation, recognizing that operationally, to make this scalable, it does have to couple to what’s done by the registries and ICANN.

Moderator – Michael Karimian:
Thank you, Tony. Mauro?

Mauro Vignati:
Yeah, just to give a couple more thoughts on the legal and policy perspective. So the use of the emblem is not decided by the ICRC. So this is decided by states on the Geneva Convention. And this is in the annex ones of the additional protocol. So that’s where we have to operate from a legal perspective. So buying to the technological development, we are working on the legal process. And we are presenting the idea to states so that for the international conference in October 2024, where the state’s going to come to Geneva also to discuss about the emblem. States are aware about the project, and national societies too, and then we look for them to give us the mandate to continue to explore this project. Because at the end of the day, we have to amend the Geneva Convention. So we have to amend the additional protocol or to create a new protocol. So this is the basic legal process that we have to go through to be able to have a digital version of the emblem. So that said, in the offline physical space, then are the state’s authority that decide who is able to use the emblem. So the Ministry of Health or other ministries entitled for this, they decide who internally in their nation or in their territory, because we are always talking about a non-state actor that occupy territory and control territory. So these could be also a non-state actor. They are entitled to give the permission to selected entities to display the emblem for protection. So the distinctive use is already in the Geneva Convention, so the ICRC and the national societies. But at the end of the day, the entity that decide who is able to display the emblem in the physical space is the state. So we try to replicate the same. process that we have in the offline or in the online. We’re going to see the difficulties that we can have in this specific domain, but we would like to replicate exactly the same process for the authorization. Then the implementation is another topic. Thank you Mauro, very

Moderator – Michael Karimian:
helpful. Let’s turn to the cybersecurity implications because of course we must recognize that with innovation comes great responsibility, and so let’s examine the risks and benefits associated with this concept. I wonder if any speakers have any thoughts on potential vulnerabilities that we should be vigilant about, and of course conversely how the overall cybersecurity of cybersecurity posture of critical medical and humanitarian organizations can be enhanced by the emblem, but also recognizing that in a world where cyber threats evolve and sometimes in predictable ways, sometimes in unpredictable ways, what proactive and best practices can we put in place to safeguard these vital systems? Would anyone like to start? Kojiro, please.

Koichiro Komiyama:
So I don’t have a clear answer for the question, but to protect the, or talking about the protecting the, for example, infrastructure at the hospital or medical system, so this is more like a question to Felix or Antonio. You mentioned the ADEM or the implementation of digital emblems right now is, it can sign on DNS domain name or IP address or TLS, DNS. Could it be possible to sign like individual files or medical, the physical systems that are used in factory or hospitals?

Felix Linker:
Felix, you have your hand up? Can I jump right in? Yeah, great. Yeah, so we need to distinguish two types of, two parts of ADEM. just talking about my design now, or our design. So there is, for one, what you say that is protected, right? Like how you speak about the entities that are protected, in which direction you point. And what we use in Atom are IP addresses and domain names for that, right? This is how we identify an entity that is protected. And then TLS and UDP and DNS are our mechanisms by which we give someone the emblem, right? And then this emblem includes the pointer, we give it via, for example, UDP, and then this emblem says, ah, this is the protected IP address, this is the protected domain name. Now, a colleague of mine is currently working also on local emblems, where the idea is that malware that infested some device could check whether this device is protected, or whether parts of this device are protected. In the work that I presented, we focused on the network level, and on the network level, we thought it only makes sense to talk about things that you can also see from the network level, right? We found it would be kind of like, what would a verifier do with the information? Oh, like looking at their notes, file f.txt on this computer is protected, allegedly. But I mean, I have no access to this computer, right? What am I supposed to do with this information, yeah? So on the internet, we wanted people to only say, something is protected that they can also recognize as that thing that is protected. But for local emblems, we are looking on future work, yeah. And this, for example, would target especially the devices of medical staff, because not every penetration happens through the network layer, right? It could be malware in a malicious email attachment that just gets sent out en masse, right? And then the malware happens to find itself, wake up within the hospital network. And we also want to cater those designs, or those problems, rather, not designs.

Moderator – Michael Karimian:
Oh, thank you, that’s great. So, I’ll, Rik.

Tony:
Let me make, can I make one comment on some of the risks? We worried a bit about unintended consequences, and what we have to be careful of is not to create an emblem in a way that itself potentially causes a disruption to the humanitarian mission. And really, the important thing here is to think about how a third party, not the cyber actor, how a third party would observe that, the emblem was being respected. What we wanted to avoid was depending on the humanitarian organization itself to field a query from an arbitrary third party in order to avoid the potential for an unintended denial of service attack. The scenario to think about is, you would like to be able to observe a cyber attack in progress. If the only way to do that is to query the attacked entity, what you are doing is focusing traffic on the attacked entity. That’s how unintended denial of service happens. There’s no way to check for malware on a machine without checking the machine. But given what we have seen, that nation-state attacks typically are focused more on the infrastructure than on the individual user. We want to make sure that the observation of attacks on the infrastructure don’t depend on observing the endpoint. I’m talking about a set of mechanisms that have actually manifested many times on the Internet with the loss of certain critical capabilities because of a focused overload on the endpoint. You can imagine that kind of thing happening if all the news organizations in the world or all of the third parties that care to monitor compliance with international humanitarian law, address a endpoint that is intended to be protected. That’s a little aspect of this that is still a concern to me. Our solution tries to mitigate that by relying on Internet infrastructure to query for third parties. But there’s nothing that prevents those third parties from actually, now that they know where the attack is manifesting, from actually focusing their attention on it, unintentionally disabling the humanitarian operation.

Moderator – Michael Karimian:
Thank you, Tony. Kojiro?

Koichiro Komiyama:
Just a very quick comment. But I do agree with, or I strongly believe that the local environment is something. We really need to implement this concept because the more a system is critical is, those system tend to be completely offline or not connected, or doesn’t use the global IP address spaces, do not associate with any domain name and others. That’s something I need to see your future proposal.

Moderator – Michael Karimian:
Thank you, Kojiro. Francesca, if I may put you on the spot at 5 AM your time. I know strategic foresight is a speciality of yours. I wonder if you have any thoughts on where risks to the medical and humanitarian sector might go in the future, and how we can proactively mitigate those risks.

Francesca Bosco:
Actually, can I share a reflection that was, I think, a connection point across different aspects that were mentioned, starting from what Mauro was mentioning in terms of one of the key requirements of the emblem is that it needs to be understandable by the different parties, let’s say. Let me share specifically also to address your point, Michael, in terms of we chart the evolutions in cyberspace that we are seeing. I’m sharing an evolution that we are all aware about, for example, the kind of like civilization of conflict, for example, that we’ve seen and why the emblem is so relevant. More than an evolution in terms of technology, I would like to share an evolution, which is a combination of, let’s say, technological disruption, like, for example, the availability of certain tools. And I’m thinking, for example, about the accessibility of harmful and sophisticated malware, for example, the diffusion of ready-to-use cyber tools that are accessible online, link-to-sold, and so they lower the barriers of entry for malicious actors. One of the key elements that Mauro mentioned before is that the emblem needs to be understandable also by the attackers. And here we’ve been talking more about, let’s say, the technological vulnerability, but let’s also think about the human vulnerabilities, let’s say, in terms of lowering the barrier to entry means also that, again, as we’ve seen, let’s say there is a blurring line between the state and non-state actors, the complexity, clearly, of the, I mean, the attribution of cyber attacks and the increased complexity of having civilians, for example, engaging in cyber operations. This is to say that one of the problems is also understanding the real impact that certain actions might have. What we have observed is, for example, that there is a combination between, for example, state-sponsored actors and activist collectives that usually conduct more basic attacks and focus on disruptive effects, but you can never completely, let’s say, foresee the spillover effect or without fully understanding the consequences that their actions might have, often because they don’t understand the full impact, basically, that they might have with their actions. So I think this is an interesting evolution, let’s say, in cyberspace, where, again, to Mauro’s point in terms of the value of the digital emblem is indeed something to consider. And let me also allow another comment, which was also, I was seeing some of the comments in the chat about the education. I think that the education needs to go in different directions. Again, going back to why it’s important, let’s say, to protect healthcare organisations, institutions and facilities, but also, at the same time, humanitarian organisations. Before understanding, let’s say, why it’s important to protect, often the easier argument is to offer concrete examples of what it means if we’re not protecting them. And we’ve seen this. We have not necessarily learned from that, but this needs to go across, let’s say, the different stakeholders involved. I started with the malicious actors, but then let me go back also to what the, let’s say, which are, let’s say, the ones that need to decide on the emblem, as Mauro was mentioning, are states at the end of the day. Also, in terms of like states, we need to educate in terms of which are the real consequences and the real impact of attacks. And to this end, one of the work that we’re currently doing is also analysing, basically starting exactly with the work that I was mentioning on the healthcare, to understand the real human impact, but also to foresee potential consequences on the long term. We started doing this work by which we are working on a standardised methodology to measure the societal harm from cyber attack and monitor also the responsible behaviour in cyberspace. And to the points that have been made, this needs to be applicable in peacetime, in armed conflict time, and be able to assess which are the costs that we are paying as society if we are not protecting vital infrastructure like healthcare and humanitarian organisations.

Moderator – Michael Karimian:
Francesca, thank you very much. We now have approximately 22 minutes for audience Q&A. For anyone in the room who has a question, if you could please approach the microphone at the stand. I don’t say that to make things awkward, but just it is important for accessibility and so that questions are captioned on the screen as well. But just to help kick things off, there is a question in the Q&A chat on Zoom which I will pose. It is actually a very helpful big picture question. Then we can zoom back in. The question comes from Aliou Shabashi. They ask, can we stop cyber attacks in all sectors by investing a huge amount of funds for developing highly sophisticated software tools slash systems, or are there other means to at least minimise cyber attacks that harm countries? It is a big picture question, not just specific to the digital emblem. It helps us expand the conversation on cyber security more broadly. If any of the other speakers have thoughts on this, I will just quickly mention the Microsoft perspective. At Microsoft, we talk about five specific actions which are recommended that are taken. One, this is true for individuals and systems administrators, is to apply multi-factor authentication. I know that can sometimes seem very annoying, but it does make an enormous difference, as studies have shown. Secondly, apply zero-trust principles, that is specific to systems administrators. Extend detection and anti-malware software and solutions. Keep up to date, in other words, patch systems and use the latest available versions of software and protect data, ideally through encryption. Studies have shown that 99% of cyber attacks can be stopped by those basic cyber hygiene activities. I would also encourage tech and telco companies to join the Cyber Security Tech Accord, which is a coalition of approximately 150 members who have committed to best practices and principles of responsible behaviour in cyberspace, as well as the Paris Call for Trust and Security in cyberspace, which actually applies to all sectors. It is the largest multi-stakeholder initiative to advance cyber resilience. I would encourage anyone to engage with Francesca’s organisation, the Cyber Peace Institute. Does anyone else have any thoughts on this? Francesca, I see your hand is up.

Francesca Bosco:
I was waiting for this moment. Because actually, when we worked on the cyber incident to trace our health, in full transparency, we started receiving many requests. like, can you do it also for the banking sector, for example? Can you do it also for other vital infrastructure? On purpose, we decided to focus on all civilian infrastructure. And so we started looking into that. So I get the point. So I’m talking here more about like understanding the full landscape. I’m not gonna go into the weeds, let’s say, of the definitions and let’s say the landscape of different laws and regulations that apply that are making also difficult, let’s say, to do some proper collection work. But let’s stick to our own experience and to answer to the question, would the funding be enough from a technical standpoint? And I spent all my life in cybersecurity. I would say, no, stopping, let’s say, cyber attacks worldwide, not possible. But on the mitigation side, indeed, there’s work that can be done. You mentioned, you started basically already answering, Michael, in mentioning, I mean, basic cyber hygiene. And to me, this should be kind of like the minimum requirements, let’s say, of all society education. But the sticking more in terms of like what the different stakeholders can do. I think there’s one basic point, which is full cooperation in terms of like information sharing. One of the challenges that we encountered, for example, in the cyber incident trace of health was to collect the data, analyze the data, and also share the data among the different partners. So information sharing is still a challenge. And there is one part which is also related to then how to transform the knowledge into, let’s say, palatable and understandable knowledge that can help the international community to advance the mitigation efforts, notably when it comes to, for example, accountability. But also I’m thinking in terms of like the active role that civil society organization or non-state actors, Michael, you mentioned the tech accord, for example, or civil society organization like us and like many other attendee, for example, the IGF and for sure in the room can play a role because they are the ones that are often either impacted or they are the last mile, let’s say, very close to the people that are impacted by cyber attacks. So to understand, again, the consequences and for potentially advancing knowledge for the mitigation efforts, we need to have this constant dialogue. And then the third part that we have not discussed so much about, but in the end, it’s also, I mean, the framing of the conversation, which is protecting the protectors, meaning sharing also defense resources, because there is one part which is the information sharing when it comes to the attacks, but then there is also, okay, so what we can do about it and therefore how we can mitigate. Enhancing cyber capacity building, there are different efforts in that regard. I would like to mention there is going to be a high level meeting in Ghana at the end of November, the Global Cyber Capacity Building Conference. I’m mentioning this because this goes also into the mitigation effort side and that there will be also one focus specifically on protection of critical infrastructure, both in, let’s say, I would say developed and in developing countries. But then also, again, sharing the knowledge, the good practices, and also sharing active, let’s say, defense initiatives. To this end, and considering the humanitarian context, we launched the Humanitarian Cyber Security Center, which is a sort of like umbrella platform by which we are collaborating with different entities exactly to go, I mean, hopefully to stop cyber attacks, but especially to mitigate the impact of cyber attacks specifically on humanitarian organizations, because they are the ones, again, that they are protecting society as a whole.

Moderator – Michael Karimian:
Thank you, Francesca. Tony, your hand is up.

Tony:
Yeah, I just wanted to, first, Michael, very much endorse your points about the importance of some basic cyber hygiene. Many, many of the kinds of attacks you see that are very damaging, we have the technology to mitigate, and it’s just not done. Having said that, I think we can’t count on a technology solution to these problems because some of the adversaries are so sophisticated, some of the targets are so valuable that there has to be more than a technical solution. And that’s one of the things that got us started down this path. We think there’s a lot of value to exposing malicious behavior and looking for collective action, which is one of the reasons why we’ve tied a lot of the mechanisms we’ve used specifically for the IHL application to general mechanisms available on the internet because IHL is very important but very limited to the humanitarian operations in conflict. So you wanna have a solution that works in that environment, but you’d like to be able to extend it under different authorities into other environments. And authorities could be legal authorities or it could just be ethical or norm-based behavior that says, we will be able to observe that there seems to be hostile activity against a hospital, not in conflict, a hospital or a public utility. And to do that, you have to make, you have to provide some more transparency so those who are interested in watching know what they’re seeing. And again, to do that globally and scalably, you have to tie it to the scalable infrastructure that’s in place. You can’t hope to do that sector by sector and still scale. And that’s one of our motivations to try to tie what we’re doing to the infrastructure that’s in place that can then be repurposed for these purposes. IHL, very good special case, but would not address, for example, ransomware at a hospital in peacetime. That’s not an IHL problem, but it’s very much an important problem that could be solved by looking for those same kinds of bad behaviors.

Moderator – Michael Karimian:
Tony, thank you very much. Again, in terms of questions in the room, please do approach the microphone, which is on that side to my right, if you’re looking at the screen. Yasmin, I believe you have a question, please.

Audience:
Hi, it’s a bit awkward to be standing in front of a microphone, but thank you very much for this very interesting and fascinating panel. I’m Yasmin, I’m a researcher at the UN Institute for Disarmament Research. So I do have a few questions, so I hope you bear with me. First on the question of offensive cyber capabilities that are being enhanced by AI. I know that there’s a lot of hype around it, but fact is that there will be cyber capabilities that are increasing in speed, even without automation and AI. And I was wondering how the digital emblem solutions would deal with issues surrounding the need for the emblem to be verifiable and in an authentic way, but at the same time, how do you deal with the increase of speed of the cyber capabilities that might not even take the time to verify the authenticity of these emblems, or they don’t even care about the emblems in a way. And second is my question of surrounding the appetite of states and sort of sub-state level organizations and agencies for these solutions. So obviously I’ve heard a lot about your efforts at socializing the idea, which I think is great, but at the same time, how much appetite do you see concretely at the moment and what sort of incentivization have worked so far? Because I saw, I think it was just yesterday, a couple of days ago, I saw an article about, for example, the activists in Russia and Ukraine who actually pledged to sort of lower, like de-scale the level of cyber operations that they’re conducting. But at the same time, how would you incentivize, for example, activists that are less organized in these groups to respect solutions such as the digital emblem? And yeah, I think that’s about it, right? Because I’m aware of the limitations.

Moderator – Michael Karimian:
Yasmin, thank you very much. I know we have more questions, and so we can… good if we can have the questions bunched together and then allow the panelists to respond in whatever makes most sense for them. So another question

Audience:
please. Sure, so hello my name is Glyn Glasser. I’m actually with the Syravese Institute. Hi Francesca. But we don’t work directly together so I’m not a plant. My question actually follows on quite well from this last one about incentives. I’m wondering given problems around attribution that Francesca mentioned, would you foresee kind of fewer state actors being motivated to respect the emblem given that there’s maybe an easier or higher probability that they could, the emblem could be violated without the attack being attributed to a state? That’s my question, thank you. Thank you. It looks like we have a third question. Hello, thank you very much everyone. This has been really interesting. I didn’t actually know about this proposal. I’m Jess Woodall and I work in policy and national security for Telstra, which is Australia’s incumbent ISP and telco provider. So it’s been really fascinating and I have a background in international relations so this really hit me. A couple of kind of observations and then a question. I think just to kind of add to what kind of Sparky was saying, I think there’s a real kind of need for this. Like we have excellent kind of visibility on the targeting in the Asia Pacific region given our kind of network and this is a real threat. This is stuff that is happening now. There’s hospitals being hit by nation-states that we can see kind of almost every day. So there’s, you know, from the outset say there’s a case for this and it’s really interesting. I think to kind of answer the question before my question, the first question, what I think you might say is like the malicious kind of criminal community is very self-regulating. So they will go after people who target people that they perceive as soft targets. Like they don’t like that amongst their own community. So whilst this is kind of primarily targeted at nation-states, you might even see that trickle-down impact within the criminal community itself. So yeah, I think that there might be broader kind of impacts than what you’ve even outlined here. On the kind of issue of validating kind of who is adhering to the emblem, because I’m a real kind of, you know, how do we implement this? This is great but what will it look like in reality? Like how do we roll it out? How do we do it? You could even look to ISPs because we can see, we have really good knowledge of who the key nation-states are that are operating in our jurisdictions, what their C2s are, what their infrastructure is. So if you were to implement something like this, you could reach out to kind of those organisations and be like, okay, is this actually being adhered to? Are people following these kind of rules? And we could give you kind of some insight, you know, is that happening or is that not happening? So yeah, my question is like, do you think that there’s kind of a role for, you know, ISPs and that kind of situation to help validate that people are adhering to, you know, an emblem type

Moderator – Michael Karimian:
scenario? Thank you. Thank you Jess, tremendously helpful. So just to briefly summarise there, we’ve had a question on how to deal with the implications of AI empowered attacks but also AI empowered defence, the appetite for states here and similarly how we can ensure that states respect the emblem. How do we think about knock-on consequences of the emblem and the role for ISPs? piece. We have approximately six minutes left. So if I could encourage our speakers to exercise some brevity, that would be great. Who would like to go first? Felix, I see your hand is up.

Felix Linker:
Yes, I hope I can be brief. I’ll do my best. So I actually would like to comment on all of the questions or parts of them. So in the context of the question regarding AI, it was like, how do we then even deal with attackers who might not even verify the emblem as authentic? And here I think it’s important to recontextualize the emblem. So the emblem is a mechanism that aims to reduce cyber attacks, but only by design from those people who verify it and pay respect to it. So I think it’s important in all discussion to focus just on these actors, because otherwise there is no point and there’s nothing we can do. Regarding the last question, I appreciate that the second question was already answered by the person asking the question themselves. A role that we were exploring for our design in general, not regarding ISPs, was because our design is so active, it functions like a heartbeat protocol, right? Emblems are just sent out regularly or not. We were wondering if monitors that regularly, but not too often, check whether these emblems are actually sent out to be able to attest, for example, to other people. I mean, you say you didn’t see the emblem, but look, we saw how it was sent out. It was not dropped. I’ve never thought of ISPs taking this role, but it could be one of the possible

Moderator – Michael Karimian:
roles, yeah. Thank you. Thank you, Felix. Four minutes remaining. Who would like to go next? Mauro?

Mauro Vignati:
Yeah, probably on the non-state actor and the incentive for the state actor to respect the emblem. So from the state’s perspective, there is a legislation that they signed, or other conventions, so they should comply with the Geneva Conventions if they’re going to sign this amendment or the new protocol. So they are bind by law. Knowing that inside the space you can be a little bit more anonymous than the physical one when you do operation, it’s one thing. We have to test the emblem when it’s going to be out there. But we tend to think that countries that are respecting the physical emblem will also be in respect of the digital one. Another story is about the non-state actor. So we published a couple of days ago in the European Journal of International Law an article about eight rules that non-state actors should respect. Those are not new rules. So some newspaper thought that we are doing a new Geneva Convention or new commandments in this respect. Those are just rules based on IHL, so rooted on IHL, and we call non-state actors to respect IHL. We formulate in a little bit new way because of the recent conflicts, but those rules are rooted in IHL. So what is the goal is to talk to, through the publication of this rule, to talk to those non-state actor and to ask them to respect IHL and not to attack civilian objects and not to attack civilian people and so on and so on. So you can find this on our blog and on the European Journal. So through this work we are doing, we are teaching those people what is IHL, what is the respect of IHL, and then an infringement of IHL could be considered as a war crime. So this is what we try to do. We do in the physical space with armed forces and now we try to do also on a digital space, knowing that the people in the digital space are physically somewhere. So that’s the goal. Thank you Mauro. Two minutes

Moderator – Michael Karimian:
remaining. Would anyone like to be the final speaker for this session? If not, then… Sure, I’ll help to wrap up. You don’t need me to reiterate the significance or importance of protecting medical facilities and humanitarian organizations. We know that. I think this session has helped demonstrate how we further help those sectors to be protected. But of course, as we’ve also discussed, technical solutions are not enough. We need a broad range of multidimensional solutions involving many, many actors. And so I hope that those of you here who have joined us in the room or online have found that this has been relevant to your work and that you can also contribute in ways that are necessary. Of course, Mara will be here. And of course, feel free to email or connect to any one of us if it is necessary to do so. I think we clearly need to have more collaboration. But also, there’s a space for more research and more advocacy on these matters as well. This session alone doesn’t achieve all those goals. But with that, I’d like to thank our great speakers for what I hope has been an interesting session and thank our attendees as well for their tremendous engagement and questions. Thank you all very much.

Audience

Speech speed

189 words per minute

Speech length

931 words

Speech time

296 secs

Felix Linker

Speech speed

163 words per minute

Speech length

1710 words

Speech time

631 secs

Francesca Bosco

Speech speed

151 words per minute

Speech length

2487 words

Speech time

989 secs

Koichiro Komiyama

Speech speed

102 words per minute

Speech length

677 words

Speech time

397 secs

Mauro Vignati

Speech speed

165 words per minute

Speech length

2236 words

Speech time

811 secs

Moderator – Michael Karimian

Speech speed

186 words per minute

Speech length

2465 words

Speech time

796 secs

Tony

Speech speed

168 words per minute

Speech length

3499 words

Speech time

1253 secs

Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Martin Wimmer

The analysis explores various perspectives on digital transformation, sustainability, and the environmental impact of technology. One speaker emphasises the need for a human-centric approach to digital transformation, focusing on improving individuals’ lives and preserving the integrity of the Earth. They draw on the metaphor of the Japanese rock garden to describe our relationship with technology. Additionally, they highlight the importance of considering sustainable development goals and respecting human rights in the use of technology.

Another speaker argues that digitalisation and technology should promote sustainable development goals and uphold human rights. They point out that the German development policy supports the realisation of human rights, protection of climate and biodiversity, gender equality, fair supply chains, and other important aspects. They propose that a just transition to sustainable economies requires a nurturing approach rather than exploitative practices, drawing parallels with being a “gardener.”

However, concerns are raised about the environmental damage caused by artificial intelligence (AI). The negative sentiment towards AI’s impact on the environment is highlighted, suggesting that we are currently in a state of repair. Similarly, the negative sentiment towards the industry’s lack of concern for the environmental impact of their activities is expressed. The argument is made that industry needs to consider the environmental impact, aligning with the sustainable development goals related to responsible consumption and production.

The analysis also addresses the lag in legislation and regulation related to technology. The negative sentiment is expressed, stating that legislation and regulation are often implemented too late. The need for learning and better preparedness for future technologies is emphasised, as well as the positive sentiment towards gaining knowledge from the mistakes of the past.

The role of civil society and non-governmental organisations (NGOs) in exerting pressure is highlighted as a means to drive change. The positive sentiment towards the pressure from civil society and NGOs is expressed, suggesting that their involvement is crucial in advancing sustainability and human rights.

The transformation of the internet is discussed, with references to its evolution from interconnected networks to the oldest among digital technological artifacts. The neutral sentiment is expressed towards the internet, implying that it can neither be deemed good nor bad. Instead, the focus is on the internet’s role as a foundation for various digital technologies, with artificial intelligence being considered the most recent incarnation.

Overall, the analysis highlights the importance of considering sustainability, human rights, and the environment in digital transformation and technological advancements. It also underscores the need for a human-centric approach, better industry practices, improved legislation and regulation, preparedness for future technologies, and the involvement of civil society and NGOs in driving positive change. The varying perspectives shed light on the different aspects and challenges associated with digital transformation and its impact on society and the environment.

Audience

The analysis explores different perspectives on technology development, highlighting concerns, and advocates for a proactive approach. The concerns revolve around the necessity and impact of new technologies, with a particular focus on the harms and risks faced by certain communities. It is noted that significant investments are being made in technology development, but there is a need to address the potential negative consequences associated with these advancements.

One argument raised is the need to rethink the ideology and narrative of growth and development. There is a call to move away from the traditional approach and consider alternative ways of achieving progress. The emphasis is on the importance of responsible consumption and production, as well as considering the long-term sustainability of new technologies.

Another perspective suggests that countries from the Global South are not prioritising sustainability and climate protection over digitalisation. It is argued that these nations should focus on addressing environmental concerns and ensure that technological advancements align with sustainable development goals. This observation highlights the need for a balanced approach to technology adoption and an emphasis on considering the environmental impacts.

The analysis also highlights the existing digital divide, with the most advanced centres of research and development and influential companies predominantly located outside the Global South. This observation points to the power dynamics in the technology sector, indicating that decision-making and agenda-setting are often controlled by entities outside the Global South. This imbalance calls for efforts to bridge the digital divide and empower the Global South to have a greater say in shaping the technological landscape.

In conclusion, the analysis presents a range of perspectives on technology development. It underscores concerns regarding the impact of new technologies, calls for a re-evaluation of growth narratives, emphasises the need to prioritise sustainability, and highlights the inequality in the technology sector. The analysis also suggests that a proactive approach is necessary to address the challenges and potential negative consequences associated with technology development. Overall, it provides valuable insights into the complexities of technology’s role in society and the need for a more balanced and responsible approach.

Siriwat Chhem

This analysis examines the challenges and progress of sustainable AI in Cambodia. Cambodia has experienced impressive economic growth, with an annual GDP growth rate of 7% over the past 20 years. The country also benefits from a young population, with two-thirds under the age of 30. The availability of affordable mobile data and Wi-Fi has accelerated digitisation in Cambodia. Moreover, Cambodia has bypassed card payments and adopted mobile payments directly.

However, Cambodia currently lacks specific policies on AI and sustainable AI. The country is learning from regional models and others’ mistakes to develop its own AI framework. Civil society, represented by AVI Asian Vision Institute, plays a crucial role in Cambodia’s sustainable AI development by providing policy research and capacity building in the digital economy. The institute also focuses on Cambodia’s role as a small state in global governance.

Efficiency evaluation of AI tools and platforms is important as the misconception that AI can solve everything comes at a high cost and can create more problems. Long-term partnerships and continuous engagement are essential in addressing global issues related to AI and sustainability. However, there is a challenge of lack of follow-up and building on discussed points after high-level international conferences.

AI and sustainability are long-term journeys that require careful legislation and policy development. Backtracking or catching up from a regulatory standpoint is difficult due to the established nature of AI and sustainability. It is crucial to consider the broader implications of AI beyond just the technology itself.

In conclusion, Cambodia needs comprehensive policies on sustainable AI while capitalising on its progress in digitisation. Civil society, particularly AVI Asian Vision Institute, plays a vital role in advancing the digital economy. Evaluating the efficiency of AI tools, advocating for long-term partnerships, and focusing on sustainable solutions are crucial for sustainable AI in Cambodia.

Robert Opp

Digitalization and climate change are identified as the biggest global mega-trends. Developing countries bear a disproportionate burden of climate change and face challenges in terms of digitalization. Although digitalization presents the opportunity for positive action against climate change, it is also contributing to carbon emissions.

Environmental regulations and governance should not be sidelined in the pursuit of rapid digitalization. It is important that countries prioritize reducing data centre inefficiency and addressing the issue of e-waste. The global north, as a major contributor to technology development, has a responsibility to ensure that the environmental impact of these technologies is minimized.

Forming alliances in global digital governance is crucial. Initiatives such as the Coalition for Digital Environmental Sustainability (COBE) and the AI for the Planet Alliance aim to foster political alignment and promote sustainable approaches in the digital sphere. These alliances recognize the importance of involving diverse stakeholders including the private sector, civil society, and governments.

The value of local digital ecosystems and capacity building is emphasized for addressing sustainability issues. The global pattern of AI systems often lacks representation and diversity, and local innovators may struggle with financing, skillsets, and access to tools for building locally relevant systems. Strengthening local digital ecosystems can lead to fresh ideas and innovative solutions for sustainability.

Concerns are raised about the lack of representation and diversity in AI systems, particularly generative AI. The underlying data, or lack thereof, and the training processes contribute to this issue. It is important to address this lack of diversity to ensure that AI systems are fair, inclusive, and do not perpetuate biases or discrimination.

Developing countries may face challenges in prioritising environmental issues due to limited resources. However, it is important to recognise that the current pattern of environmental issues was created primarily by countries of the global north. It is crucial for these countries to take responsibility and work towards mitigating their impact on the environment.

Advising country partners to consider environmental implications in digitalization is a key recommendation. Technology should serve people and the planet, rather than exploiting or harming them. The process of digital inclusion and transformation should continue while not forgetting the importance of environmental considerations.

In conclusion, the extended analysis highlights the need for a balanced approach to digitalization and climate change. Environmental regulations and governance should not be overlooked, and alliances in global digital governance are crucial for promoting sustainability. The importance of local digital ecosystems, diversity in AI systems, and capacity building is emphasized. Furthermore, the responsibility for environmental issues should be acknowledged and addressed by countries of the global north. Ultimately, technology should be used as a tool to benefit both people and the planet.

Moderator – Karlina Octaviany

The IGF 2023 Open Forum 37 focused on the topic of sustainable development in relation to ICT technologies, with a particular focus on artificial intelligence (AI). The discussion aimed to address the ecological and social risks associated with the rapid digital transformation.

The panel of speakers included representatives from diverse organisations, such as the German Federal Ministry for Economic Cooperation and Development, UNDP, Mozilla Corporation, ITU, and the Asian Vision Institute. These experts shared valuable insights and examples of initiatives aiming to integrate sustainability in ICT technologies and global digital governance, specifically focusing on AI.

One important aspect highlighted during the forum was the need to limit the ecological impact of digital technologies. The panelists emphasised the growing contribution of digital transformation to greenhouse gas emissions and stressed the importance of ensuring sustainable AI development and deployment. They discussed the need for sustainable aspects to be considered in the development and deployment of digital technologies, including AI, and highlighted the role of digital transformation in addressing the planetary limits of AI.

The speakers discussed various options for action to promote the sustainable development of ICT sectors and technologies, with a specific focus on AI. They proposed measures such as the development and adoption of green ICT standards to support governments and stakeholders in developing sustainable and circular ICT systems. Examples were shared to illustrate how these standards could contribute to reducing ecological impacts and fostering sustainable practices.

Another key topic of discussion was the role of civil society and business in promoting sustainable AI. The panelists discussed the specific challenges faced by communities in Africa and Cambodia in adopting and benefiting from AI technologies sustainably. They highlighted the importance of including diverse perspectives and ensuring that the benefits of AI are accessible to all members of society.

Transparency and measurement were also highlighted as crucial factors in achieving sustainable digitalisation. The need to avoid the risk of greenwashing, where companies make false or exaggerated claims about their environmental practices, was emphasised. The discussion emphasised the importance of accurate measurement and reporting frameworks to assess the ecological impact of digital technologies and ensure genuine sustainability efforts.

The forum concluded with closing statements from each of the speakers, summarising the key points raised during the discussion. There was an overall agreement on the significance of integrating sustainability in ICT technologies and global digital governance, particularly in the context of AI. The forum provided a platform for meaningful dialogue and collaboration among stakeholders to drive positive change towards a more sustainable and inclusive digital future.

Noam Kantor

Businesses have a crucial role in sustainable AI by investing in environment-friendly partnerships. This involves seeking out and investing in or partnering with organizations that mitigate the climate emergency. Tech companies should also consider the ethical standpoint of their investments. Making products more efficient and sustainable is another important aspect of sustainability. Mozilla, for example, allows developers using Firefox developer tools to track the carbon emissions of their software. Civil society plays a significant role in educating the public about the climate impacts of technologies like AI. In Africa, sustainable technological development faces challenges such as limited funding and finance. However, initiatives like Mozilla’s Africa Emrati Project aim to address these barriers. Transparency is vital in sustainability, and digital companies should develop a transparent look at their environmental impacts. Tech regulators also have a crucial role in enforcing against deceptive greenwashing claims. Making sustainability part of product development can drive sustainable digitalization. Overall, businesses, civil society, tech regulators, and individuals all have important roles to play in promoting sustainable practices in the digital age.

Atsuko Okuda

The analysis highlights the need for greener AI and ICT development to address their negative impact on the environment. The greenhouse gas emissions generated by top telecom companies were estimated to be 260 million tons of carbon dioxide equivalent in 2021. This calls for urgent action to mitigate the environmental impact of these industries.

However, digital transformation shouldn’t be abandoned; instead, it should take environmental considerations into account. AI can play a crucial role in enhancing green transformation and weather forecasting. For example, AI can improve the predictability of demand and supply for renewable energy across a distributed grid, promoting sustainable energy practices. Additionally, AI can enhance weather forecasting, which has implications for climate action.

Another concerning issue is the significant amount of e-waste generated due to the increase in internet users. It is estimated that over 70 million tons of e-waste will be generated annually by 2023. Efficient e-waste management practices, including recycling to extract critical raw materials and promote a circular economy, are urgently needed.

Standardization and recommendations for environmental performance and e-waste management are crucial to ensure all stakeholders work towards common environmental goals.

Raising awareness among wider societal groups about the environmental impact of AI and ICT is crucial. The International Telecommunication Union (ITU) is implementing an AI project to build capacity and awareness among different stakeholders. This inclusive approach enables diverse perspectives to be considered in finding solutions to environmental challenges.

The ITU is also evaluating the environmental resilience and performance of data centers, aiming to improve their sustainability.

While AI technology offers opportunities, it should be integrated with environmental considerations to minimize negative impacts.

Addressing e-waste management requires collaboration with small and medium-sized enterprises (SMEs). An area office and innovation center in Delhi is working with SMEs and businesses in India to tackle e-waste management challenges.

Policy and regulatory mechanisms play a significant role in addressing the e-waste issue, ensuring producers take responsibility for proper e-waste management, even if they are not located in the same country as end-users.

Furthermore, proper e-waste disposal practices are essential to prevent environmental and ocean pollution.

Digital inclusion and transformation are crucial for global development. However, environmental concerns must be considered alongside these goals. Approximately 2.6 billion people are still unconnected, highlighting the digital divide. Bridging this gap while incorporating environmental considerations is essential.

To summarize, addressing the negative impact of AI and ICT on the environment requires greener development practices. Key areas of concern include greenhouse gas emissions, e-waste generation, and the digital divide. Incorporating environmental considerations into digital transformation, promoting proper e-waste management and recycling, raising awareness, and implementing policy and regulatory mechanisms are vital steps towards a sustainable future.

Session transcript

Moderator – Karlina Octaviany:
Hello, everyone. Thank you for joining the IGF 2023 Open Forum 37, Planetary Limits of AI, Governance for Just Digitalisation. I think all of the speakers are here. Thank you, everyone, for joining on-site and also online. We are broadcasting this event also hybrid, so I think we should gather a lot of insights and also a good balance of questions online and on-site. Thank you for coming here. My name is Karlina Oktaviani. I’m Artificial Intelligence Advisor for Fair Forward Indonesia and also Digital Transformation Centre Indonesia, Global Initiatives Dedicated to Open and Sustainable Development and Applications for Artificial Intelligence in Africa and Asian Countries. On behalf of German Federal Ministry for Economic Corporations and Development, BMZ, implemented by GIZ, I will be your moderator for today. The session will run with discussion led by the moderator and also some speakers will send also a presentation and later we have a question and answer session, so please be prepared with your questions and if you have an opinion or response, we also welcome that and we shall begin the session. The digital transformation increasingly contributes to the greenhouse gas emissions. To ensure sustainable artificial intelligence or AI, there’s a need to limit the ecological and social risk. How can we ensure that sustainable aspects are considered on the development and deployment of digital technology, such as AI, and how we can form the basis of digital transformation? In this open forum, we will discuss options for action for the sustainable development for ICT and of the ICT sectors and technologies, especially for AI. I will introduce the panelists for this session. For impulse statement, we have Martin Wimmer here, Director General Development Policy Issues, German Federal Ministry for Economic Corporation and Development, BMZ. We have on that side Noam Kether, Senior Public Policy and Government Relation Analyst of Mozilla Corporation. We have Robert Opp, Chief Digital Officer of UNDP, and joining online we have Atsuko Okuda, Regional Director, International Telecommunication Union or ITU, Regional Office for Asia and the Pacific. Also joining online, we have Chem Siruwat, Director of Center for Inclusive Digital Economy at the Asian Vision Institute and Advisor to the Council for the Development of Cambodia. To begin, please welcome impulse statement from Mr. Martin Wimmer, Director of General Development Policy Issues, German Federal Ministry for Economic Corporation and Development, BMZ. Please welcome. Give the warm welcome. We can clap your hands.

Martin Wimmer:
Thank you. Yesterday morning, I went to Ryoen-Chi. This World Heritage Site in Kyoto and yours is one of the most inspiring gardens ever built by mankind since the 15th century. It is a rock garden. Basically, it consists of 250 square meters of flat gray gravel and five islands with rocks on it. It is rectangular like a screen. The gravel representing dots. You see, it can mean anything you come up with during meditating there. And it is a metaphor for the technological design, the shaping of nature and the current five hype digital technologies, AI, quantum computing, whatever everyone is crazy about. It is a metaphor for the millions of websites in the internet and the five platforms that stand out. It is a metaphor for all the millions of users and the five founders who get all the attention and money. It encourages thinking out of the box. And whether you think the digital transformation leads to good or bad, the lesson you get from the rock garden at Ryoen-Chi is that the more you focus on the five outstanding highlights, the more you watch the rocks that steal the limelight, the more your attention will shift to the gravel. If you look long enough, if you think deep enough, it’s the gravel that makes the rocks shine. There are only 15 rocks and millions of pebbles there, but the task is to leave no one behind. For our discussion today, this could mean to emphasize the importance of a human-centric perspective. What does the big platform, the new technology, the great solution, the fascinating vision of one of our outstanding speakers mean for the people who do not stand out and do not get all of the attention at first sight? The poor, the children, women, people with disabilities, LGBTI+, people in the global south, oppressed people, indigenous people, victims of terrorism and war. You don’t need to be a sand master. It’s just common sense. Whether you are a gardener or a coder, whether you use a shovel or a server for your work, using technology, data centers, AI to change the world, nature, societies, human interaction should never be for technology’s sake, but for improving the lives of every individual living with us on this planet, and to secure the integrity of this one, our Earth, which translates into safe energy, safe water, safe resources. Don’t believe in growth. Don’t fool consumption. Don’t produce waste, which, to be clear, is the opposite of what the digital economy does most of the time, most of them. If you’re serious about carbon neutrality and a just transition of our economies to sustainable economies, we have to act as a gardener. Respect, tend, nurture, not exploit data centers. Do what the rock garden does. Remain within given boundaries. That’s why the German development policy supports the global realization of human rights, the fight against hunger and poverty, the protection of the climate and biodiversity, health and education, gender equality, fair supply chains, fair working conditions, the democratic, social, ecological, feminist, inclusive use of digitalization and technologies transfer to promote sustainable development goals worldwide. Thank you.

Moderator – Karlina Octaviany:
Give your own welcome. Please clap for Mr. Wimmer. Thank you, Mr. Wimmer. I really like the analogy of making it as an ecosystem that grows everyone’s in the AI. So, let’s begin the session. I will remind you that this is an open forum, so I will encourage people and invite people to prepare your questions, your response, your opinion. If you have any points that you want to discuss, we’re open to that. And first, we’ll go to UNDP, Robert Off, Chief Digital Officer of UNDP. So, let me ask the first question. How can we form broader efforts to integrate sustainability in ICT technologies and global digital governance, including AI? Thank you so much for having me at

Robert Opp:
the panel. Thank you, Martin, for the poetry. I knew you were going to deliver something inspirational, and you’re absolutely right about the boundaries. Couldn’t agree more. I think I would like to start with just a general reflection on the issues that we’re talking about here. Digitalization and climate, these are quite possibly the biggest mega trends that we have globally right now. They are changing everything about the world, but they’re doing it disproportionately. And so, we know that a disproportionate burden of climate change is borne by developing countries. We know that digitalization is happening at different rates globally, and developing countries are at a disadvantage when it comes to the speed of digitalization or the technology generation and things. And then, between these two concepts, there is a tremendous interaction that goes, it’s a bi-directional. So, on the one hand, digitalization presents the possibility to take dramatic, positive action against climate change. On the other hand, we know that digitalization is driving carbon emissions. It’s also contaminating soil for the extractive industries that have grown up around building chips and technology platforms, the rare earth minerals and so on. And it’s even the data center techniques of using cooling water that is not a closed-loop system, can contaminate water sources and things. So, there’s a really interesting, I mean interesting, important interaction, bi-directional interaction between the concepts. I think that one of the things from a UNDP perspective that we really work with, when we work with countries worldwide on their digital transformation, and we’re engaged, well, we have digital programs in 125 countries. We are engaged in about 50 of those countries on questions of national digital transformation. And I think that our partners are in the developing world, but they, like pretty much most countries, tend to put some of the environmental regulatory and important governance discussions on the back burner in the favor of quick digitalization. And so, one of the things that we really try to do in our approach when we work with a country, and we look at their readiness for digital transformation, or the readiness for artificial intelligence, because we do that kind of assessment as well, we try to place those questions centrally. And it’s about putting people and their rights, their right to, for economic and social rights to development, but also the rights for the environment. And we put the questions in front of countries. If you’re using data centers, are you doing that in a green way? Have you looked at optimization for efficiency? Have you looked at the carbon footprint of the digital change you’re making? Are you transparently disclosing environmental impact of technologies that you’re adopting? And like I said, our countries, our partners, are developing countries, but I think every country in the world needs to put this as a central concern, particularly those who are driving the technologies. And I think the last thing I would say is simply that if we look toward what it’s going to take for global action in this, it really is going to be that this has to become the norm, so that these are central questions. And going back to that point about disproportionate impact, I really think that we need to send the signal in the global north that is developing a lot of these technologies, that we must find ways to ensure that the environmental impact, the greenhouse gas footprint of the technologies that are being used, is seen as a priority in terms of data center efficiency, reduced e-waste, reduced contamination, and so on. So those are just a few initial thoughts around that.

Moderator – Karlina Octaviany:
Thank you so much. Give a warm welcome to Israel. And next we’re heading online. Asko Okudad, Regional Director of International Telecommunication Union, or ITU, Regional Office for Asia and the Pacific. Ms. Okudad will present a presentation on examples of green ICT standards, how can they support governments and stakeholders to develop sustainable and circular ICT, including AI. Asko, if you’re ready, you can start. Give a warm welcome to Ms.

Atsuko Okuda:
Asko. Thank you very much for giving… Thank you. First of all, I would like to thank the organizer to invite ITU to this very important meeting, and I believe that Robert also shared that the topic is very timely. We should perhaps think about our action in terms of how to ensure that the AI development, as well as ICT development, are greener. And I have a few statistics that I would like to show from the recent studies. Let me start with the GDPT and rapid rollout of AI solutions globally. And I’m sure that, you know, all the participants have been using or experimented the use of generative AI, such as chatGPT, and the power of the solutions that are in front of us. I just want to share with you, with the participants, that there are many interesting and innovative use of chatGPT. One of our ITU senior officials got married recently, and I believe that he asked chatGPT to write his marriage vow. So I hope that that was successful. But there are increasingly very interesting and widening use of chatGPT in our social life, as well as in our workplace. Now, the question, perhaps, which is very relevant to this session, is the environmental impact of an increasing use of AI, because the tool itself is not material, in a way. And it is very difficult to quantify the environmental impacts. But today, I would like to share with you the two aspects. One is the electricity consumption, and second, the green gas emission in this presentation. So as you see on the screen, I hope that you are seeing the flipping slides. Of course, the increased use of AI is supported by the increasing the transmission of data. And those data are stored, as Robert mentioned earlier, in the data center, carried by different means of telecommunication. Now, that data center, as you know, consumes lots of electricity. And as you may have heard in the other sessions, there are significant progress in making data center energy efficient. However, one study still suggests that the training of AI solutions would require 3.5 million liter of water to cool the facilities itself for the computing. And additionally, there is a study that in terms of the greenhouse gas emission, the top most telecom companies estimated to have produced 260 million tons of carbon dioxide equivalent in 2021. So there are certainly benefits, but there are some environmental impacts that we have to consider. So because of these two aspects of digital transformation and the need to green, some scholars coined the phenomena as the twin green and digital transformation, which means that the digital transformation should take into account the environmental aspects. And AI can certainly enhance many different aspects of the twin green and digital transformation. For example, AI can enhance the predictability of demand supply for renewables across a distributed grid. And of course, as you know, there are benefits to improve weather forecasting by incorporating more of the real world systems in calculations. So the question I believe is the balance that we must find between these two, the green part and the transition part, and to get best out of the twin transition. Now, coming back to the data center, there are two, as I mentioned, the components in terms of the data center operation, there is a cooling part that will be required and these are significant and largest, the energy loss in the facility and the cooling replacement for water includes the refrigerant, which could contain harmful chemicals. But in addition to that, there is globally increasing data traffic and that is generated from low and middle income countries because they are now investing more in the storage and hosting solutions to meet the increasing demand of the internet users, which are increasing in these economies. And that will require more data centers in these locations and that may consume more electricity according to the latest statistics. Now, that’s the reason why we, ITU, have been working with partners such as GIZ to ensure that the green aspects are integrated into this digital transition and digital transformation. And one entry point to ensure that is the public procurement, to make sure that in the process of procurement to establish data center or improve data centers, the green and environmental aspects are considered and taken into account. So another entry point to ensure the environmental aspects is the e-waste management and to manage the critical raw materials. As you know, there are more internet users globally, mainly in middle and low income countries, which means that there are more devices for people to connect to the internet. And by 2023, over 70 million tons of e-waste are expected to be generated annually. And as you may see on the screen, it is estimated that the storage of the expected 2025 global data sphere alone would require up to 80 kilotons of neodymium, which is about 120 times the EU 2020 demand for this material. And at the same time, the critical raw materials can be extracted from this process of recycling if we do it properly. And we hope that the member countries as well as industry, academia, and in partnership with other stakeholders, we can create a circular economy to ensure that the e-waste are discarded safely. At the same time, that will regenerate and that will recover the critical raw materials. And in addition, ITU, as you know, have been working on the standardization and recommendations to ensure that best practices are applied on these critical aspects of environments and the environmental performance. And I hope that today’s discussion will shed light on some of these topics, including the green gas emission as well as the e-waste and data center management. And finally, I just want to highlight one point that we should also perhaps encourage a wider societal groups to be aware and to be exposed to this discussion through their awareness raised on the benefits and challenges of AI solutions in their societies. And I would like to share with you as a second question to my intervention at ITU has been implementing an AI project to build capacity awareness across different stakeholder groups in these four countries supported by the government of Australia. So that was my last slide. Thank you very much. Back to you.

Moderator – Karlina Octaviany:
Thank you so much, Ashikol. Next, we go to Noam Kantor, Senior Public Policy and Government Relations Analyst of Mozilla Corporation. So first question is what’s the role of civil society and business and what are the specific challenges of communities on the Africa continent on sustainable AI?

Noam Kantor:
Thanks so much. Thanks for having me. Thanks everybody for coming. Regarding the role of business, the first thing I like to do is zoom out to consider tech companies as just companies. An example of what I mean is that one thing companies do just as companies is invest in other companies and in financial instruments. So I guess my question zero is, and really a primary question is, is a tech company investing in companies or partnering with companies that exacerbate the climate emergency? I would say that’s the bare minimum before you start thinking about the tech they’re implementing. I think another thing businesses can do is share best practices in terms of how to make products more efficient and sustainable. For example, this year we at Mozilla created a way for developers using our advanced Firefox developer tools to track the carbon emissions of the software they’re developing. So I recommend you go have a look at that if you’re interested. I think civil society can also play a really significant role in education, especially regarding the climate impacts of technologies such as AI. My Mozilla Foundation colleagues wrote a review recently of all the climate, of many of the climate impacts of the internet and AI usage, including how much energy is used when you’re on a Zoom call when the video’s on versus when the video’s off, which maybe you all have seen. And it’s been a really popular article. I think that shows that people wanna know the impacts of the tools that they’re using, but in the case of technology, that information can be really hard to find. As for specific challenges on the African continent, I have to say that I’m not on our Africa team. I do wanna tell you just a little bit about our work there, because I think the team does great work and I’m really proud of it. I just wanna echo also, first of all, the disproportionate impact of the climate emergency on the African continent, which was previously discussed. One thing we’ve done is in 2021, Mozilla partnered with AfriLabs to study the African innovation landscape. Across the continent, the study that we did with them found key innovation barriers, such as access to funding and finance, local policies to protect and enable the ecosystems, lack of access to affordable connectivity internet, which is a big one, and a general need to collaborate across the regions that they studied. Mozilla’s Africa Emrati Project is working to fight these barriers. I think many of the same barriers affect sustainable technological development in the area. But ultimately, we think that communities should be able to speak to and try to solve their own challenges with support from others. That’s why the Mozilla Technology Fund, which supports open source projects with promising approaches to solving pressing issues, recently announced that the theme for this year is AI and environmental justice. The fund will provide $300,000 to open source projects that leverage AI to make a positive impact on the environment and local communities. It includes one year of Mozilla mentorship and support, and awardees will likely be announced in early 2024.

Moderator – Karlina Octaviany:
Interesting. So if you want to, you are welcome. And if anyone wants to explore more, you can ask later about the finding. And we go online again to Tam Sriwat, Director of Center for Inclusive Digital Economy at the Asian Vision Institute and advisor to the Council for the Development of Cambodia. Tam, are you ready? Okay, he’s online. So what’s the role of civil society and business and what are the specific challenges faced by community in Cambodia and specifically on AI?

Siriwat Chhem:
Yes, thank you very much for your question. First of all, I would like to thank the organizers for inviting me on this panel of very esteemed and distinguished panelists. Just to let you know, I’ve been following the IGF for a very long time in my research. It’s always been a dream to come and attend, but unfortunately I wasn’t able to join in person, but at least I’ll be able to join the panel online. And so onto the question. Maybe I’ll start with the second part first, since we’re talking about specific challenges faced by Cambodia on sustainable AI. I think the first thing you think of when you think of Cambodia might not be related to technology or let alone sustainable AI, but maybe I can just share a little bit of the context. And so pre-COVID for the last 20 years, Cambodia had been experiencing 7% GDP growth annually, so developing extremely quickly. And I would say that within the last five years, Cambodia has gone through its own form of digital transformation. If you were to visit around five to 10 years ago, you would see that we predominantly use cash everywhere. Also making it more complicated, we are a dollarized economy, meaning that we are on dual currency, both with the USD and our local currency. And so basic things like going to the market or taking a tuk-tuk for transportation around, you would have to do these things very manually and having complications of converting currencies and so on. And basically what happened throughout the last few years, a very high digital adoption rate. We’ve been able to, let’s say leapfrog the era of using cards, credit cards, debit cards, and move straight into mobile payments, transfers and QR code payments. And so that the main reason I would say is because of Cambodia’s young population with two thirds of the population under the age of 30, a median age of 26. We have quite affordable mobile data and access to Wi-Fi within the urban population. This has allowed us to really move forward in terms of digital transformation. And so now if I can just go back to the question of the theme for today on sustainable AI, we face different types of challenges from the previous ones that were mentioned. Because we could say that we joined late to the game, our focus is really building it from the ground level up. And because we don’t have any legacy technology or any established longstanding institutions in terms of AI, we rely quite significantly on looking at the models of our regional partners, on looking at what’s being done successfully around the world, and also learning from others’ mistakes. So in terms of sustainable AI, we are, let’s say, building a strong foundation from the beginning. We don’t have any existing specific policies on AI or specific to sustainable AI. And so I think looking at regional models, what’s being done around and locally contextualizing to Cambodia’s situation is very important. And so if I could just elaborate a little bit more on the role of civil society, on behalf of our institute, on the AVI Asian Vision Institute, we are an independent think tank. And so what we’ve tried to do over the last four years in Cambodia is to provide policy research and also capacity building and training related to the digital economy. And so over the last four years, we’ve published two books, one of them on Cambodian cyberspace, another one on Cambodia’s emergent cyber diplomacy. So really giving an overview of the digital economy, what kind of role does Cambodia as a small state play in the frame of global governance? I know that will be onto the next theme and question, so I won’t talk too much about it. And so with that, I would like to close my opening remarks. Thank you.

Moderator – Karlina Octaviany:
Thank you so much. Give a warm welcome to Tim. Thank you. Thank you. Thank you. Thank you. Thank you. Okay, thank you. Okay, so I think we can move to the second round of questions. I come back to Mr. Robert Opp, what type of alliance for global digital governance are needed?

Robert Opp:
Okay. Hello, okay. No, thanks for that. I think all of these interventions so far have drawn attention to some form of angles that we’re talking about. There’s sort of private sector, there’s the civil society, the importance, there’s governments and so on, the importance of bringing together the stakeholders can’t be overemphasized. And of course, that’s what IGF is about. I think in this space, the biggest role for alliances is around alignment of purpose, alignment of intention. And I can just give a couple examples in this space of alliances that we’re involved in and that I think have some hope for the directions that we need to set globally with this. The first one is called COBE. which stands for Coalition for Digital Environmental Sustainability, and that is an initiative with the German Environment Agency, with the UN Environment Program, UNDP, the International Science Council and Future Earth. And actually recently Atsuko ITU has also joined CODES as one of the kind of core members. And CODES has engaged with over a thousand stakeholders in the last couple years that it’s existed, and it really is trying to get a few different things. One is around political alignment for the kind of these issues of the twin transitions. Then there’s sort of a set of initiatives around mitigating negative impact, and then there’s accelerating the innovations for efficiency. And so this is a kind of broad-based coalition, I would say, and there’s some action lines that are being developed now. And I think it really just highlights the importance of really coming together under common purpose. The second alliance, which is a little bit more focused on the topic at hand, is called the AI for the Planet Alliance. And that has been created by the Boston Consulting Group, UNDP and UNESCO, plus a coalition of startups called Startup Inside. And it’s a group, a kind of an odd group in a way, of players that are engaged in this issue as well, but specific to artificial intelligence. And it is really also about providing a platform where we can identify and promote innovations that are, again, driving innovations that can help us with environmental action, as well, and scaling them, as well as looking at ways to really encourage the players in the artificial intelligence space to adopt more efficient and more environmentally friendly, more sustainable approaches to their work. And these are, you know, again, things that are very multi-stakeholder in nature, open for participation of many. The organizations I mentioned are just the kind of spearhead organizers, but really open for all to be involved in. And that’s an open call for everyone who’s listening in today, as well. These can be found, I’m not going to give the websites, but they’re both, they can be googled and found online, and encourage everyone to participate. Thank you. And additional resources for our discussion

Moderator – Karlina Octaviany:
later on, you can also share. We go to Noam. So how can we move towards sustainable digitalization? Thanks. I want to talk about transparency first. I

Noam Kantor:
talked about it a little bit with the education bit before, but I do think we need a transparent look at the environmental impacts of tech tools, including AI. So, you know, sustainability reports are often a big tool towards transparency, but as we all know, there’s a spectrum of transparency when it comes to reporting. So I wanted to talk a little bit about what we do in our sustainability report. It’s maybe an example, because we hope that we’re leading the way. So my understanding is that per the greenhouse gas protocol, which is one of the reporting standards, we’re not required to calculate or report the product use emissions associated with using products like Firefox, Mozilla Hubs, and Pocket. But we want to lead by example. We want to support transparency by reporting the optional data. So we started doing it actually in 2019, and we’re hoping that it’ll encourage our peers to do the same. What we had to do, though, was we had to work with an external consultant, and we had to develop a brand new methodology, because no one had really developed a methodology for measuring the impacts of browsers, the environmental impacts of browsers. And, you know, we hope that it accounts for device emissions that can be reasonably attributed to the browser, so that, you know, it captures the work that we’re doing and what we control. So it’s possible and vital that companies report on this aspect of the work. I think the hope is that we’re showing that it’s possible, and we’re encouraging others to do so. And I think the hope is, if the impacts are too high, they should consider changing their product roadmap. Now, as I mentioned before, also related to Mozilla Developer Tools, it’s not just about the products that companies build, but also for customizable or open-source products. It’s about giving developers and users the ability to measure and reduce the emissions in the tools that they build. I also want to say that tech regulators sometimes have an interesting role to play in sustainable digitalization. A good example is the Federal Trade Commission in the United States, which is a primary, one of the primary tech regulators in the United States. But the FTC also enforces against deceptive greenwashing claims. So there’s an interesting nexus there. In fact, the FTC has just begun a once-in-a-decade update to its green guides related to deceptive environmental claims. And some commenters have specifically requested that they bolster their enforcement against certain misleading net-zero claims or sustainability claims. But there are limits to anti-greenwashing policies because they require deceptive representations in the first place. So, you know, they’re just one piece of the puzzle. But I thought that was just an interesting nexus of how different regulators can work together in the

Moderator – Karlina Octaviany:
space. Interesting. So measurement and also risk of greenwashing. Okay, we can go to the Q&A in this open forum. We welcome respectful, diverse questions and opinion. If you have any questions, please kindly raise your hand, introduce your name and organizations, and then mention your questions. Also, for participants online, I will also remind to please type your name, organization, and questions. We will select the questions to be read online. So I will give the opportunity for the people on site first. Is there any questions, opinion, or curiosity that you want to ask?

Audience:
Yeah, thank you everyone. My name is Bushree Badi. My question is around, I think much of the conversation has focused on the impacts after you start adopting these technologies or developing them, and I’m wondering like how much work is now being done to really think critically if these specific types of technologies are needed in the first place. Because it feels like we’re trying to mitigate risks that are already being, or like certain communities are being exposed to those harms and risks, and trying to kind of like put things back into the bag that shouldn’t have necessarily been implemented in the first place. And you see a lot of this type of development in places like Silicon Valley, where there’s a lot of investment that keeps going into the development of these technologies that are presented as solutions to really systematic problems that we’re facing, but fundamentally will fail to do so. And we know this as people who maybe work on this through a systematic lens or framework. So I’m wondering if you could speak to some of the work that’s being done there, because it feels like a lot of this is just responsive instead of being proactive in addressing these issues. Thank you. Interesting. We will go

Moderator – Karlina Octaviany:
around. It’s okay. Well, thank you. Well, first of all, great to be here. My name is

Audience:
José Renato, I’m from Brazil, and I have two questions actually. Maybe jumping a little bit upon her question, we started the session talking about the growth, about the possibility of thinking beyond this, let’s say, ideology, narrative, I don’t know how to put it, but of development, of growth. We use some of these terms here, so like what are the opportunities that we have to rethink this? Maybe, is there any other paradigm that we could focus on? And the second question, after, I unfortunately forgot the name of the UNDP representative. Robert, thank you very much. I apologize, I’m terrible with names. You mentioned about the role of countries from the Global South in this whole theme, and how they were sort of not prioritizing, at least as far as I could understand, the issue of like sustainability, climate protection, over the digitalization. But I would like to hear from you, and maybe if there are any other inputs would be also welcome. How is it, like, considering that we have all of this push towards digitalization, this, it is part of the whole imaginary of development, of how a development, developed economy should look like. What would be your take, considering that the most advanced search centers of research, of development, the companies that dictate most of the agenda, they’re outside of these territories. It’s like, how do you work with these countries? How do you, you could potentially work with them to some degree, either create an environment in which they can build upon, in which it’s not like, in which they’ll be, they’ll have the benefits of all of this, even when we consider that many nations who are advancing these technologies are not fulfilling these questions. So yeah, thank you so much.

Moderator – Karlina Octaviany:
Thank you so much for all the questions. So we can move to our panel, starting

Robert Opp:
with Robert. Sure, I can address particularly that that last question. In a phrase, the value of local digital ecosystems here is super important, and this is very relevant for AI. It’s relevant far beyond a sustainability question. The concern that I have, and anybody who’s spoken to me recently has heard this, because I say this over and over again, I am very concerned about the global pattern of rollout of AI systems, particularly generative AI at the moment, because I worry about the representation and diversity in technology, in the underlying data or lack of data, and in the training process as well. And I believe that one of the most important things that we can do is to look at the ways to build capacity for local digital ecosystems, so that local innovators who are, you know, innovators and entrepreneurs are everywhere, but they sometimes lack the ingredients, and you were talking about that before, Noam. They may lack the financing, they may lack the skill set or the access to skills, and they may lack the set of tools to compete globally, or not necessarily compete globally, but to actually build systems that are locally relevant, and that will actually work towards satisfying the needs of people locally, and the needs of those markets locally. And so I really think, and this will also I think benefit the sustainability agenda as well, the stronger the local digital ecosystems are in these countries around the world, the more I think we’re going to see innovative and fresh looks at how we can address the sustainability issue as well. So that would be my response to your question about, you know, the countries, and when I said countries are not necessarily prioritizing environmental issues, that’s not a criticism. That’s because developing countries have a lot on their plates right now, and need to, are desperately short of resources, and in a constrained environment where you’re trying to really think about where you’re going to put your scarce resources, it may not be the first instinct to put it into something like that, and that actually I think needs to, the light needs to be shone toward the countries of the global north, who’ve basically created this pattern, and they didn’t think about environmental concerns either. That’s why we have this issue. And so what we say is that going forward, as we work on digitalization in these countries, we advise our country partners to stay aware of the environmental considerations as part of their governance, to think about the policy and regulatory environment that needs to be there from the beginning, so that ultimately that will pay off down the road. Maybe I’ll let other panelists answer some of the

Noam Kantor:
other questions. You can go, Noam. Still on? Okay, I think probably I have the most to say on the first question, about, you know, question is when should we not implement technologies at all, given their risks and their benefits? I think it’s the golden question, and I guess I just want to talk about the ways that trust, the concepts of trustworthy AI, transparency in AI, and transparency in climate impacts, all kind of work together to create, you know, as ingredients to create, you know, hopefully responsibility here, which is that, you know, I think one of the challenges is that many, many of the products that you reference that might not be very effective relative to their risks, oftentimes people don’t know how to measure the effectiveness of those products, like that if we’re talking about an AI model, people don’t necessarily know how to talk about the robustness, the accuracy of the model, potential for bias, even though there’s been a lot of work on those things that sort of, both investors and the public and regulators are still learning, and will be learning for a long time, how to measure those things, and so I think the more that we can push on the side of trustworthy AI, the more it will be obvious to people what they’re weighing the environmental impacts against, right? If it’s obvious, you know, how trustworthy or how accurate a model is compared to what it’s claiming to do, then it’ll be more obvious, you know, is it worth it compared to the amount of energy we have to pay for, and then, you know, external effects that are impacting our

Moderator – Karlina Octaviany:
climate and economy. Thank you, we go to Martin. To your question, I would fully

Martin Wimmer:
agree, the damage is already done. AI is here, and we are only in repair mode once again, and the reason for that is the industry just doesn’t care about the environmental impact of their money-making, and legislation and regulation are way too late once again. All we can do is learn for the next technology that breaks through. We have to be better and faster, and we need the pressure from the civil society and the NGOs here. And then we go to online.

Moderator – Karlina Octaviany:
Atsuko, if you want to answer the question. Sure, thank you. I have two,

Atsuko Okuda:
maybe, examples where we can, you know, concretize and show concrete examples of how we can take into account the questions on AI benefits and challenges to the environment. One is the, you know, mainstreaming of greening questions. ITU has been working in the communication sector and digital technology for many years. And one of the questions we are increasingly receiving is to evaluate, for example, the resilience and performance of data centers. And we have conducted the assessments in a few countries in Asia and the Pacific. But in the process, we made sure that the environmental aspects and best practices applied in the process of assessment so that the recommendations include how to mitigate the negative impact on the environment. And I hope that there will be more of this integration of greening and environmental considerations in all the aspects of digital transformation and what we do. But I would also like to add the perhaps partnership that we can expand with the industry, especially small and medium-sized enterprises. And I want to give an example of e-waste management that I mentioned earlier, that increasingly there will be data that is generated through an increasing number of devices people are using. And in ITU, we have opened a new area office and innovation center in Delhi recently. And one of the topics that we are addressing with the association of SMEs and businesses in India is to encourage innovation and to make sure that the e-waste management and climate technologies are taken up and mainstreamed in the industry side so that we can make it as a successful and profitable business. And we hope that that will contribute to the circular economy. And I hope, I believe that more of these business models will be required now that AI is being rolled out very quickly. Thank you. Back to you. Thank you. We go to Cem.

Siriwat Chhem:
Yes, just for my final comment. Recently, about last week, I attended a workshop specializing in AI organized by the International Science Council. And so what we did, they invited AI experts from the Asia-Pacific region. And I would just like to share two of the outcomes from this full day discussion. And so the first point is on mindset. Currently, we have this mindset and mentality that AI should be the solution for everything. And this comes at very high costs, not only in terms of sustainability and environmental aspects, but even down to the efficiency of actually trying to solve a problem. And so what is happening is that now we’re starting to use AI to the extent that it creates more problems than it solves. And so the overall consensus from the workshop was that we should be extremely careful in evaluating and assessing how efficient AI tools and platforms and applications are being used, and whether it’s actually solving the problem more efficiently and effectively, and not in turn, creating more problems. And so the second part, which I would like to share, is on long-term partnerships. As I mentioned, we were in a room full of very, let’s say, qualified individuals from that field of expertise. And they shared that one of their challenges or the main problem is that when they convene together for high-level international conferences, or they have workshops or meetings, the time period leading up to the meeting, a lot of preparation and time is involved. All the stakeholders are engaged throughout the event. But the problem is that following up after meeting, not much is done to bring together all the important points that were discussed. So in terms of an extensive report, in terms of building long-term partnerships to build on what was discussed at those events, because addressing global issues in terms of AI and sustainability, it requires a lot of considerations. And these things cannot be solved in one day or in a one-week conference, but it really has to be taken many steps forward into the long term. So I would just like to conclude with that. Thank you. Thank you. We go to questions from

Moderator – Karlina Octaviany:
online audience. It’s Avis from Cameroon, from the Proto-JQVIS organization. One of the thorny problems in Africa remains the return of e-waste to producer. What binding mechanism can we put in place for its effectiveness? Anyone wants to answer from the panel?

Atsuko Okuda:
Ah, yeah. Atsuko, perhaps you can answer. Sure. Thank you. Thank you for this very important question. I have a question regarding the obligation of programs that Mr. Avis asked regarding the return of e-waste to producers. Of course, there are policy as well as regulatory mechanisms that could bring, but perhaps the, as I mentioned earlier in my example, perhaps that could be seen also as opportunities to work with startups as well as SMEs so that they can recycle the devices before, of course, e-waste. And perhaps that could be seen as one part of the circular economy. So I believe that, of course, returning the e-waste is one thing that could be, you know, mandated, but perhaps we can see more collaborative ways because the producers may or may not reside in your countries, right? So returning to the producer in some cases could be a challenge. So perhaps we can see it from a holistical and ecosystem point of view on what’s the best mechanism to make sure that the e-waste are not discarded in the environment and in the ocean. I’m not sure if we have sufficient time to answer this question, but I believe that this mechanism and how to do this is a very important and essential topic, I believe, for all of us.

Moderator – Karlina Octaviany:
Thank you. Back to you. Thank you, Asuko. So as your reminder, it’s already closing time for our open forum. So we’ll have a closing statement from each of the speakers. Perhaps we can go online first from Tim.

Siriwat Chhem:
Yes, thank you. So just back to our topic on AI and sustainability, I believe it is a long-term journey, as mentioned from our opening statement and all the panelists, that in certain cases they have already been established for a long period of time. And it’s difficult from the legislation and policy point of view to kind of backtrack or catch up for that matter. And so with that, I think rather than focusing too much on the technology, which is something that’s being done in the field of AI, we should focus more on the fundamentals, which are utility and also what are some of the implications. Because if we focus too much on the technology, we think that it’s a solution to everything rather than looking at the overall big picture and weighing out the pros and cons. So I would say that we should take a more big picture approach and looking into the long term rather than just focusing on solving immediately what we can do in the current state without

Moderator – Karlina Octaviany:
thinking too far ahead. Thank you. Thank you. Atsuko, you want to share closing remarks? Thank you.

Atsuko Okuda:
I want to also add a dimension on digital inclusion. As you know, according to the latest ITU estimates, 2.6 billion people are still unconnected. And I believe that this process of digital inclusion and digital transformation should continue so that those who need the digital technology and transformation can benefit from the technologies. But at the same time, I believe that we shouldn’t forget about the greening part and environmental considerations in the process. And I hope that this conversation will continue among all of us or in the expanding community globally so that we can make sure that we can mainstream the perspectives and considerations to the environment in the future in our effort to connect the unconnected and making the digital transformation sustainable. Thank you. Thank you, Atsuko. We go to Robert for closing.

Robert Opp:
I didn’t expect a closing statement, and I don’t have a closing statement, but I do have a couple thoughts. And actually, even these last couple thoughts that were offered about the digital divide and not focusing on technology, I think Cem is exactly right. The focus here should not be the technology. The focus should be on what best serves people and the planet. And I think that if we stay focused on what best serves people and the planet, you know, technology, we’re not going to stop the sort of innovation for commercialization process. But I think as we go forward in alignment around what needs to happen, we have to keep people, we have to make sure that technology is serving people, not the other way around. And it’s the same for the planet. We just, we can’t keep that cycle of the planet is here for the taking, for the purpose of technology rollout.

Noam Kantor:
It’s not about that. Thank you. I know it’s 2.31, so I’m between you all and your coffee. But yeah, this was fascinating. And I think what I’ve been able to see is efforts towards sustainable digitalization from code to cooperation on an international scale. How everyone in the policy stack, as it were, can make an impact from where they are. And it’s been great to learn about that. I hope you’ve also come away with the sense that better practices are possible in the tech space. And that, you know, there is a way to make progress on these goals, including when necessary, you know, not shipping certain products when it wouldn’t be responsible to do so. I don’t have a poem to end with, like we started with, which is sad, but probably something from Mary Oliver would be good, so you all can imagine that. Thank you. To Martin?

Martin Wimmer:
Yeah, interconnected networks, the internet, are a variable thing. They are something like Tupperware or color TV or punk rock. Ideas from the middle of the last century. People who were there at the beginning are very old now and have gray hair. The subtext of this conference, as I experience it, is to discuss what the digital transformation means to the internet. It’s old heroes, it’s old myth, old narratives, old governance structure. And while there’s still a community of people who believe in the value of the internet for internet’s sake, there might be a new generation out there who considers the internet to be just the oldest of many digital technological artifacts, AI being the most recent incarnation, which are not good or bad in itself. A matchstick firing global warming in the worst case scenario, or tools…

Atsuko Okuda

Speech speed

129 words per minute

Speech length

1924 words

Speech time

892 secs

Audience

Speech speed

170 words per minute

Speech length

552 words

Speech time

195 secs

Martin Wimmer

Speech speed

136 words per minute

Speech length

803 words

Speech time

354 secs

Moderator – Karlina Octaviany

Speech speed

135 words per minute

Speech length

1162 words

Speech time

517 secs

Noam Kantor

Speech speed

190 words per minute

Speech length

1596 words

Speech time

504 secs

Robert Opp

Speech speed

149 words per minute

Speech length

1857 words

Speech time

748 secs

Siriwat Chhem

Speech speed

184 words per minute

Speech length

1328 words

Speech time

434 secs