Futuring Peace in Northeast Asia in the Digital Era | IGF 2023 Open Forum #169


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

During a recent discussion, the importance of youth engagement in governance and politics was emphasised. Participants highlighted the need for young people to be aware of and involved in governance, as political decisions can have a significant impact on their lives. The argument put forward was that young people should strive for a better understanding of governance and actively advocate for more meaningful engagement in decision-making processes.

Another key point of discussion was the role of consistency and resilience in gaining influence and becoming opinion leaders. An example was shared about a CEO who started a company at a young age and, after 20 years of consistent hard work, became an influential leader in the industry. This highlighted the importance of persistence and unwavering dedication in achieving influence and becoming a respected voice in one’s field.

In terms of overcoming challenges and gaining acceptance, participants stressed the significance of collective dialogue and collaboration. It was emphasised that by engaging in conversation, working together, and accepting challenges, individuals and communities can effectively tackle obstacles and foster acceptance. This highlights the need for open and inclusive discussions where all voices are heard and valued.

The discussion also drew attention to the current trend of youth inclusion and the need to capitalise on this momentum through various initiatives. It was noted that there are already numerous programs and speaking engagement opportunities available that aim to involve and empower young people. It was suggested that further efforts should be made to maintain this momentum and create additional initiatives to sustain youth engagement and ensure their voices continue to be heard.

In summary, the discussion emphasised the importance of youth engagement in governance and politics, with a specific focus on understanding governance, advocating for meaningful involvement, maintaining consistency and resilience to gain influence, engaging in collective dialogue and collaboration, and leveraging the current trend of youth inclusion. These insights highlight the significance of empowering young people and recognising their role in shaping the future.

Yukako Ban

The analysis covers several topics related to the metaverse and its future implications. It begins by highlighting one of the main policy gaps for the metaverse: the lack of clear definitions and regulations. The metaverse is often described as the future of the internet, a network of virtual worlds blending the digital and physical realms. However, due to the absence of clear definitions and regulations, there is uncertainty about how it should be governed.

Moving on, the analysis discusses the potential benefits and risks associated with the metaverse. By 2026, a significant proportion of the population is expected to be engaged in the metaverse. To prevent issues such as hate speech, misinformation, and harms enabled by anonymity, better management and regulation are necessary. On the positive side, the metaverse has the potential for application in education and fostering intercultural dialogue. It can revolutionise the way we learn and interact globally, reducing the need for physical travel and potentially lowering CO2 emissions.

The analysis also emphasises the importance of considering Northeast Asia’s geopolitical tensions in relation to the metaverse. An unregulated metaverse could exacerbate existing conflicts and geopolitical tensions in the region. Given the region’s geopolitical importance and the animosity between nations, specific consideration must be given to Northeast Asia when shaping metaverse policies.

Regarding education, the analysis suggests that there is a need to explore the metaverse’s educational utility, as it remains largely unexplored. Currently, there is a lack of developed educational content, highlighting the importance of further research and investment in this area.

In terms of age diversity, the analysis highlights the different perspectives that the younger generation, known as digital natives, have on digital technology’s involvement in reality. Their viewpoints should be taken into account in policymaking processes. Similarly, the perspective of age diversity, especially in regard to internet governance, is lacking. Both the voices of the youth and the older generation should be considered to ensure a comprehensive approach.

Notably, the analysis touches on the demographic changes happening worldwide, with many countries leaning towards ageing societies. As a result, youth voices tend to be undermined. It argues that youth should have more access to decision-making tables and be part of larger discussions, breaking away from age-based segregation.

The analysis also highlights the significance of cross-border cooperation in the Northeast Asia region. Countries like China, Japan, and Korea already have extensive economic cooperation. In today’s globalised world, no single country can manufacture a product independently. Academic programs promoting cooperation also exist among these nations.

Furthermore, the analysis emphasises the role of technology, education, and capacity building in initiating cooperation. By focusing on these topics, peacebuilders can avoid political issues and foster citizen-level awareness and collaboration.

Cultural diversity and localization are also deemed crucial on a systemic level and in internet governance. While different cultures and values bring about diversity, fragmentation and division can arise. However, technology can help bridge language barriers and differences, promoting cooperation.

In conclusion, the analysis underscores the need for a comprehensive understanding and collaboration to navigate the challenges and opportunities associated with the metaverse and related issues. Clear definitions and regulations should be established. Age diversity should be considered in decision-making, and youth voices must be heard and included. Cross-border cooperation and dialogue among different generations are paramount. Additionally, technology, education, capacity building, and cultural diversity play significant roles in promoting collaboration. By addressing these aspects, we can work towards harnessing the full potential of the metaverse and achieving a more inclusive and sustainable future.

Linda Hjelle

In the meeting, Linda Hjelle, an Associate Political Affairs Officer at the UN Department of Political and Peacebuilding Affairs, was introduced. Linda provided insights into her involvement in various roles. Firstly, she mentioned being the program manager for the Futuring Peace in Northeast Asia project, which is aligned with SDG 16: Peace, Justice, and Strong Institutions. This demonstrates her dedication to promoting global peace and strengthening institutions for achieving justice.

Additionally, Linda stated that she is moderating the online discussions during the meeting. As an online moderator, she addresses questions from the online audience, ensuring informative and interactive discussions. This highlights her active involvement in engaging with a wider community.

Linda’s introduction as an Associate Political Affairs Officer at the UN Department of Political and Peacebuilding Affairs establishes her expertise in political affairs. Her role reflects her significant influence in shaping policies and strategies for peace and stability.

Overall, Linda’s active participation as a program manager, online moderator, and Associate Political Affairs Officer demonstrates her commitment to advancing UN initiatives. She works towards promoting peace, justice, and strong institutions while engaging with various stakeholders in meaningful discussions.

Ijun Kim

The “Futuring Peace in Northeast Asia” programme, organized and led by the United Nations Department of Political and Peacebuilding Affairs, aims to promote peace and stability in Northeast Asia. The programme is in line with the Youth Peace and Security Agenda and seeks to engage young people in discussing and shaping the future of the region.

The programme brought together young people from China, Japan, Mongolia, and the Republic of Korea to collectively discuss the future of Northeast Asia. This inclusive approach allowed for diverse perspectives to be shared and considered. The discussions were facilitated by UNESCO, which provided capacity building through a session known as the Futures Literacy Lab. This lab helped participants develop the skills to explore potential future scenarios and examine their implications.

One of the key proponents of foresight in the programme is Ijun Kim, who believes that foresight is a structured and systematic way of using ideas about the future to anticipate and better prepare for change. Kim emphasises the importance of wide participatory foresight tools, which engage a diverse group of people in discussions. The goal is to make the discussions interactive and to surface trends or signals that may not be immediately apparent.

As part of the programme, Kim proposed various policy avenues for realising the vision of a peaceful Northeast Asia. These include regional cooperation for education, focusing on cultural exchange to foster understanding and collaboration. Additionally, the establishment of a Northeast Asian Youth Parliament for climate change aims to involve young people in addressing environmental challenges. Furthermore, the promotion of digital literacy programmes through cross-sectoral partnerships is seen as essential for enabling young people to navigate the digital landscape effectively. The programme also emphasised the importance of consensus-based regulation and policy presentation.

The role of young people in governance and policy-making was also highlighted. It is crucial for young people to understand how governmental decisions can impact their daily lives. They are encouraged to advocate for more meaningful engagement and push for their voices to be heard in decision-making processes. Creating an intergenerational cooperation environment was identified as essential for fostering understanding and collaboration between different age groups.

The Internet Governance Forum was recognised for its contribution to shaping governance and peace-building. The involvement of young people in such forums was highly valued, and there was gratitude expressed for their active participation and contributions. Moreover, the integration of digital literacy and the concept of the metaverse into existing initiatives was supported, as it would facilitate the implementation of these initiatives and promote innovation and development.

In conclusion, the “Futuring Peace in Northeast Asia” programme, organized by the UN Department of Political and Peacebuilding Affairs, seeks to involve young people in shaping the future of the region. The programme emphasised the importance of foresight, inclusivity, and meaningful engagement in discussions and policy development. With the participation of young people, the programme aims to foster a peaceful and prosperous Northeast Asia.

Oyundalai Odkhuu

Upon analysing the provided information, several key points emerge regarding the development and governance of the metaverse in Northeast Asia. The main arguments put forward are as follows:

1. Developing Northeast Asian metaverse platforms: The analysis recognises the importance of Northeast Asian countries leveraging their world-class technological capacities to develop their own metaverse platforms. This is seen as a preventive measure against potential monopolies by Western countries. By creating their own metaverse platforms, Northeast Asian countries can maintain control over the digital space and ensure equitable access for their citizens.

2. Promoting the development of inclusive algorithms: The analysis emphasises the need for open and inclusive algorithms in the metaverse. It suggests that countries should collectively develop algorithms that facilitate cross-language information sharing, ensuring that diverse voices and perspectives are represented. The argument is rooted in SDG 10, which focuses on reducing inequalities.

3. Fostering regional collaboration and stakeholder dialogues: The analysis emphasises the importance of engaging a wide range of stakeholders in the development and governance of the metaverse. This includes marginalised communities, youth, individuals from different social classes, genders, sexualities, and disabilities. By fostering collaboration and dialogue, Northeast Asian countries can ensure that the metaverse reflects the needs and aspirations of all its users.

4. Discussing the regulation of the metaverse: The analysis highlights the absence of a single player in metaverse regulation. It suggests that a regional initiative, similar to the Internet Governance Forum, should be established to address this gap. By engaging in discussions around regulation, Northeast Asian countries can shape the metaverse’s governance framework and ensure that it aligns with SDG 16, promoting peace, justice, and strong institutions.

5. Engaging youth and promoting global connectivity: The analysis underlines the significance of youth engagement in internet governance. With 71% of the world’s youth using the internet, their involvement is crucial for shaping the metaverse’s future. In addition, the argument advocates for the internet as a tool that transcends borders, connecting people, businesses, and governments on a global scale.

6. Ensuring privacy in the internet: Privacy is identified as a key concern in internet governance. Decisions related to internet governance have far-reaching effects on various aspects of people’s lives. Therefore, it is crucial to establish mechanisms that safeguard individuals’ privacy rights in the metaverse.

7. Capacity building and skill enhancement: The analysis stresses the need for capacity building and skill enhancement in the metaverse. This involves promoting cultural awareness and sensitivity training for developers and users of the metaverse, as well as bridging skill gaps to facilitate effective cross-border cooperation.

8. Investment in the education sector: Considering the metaverse as a new sector, the analysis argues for investment in the education sector to enhance digital literacy and responsible usage. This investment aims to equip individuals with knowledge about the metaverse and its potential risks and benefits, targeting both the young and old.

9. Mechanisms for conflict resolution: The analysis puts forth the need for mechanisms to resolve conflicts during cross-border metaverse activities. It suggests adopting arbitration and mediation processes to address disputes that may arise in this context.

10. Establishing industry standards and a regulatory framework: The analysis contends that industry standards addressing privacy, data security, content moderation, and digital property rights are pivotal in the metaverse. It argues for the creation of a code of conduct or regulatory framework to ensure responsible and ethical practices within the metaverse, in line with SDG 16.

In summary, the analysis advocates for the development of Northeast Asian metaverse platforms, inclusive algorithms, collaboration and stakeholder dialogues, regulation discussions, youth engagement, privacy protection, capacity building, investment in education, conflict resolution mechanisms, and the establishment of industry standards. Northeast Asian countries are encouraged to seize the opportunity to shape the metaverse, ensuring equitable access, responsible usage, and meaningful participation for all.

Manjiang He

The analysis provides a comprehensive examination of various topics, including digital platforms, youth engagement, international cooperation, and the significance of respecting the local context. It begins by discussing the influence of digital platforms on daily life, noting their ability to enhance communication and cultural exchange. However, the analysis also acknowledges the negative aspects of digital platforms, such as the prevalence of hate speech, prejudice, and discrimination.

A key argument put forth is the importance of digital literacy in understanding and navigating the influence of digital platforms on daily life. It highlights the need for individuals to be equipped with the necessary skills to effectively engage with digital platforms and address the negative aspects associated with them. The analysis further emphasises that social platforms often serve as breeding grounds for hate speech, prejudice, and discrimination. It also highlights the challenge faced by social platforms in swiftly responding to these issues due to technological limitations and differing priorities.

Moreover, the analysis explores the role of young people in internet governance and conflict resolution, pointing out their innovative approaches and willingness to explore different solutions. It emphasises the importance of including young people’s perspectives in decision-making processes, highlighting that they are often seen as naive but possess fresh insights and ideas.

However, the analysis also identifies limited efforts to engage youth in decision-making processes in the Northeast Asia region. It highlights active youth engagement initiatives in other parts of the world, such as Africa and Bangladesh, and suggests that Northeast Asia is lagging behind in this regard.

Another argument put forth is the exclusion of young people in policymaking and decision-making processes. The analysis provides no supporting facts, but it asserts that young people are often left out of important discussions and their voices are not adequately heard. It argues that mechanisms should be established to channel young people’s voices into both the government and private sectors.

The analysis then delves into the challenges of international cooperation, particularly in regions with differing stages of development – economic, social, and cultural. It asserts that these differences pose obstacles to achieving effective collaboration.

Respecting the local context is also highlighted as a crucial factor in creating a more inclusive and open online digital space. The analysis suggests that societies have their own uniqueness, and integrating the local context into digital literacy programmes or the metaverse can yield beneficial outcomes.

Additionally, the analysis touches upon cross-border cooperation, skill gaps, and funding limitations in the implementation of digital literacy initiatives. It mentions that cross-border cooperation is already happening in certain regions like Mongolia, but no supporting facts are provided.

Ultimately, the analysis underscores the importance of stakeholder engagement in the decision-making process and advocates for the integration of digital literacy and metaverse elements into existing initiatives. It acknowledges the challenges posed by funding and sustainability concerns but suggests that these limitations can be addressed by reaching out to stakeholders and incorporating their recommendations into existing initiatives.

In conclusion, this in-depth analysis offers valuable insights into various topics related to digital platforms, youth engagement, international cooperation, and the significance of the local context. It underscores the need for digital literacy, young people’s perspectives in decision-making, and meaningful stakeholder engagement. It brings attention to the challenges faced in international cooperation and stresses the importance of respecting the local context for creating more inclusive digital spaces.

Jerry Li

The analysis emphasises the importance of digital literacy and understanding modern technologies. It highlights that while digital literacy programs already exist, there is still a significant knowledge gap between these programs and those offered in schools. To address this gap, in-school and out-of-school digital literacy programs are seen as essential. In-school programs would cover the basics of accessing technologies, effective online engagement, and education on important concepts such as the metaverse, disinformation, and misinformation. Out-of-school programs would be offered in community centres, libraries, and public spaces to include a wider range of demographics. These programs would play a crucial role in ensuring that individuals have the necessary skills to navigate the digital world.

The analysis also underscores the need for a proactive and inclusive approach to digital space governance. It argues for an approach that goes beyond a reactionary stance and involves more voices in shaping policies related to safe digital spaces online. By including a diverse range of perspectives, digital space governance can be more effective in addressing emerging issues such as disinformation, misinformation, and the metaverse.

Furthermore, the analysis highlights the importance of youth involvement in internet governance. It asserts that the younger generation, being the inheritors of the problems and subjects for decisions made on their behalf, should have a voice in shaping internet governance policies. This inclusion of youth perspectives is seen as vital to ensuring inclusivity in the digital space.

The analysis also touches upon the topic of global governance of the internet. It suggests that while there was a consensus on global governance regarding certain aspects of the internet’s structure in its early stages, the content should be left to national policies sensitive to cultural differences. This approach recognises the importance of balancing global coordination with the need for cultural and national autonomy in shaping internet content.

The need for improved collaboration between public and private sectors in digital literacy programs is another key point highlighted in the analysis. It showcases examples of successful collaborations, such as the digital literacy program introduced by Meta in Hong Kong and the Women’s Foundation’s encouragement of women in Hong Kong to be part of STEM fields. These collaborations demonstrate the potential benefits of joining forces to enhance digital education and literacy efforts.

Additionally, cross-border regional collaboration and the inclusion of experts in policy development are advocated. Collaboration with existing cross-border regional collaboration groups, particularly in the education space, and research consortia is seen as a strategic way to leverage resources and expertise. This collaboration can help make policy proposals more informed and inclusive by sourcing a variety of voices and perspectives.

In conclusion, the analysis underscores the need for digital literacy programs, a proactive approach to digital space governance, youth involvement in internet governance, and improved collaboration among stakeholders. By addressing these aspects, it is believed that individuals will be better equipped to navigate the digital world, policies will be more inclusive and effective, and the potential of the internet as a tool for positive change can be maximised.

Session transcript

Ijun Kim:
Just you. This is nice. I can just focus on you guys. No. No. Linda, can you try raising your hand again? Because before it was on the presentation, now we’re back on Zoom. Oh, yeah. We see it.

Manjiang He:
I just got lowered. Okay. That’s good to know in case there are any questions from here.

Ijun Kim:
I mean, you can kind of look around a little bit, but that would be the primary audience. And then we can look at the camera a few times. I don’t know. Maybe I’ll do like this a little bit. Yeah. This is a weird setup. This is a weird setup. Yeah. But then they said not to sit past that chair because they won’t be able to see on camera. Maybe I’ll kind of like… Yeah. Kind of like this. All right, let’s get started. Good morning, everyone. Thank you for joining us today on the last day of the IGF. My name is Ijun Kim and I will be providing a general introduction of the program we hope to share with you and of course later moderating the discussion we are about to have. Before starting off I would just like to give my colleagues an opportunity to introduce themselves and greet you personally. Go ahead.

Jerry Li:
Hi everyone my name is Jerry. I’m from Hong Kong, China and I’m one of the youth researchers at the UNDPPA as part of this project.

Manjiang He:
Hello everyone, my name is Manjiang from China and I am a youth peacebuilder and a member of the Youth Advisory Group under the Asia-Pacific Division of UNDPPA. First of all I want to thank you and thank IGF Kyoto 2023 for giving us this opportunity to speak here, and also want to thank all of you, either sitting here in the room or watching online, for joining us in this session.

Oyundalai Odkhuu:
Okay, hello, thank you everyone, and thank you for providing this great opportunity. My name is Oyundalai. I’m from Mongolia. I am a youth peacebuilder at UNDPPA. Thank you all.

Yukako Ban :
Good morning I’m Yukako. I’m from Japan. I’m also one of the youth peace builder from the same division. I’m very great to be here today. Thank you.

Ijun Kim:
And we also have Linda on Zoom. Linda do you want to come in real quick?

Linda Hjelle:
Hi everyone, my name is Linda Hjelle. I am an Associate Political Affairs Officer at the UN Department of Political and Peacebuilding Affairs, and I’m the program manager for this fantastic project that we’ve been having for now three years. I think Ijun will tell you more about the project itself, but I’m happy to be here, and I’m the online moderator if there are any questions from the audience online.

Ijun Kim:
So speaking of the fantastic project that Linda mentioned, thank you. This project is called Futuring Peace in Northeast Asia. So just looking at the title, you’ll notice that there are several components to it. Number one, the future. We leverage the concept of the future to host discussion spaces. Number two, we host discussion spaces about peace. And peace where? In this context, Northeast Asia. Futuring Peace in Northeast Asia is a program organized and led by the United Nations Department of Political and Peacebuilding Affairs, and it is designed in line with the Youth Peace and Security Agenda. The YPS agenda recognizes the valuable contributions of young people to establishing and sustaining peace and hopes to empower them and engage them more meaningfully in relevant discussion spaces. Through this program, young people from China, Japan, Mongolia, and the Republic of Korea were able to convene and discuss collectively how we envision the future of Northeast Asia. And the central methodology throughout the program that led the overall process is called foresight. I think some of you may be familiar with the concept, but long story short, in a nutshell, here I quote, it is a structured and systematic way of using ideas about the future to anticipate and better prepare for change. So foresight is all about leveraging this concept or this idea of the future so that individuals, organizations, or societies as a whole can become more anticipatory and more resilient to change. The first phase of the program was launched in 2021 in partnership with UNESCO, and UNESCO came in to provide capacity building opportunities through a session program they call the Futures Literacy Lab, which gave us the opportunity to really understand what foresight is, what it means for us, and how we can leverage it in these contexts. Then phase two began in 2022 in partnership with a Swiss policy think tank, foraus.
And foraus supported us in translating foresight activities into tangible policy recommendations that we can later share with a broader audience. So I want to speak a little bit more about phase two, because that’s what this is all about today. Phase two was focused on using participatory foresight tools. So there are many tools within the foresight methodology, and there are many different ways of using them. And participatory foresight tools are focused on engaging as diverse a group of people as possible in these discussions, making them interactive, hopefully fun, so that we can surface trends or signals that are sometimes not as visible. This was done, for example, through a workshop that we hosted and facilitated using the Futures Triangle. The Futures Triangle is a tool where, for a single concept such as regional collaboration, we explore the weight of the past, what is holding us back from achieving that; the push of the present, what is happening right now that is driving us to change, change the way we think, change the way we do things; and then the pull of the future. What do we want in the future? What kind of vision do we have for the future that is also adding to the desire to change? Then we had a really interesting intergenerational dialogue, and it was my first time engaging such a wide range of audiences. We leveraged an online tool. It was slightly more interactive than a simple survey. It encouraged participants who were taking part in it to imagine themselves stepping into a time machine, going forward a couple of decades, and then, once they look out the window of the time machine, the first question was, what do you see? And through that process, we encouraged people to dream quite vividly about how they see the future. And through that intergenerational dialogue, we were actually able to interact with almost 150 participants of a very wide range of backgrounds, expertise, and of course, age groups.
And I found it very valuable because, while this is based on the Youth Peace and Security Agenda, we also recognize that, especially when building a collective vision of the future, it’s very important to engage as wide a range of audiences as possible. And then we, the youth peacebuilders, moved on to the desk research phase. Based on the insights we gathered through the workshop and then the online dialogue with the intergenerational audience, we delved a little bit deeper to understand the current landscape: what is going on in Northeast Asia, what are some opportunities that we have that could essentially launch us closer to the future we want to see, but also what are some challenges that we foresee and how to address them. That desk research culminated in a publication called Future of Regional and Narrative Building in Northeast Asia, Policy Recipes by Youth Peace Builders. So we called it a policy recipe because we wanted to make it slightly fun, so it’s quite easy to read. Its usability is essentially similar to a recipe book’s. We tried to integrate the concept of using different recipes to essentially create a delicious cuisine. And in this case, the cuisine was a metaphor for the future of peaceful Northeast Asia. In the publication, if you want to Google it or find it online, you’ll find four policy avenues that the youth peacebuilders came up with to recommend how the region, whether through national policies or regional cooperative policies, can move us towards the vision that we hope to see. The first is calling for regional cooperation for education, specifically focused on cultural exchange to build a more cohesive regional identity and enable collaboration. The second encourages the establishment of a Northeast Asian Youth Parliament for climate change. We recognize that climate change is very relevant to the younger generation and, of course, future generations, and we feel the urgency to do something about that.
And one way to address it and meaningfully engage young people is by establishing such a body. The third calls for partnerships, especially cross-sectoral partnerships, to support digital literacy programs. And the last, but certainly not least, calls for a more consensus-based approach to regulation and policy recommendations. Thank you.

Manjiang He:
Thank you, Adrienne. As Adrienne mentioned, there’s a policy recommendation about a digital literacy program. Jerry and I co-authored this policy recommendation, and I’ll briefly talk about the background and why we focus on digital literacy, and Jerry will elaborate more on the interconnected and digitally integrated aspects. We recognize the influence of digitalization in our daily life, work, and study, and acknowledge the positive impacts of digital platforms in promoting communication and cultural exchange. However, it is crucial to address the negative aspects of digital platforms, especially social platforms, which often serve as a breeding ground for hate speech, prejudice, discrimination, antagonism, and violence. Our findings from the open online dialogue conducted in 2021 revealed that negative emotions frequently stemmed from historical grievances, recent conflicts, nationalism, and fake news. This has escalated issues of hate speech and online violence, but change on social media platforms, in technological capacity and priorities, is often too rapid for proactive policy responses. In Northeast Asia, the region where I and the other peacebuilders belong, there were also conversations I had over the past few days at IGF, including the children and youth session held by the Youth IGF China, which aims to build capacity for children and youth. So I’d like to invite Jerry to talk a little bit more about that.

Jerry Li:
But also, what are the modern technologies? What are the ongoing conversations about these technologies, and how can we efficiently and effectively utilize technology? So the program focuses on education, learning about technological developments and modern conversations, in order to engage knowledge that already exists in broadcast and written media. Public and private partnerships offer a way to bring private expertise and developments into a publicly guided system, so that developments are organic, from the ground up, and can also consider regional and cultural differences. We noted many existing digital literacy programs throughout the many stages of this project, and we noted that while many are offered by private companies, there is a gap in knowledge between these programs and those offered in schools. So the components of our recommendation are, firstly, to have in-school digital literacy programs, with different stages for different grades, covering the basics of access and what technologies are, effective engagement online, and education on important concepts such as what the metaverse is, what disinformation is, and what misinformation is. Bringing versions of the conversations we are having here at the IGF to the classroom enables more voices to eventually be heard in further discussions of online spaces and new technologies. This is an inclusive approach we really believe in. Our second component is out-of-school digital literacy programs for the public, in community centers, libraries, and public spaces. This approach serves to include more demographics in digital education conversations, so that we can further adjust material for certain regions and generations as well. Private stakeholders should be providing updates and information on new technologies.
And the third component of our recommendation is to include more voices on policies pertaining to safe digital spaces online. As Manjiang adeptly discussed, there are a lot of online problems that we’re facing, particularly with disinformation, and these discussions need voices from those precluded due to lack of access, language, or even knowledge or care. We believe that this is not one of those issues where demographics have to seek out the tools in order to engage; we should be preemptive in equipping people with knowledge, access, and tools so that they can have a voice in this space. Governance in this space necessitates a ground-up approach that is not just reactionary. We have next stages in the works, and we’re very glad to be sharing part of the project here today. So now Yukako will present the second recommendation. Thank you so much. So from here we will focus on the part

Yukako Ban:
of the metaverse landscape in our recommendation. When we consider our future, technological development is a topic we cannot ignore. So I will introduce the background and policy gaps in this part of the recommendation, and Oyuka will explain the detailed recommendations. The metaverse, often described as the future of the Internet, is a network of virtual worlds blending the digital and physical realms. While it’s still in its infancy and lacks a clear definition, many providers are rapidly developing the technologies, as we can see at this forum. When we imagine future peace in this region, its potential benefits and risks are unknown. By 2026, a significant portion of the population will be engaged in the metaverse, necessitating better management to prevent issues like hate speech, misinformation, and anonymity. Electricity usage for such massive use of technology is a matter of debate, but the metaverse may reduce CO2 emissions by replacing physical travel. At this time, the metaverse holds potential for application in education and in fostering intercultural dialogue. However, its educational utility remains insufficiently explored, and there is a notable lack of developed educational content. Key challenges include regulation, privacy, and accessibility. One of the main policy gaps for the metaverse future is regulation: how to regulate this decentralized, transnational, and technologically evolving space. Questions of state power, privacy, and data protection vary regionally and culturally. Universal digital access by 2030 is a goal based on the UN’s Our Common Agenda, and governments and international organizations are working to improve Internet accessibility, treating digital space as a public arena. In this context, accessibility and affordability are also concerns.
Currently, the metaverse is primarily being shaped by Western tech giants. However, its influence extends beyond the Western world. Monopolization of metaverse platforms could lead to ownership and operation issues. In Northeast Asia, an unregulated metaverse could exacerbate geopolitical tensions and conflict, given the region’s geopolitical importance and existing animosity between nations. As the metaverse evolves, addressing these challenges is crucial for its responsible and sustainable development. So here, I hand over to Oyuka for the recommendations.

Oyundalai Odkhuu:
Okay, thank you, Yukako. I would like to highlight some components of our policy recommendations regarding the metaverse. The first component of our recommendation is to develop Northeast Asian metaverse platforms. Many Northeast Asian countries have world-class technological capacities, and yet they have been heavily influenced by Western cultures. It is also encouraging that some Northeast Asian countries have already developed their own metaverse platforms. Each country in Northeast Asia should take the initiative to foster increased interaction between relevant industries, research institutions, academia, and governments, in order to develop platforms originating from Northeast Asia and prevent monopolies and oligopolies by a small number of Western companies. That is also very important in terms of our recommendations. The second component of our recommendation is focused on promoting the development of inclusive algorithms. The metaverse is a very hot topic in today’s technology-focused world, and openness and inclusive algorithms are very important in the metaverse space. In the metaverse, physical distance doesn’t matter anymore. While traditional cooperation among countries in the Northeast Asia region can be tricky due to historical differences, territorial disputes, and increased tensions leading to hate speech, regional collaboration remains vital. Domestic discussions within Northeast Asian countries have typically been held in their native languages, creating limited exposure to views from other nations. To foster peaceful relations, governments should collectively develop algorithms for cross-language information sharing and measures to counter excessive filter bubbles. Legislation in each country and regional agreements can foster the creation of shared narratives that support peace in the region and even the world.
And the third component of our policy recommendation is to foster regional collaboration and multi-stakeholder dialogues between the private and public sectors, governments, and youth, and also across generations. It’s so important. In the metaverse, where the physical and virtual worlds are brought close together, people from fields other than the internet, new technologies, and policy should be engaged, heard, and consulted, including marginalized communities, youth, people of different social classes, genders and sexualities, and people with disabilities. And the last point of our recommendation is regulation. There is no single player in the regulation of the metaverse, and we need to discuss regulation and codes of conduct more. This kind of conference, the Internet Governance Forum, could serve as a model for a similar regional initiative in Northeast Asia, which could contribute to intra- and inter-regional collaboration and services. Yeah, those are the four issues that we focused on in our policy recommendations. Okay, thank you.

Ijun Kim:
Thank you, Manjiang, Jerry, Yukako, and Oyuka for presenting our recommendations. I find these opportunities fascinating not only because we have the chance to share with the audience, but because it also brings back memories and makes me reflect on the processes we underwent to develop these recommendations. We have some topics we want to surface through a more open discussion, elements of the program that we hadn’t quite been able to touch upon through the presentation. But before I launch into that, I wonder if there are any immediate questions from the audience. Interactivity and engaging a wide range of stakeholders is the key value of our program, so you’re welcome to raise any questions you have. While being trained in futures literacy, I was instructed not to be afraid of silence, and I have come prepared to really leverage the silence that we have. So, like I mentioned, there are some elements of the program that we really want to share with you. Shall we start the panel discussion? Ready? So let’s see. We are at the IGF, and specifically I want to hear your thoughts on why Internet governance should engage young people in building consensus, and possibly regulations, and in moving forward so that digital spaces can become safer and more inclusive. Any takers?

Jerry Li:
Thank you, Yijun. I think that’s a really important question that youth also face in so many other big, systemic, and pending issues, particularly with internet governance and technological developments and the whole gamut of challenges they bring. I think youth involvement and youth perspectives are so important to ensure that those spaces are inclusive, because the internet should not just inherit the existing problems of the physical, outside world. The younger generation can bring so much perspective to these changes, and as we all know, the youth are usually the inheritors of problems and the guinea pigs for decisions made on our behalf or for us. So definitely, when we discuss concepts like the metaverse and pending policy proposals, youth perspectives and youth engagement are key. Thank you, Jerry. Maybe I also want to give some comments

Manjiang He:
on this. I think young people, as Jerry mentioned, are usually seen as a problem or as too naive, but I do want to mention that because we are young, we are open-minded; we are open to different kinds of solutions and approaches, and we are innovative. We are able, and dare, to take innovative approaches in this context of internet governance, and also on issues relevant to conflict resolution and peacebuilding. Young people are also the future leaders, so they should have their voices heard and ensure their perspectives are taken into account in the decision-making process. I also want to touch on the fact that in the Northeast Asia region, there are usually limited efforts to bring and engage young people in the decision-making process. Through discussions with other participants over the past few days at IGF, I got to know that quite a lot of effort has been made in other parts of the world. For example, in Africa there is a very active youth engagement initiative, the youth IGF under the African Union, in different African countries, and there is also a Bangladeshi youth-led initiative that aims to address digital literacy on digital platforms. So I do see a lot of things happening in other parts of the world, but at least in this region, in Northeast Asia, youth engagement is not enough. So we need to take the initiative and take action to bring young people

Yukako Ban:
into the floor, into the decision-making process, into the implementation process. Thank you. Thank you so much. I really resonate with what Jerry and Manjiang said. I have two points. The first is, yes, as you said, the younger generation, including this generation, I assume, is called digital natives. How we engage with digital technology and how we construct reality is different from other generations, so our perspective should be incorporated into policy making, first of all. The second point is the perspective of age diversity, generational diversity. I think it is lacking, and not only in internet governance. Especially in our region, because of demographic change, most countries, with Mongolia perhaps the exception, are leaning toward aging societies, so youth voices tend to be undermined because of that structure. Especially for policy related to technology, different perspectives should be considered. Of course, in terms of digital literacy and technology, we shouldn’t exclude the older generation, because they are also somewhat vulnerable in terms of digital technology, but in terms of age diversity in general, youth voices are equally important to those of the older generation.

Oyundalai Odkhuu:
Okay, yeah, I also completely agree with what you said. Youth engagement is super crucial to internet governance, especially in the internet era: around the world, 71% of the world’s youth aged 15 to 24 are currently using the internet, a big number compared with 57% for other age groups. As we know, the internet is a global network that transcends borders and connects people, businesses, and governments worldwide, and decisions related to internet governance have far-reaching effects on various aspects of our lives, including communication, commerce, information sharing, and security. So in order to create opportunity for young people, we need to share opportunities and information, create capacity building, share information about internet governance, and ensure our privacy in the internet space. This is crucial right now. Thank you. At this point, let’s see, I know we’re

Ijun Kim:
slowly running out of time, but since we kept talking about why we need to engage young people, and this question speaks to my personal interest and area of work as well, I want to ask: what does good or meaningful youth engagement look like? No pressure that everyone has to answer, but I want to get your thoughts, and also to share with the audience, based on your experience, what are some core elements necessary to ensure a program or an initiative is truly meaningful in terms of youth engagement? Maybe some keywords, a sentence or two, please. Maybe I will start. I think the current situation

Manjiang He:
in the region, in Northeast Asia, is that young people are often excluded from the decision-making process, the policy-making process. I think meaningful engagement with young people should begin at the very start: even in a top-down approach to making policy and decisions, they should be consulted. Their opinions and perspectives should be included while we make the policies: what kind of internet, what kind of future do young people want? This is the future of young people, the next generation. So I think for meaningful engagement, at a very early stage, their voices should be heard. To realize that, I think there should be a mechanism, because you cannot do things without frameworks or organizations to support them, right? There should be a framework through which young people’s voices and perspectives can be channeled into the decision-making processes of governments, the private sector, and technology companies. But I see that, for now, the efforts are quite limited. I think that’s the direction we should aim for.

Ijun Kim:
Including young people from the early stages, I think, truly demonstrates the willingness and readiness of whoever the host is to truly listen to the inputs of young people and to shape whatever it may be, a program, an initiative, a policy, in a way that is relevant for young people. I very much agree with you. Are there any immediate reactions to this? If not, that’s okay. We can move on. Yukako?

Yukako Ban:
Thank you so much. That was a very good question. I was thinking about what that is. I grew up in Japan, but now I live in South Africa, so, as Manjiang mentioned, there are a lot of youth initiatives and youth leaders there, and I was wondering what the difference between us is. But in general, beyond youth engagement and youth participation, I just want to note that, at least in Japan, interest in politics itself is quite low among the younger generations. So we don’t need to immediately engage in decision making, but we need to be exposed to opportunities to be heard, and to policy making, because I think most of the young generation feels it is very far from where they are, and experience is highly valued, culturally, in our society. But, as Manjiang mentioned, there should be more frameworks and opportunities so that we have more access to the tables where things are discussed, and not necessarily only youth talking about it; we can also talk through intergenerational dialogue, because most conference rooms and meeting rooms are also segregated by age. This opinion is not very organized, but that’s what I’m thinking. Thank you so much. Thank you. Those were recommendations from Manjiang and Yukako on how to more meaningfully engage young people. Essentially, these are recommendations for organizers and other stakeholders from older generations, but I think it’s also important to remind young people

Ijun Kim:
that while governance, as a concept, may seem very far removed from the daily lives of young people, especially because it seems to be the province of governments and states, I do think it is necessary for young people to understand how those decisions can affect their daily lives, and, with that awareness, to continuously push and advocate for more meaningful youth engagement. Once there is a back and forth between these two groups, that is truly the way to create intergenerational cooperation and an environment that enables it, so that there is response from both sides. Shall we move on to the next question? Before I do, I wonder if there are any questions from the audience.

Audience:
Hello, I’m Daichi from Japan. I work at an internet service provider. I’m just middle-aged, 40 years old, but my company was established 20 years ago. My CEO has operated it for 20 years, and he is 46 years old. He established it at a young age and continued for 20 years, and now he’s an opinion leader in our industry. But for much of those 20 years, maybe nobody listened to his opinion. That was the challenge. The most important thing is to continue, don’t give up, and to collaborate and have conversations with each other. That is very important, and I recognize that. Now that I am 40, everyone is ready to hear our opinion. So please try and take on the challenge. That is my opinion. Thank you so much. Thank you. That’s super encouraging, and I very much agree with you. You’ll notice that currently, youth engagement or youth inclusion is a very big trend. So I think it’s really important for us not only to recognize the importance of youth engagement, but to really utilize and leverage this momentum, and ensure we can keep it going through different programs, different speaking engagement opportunities like this, and also internal and external dialogue. So thank you.

Ijun Kim:
Well, I think we might have time for one or possibly two more questions. This is a question that hangs over us all. Cross-border cooperation, particularly in the context of Northeast Asia, where fragmentation is very much happening globally and regionally, has proven to be quite challenging. So I want to get your thoughts, youth peacebuilders, on how our policies aim to address and essentially overcome the realistic challenges of the world.

Jerry Li:
Thank you for that question. That is definitely a major question we always get asked when presenting policy recommendations of this nature. For me, looking back to the beginning stages of the internet itself is a great guide, in that there was a lot of consensus on global governance regarding certain parts of the structure of the internet, but content, for instance, was left to national policies so that it could be sensitive to cultural or religious differences and considerations. Despite the fragmented nature of the Northeast Asia region in some aspects, that could also be possible here, with certain cultural considerations left to national policies. With that said, a lot of our policies, particularly on digital literacy education and regional community building across borders, are initiatives that have existed in our respective countries; our policy proposal just serves to improve on these existing efforts. For instance, in Hong Kong, Meta in 2021 had a digital literacy program, and it was applied and workshops were held. The Women’s Foundation in Hong Kong also had similar efforts to encourage women in Hong Kong to be part of STEM. The Hong Kong Bureau also has its own digital literacy program. So for our policy recommendation on digital literacy programs, the foundations are all there; we just hope that there can be more public and private collaboration so that more voices, as we’ve said repeatedly, can be included. So I guess my quick answer is that I don’t see that as a major problem. And a quick food for thought: I wonder if we can take a positive spin on

Ijun Kim:
the concept, or the keyword, of fragmentation and consider it diversification: diversification that respects and carves out spaces for diversity, but without the challenges of fragmentation, which hinders communication and cooperation. So, just food for thought. Maybe I just want to add one more thing: while we do see the challenges of cross-border cooperation

Manjiang He:
or international cooperation in the region, the countries in the region are at very different stages of development, economically, socially, and culturally. So it is important to keep that in mind: while we want some kind of regional initiative, intergovernmental or international cooperation, it is also important to respect the local context and the differences, because all societies have their own uniqueness. Coming back to the digital literacy program, we can integrate the local context into the literacy programs, or into the metaverse mechanisms, while keeping in mind that the overarching goal is to create a more inclusive and open online digital space or platform. I would like to add some insights: cross-border cooperation is happening in some regions, for example Mongolia, and there are some issues, for example skill gaps and cultural awareness

Oyundalai Odkhuu:
and also mechanisms. So capacity building is very valuable: promoting the enhancement of the skills and knowledge of the individuals, users, and organizations involved in the metaverse context. This can help bridge skill gaps and promote effective cross-border cooperation. Secondly, there is value in promoting cultural awareness and sensitivity training for metaverse developers and users; understanding the cultural nuances of the Northeast Asian countries can facilitate smoother cooperation and collaboration in the digital literacy and metaverse contexts, and in even more sectors. Lastly, the point is to establish mechanisms for solving conflicts and disputes that may arise during cross-border metaverse activities; arbitration and mediation processes can be valuable in such cases. Also, allocating funding for the education sector is valuable, because the metaverse is a newly born sector that we are facing today, so we need to encourage and gain more knowledge about the metaverse context. Yeah, that is what I am thinking. Thank you so much. I had a relatively longer time to think about my answer, but this is a very challenging question. Cross-border cooperation is challenging, particularly in the

Yukako Ban:
political arena, but economically we already have a lot of cooperation within the region: no single country can manufacture a product like a smartphone on its own these days, and China, Japan, and Korea especially have a lot of economic cooperation, and also Mongolia, as you mentioned, through capacity building. So there is cooperation in some ways. But because we are peacebuilders, when we’re talking about peace we can’t avoid political issues, and starting the conversation from politics makes it more difficult. The internet, and things related to education and capacity building, can be milder topics with which to start the cooperation. That’s why I personally like this topic: technology and skill development. We already have inter-university academic programs among at least the three nations, so we can start the conversation from a non-political layer, but it is definitely connected to the broader concept of peace, to citizen-level awareness, and through this kind of initiative you get to know each other. I also really liked your food for thought about diversification. Having different, unique cultures, there is nothing bad about it; with localization we have different values and cultures, and it’s natural that there is diversity. The issue is if it’s closed off, fragmented, and divided; but at the system level, in internet governance, if things are interoperable, then even though there are language barriers, technology can break through those differences. So, thinking about cooperation from a different angle,

Ijun Kim:
not only politics, yeah, that’s what I’m thinking. Thank you. Thank you all, and Yukako, I love your point about being more creative in how we start conversations and proposing innovative ways to maneuver around political barriers or other challenges that we foresee for regional cooperation. Thank you. We have just over five minutes left. Any questions from the audience? Not to worry, because I have another question to pose to our panelists. But just to be mindful of time, let’s keep our responses short so we can clear the room right on time. So, last but not least, and if the audience is very much interested in this program, I’m sure this question will be fascinating: I want to hear from you, what are our next steps for these policy recommendations? Thank you, Yijun. As Yukako mentioned, there are existing

Jerry Li:
cross-border regional collaboration groups already, and a lot of them pertain to the education space: research consortiums and research groups, university efforts. So we hope to collaborate more with existing groups to develop and be more informed about what is possible and what needs to be further discussed, and to source more voices and experts in the field to make our policy proposals more informed. Yeah, I have two points about the next steps for the metaverse.

Oyundalai Odkhuu:
First of all, we need to invest funding in the education sector. The metaverse is still a new sector, so we should implement education programs to improve digital literacy and responsible use of the metaverse. These initiatives should target both young people and adults, across generations, and can help raise awareness of the potential risks and, of course, benefits. Secondly, contributing to the development of a code of conduct or a regulatory framework is crucial: industry standards that address privacy, data security, content moderation, and digital property rights within the metaverse. So the next steps are to contribute to a code of conduct and regulation of the metaverse, and also education programs. I think that is most crucial for the next step.

Ijun Kim:
Quick note, let’s try to keep our responses to a minute. I know it’s hard, but… Okay, yeah, because we are running out of time. For the next step, our recommendation

Yukako Ban:
should not stop at being a recommendation; it should be implemented in some way. For that, we need cooperation and collaboration with other organizations, potentially as a youth organization, maybe like the youth IGF, and with other organizations too: not only having dialogue, but having more practical conversations with different organizations. That’s why we are here. So that is going to be a next step, and I’m also open to talking with each of you attending these sessions, yeah.

Manjiang He:
Yeah, maybe I just want to add one last point about the next steps and future plans: we do see realistic limitations, for example funding and investment, and how to keep this program sustainable. But I want to echo what Jerry mentioned: we can start by integrating our recommendations into existing regional initiatives. That makes it easier, and we already have the stakeholders around, so we can reach out to them and just add the elements of digital literacy and the metaverse. I think that could make it easier to implement and proceed further, yeah.

Ijun Kim:
I think we are right on time. I just want to reiterate our thanks to the Internet Governance Forum for providing this platform for us to share our recommendations and insights. And to our audience: if you are interested in continuing to observe and explore how young people can shape governance and, beyond that, peace-building, especially in Northeast Asia, please keep up with Futuring Peace in Northeast Asia. Thank you. Good job. Thank you. Thank you. Good job guys, good job. That went by so much faster than I thought it would, right?

Speakers' speech statistics

Linda Hjelle: speech speed 149 words per minute; speech length 77 words; speech time 31 secs
Audience: speech speed 120 words per minute; speech length 240 words; speech time 120 secs
Ijun Kim: speech speed 149 words per minute; speech length 2423 words; speech time 975 secs
Jerry Li: speech speed 137 words per minute; speech length 1089 words; speech time 476 secs
Manjiang He: speech speed 137 words per minute; speech length 1152 words; speech time 505 secs
Oyundalai Odkhuu: speech speed 121 words per minute; speech length 1145 words; speech time 568 secs
Yukako Ban: speech speed 145 words per minute; speech length 1447 words; speech time 600 secs

Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Caio Machado

In the discussion about the impact of artificial intelligence (AI), several key areas were highlighted. The first area of focus was the importance of data quality, model engineering, and deployment in AI systems. An example provided was the COMPAS case, where an algorithmic tool used for risk assessment began being used to determine the severity of sentences. This case illustrates the potential consequences of relying on AI systems without ensuring the accuracy and quality of the underlying data and models.

Another concern was how AI tools become the infrastructure for accessing information. It was noted that, similar to how Google search results differ based on the keywords used, it becomes harder to verify and compare information when it is presented as a single, compact answer by a chatbot. This raises questions about the reliability and transparency of the information provided by AI systems.

The lack of accountability in AI systems was identified as a major issue that can contribute to the spread of disinformation or misinformation. Without proper proofreading mechanisms and quality control, distorted perceptions of reality can arise, leading to potential harm. It was argued that there should be a focus on ensuring accountability and fairness at the AI deployment level to mitigate these risks.

Furthermore, the discussion highlighted the need for more inclusive and ethical approaches to handling uncertainty and predictive multiplicity in AI models. It was emphasized that decisions regarding individuals who are uncertain or fall into multiple predictive categories should not be solely made by the developing team. Instead, there should be inclusivity and ethical considerations to protect the rights and well-being of these individuals.

Policy, regulation, and market rules were mentioned as important factors to address in order to limit the circulation of deepfake tools. Evidence was provided for this, citing the common use of deepfake voices to run scams over WhatsApp in Brazil. It was argued that effective policies and regulations need to be implemented to tackle the challenges of deepfake technology.

Promoting digital literacy and increasing traceability were seen as positive steps towards addressing the challenges posed by AI. These measures can enable individuals to better understand and navigate the digital landscape, while also enhancing accountability and transparency.

In conclusion, it was acknowledged that there is no single solution to address the impact of AI. Instead, a series of initiatives and rules should be promoted to ensure the responsible use of AI and mitigate potential harms. By focusing on data quality, accountability, fairness, inclusivity, and ethical considerations, along with effective policies and regulations, society can navigate the challenges and reap the benefits of AI technology.

Audience

Advancements in AI technology have led to the development of systems capable of mimicking human voices and generating messages that are virtually indistinguishable from those produced by actual individuals. While this technological progress opens up new possibilities for communication and interaction, it also raises concerns about the potential misuse of generative AI for impersonation in cybercrime.

The ability to mimic voices and generate realistic messages allows malicious actors to deceive individuals in various ways. For example, they can impersonate someone known to the target, such as a relative or a friend, to request money or engage in other forms of scams. This poses a significant threat, as victims can easily fall for these manipulated and convincing messages, believing them to be genuine.

Given the potential harm and impact of the misuse of generative AI for impersonation in cybercrime, there is a growing consensus on the need for regulation and discussion to address this issue effectively. It is crucial to establish guidelines and frameworks that ensure the responsible use of AI technology and protect individuals from deceptive practices.

By implementing regulations, policymakers can help deter and punish those who misuse generative AI for malicious purposes. This includes imposing legal measures that specifically address the impersonation and fraudulent use of AI-generated messages. Additionally, discussions among experts, policymakers, and industry stakeholders are essential to raise awareness, share knowledge, and explore potential solutions to mitigate the risks associated with the misuse of AI technology.

The concerns surrounding the misuse of generative AI for impersonation in cybercrime align with the Sustainable Development Goals (SDGs), particularly SDG 9 (Industry, Innovation, and Infrastructure) and SDG 16 (Peace, Justice, and Strong Institutions). These goals emphasize the importance of promoting innovation while ensuring the development of robust institutions that foster peace, justice, and security.

In conclusion, while advancements in AI technology have brought about remarkable capabilities, they have also introduced new challenges regarding the potential misuse of generative AI for impersonation in cybercrime. To address these concerns effectively, regulation and discussion are crucial. By establishing guidelines, imposing legal measures, and fostering open dialogues, we can strive for the responsible use of AI technology and protect individuals from the harmful consequences of impersonation in the digital sphere.

Heloisa Candello

Generative AI and large language models have the potential to significantly enhance conversational systems. These systems possess the capability to handle a wide range of tasks, allowing for parallel communication, fluency, and multi-step reasoning. Moreover, their ability to process vast amounts of data sets them apart. However, it is important to note that there is a potential risk associated with the use of such systems, as they may produce hallucinations and false information due to a lack of control over the model.

In order to ensure that vulnerable communities are not negatively impacted by the application of AI technologies, careful consideration is required. AI systems have the capacity to misalign with human expectations and the expectations of specific communities. Therefore, transparency, understanding, and careful prompt design are crucial for mitigating any harmful effects that may arise. It is essential for AI systems to align with user values, and the models selected should accurately represent the data pertaining to their intended users.

In addition, the design of responsible generative AI systems must adhere to certain principles. This will help to ensure that the models are built in a way that is responsible and ethical. By considering productivity, fast performance, speed, efficiency, and faithfulness in the design of AI systems, their impact on vulnerable communities can be effectively addressed.

Overall, exercising caution when utilizing generative AI and large language models in conversational systems is essential. While these systems have the potential to greatly improve communication, the risks of producing hallucinations and false information must be addressed. Additionally, considering the impact on vulnerable communities and aligning user values with the selected models are key factors in responsible AI design. By following these principles, the potential benefits of these technologies can be harnessed while minimizing any potential harm.
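The closed-domain strategy mentioned in the session, answering only from a vetted corpus and falling back explicitly when nothing matches, can be sketched in a few lines. This is an illustration, not the system discussed in the talk; the corpus entries, the 0.6 similarity threshold, and the function names are assumptions for demonstration:

```python
# Minimal sketch of a closed-scope chatbot: answers come only from a fixed,
# curated corpus, and the bot falls back explicitly instead of improvising.
from difflib import SequenceMatcher

# Hypothetical corpus: question -> vetted answer (illustrative content only).
CORPUS = {
    "how do i check my business health score":
        "Your business health score is computed from your questionnaire answers.",
    "what does a score of zero mean":
        "A zero score usually means the business is not yet generating revenue.",
}

FALLBACK = "I don't have an answer for that. Could you rephrase your question?"

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the closest vetted answer, or the fallback when nothing is similar enough."""
    q = question.lower().strip()
    best = max(CORPUS, key=lambda k: SequenceMatcher(None, q, k).ratio())
    if SequenceMatcher(None, q, best).ratio() >= threshold:
        return CORPUS[best]
    return FALLBACK

print(answer("What does a score of zero mean?"))  # answered from the corpus
print(answer("Tell me a story about dragons"))    # falls back instead of improvising
```

Because every reply is either a vetted corpus entry or the fallback, such a bot cannot produce novel, possibly hallucinated claims; the trade-off is reduced coverage.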

Diogo Cortiz

The discussion explores multiple aspects of artificial intelligence (AI) and its impact on society, education, ethics, regulation, and crime. One significant AI tool mentioned is ChatGPT, which rapidly gained popularity and attracted hundreds of millions of users within weeks of its launch last year. This indicates the increasing penetration of generative AI in society.

The potential of AI is seen as limitless and exciting by students and learners. Once users realized the possibilities of AI, they started using it for various activities. The versatility of AI allows it to be combined with other forms of AI, enhancing its potential further.

However, there are conflicting views on AI. Some individuals perceive AI as harmful and advocate for its avoidance, while others express enthusiasm and desire to witness further advancements in AI technology.

The ethical and regulatory discussions surrounding AI have emerged relatively recently, with a focus on addressing the evolving challenges and implications. The ethical aspects of AI usage and the establishment of a regulatory framework have gained attention within the past five years.

In the academic field, AI has brought about drastic changes. Many individuals are utilizing AI, potentially even for cheating or presenting work not developed by students themselves. This development has led to teachers and students organizing webinars and seminars to share their knowledge and experiences with AI.

The prohibition of AI tools is not considered a solution by the speakers. Instead, they advocate for adapting to new skills and tools that AI brings. They draw parallels with the emergence of pocket calculators, which necessitated adapting and evolving curricula to incorporate these tools. As AI tools reduce time and effort on various tasks, students need to acquire new skills pertinent for the future.

It is emphasized that regulation alone cannot resolve all AI-related issues. AI, particularly generative AI, can be employed for harmful purposes like mimicking voices, and existing laws may not be equipped to address these new possibilities. Hence, a comprehensive approach encompassing both regulation and adaptation to the new reality of generative AI is imperative.

In conclusion, the discussion highlights the increasing impact of AI on society, education, ethics, regulation, and crime. The rapid penetration of generative AI tools like ChatGPT signifies the growing influence of AI in society. While AI holds unlimited potential and excites students and learners, there are conflicting views on its impact, with concerns about its harmful effects. The ethical and regulatory discussions around AI are relatively recent. The academic field is experiencing significant changes due to the adoption of AI, necessitating the acquisition of new skills by students. Prohibiting AI tools is not the solution; instead, adapting to the new skills and tools that AI offers is necessary. Regulation alone is insufficient to address AI-related challenges, as AI can be misused for harmful purposes. Overall, a well-rounded approach encompassing both regulation and adaptation is needed to navigate the complex landscape of AI.

Reinaldo Ferraz

The networking session on generative AI commenced with a diverse panel of speakers who shared their insights. Heloisa Candello from IBM Research and Caio Machado from Instituto Vero and Oxford University participated remotely, while Roberto Zambrana and Matheus Petroni were physically present. Each speaker brought a unique perspective to the discussion, addressing various aspects of generative AI.

The session began with Heloisa Candello expressing her appreciation for being a part of the esteemed panel. She highlighted the significance of generative AI for the wider community and shared her thoughts on its potential impact. Despite some initial technical issues with the microphone, Heloisa's remarks eventually became audible to the audience.

Following Heloisa's presentation, Roberto Zambrana offered his industry-oriented views on generative AI. He emphasized the practical applications and benefits, shedding light on the potential for innovation and growth. Roberto's insights provided valuable perspectives from an industry standpoint.

Next, Caio Machado provided a different viewpoint, representing civil society and academia. Caio discussed the societal implications of generative AI and considered its impact on various sectors. His presentation drew attention to ethical concerns and raised questions about the involvement of civil society in the development and deployment of AI technologies.

Matheus Petroni then shared his insights, further enriching the discussion. Matheus contributed his thoughts and experiences related to generative AI, offering a well-rounded understanding of the subject.

By incorporating inputs from diverse stakeholders, the session presented a comprehensive view of generative AI. The speakers represented various sectors, including industry, academia, and civil society. This multidimensional approach added depth to the discussions and brought forth different perspectives on the topic.

Following the initial presentations, the audience had the opportunity to ask questions, albeit briefly due to time constraints. Only one question could be addressed, but this interactive engagement facilitated a deeper understanding of the topic among the participants.

In summary, the session on generative AI successfully united speakers from different backgrounds to explore the subject from multiple angles. Their valuable insights stimulated critical thinking and provided knowledge about the potential implications and future directions of generative AI. The session concluded with gratitude expressed towards the speakers and the audience for their participation and engagement.

Matheus Petroni

Advancements in artificial intelligence (AI) have the potential to revolutionise the field of usability and enhance user engagement. One prime example of this is Meta’s recent introduction of 28 AI personas modelled after public figures. These AI personas provide users with valuable advice and support, addressing usability challenges and improving user engagement. This development is a positive step forward, demonstrating how AI can bridge the gap between technology and user experience.

However, there are potential negative implications associated with AI chatbots. Users may inadvertently develop strong emotional relationships with these AI entities, which could be problematic if the chatbots fail to meet their needs or if users become overly dependent on them. It is crucial to carefully monitor and manage the emotional attachment users develop with AI chatbots to ensure their well-being and prevent harm.

In addition to the impact on user engagement and emotional attachment, the increase in AI-generated digital content poses its own challenges. With AI capable of creating vast amounts of digital content, it becomes imperative to have tools in place to discern the origin and nature of this content. The issue of disinformation becomes more prevalent as AI algorithms generate content that may be misleading or harmful. Therefore, improvements in forensic technologies are necessary to detect and label AI-generated content, particularly deepfake videos with harmful or untruthful narratives.
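One building block for labelling AI-generated content is provenance metadata attached at generation time, which downstream tools can then verify. The sketch below is a simplified illustration, not any real standard: the manifest fields, the shared secret key, and the function names are assumptions for demonstration (production schemes such as C2PA use public-key certificates rather than a shared secret):

```python
# Illustrative sketch of content provenance labelling: a generator attaches a
# keyed signature to AI-generated media metadata, and a verifier checks it.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-not-for-production"  # assumption: shared verification key

def label_content(content: bytes, generator: str) -> dict:
    """Produce a provenance manifest binding the content hash to its generator."""
    manifest = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the hash still matches the content."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

video = b"synthetic video bytes (placeholder)"
label = label_content(video, "example-model-v1")
print(verify_label(video, label))        # True: intact and correctly labelled
print(verify_label(b"tampered", label))  # False: content no longer matches the label
```

A verifier that trusts the key can thus tell both that the content was declared AI-generated and that it has not been altered since; robust deepfake forensics for unlabelled content remains a separate, harder problem.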

To address the challenges posed by AI-generated content, promoting a culture of robust fact-checking and content differentiation is vital. Presenting essential information alongside user interfaces can facilitate this process. By providing users with transparent and reliable information, they can make informed decisions about the content they consume. This approach aligns with the sustainable development goals of peace, justice, and strong institutions.

In conclusion, while AI advancements hold enormous potential for enhancing usability and user engagement, there are also potential risks and challenges associated with emotional attachment and AI-generated content. Carefully managing the development and deployment of AI technologies is essential to harness their benefits while mitigating potential drawbacks. By promoting transparent and informative user interfaces, investing in forensic technologies, and fostering a robust fact-checking culture, we can unlock the full potential of AI while safeguarding against potential negative consequences.

Session transcript

Reinaldo Ferraz:
Hello, good afternoon. We are going to start our networking session about generative AI, and for this session we will have different speakers contributing to our discussion here. We will have two online participants, Heloisa Candello from IBM Research and Caio Machado from Instituto Vero and Oxford University. We also have Roberto Zambrana present in person, and Matheus Petroni. I will invite the online speakers to start our discussion. So Heloisa, could you please give your initial remarks?

Heloisa Candello:
Thank you, Diogo. I'm going to start. Hello everyone. I'm going to share my screen, and then we can start. Thank you so much for the introduction. One second. So, I'm Heloisa Candello, a research scientist and manager at IBM Research Brazil. I lead a group called Human-Centered Responsible Tech, and we have several projects whose aim is to have social impact using AI. For the last eight years I have been conducting research at the intersection of HCI and AI, particularly in conversational systems. This picture illustrates one of my current projects, which aims to measure the social impact of financial initiatives using AI. OK, in the area of conversational systems we have had several projects to understand the perception of text-based machine outputs, for example this first one. This is a series of examples of how we look at conversational systems and the main challenges we have been studying for a long time, and of how, with large language models, those challenges are amplified: issues that existed before, but with the new technologies we have to pay more attention and think deeply about their impact. So, for example, the first one I was mentioning was in 2017, when we measured how typographic text was perceived by humans in chatbots. We did a kind of Turing test to understand perceptions of humanness in machines. Then we worked with multi-agents and multi-bots, and with how people collaborated with agents representing financial products to make investment decisions. With the same platform, we did an art exhibition where bots talked to each other and humans talked to the bots. We held this exhibition in a cultural venue in Brazil, and the idea was to use the same platform as the multi-bots we had before. In this one, we had three characters, Capitu, Bentinho, and Escobar, characters from a famous novel in Brazil.
And we measured how audiences perceived the interaction of those chatbots on the table. People typed, and a projector projected their answers, which were designed and drawn on the table. We also looked at how the engagement went: whether the chatbots asked people for their names and then used direct address. This was something we also looked at. In another one, people looked at the pictures; actually, they talked to the paintings as well, asking, oh, what's this yellow color? And then the system answered. So we can think about that now that we are going to reflect on prompts as well. And last year we launched an exhibition in a science museum in Brazil where children can teach the robots. They teach examples of how humans talk, and similar phrasings of the same statement, so the robots can learn with them. We also have a kit for teachers to work with them in schools. And finally, the last one is one of my recent studies, research we did in collaboration with a big bank in Brazil. We studied how people train chatbots in banks. Those people were the best employees from the call centers, and they trained Watson, the chatbot there. It's a room full of people making sure the bot will understand the clients, and there is a lot of articulation work happening there: how the curators interact with each other to create those chatbot answers. So we have a screen full of challenges that we, and a lot of people in the HCI community, have researched. For example, errors: how can we minimize and mitigate errors? Turn-taking, if you have more than one chatbot. And the problem of interface humanization, and how people can be deceived by bots.
We also had the issue of scope visibility in that era of conversational user interfaces: if the chatbot did not know how to answer, it would say, I don't understand, or, please, can you repeat your question? With the new technologies this is not an issue, because the system always answers something. Malicious uses as well, and the resolution of ambiguities, something those curators I just mentioned deal with every day. Transparency was also an issue, along with discrimination, harms, and bias, and we're going to talk more about this in this session. So with generative AI and the use of large language models, what changed? As I mentioned in the beginning, the scale is much higher: the ability to ingest and process huge amounts of data is enormous compared to the conversational systems we had before. The same model can be adapted to multiple tasks, which also brings automation, and maybe different contexts. For example, we had a client that worked with cars, and for each car they had to build a different chatbot. With these models, we can use the same parameters, just change the model of the car, and use the same corpus. Emergence as well, and scale: these systems can do parallel communication, fluency, and multi-step reasoning, and in certain models they can continue learning. I'm going to focus more on conversational systems; that's the main area I come from. So now we can think about all those challenges, plus the additional challenge of hallucination, for example, and false and harmful language generation due to the lack of model control and safeguards. That's why we are now creating several platforms with which we can control and fine-tune the models. Misalignment of expectations: you have the human expectation, and then what actually happens and what the model can deliver.
So it can generate content that is not aligned with human expectations or the expectations of certain communities. We are going to talk in a little bit about vulnerable communities, so we can understand a little better which kinds of values we also look at. And lack of transparency: it is difficult to inspect, because of the quantity of data involved and also how the algorithm was made. For example, in the exhibition I mentioned, we had three bots, you have the three heads, and people could interact with them. What happened was that if people typed something the bots didn't recognize, the characters of this book, and it was a closed scope, not an open scope, just phrases from the book's statements, then one of the chatbots would say, more coffee, or something like that. But in the case of generative AI, you have the hallucinations, and the answers are more reactive than proactive. We experienced in some projects that if you have conversational interfaces that are more proactive, you also have fewer errors, because it is more like a scripted conversation; now this is not the reality anymore, it is more reactive to prompts. And if you have a prompt designed in a way that tells the system, based on large language models, what you want with more detail, maybe you increase the chance that the system will answer what you want. Automation, we talked about that, and large data sets as well, and harmful language. In the case I showed you, which was a public space, we had characters who were all women, and quite a lot of unsuitable language was typed to the bot. Everything typed on the tablet did not actually show on the table, but the chatbots answered with phrases from the book. We saw it in the corpus, though, because we analyzed the corpus as well.
We published this paper too. So harmful language is there; it is inherent in the norm, and now it can be more evident. Going on from that, I also mentioned this project we did. You can see it's from 2017, and I brought it on purpose to show that it's the same thing, in the sense that now we have conversational systems that are more eloquent and can deceive people. In this study, people looked at a conversational system with a financial agent, and they had to say whether the financial agent was a human or a machine, and why. We saw that when people received a text in which the agent's typeface was a script typeface, like handwriting, they said, oh, it's a machine anyway. Most people said they were machines, but this one wants to deceive me. So what's the limit of being human? This is one thing we can think about. One of my favourite books is The Most Human Human. Brian Christian, and I'm going to come back to him later, studied the Turing test, and instead of looking at people pretending to be machines, he looked at the qualities that humans should have to be human. What are the qualities that describe a human? Maybe we should pay more attention to that. Yes. OK, so when we look at that, at transparency as well, and at whether it's a human or not, we should maybe think about communities for whom access to education, to AI education, to technology education, is not so close. What do they have? For example, this is a community in Brazil: low-income, small-business women. They have access to technology because they have mobile phones, and you can see their mobile phones here; actually, they are paying for them in several installments. Their contact is through WhatsApp, for example. So we did an experiment with them, and we asked them: what questions does an AI need to answer to be used
for it to be effective and trustworthy, to respect human rights, to be democratic, and so on. So we asked that. And this system, what is its output? The women are part of a financial education course run by an NGO. When they enter the course they answer a questionnaire; when they leave they answer another; and after three and after six months they answer follow-up questionnaires. So we worked with the NGO, took those questionnaires, and redesigned the questions for a chatbot. And those women answered. While they were answering, they were answering about their business: questions related to women's empowerment, to business growth, and to revenue. But the main purpose of the system and the questionnaires was to extract indicators to measure the social impact of the program. We tested this with 70 women. As an output, they could see the health of their business; we had a scale, and they could see that. But when we tested with them, several had a score of zero, for example. And why zero? One of them said: this result means nothing to me; it's not the zero that will decide whether I continue to engage with my business. The index was zero because my business is not really running. I'm not going to say it's dying; I'm going to say it's being born. I would like to know how my advertising is doing. I can talk a little more about that, but first, about the zero: it's important because, for some of them, it was unexciting and very frustrating to see zero, and we needed to understand why.
So, one of them: her ex-husband paid the rent, and she counted that in the expenses, but in the end she had profits as well. Things that seem so small make a lot of difference, because they are intrinsic to the context. Other women wanted the chatbot to tell them: I would like to know how my advertising is doing, and whether I'm on the right path; what are the recommendations? We asked about their vision of the future, and that was something like: ah, I like this, I want to consider answering this, because it makes me reflect. And it means, for example: ah, the score, I can improve it. Yeah, but I don't have the structure yet. So this is delicate, because maybe they are not at a stage where they feel good about that. Yeah. And there were some misunderstandings around terms: for example, 'education' and 'polite' are similar words in Portuguese. And religion is an interesting fact. We asked the NGO, should we take this question out? And they said religion is one of the main things they disagree about, because they are at more or less the same economic level, the same status, but when we put people from different religions in the same WhatsApp group, we usually have friction there. OK. So how can we legitimate what the chatbot answers? Maybe in the future, and this is a provocation paper that we did, we could have a score for each kind of generative system. With this score, we could see how legitimate it is, how transparent it is, and where the data came from, right? In our project we used closed scopes, closed domains, to avoid hallucinations, or at least to mitigate them a little, because then at least the corpus comes from the clients. And the third thing I would like to mention, I'm almost finishing, is the expectation alignment I mentioned. So this is another one.
So if we have generative systems, how can people's values, the values we collected in the field, be aligned with the values we have from other stakeholders as well? And the AI is there in the middle. Here's an example from a call center: we expect productivity, fast performance, speed, efficiency, faithfulness, and we need all of that. But then, when we look at the models, we need to choose the ones that are aligned with that. So we want a model that reduces hallucinations and that has data representative of the public that is going to use it, right? So I'm going to end, yes, with a joke, and I'm going to end with this: we also have some design principles we can think about. How can we build generative AI systems in a responsible way? So thank you so much. Thank you, Heloisa, for your great presentation and for sharing your wonderful work with us. So we had a view from the industry. Now I invite Roberto to bring a perspective from the technical community on those topics. So please, Roberto. Thank you very much.
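The "score for each kind of generative system" floated above is only a provocation from the speaker's paper; purely as a hypothetical sketch (the criteria, weights, and ratings below are invented for illustration, not taken from the talk), it could look like a weighted transparency scorecard:

```python
# Hypothetical "legitimacy scorecard" for a generative system: rate a few
# transparency criteria in [0, 1] and combine them into one score.
# Criteria, weights, and ratings are invented for illustration only.
CRITERIA = {  # weight: how much each dimension counts toward the total
    "data_provenance_disclosed": 0.4,
    "closed_domain_corpus": 0.3,      # e.g. answers restricted to a client-supplied corpus
    "model_card_published": 0.2,
    "hallucination_eval_reported": 0.1,
}

def legitimacy_score(ratings: dict) -> float:
    """Weighted average of per-criterion ratings; missing criteria count as 0."""
    return sum(CRITERIA[k] * ratings.get(k, 0.0) for k in CRITERIA)

score = legitimacy_score({
    "data_provenance_disclosed": 1.0,
    "closed_domain_corpus": 1.0,
    "model_card_published": 0.5,
    "hallucination_eval_reported": 0.0,
})
print(f"{score:.2f}")  # 0.80
```

A real scheme would need agreed-upon criteria and independent auditing; the point of the sketch is only that transparency dimensions can be made explicit and comparable.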

Reinaldo Ferraz:
I think you should use the mic for the online people. Thank you. Thank you very much, Diogo. It's a pleasure for me to be on this distinguished panel. Sorry? It's okay, right? It's on, okay. I think it will be nice.

Diogo Cortiz:
I totally agree with Heloisa's intervention. So I would like to shift my comments a little toward how this emerged, specifically generative AI, since we have had artificial intelligence for many years now in different forms: translators, image recognition software, and other applications. But I think one game changer indeed was ChatGPT. And it's not that there aren't other tools, there are many, but this one was, I would say, the first that was presented, I think toward the end of last year. And in a matter of maybe weeks, many people started to use it, were thrilled using this tool, and then spread the word. In a matter of weeks it went from thousands of users to hundreds of millions of users. So this, indeed, I would say, is a particular phenomenon to analyze. I don't remember any other tool that penetrated society so rapidly. And I would say there is a factor that perhaps contributed to the uptake of this tool. It's not just that many people had already used different bots. In this case, initially, many people were experimenting, but once they realized the potential of this tool, everyone started to use it for many, many other activities. I mean formal activities, where now, in some cases in the academic world, we can even talk about cheating, about presenting work that was not necessarily developed by the students and learners themselves, et cetera. But I would say that many people felt this tool was really without limits. And again, it can be applied in different ways, now combined with other forms of AI. Actually, there are even people making money now; they found it a way of making money.
What I can speak to from my particular perspective is the technical side and academia, because I have been a teacher for the last 20 years, more or less, at the university, mostly in IT-related subjects. And as happened in other areas, when the teachers and the students learned about this tool, of course they were thrilled. And I would like to comment on this story, because perhaps it happened in other places too, but in my country, and maybe not only in my university, the people who encountered this tool started wanting to formally tell others about it, and began organizing webinars, seminars, and things like that, in a way presenting themselves as experts in this field. Many people started to feel like that just because they used it, discovered this fantastic tool, and wanted everyone to know. So I think that's another important part we need to reflect on. The other comment I wanted to make is that, yes, AI has been with us for several years now, but the ethical aspects and the regulatory framework have been discussed, I would say, only in the last five years. And I can witness to that, because I was a member of the MAG during the past three years, last year was my last year as a MAG member, and I had a chance to see how the discussion regarding the regulation of AI was evolving and reaching the academic sector, with all these possibilities, and also the negative impacts this may cause. And I think we are at that moment now, back in Bolivia, and perhaps in the region, or even in the world, with different sides of the coin. There are people who feel that this is like the devil, that we should try to avoid its use, maybe even prohibit the use of this tool,
because it is teaching bad things to our learners, because the learners are trying to pass themselves off as something they are not, et cetera. You understand my point. And then, of course, there is the other side, people who would actually love to have this even more evolved. When we talk about regulation, and about adjusting the policies that will apply even in the academic sector, I think prohibition should not be the way. I always like to give this example, and I know we should respect the differences between the scenarios: if we remember back in the 60s and 70s, maybe no one here is going to remember that moment, we were using the slide rule. One of the skills we required from our students was, of course, to know how to manage that kind of tool. But then pocket calculators appeared, so it was immediately important to adjust the curriculum designs in the different areas and to evolve what our learners needed to learn. And I think that's the kind of reflection we need to do at the university. It's not about prohibiting the use of these kinds of tools, but about adjusting the skills, the new skills, we need and want our students to have in the near future, knowing that we now have tools like this one that will, of course, greatly reduce the time many activities take, for our students, for our teachers, and for the academic

Reinaldo Ferraz:
community as a whole. So I will stop there. Thank you very much. Thank you, Roberto. So we had views from industry and from the technical community. Now I invite Caio Machado to give us a perspective from civil society, but also from academia. Welcome, Caio, and the floor

Caio Machado:
is yours. Thank you very much. It's great seeing all of you. I'm going to quickly put up a slide with my contacts, but I won't use slides for my speech; it's just an opportunity to network with the folks over in Japan. So if anyone wants to reach out, I'd be glad to continue our conversations later on. I hope you can see the slide okay. Can I get a nod? You can see it. That's perfect. Thank you. Great. So, my concerns when we're talking about generative AI and the title of our talk, Synthetic Realities: let's lay down a premise here. I think of issues related to artificial intelligence in three major layers. First, the data: the quality of the data, the diversity of the data set, whatever is used to train and develop the models. Then the engineering of the models themselves. And a final layer, which is deployment: that's when we take a tool, throw it into society, and it behaves in ways that are unexpected. A great case of that, and it's kind of a cliche case, it's an algorithmic tool, not even AI from what I understand, is the COMPAS case, where algorithmic tools were used in certain states in the United States. On the one hand, the algorithm was biased, so we do have an issue in the bottom layers, in the data and the development of that tool. But also, judges started using something that was intended to attribute risk to defendants to determine the severity of sentences. So what was intended for one purpose, once it was thrown out into the world, was incorporated and embedded into society in different ways. And that is harder for us to foresee, and I think that is an issue much greater than what we were discussing. I do agree that hallucination and error are very severe problems, but we're not thinking as much about what happens once the AI is out in the world. For example, I know that lawyers and judges around the world are using generative AI.
What is the impact of that when a judge decides to pay $20 a month to use ChatGPT, and all of a sudden ChatGPT is deciding the cases and setting precedent? So I think that's a big concern. My second concern, again addressing the issue of synthetic realities, is not so much the fabrication of extremely realistic content, not that it isn't an issue, I acknowledge deepfakes and so on, but I think that will be addressed in the midterm with new mechanisms for developing trust. What I'm really concerned about is how these tools become infrastructure for access to information. The same way we use Google to access information today: you get ten results, and depending on the words you put in, you get different results for where dinosaurs came from. It could be evolutionary theories, it could be a creationist theory. When you have a chat doing that and everything is compacted into a single answer, what sort of tools do we have to double-check it, and to equip users to be able to fact-check it and get different perspectives? So in the sea of information we have, the eyedropper is getting smaller, more complex, and less transparent. And I think that plays a big role in creating distortions in our readings of reality. So speaking of disinformation, or even malinformation, I think these tools, and the lack of accountability around these tools and how they operate, can have severe effects in that regard. And I'm trying to be quick so we can all speak. That obviously refers back to things brought up by the previous speakers: fairness, accountability. I think there's still little debate on how we can ensure, at the development level, means of accountability and fairness at the deployment level: metrics, ways of keeping people from using AI tools for unintended purposes. This is a more conceptual proposition; as a lawyer, I can throw this issue to the engineers and ask that you think of solutions.
But this was something I was discussing with some folks here at the School of Engineering: how can we think of fairness metrics and somehow have that dialogue with the user, have the user think through how the AI is being deployed? And that also speaks to what was mentioned before on AI literacy and tech literacy in general. And finally, just to point to some of the work we're doing right now: academically, I'm at Oxford, but right now I'm also a fellow at the School of Engineering here, learning a lot from the engineers. We're thinking a lot about the uncertainty across different models of machine learning, where you might have 95% accuracy across different models, but then you have that 5% where you're getting predictive multiplicity. And what do you do with these people? Who has the legitimacy to decide what should be done with them? You can look at the work of Professor Flavio Calmon and Lucas Monteiro; they're really going deep into this topic, and we're working together. And for me, the fundamental question here is: there's a whole section of the population, or users, or you name it, of the data, that the algorithmic tools don't know what to do with. And who should be able to decide? So far, obviously, this is being answered by the teams developing those models. But once this is deployed in society, the effects aren't restricted to code; they are social and ethical effects, which perhaps should be discussed in other spaces as well. With that, I'll conclude my speech. And thank you once again for having me. Please feel free to reach out so we can continue the conversation. Thank you, Caio.
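The "95% accuracy, 5% predictive multiplicity" point above can be illustrated with a toy example: two classifiers with similar overall accuracy that nonetheless disagree on a slice of inputs. The dataset and both models below are invented for illustration; this is not the setup used in the work of Calmon and Monteiro.

```python
# Toy illustration of predictive multiplicity: two roughly equally accurate
# classifiers that disagree on a subset of the data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X0 = rng.normal(loc=[-1.0, 0.0], scale=1.0, size=(n, 2))  # class 0 samples
X1 = rng.normal(loc=[+1.0, 0.0], scale=1.0, size=(n, 2))  # class 1 samples
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Model A: nearest-centroid rule learned from the data.
c0, c1 = X0.mean(axis=0), X1.mean(axis=0)
pred_a = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)

# Model B: a hand-set threshold on the first feature, deliberately shifted a bit.
pred_b = (X[:, 0] > 0.2).astype(int)

acc_a = (pred_a == y).mean()
acc_b = (pred_b == y).mean()
ambiguous = pred_a != pred_b  # points on which the two models conflict
print(f"acc A {acc_a:.3f}, acc B {acc_b:.3f}, disagreement {ambiguous.mean():.1%}")
```

Both rules land at similar accuracy, yet a few percent of the points get opposite labels depending on which model was picked; those are exactly the cases where the question of who decides becomes concrete.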

Reinaldo Ferraz:
We have five more minutes. So I invite Matheus to give his contribution to the session. Amazing. Thank you so much, Diogo. So hello, everyone.

Matheus Petroni:
My name is Matheus Petroni. I'm a master's degree student at the Pontifical Catholic University of Sao Paulo, in the field of design, human-computer interaction, and artificial intelligence. I'm also actively engaged as a user experience designer in the Latin American industry. So I will add just a few things here to bring more of this user-centric perspective, and also not to repeat the other remarks, which I am aligned with. On one hand, there are plenty of expectations concerning the potential benefits of these advancements. Even with content generated by AI being considered synthetic realities, the proximity to users' actual experiences is so striking that it has the potential to overcome longstanding challenges within the usability domain, such as the learning curve associated with new digital technologies, the enhancement of engagement through personalized experiences, and a more accessible way to obtain knowledge. This potential value extends to diverse domains such as education, health care, well-being support services, digital communications, and even customer support. The human-like AI techniques showcased in specific chatbots are a prime illustration of this trend. Meta's recent introduction of 28 AI personas modeled after well-known public figures is a case in point. They aim to provide users with valuable advice within the realms of each celebrity's expertise. In doing so, they significantly broaden the scope of engagement and diversify the ways individuals can access digital support to address their needs. From another side, despite the promise these innovations hold, numerous concerns deserve our attention before we take further steps. In a world where a significant part of digital content could be created partially, or even entirely, by AI in the next few years, giving users tools to discern the origin and nature of this content becomes imperative.
This underscores not only a governance and technical challenge, but also a design one, as we need to allow users to look at a small mobile screen and recognize visual clues, such as color, typography, iconography, or other elements, that help them make informed decisions about its use. Additionally, we must remain vigilant about the potential dangers of establishing intimate and affective bonds with such technologies. Users may inadvertently develop strong emotional attachments to chatbots, which could prove problematic if these chatbots fail to adequately meet their needs, or if users become overly reliant on them. In the realms of education and mental health support, such attachments could compromise social and learning skills, and the significance of sharing experiences with peers, families, and the surrounding community. Beyond that, if we start to look a little further at possible futures, we could consider the possibility of users simulating their own presence through automated chatbots on social media platforms. This idea invites a critical examination of what is inherently human, such as having a unique personality, and of how we can effectively communicate the capabilities of these emerging technologies without over-promising features that the current state of AI may not, or even should not, deliver. In conclusion, I believe there is huge room for improvement in our forensic technologies to detect and label content created by generative AI, sometimes to inform the user about its nature, sometimes to prevent the dissemination of content that threatens human rights or democracy, or propagates misinformation. As an example, the same case of artists lending their likeness to personalized chatbots, as Meta launched, could be applied to artists appearing in deepfake videos with harmful or untruthful narratives, a phenomenon that is increasingly prevalent.
That said, I invite you to reconsider the significance of presenting essential information alongside the user interface, promoting a robust culture of fact-checking and content differentiation. These emerging challenges require collective efforts from government, society, and research to safeguard democratic values and individual freedoms in the face of this rapidly evolving landscape. So that's it for me. Thank you so much.
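The call above to label AI-generated content so interfaces can surface visual clues could be sketched, under invented assumptions, as a minimal machine-readable provenance record bound to the content by a hash. The schema here is hypothetical; real content-credential standards are far richer, and a deployable scheme would cryptographically sign the record rather than rely on a bare hash, which anyone can recompute.

```python
# Hypothetical sketch: attach a provenance label to generated content so a
# client UI can render a visual cue ("AI-generated") and detect tampering.
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class ProvenanceLabel:
    generator: str       # which system produced the content, if any
    ai_generated: bool   # the flag a UI would turn into a visual cue
    content_sha256: str  # binds the label to the exact content bytes

def label_content(text: str, generator: str, ai_generated: bool) -> str:
    """Bundle content with a provenance record as a JSON payload."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    label = ProvenanceLabel(generator, ai_generated, digest)
    return json.dumps({"content": text, "label": asdict(label)})

def verify(payload: str) -> bool:
    """True if the label still matches the content (nothing was swapped)."""
    obj = json.loads(payload)
    digest = hashlib.sha256(obj["content"].encode()).hexdigest()
    return digest == obj["label"]["content_sha256"]

p = label_content("A synthetic product description.", "demo-llm", True)
assert verify(p)
```

If the content is edited without updating the label, `verify` fails, which is the hook a fact-checking or differentiation layer could build on.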

Reinaldo Ferraz:
Thank you, Matheus. So we had inputs from different stakeholder groups, and now we have time for just one question, if someone wants to ask one. Yes, please, you can go to the mic.

Audience:
Oh, the mic is on. Okay, thank you very much. My name is Valerius, and I'm representing KCGI, a university here; I'm a master's degree student. My question, and maybe one point I would like to speak about, is how generative AI can be used for crime, and what that means for cybersecurity. As we all know, we can now generate images and chat with LLMs. My thinking is that we can now also mimic voices, and what's stopping bad actors, people who really want to do harm, from using those tools, for example, to generate somebody's grandma's voice, or to generate my voice and call my parents requesting money, or something closely related to that? I just think this is a point that needs further discussion and maybe regulation: how are we going to deal with this possible crime? In my eyes, this is going to grow extremely fast in the next couple of years, when the algorithms become much more efficient and the output will be barely recognizable by human beings. Thank you. Thank you. So, Roberto, do you want to start answering? Sure. I will go back to my previous point. I would say that it's really, really hard to start thinking that regulation

Diogo Cortiz:
is going to resolve everything; better if we come up with some creative ways of dealing with those kinds of examples. Everything needs to change now; we need to adjust to this new reality. I can talk about the academic area. I'm not an expert in crime, of course, but just to take an example: it will now be hard to consider an image or a voice recording as concrete evidence of a crime, due to these new possibilities. And that is currently fixed in our laws. So that's an example of the things that need to change based on that reflection. And I would say this will have to happen in all different areas. Thank you. Okay, so Caio, please, the floor is yours. Yeah, just to quickly complement,

Caio Machado:
I mean, that's already a reality, for sure in the US, and for sure in Brazil; the use of deepfake voices to run scams over WhatsApp in Brazil is very, very common and becoming even more so. So that's something we need to deal with. I think we can look back at the knife: we've had knives around for thousands of years, and still we created laws, and that hasn't prevented people from stabbing each other, meaning the tool is around and will be used for good and for bad. I think that policy, not only criminal law but regulation, market regulation, all sorts of rules we can think of, needs to be deployed to limit the circulation of these tools in whatever contexts they're used for criminal purposes, to increase traceability, and, sorry, it's late here, to promote digital literacy through public policy, so people learn to mistrust these audios and have other means of checking. So it's more of an ecosystem solution, let's say, than passing one rule that will outlaw the misuse of deepfakes, voice, video, you name it. We don't have a silver bullet; it's a series of initiatives and rules that we need to promote. Thank you, Caio.

Reinaldo Ferraz:
So our time is over. I'd like to thank all the speakers and the audience, and the session is closed. Thank you.

Audience

Speech speed

157 words per minute

Speech length

241 words

Speech time

92 secs

Caio Machado

Speech speed

151 words per minute

Speech length

1367 words

Speech time

542 secs

Diogo Cortiz

Speech speed

154 words per minute

Speech length

1247 words

Speech time

485 secs

Heloisa Candello

Speech speed

140 words per minute

Speech length

3027 words

Speech time

1299 secs

Matheus Petroni

Speech speed

162 words per minute

Speech length

746 words

Speech time

277 secs

Reinaldo Ferraz

Speech speed

157 words per minute

Speech length

501 words

Speech time

192 secs

GC3B: Mainstreaming cyber resilience and development agenda | IGF 2023 Open Forum #72

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Allan Cabanlong

The Global Conference on Cyber Capacity Building (GC3B) brought together experts and decision-makers from all over the world to discuss the importance of addressing digital risks and strengthening cyber resilience. The conference highlighted the fact that the digital world has a profound impact on every aspect of our lives but also presents numerous risks that need to be addressed.

One of the main arguments put forward at the conference was the necessity for individuals and nations to be aware of these digital risks. It emphasized that simply being aware of these risks and their potential impact is not enough. Resources, knowledge, and skills are required to effectively mitigate them. The speakers stressed the need for investment in the digital future and the importance of every country having the resources and expertise necessary to navigate the challenges posed by the digital transformation.

The conference also emphasized the need for global collaboration in cybersecurity. It recognized that no single nation can tackle these challenges alone and that nations need to work together and support each other to keep up with the rapid pace of the digital transformation. Collaboration was seen as crucial not only for addressing current challenges but also for staying ahead of emerging threats and technologies.

The aim of GC3B was to support and strengthen global cyber resilience. The conference brought together high-level government leaders, practitioners, experts on cybersecurity, and representatives from the development community. Through interactive discussions and knowledge sharing, the conference aimed to develop a global framework for concrete actions that support countries in enhancing their cyber resilience.

Cyber capacity building was highlighted as a key enabler for sustainable development. It emphasized that all nations need to prioritize building their capacity to effectively respond to cyber threats. Building robust cyber capabilities is seen as essential not only for protecting critical infrastructure and national security but also for promoting economic growth and social development.

The conference had a positive impact on inspiring other regions and strengthening global cybersecurity cooperation. The insights, ideas, and best practices shared at GC3B were seen as invaluable in inspiring other regions to take similar actions and fostering a renewed commitment to global cybersecurity cooperation.

In conclusion, the Global Conference on Cyber Capacity Building stressed the importance of being aware of digital risks and having the necessary resources, knowledge, and skills to mitigate them. The need for global collaboration and support in cybersecurity was emphasized, aiming to keep up with the digital transformation. The conference aimed to support and strengthen global cyber resilience and highlight the key role of cyber capacity building in enabling sustainable development. The GC3B conference inspired other regions and left a lasting impact on global cybersecurity cooperation.

Audience

The audience member raised several questions during the event. Firstly, they were curious about the reasons for choosing Ghana as the location for the event and asked about the availability of virtual involvement. The organizers did not provide a direct response to this, but it can be inferred that Ghana may have been chosen for its potential to host successful and impactful events.

The audience member also inquired about the organizers’ plans for the year after the event and their goals within the next three years. No specific plans or goals were mentioned, which implies that the organizers may not have disclosed this information. However, it is important to have long-term plans and goals to ensure the sustainability and continuity of initiatives like the Accra call.

Speaking of the Accra call, it was stated that achieving its objectives, as outlined in the Accra call document, will take a considerable amount of time. This indicates that the goals and aspirations laid out in the Accra call cannot be accomplished within a short period, such as six months or two years. It is crucial to understand that long-term commitment and efforts are required to bring about significant changes and advancements.

The concept of effective capacity building was also highlighted during the event. The audience member pointed out the importance of tailoring capacity building efforts to the specific needs and demands of the recipient country. It was emphasized that capacity building should be demand-driven, ensuring that the recipient country can absorb and sustain the knowledge, resources, and skills provided.

Furthermore, legislators were recognized as playing a vital role in sustainable cyber capacity building. It was stated that involving the legislators and helping them understand the value and importance of sustainable cyber capacity building is crucial for securing adequate budgetary resources. This acknowledgment highlights the need for collaboration and communication between policymakers and industry experts to ensure the allocation of necessary resources for successful capacity building programmes.

During the event, the issue of donor coordination was addressed. It was emphasized that de-conflicting between donor countries is essential to avoid duplication of work and optimize resource allocation. The Cybil Portal was mentioned as an existing tool that can be utilized to prevent overlap and promote effective coordination among donors.

In the context of cybersecurity, collaboration and coordination were emphasized as key factors for success. It was noted that going solo in cybersecurity initiatives is not effective; instead, collaboration and cooperative efforts are necessary. This is particularly relevant in the Pacific region, where countries are at different stages of cybersecurity development. The audience member highlighted the importance of ensuring that no country is left behind and called for coordinated efforts to address cybersecurity challenges collectively.

In conclusion, the audience member raised various insightful questions and concerns during the event. They inquired about the choice of Ghana as the event location, the availability of virtual involvement, plans for the future, and the goals of the Accra call. The concept of effective capacity building, the role of legislators in sustainable cyber capacity building, and the need for donor coordination were also discussed. Collaboration and coordination in cybersecurity efforts were emphasized, especially in the diverse Pacific region. Overall, the event provided valuable insights into the challenges and opportunities in event organization, capacity building, and cybersecurity.

Liesyl Franz

The involvement of the United States government in international cyberspace security and capacity building is vital for the development of knowledge, skills, and infrastructure in other countries. Over time, the US has increased its funding and activity in this area, moving from just one person to providing significant support to initiatives such as the Global Forum on Cyber Expertise (GFCE) and the Global Conference on Cyber Capacity Building (GC3B), which aim to improve coordination and dialogue on cyber capacity building.

Recognizing the interconnected nature of cyberspace security and digital development, efforts are being made to address both areas together. Bridges are being built to bridge the gap between these two domains, ensuring progress in connectivity without compromising security. The goal is to digitize societies while also making them resilient to cyber threats.

The United States is a strong advocate for multi-stakeholder community discussions, which include donor countries, recipients, implementers, the private sector, and academia. Initiatives like the GFCE and GC3B facilitate engagement and effective cyber capacity building. The US actively participates in these conferences through a high-level interagency delegation.

Efficient capacity building depends on tailoring the approach to the specific needs of each country and ensuring its absorbability. Sustainability is another crucial aspect of capacity building, requiring long-term viability and continuous support.

Additionally, capacity building efforts should address immediate responses to crises. The United States highlights the importance of addressing urgent needs in countries facing crises like Ukraine, Albania, and Costa Rica. This demonstrates the necessity for capacity building to be adaptable and responsive.

Financial resources are vital for providing assistance in capacity building and other areas. Adequate funding is necessary to implement programs and initiatives effectively.

Emphasizing the benefits of cybersecurity efforts can encourage investment and political support. By highlighting the positive outcomes and advantages of cybersecurity measures, it becomes more likely that resources will be allocated to support and advance these efforts.

In terms of training, it is recommended to provide in-country, on-site training for better integration of cybersecurity measures. This tailored approach directly addresses the specific needs and challenges of each country. Continuous learning is also seen as beneficial in the field of cybersecurity, allowing individuals to stay updated and take advantage of professional development opportunities even if they are unable to travel for training.

In conclusion, the United States plays a pivotal role in international cyberspace security and capacity building. Their involvement includes financial support, hosting conferences, and promoting multi-stakeholder engagement. The interconnectedness of cyberspace security and digital development is recognized, and efforts are being made to address these areas together. Capacity building should be tailored to the specific needs of each country and focus on sustainability. Immediate responses to crises are essential, and adequate financial resources are necessary for providing assistance. Emphasizing the benefits of cybersecurity efforts can drive investment and political support. In-country, on-site training and continuous learning are recommended for better integration and professional development in cybersecurity.

Keywords: United States government, international cyberspace security, capacity building, funding, Global Forum on Cyber Expertise (GFCE), Global Conference on Cyber Space (GC3B), cyberspace security, digital development, multi-stakeholder community discussions, sustainability, immediate responses, financial resources, cybersecurity efforts, in-country training, on-site training, continuous learning.

Christopher Painter

The Global Forum on Cyber Expertise (GFCE) is an organisation devoted to promoting cyber resilience and capacity building in line with the Sustainable Development Goals. An important initiative of the GFCE is the Accra Call to Action, which seeks to enhance cyber resilience in development and foster sustainable capacity building. This call to action is aimed at countries, regions, the private sector, and the technical community, with a focus on promoting cyber resilience, advancing effective cyber capacity building, strengthening partnerships, and enhancing resources.

Christopher Painter, a strong advocate for cyber capacity building, emphasises the significance of aligning with development goals. He believes that consultation and community input are crucial for the success of the Accra Call. To ensure community engagement, a mature draft of the Accra Call will be circulated for public comment by the end of October. The GFCE also plans to engage the community through public consultations at various events, such as the Paris Peace Forum, Singapore Cyber Week, and the IGF session. The objective is to address major concerns and incorporate new ideas and input into the Accra Call.

Due to COVID restrictions, the GFCE had to change the location of its first conference, originally planned to be held at the World Bank in Washington. Instead, it saw an opportunity to hold the conference in Ghana, a country with unique needs in the field of cyber resilience. The government of Ghana is supportive of hosting the conference. Efforts are being made to facilitate virtual participation for those who are unable to attend in person, ensuring robust virtual connectivity.

The GFCE places emphasis on involving the Global South and securing legislative and leadership buy-in for sustainable cyber capacity building. They highlight the need to integrate cyber resilience into national plans and view it as an integral part of broader development strategies. They also stress the importance of respecting human rights and the rule of law in any declaration pertaining to cyber resilience and capacity building.

In terms of governance, the GFCE aims to integrate improved governance practices into their work. They advocate for building partnerships, local leadership, and coordination among developing countries. By fostering the leadership of developing countries in coordinating cyber capacity building efforts, the GFCE seeks to create stronger partnerships and enhance long-term sustainability.

Additionally, the GFCE underscores the importance of information sharing and coordinated efforts among donor countries to avoid duplication of work. Regular meetings are held for donor countries to collaborate and exchange information. They also advocate for strengthening existing organisational structures rather than creating new ones, ensuring greater sustainability and efficiency.

Financial resources play a critical role in cyber resilience activities, and the GFCE calls for maximising existing financial streams, including international development financing, domestic resource mobilisation, and private sector involvement. Drawing from the development community, they propose utilising models to measure sustainability and incorporating cyber resilience into integrated national financing frameworks.

To ensure professional development and capacity building, the GFCE aims to professionalise the cyber capacity building community and promote human rights-based and gender-sensitive approaches. They also underscore the need for project prioritisation and the creation of measurement tools to assess the results and impact of projects.

In conclusion, the GFCE is passionately committed to promoting cyber resilience and capacity building aligned with sustainable development goals. Through initiatives such as the Accra Call to Action, partnerships with developing countries, and efforts to maximise financial resources, they strive to create a more secure and resilient cyber landscape. Their focus on consultation, community input, and collaboration reflects their commitment to inclusive and sustainable cyber capacity building efforts.

Tereza Horejsova

During the analysis, several important points were highlighted by the speakers. One of these points focused on the Internet Governance Forum (IGF) being described as a hybrid event. This means that the IGF combines both in-person and virtual elements, allowing for greater participation and connection from around the world. The IGF is seen as a significant platform for global networking and exchange of ideas.

Another key topic discussed was the Global Forum on Cyber Expertise (GFCE) organising a major conference in Ghana, with a strong emphasis on partnerships for the goals. The conference aims to bring together various stakeholders to collaborate and work towards achieving the Sustainable Development Goals (SDGs). The GFCE’s commitment to partnerships highlights the importance of collective efforts in addressing global challenges.

The analysis also focused on the Accra Call, which sets guidelines for efficient global action on cyber capacity building. This highlights the need for effective coordination and collaboration in addressing cybersecurity challenges worldwide. The call serves as a roadmap for enhancing cyber capacity and ensuring the global community is better equipped to mitigate digital risks and threats.

The digital world was discussed extensively, with a recognition of its vital role in essential areas such as food, water, and healthcare. The digital world enables connections and facilitates communication, leading to improved access to resources and services in these critical sectors. However, it was also acknowledged that digital risks are associated with the digital world. This emphasises the need for strong cybersecurity measures and proactive efforts to address potential threats.

Efficient resource use and better coordination were identified as crucial factors for enhanced global support. The analysis highlighted the importance of using limited resources effectively and establishing better collaboration among countries. This includes linking different communities in cyberspace and improving coordination to ensure optimum efficiency in resource utilisation.

The Cybil Portal was discussed as a valuable resource for mapping various cyber capacity building projects. This portal allows for easy access to information on projects already implemented or currently ongoing and enables filtering by specific regions or countries. The portal serves as a tool for tracking and analysing global efforts in cyber capacity building.

Furthermore, the analysis highlighted the necessity of building on previous projects to plan new activities effectively. This approach avoids duplicating efforts and optimally utilises limited resources. By learning from past experiences, countries can enhance their planning and implementation strategies, leading to more impactful outcomes.

Another noteworthy observation was the importance of collaboration among countries. By working together and sharing their expertise, countries can achieve more efficient use of resources and tackle challenges collectively. The analysis emphasised the significance of partnerships and collective action to promote sustainable development and address global issues.

In conclusion, the analysis provided valuable insights into key topics such as hybrid events like the IGF, the major conference organised by the GFCE, the Accra Call for efficient global cybersecurity action, the role of the digital world in essential areas, the need for efficient resource use and better coordination, and the significance of the Cybil Portal and collaboration among countries in cyber capacity building. These insights highlight the interconnectedness of global efforts and the importance of cooperation in addressing complex challenges in the digital age.

Pua Hunter

Significant progress is being made in the cyber ecosystem and cyber capacity building space in the Pacific region. Initiatives such as the Pacific Cyber Security Operational Network (PaCSON), the Pacific Islands Law Officers’ Network (PILON), Cyber Safety Pasifika, the eSafety Commissioner, and the Oceania Cyber Security Centre (OCSC) are actively contributing to the development of the cyber ecosystem. They are strengthening the region’s infrastructure, legal frameworks, policies, and capabilities to handle advancements in cyberspace effectively.

The Global Forum on Cyber Expertise has recently launched its Pacific Hub, aiming to enhance cooperation and knowledge sharing on cybersecurity matters in the region. Collaboration, engagement, and coordination among stakeholders need improvement to maximize the benefits of these initiatives. Embedding these aspects in the cyber capacity building approach will enhance the region’s overall cybersecurity preparedness and resilience.

Cybersecurity is a crucial aspect of digital engagement that cuts across all sectors and impacts various cyber-related activities. Pacific leaders have recognized its importance and emphasized that it is both an individual and a collective responsibility. The Oceania Cyber Security Centre has highlighted cybersecurity in its capacity maturity reviews of several Pacific countries, including the Cook Islands.

In terms of donor assistance, sustainability becomes a challenge when donors leave without ensuring adequate resources. Planning and resource allocation are vital to ensure the longevity and effectiveness of projects.

Regarding training and capacity building, the concept of in-country training has been proposed to enhance knowledge transfer. Bringing trainers to countries to train a larger number of individuals can improve expertise implementation and dissemination.

In conclusion, the Pacific region is progressing significantly in the cyber ecosystem and cyber capacity building. Various organizations, networks, and conferences have contributed to these developments. Enhancing collaboration, prioritizing sustainability in donor assistance, and emphasizing in-country training will strengthen the region’s cybersecurity capabilities and readiness to address evolving threats.

Session transcript

Tereza Horejsova:
also to everybody joining us online. I heard we had a bigger crowd online than in the room, which is always exciting, given that the IGF is a hybrid event. My name is Tereza Horejsová. I will be your moderator for today’s session, and I’m with the Global Forum on Cyber Expertise. And joining me in speaker capacity today is Pua Hunter from the Cook Islands, who is joining us online. Hello and good afternoon to you. Then we have here in the room Liesyl Franz from the US government, thank you, and Christopher Painter from the GFCE, the president of the GFCE Foundation. My helper online for the remote moderation is Allan Cabanlong, also from the GFCE, the director of our Southeast Asia Hub. What we will try to do at this session is to actually mostly have a conversation with you. We will have a few, you know, points to get us started, connected to the presentation of a, I hope, major conference that the GFCE with its partners is organizing at the end of November in Ghana, the so-called GC3B, the Global Conference on Cyber Capacity Building. But we will particularly focus on one of the outcome documents that we expect will be coming from this conference, the so-called Accra Call, which would set some, let’s say, guidelines and ideas for more efficient global action on cyber capacity building. And we would like to use your perspectives to help us shape what this document could look like. So, I hope that this sounds like a good plan. What I suggest that we do for a start is that we will play a very short video that should introduce the conference a little bit, and then we go to the various speakers. So, now, let’s go to the video. Now, fingers crossed that everything works. And if I may ask our dear colleagues here in the room to get Allan on screen, who will share his screen and play the video. Thank you very much. At this moment, it’s without sound. Allan, can you stop it for a sec? I don’t know if the sound issue is something we can handle in the room or on Allan’s end. 
It also comes with digital risks. Allan, can you start again? Oh, he can’t hear us. Sorry about that. Thank you. We can hear the sound now. And apologies for the technical glitch. The digital world touches every aspect of our lives. It enables us to connect, learn, and travel, and plays an important role in safeguarding life essentials, such as food, water, and health care. Along with huge opportunities, it also comes with digital risks.

Allan Cabanlong:
We all need to be aware of those risks. To ensure a free, open, and secure cyberspace, every country should have the resources, knowledge, and skills they need to invest in their digital future. To this end, nations should work together and support each other with these capabilities so that every country can keep up with the digital transformation. After all, a chain is only as strong as the weakest link. On 29th to 30th of November, 2023, the first Global Conference on Cyber Capacity Building takes place in Accra, Ghana, co-organized by the Global Forum on Cyber Expertise, the World Bank, the Cyber Peace Institute, and the World Economic Forum, and hosted by the government of Ghana. This conference will be attended by decision makers from all over the world: high-level government leaders and practitioners, the development community, experts on cybersecurity and capacity building, the private sector, international organizations, and academia from all regions and across all sectors. They will gather to acknowledge that it is paramount for all nations to have the expertise, knowledge, and skills to strengthen their cyber resilience, and to work together on developing these capabilities to ensure a free, open, and secure digital world. We must all act now on cyber capacity building, because it is a key enabler for sustainable development, economic growth, and social progress. To this end, at the GC3B, the Accra Call will be announced, a global framework for concrete actions that supports countries in strengthening their cyber resilience. Stay tuned for the GC3B 2023.

Tereza Horejsova:
Thank you very much, Allan, for playing the video. And I hope this serves as a little bit of an introduction on what we are up to. But Liesyl, if you could tell us more about why, in the first place, it’s also important for the U.S. government to be involved in these efforts, and why you think the GC3B is tackling some issues that are missing on the agenda. Great.

Liesyl Franz:
Thank you, Tereza, and good afternoon, everyone. I’m Liesyl Franz with the State Department, in the Bureau of Cyberspace and Digital Policy, and I am responsible for our international cyberspace security unit. And one of the key elements, one of our business lines, as I have come to describe it, is international engagement and capacity building. And it builds upon years of efforts in building capacity around the world in various ways, including helping countries with national strategies, learning from our experience, perhaps mistakes, and also with building incident response teams and other efforts that help build institutions in other countries to address the risks that you heard about in the video. Over the years, fortunately, we’ve been able to garner a little bit more funding to provide capacity building around the world. We started with sort of one person doing cyber issues in capacity building years and years ago, and we have been able to build that out into a little bit more activity. But what we found, first of all, is that there’s an increasing amount of demand for not only funds, but also the breadth of things that countries are looking for to be able to build up their own resources, knowledge, and skills. And other countries were also looking for ways to help provide such cyber capacity building. And I think as Chris has said in another session where he talked about global cyber capacity building, we want to make sure that all the countries that have the means to provide resources or funding are not doing all the same thing for the same people around the world and that we are able to spread ourselves across the globe in a more coordinated fashion, or at least an informed fashion. So that is why we were supporters of the GFCE, the Global Forum on Cyber Expertise, in the beginning, and why we are supporting the conference. 
Because we, you know, think it’s a unique opportunity for the multi-stakeholder community, donor countries, recipients, implementers, you know, those who are actually on the ground doing the capacity building that we and others can fund, the private sector and academia, to actually have discussions and dialogue, in this sort of coordinated fashion, about the current state of cyber capacity building. What does it look like? Where is it happening? What are we providing to whom, and what are the demands that are coming from the global community? And so this comes at a critical moment when conversations at various multilateral organizations, such as the UN, say, or the International Telecommunication Union or others, look to cover capacity building in greater detail because of that growing demand. So, as you’ve probably heard, this year’s inaugural conference is thematically focused on bridging the gap between cyber capacity building and digital development, and I would say maybe development writ large also, because it doesn’t have to be its own thing. But it’s a unique opportunity to connect various groups and ideas that have too often been siloed. Chris used to talk about silos of excellence in the U.S. government, fair point, but we see them in sort of every aspect of the world, and we want to build the connectivity between them. So how do we make progress on connectivity without sacrificing security? How do we digitize societies but also make sure that they are resilient? These are critical questions for us in the 21st century. We’ve heard them throughout the week here, and I think probably in our everyday work lives, and I think all of you here and online understand that covering them in detail is important and worthwhile. So for these reasons and probably many others, the U.S. 
is looking forward to participating through a high-level interagency delegation led by Ambassador Fick, our Ambassador at Large for Cyberspace and Digital Policy, and to engaging the multi-stakeholder community on these questions and probably many more that will come to the floor at the conference. So we hope to see many of you in Ghana as well, so that we can take meaningful steps toward a safer digital and cyber future. Thanks.

Tereza Horejsova:
Thank you very much, you know, for your remarks but also for the support of the U.S. government, kind of reconfirmed by the delegation that you are sending to Accra. That’s fantastic. Although the conference is called a global conference, it does take place in Africa. It is true that the Africa region is of particular importance to the GFCE. It’s also a region where we have kind of progressed most with the approach of regional agendas to cyber capacity building. But the main aim of the event is really to connect the regional perspectives with the global discussions. So in this sense, it will be very important that we get perspectives from various regions. And at this point, I would like to turn to you, Pua, joining us from the beautiful Cook Islands, to tell us a little bit more about the perspectives of Pacific Island States when it comes to cyber capacity building and how you see the regional efforts feeding into the global action. I hope we have you online and we can hear and see you. Let’s give it a few seconds. Hi, Tereza, can you hear me? We can both hear you and see you, it’s perfect. Please go ahead. Thank you so much.

Pua Hunter:
Greetings, everyone. So there’s actually a lot happening in the Pacific in the cyber ecosystem and cyber capacity building space. In my view, this is a good sign because it demonstrates that nationally countries in the Pacific region are developing their own enabling environment, their infrastructure, their legal framework, their policies and plans, including their capability and capacity to deal with the development in the cyberspace. And we do receive support from our development partners such as the World Bank, the Asian Development Bank, the United Nations Development Programme and so forth, which is a great thing and we’re very grateful. We also benefit from the initiatives of regional and international organizations who deliver cybersecurity initiatives in our region. For example, the Pacific Cyber Security Operational Network, PaCSON, the Pacific Islands Law Officers’ Network, PILON, Cyber Safety Pasifika, the eSafety Commissioner, the Oceania Cyber Security Centre, OCSC. And just recently, last week actually, in Nadi, the GFCE, the Global Forum on Cyber Expertise, launched its Pacific Hub, and it was a great event. And there are many, many more regional and international organisations helping us here in the region, in the Pacific region. So it’s actually a busy space, a good busy space, and these are useful initiatives, undertakings and training offers extended to our region. However, I think we need to be able to manage these events, both nationally and regionally, so we can better reap the benefit that these initiatives are intended for. It’s one thing to bring something to the ground and then leave and nothing moves from there. So yeah, it needs to be managed properly. Back in 2020, the Oceania Cyber Security Centre hosted the Global Cybersecurity Capacity Building Conference. It focused on national approaches to cybersecurity and also engagements in the region and with the development partners. 
The takeaway for me from that conference was contextualising nationally and regionally through more collaboration and engagement and also better coordination. And just last week, I attended the Pacific Cyber Capacity Building and Coordination Conference, the P4C, in Nadi in Fiji. The same message about collaboration and coordination was also repeated several times, but this time accountability was also attached to it, and I think that’s a very powerful message. We need to be accountable for what we’re doing in the cyber ecosystem. For me, this message confirms that cybersecurity is our own individual responsibility as well as our responsibility collectively. So despite cybersecurity and cyber capacity building being a busy space in the region, I think it’s highlighting that cybersecurity is a very important component of our digital engagement that cuts across all sectors and across all the dimensions of cyber activities. We’ve seen that in the CMM review that the OCSC did for some of the countries here in the region, including us, the Cook Islands. I’m actually encouraged that at the highest level in the region, our leaders recognized and placed emphasis on the importance of cybersecurity and referenced it in the region’s high-level plans, the Boe Declaration, the 2050 Blue Pacific Strategy, and the recently endorsed Lagatoi Declaration. Next month, the Pacific Islands Forum leaders will be meeting here in the Cook Islands from the 6th to the 10th of November. And in their program, I was so happy to see that they’ve got a session on strengthening cybersecurity arrangements. You know, again, it actually demonstrates the commitment of the Pacific leaders, and leading up to the upcoming GC3B in Ghana, it sets a clear path for the region, and also the fact that we’re looking at high-level participation from our region. Thank you so much.

Tereza Horejsova:
Well, thank you very much. Good remarks there. And, you know, I’m happy that you also kind of called for a bit more action for things to be moving. And that’s what we are hoping that the GC3B will help with, not only to make some concrete progress on bringing the two rather siloed communities of development and cyber together, but also to bring more political attention to the very urgent issue of cyber capacity building, as Liesyl stressed, and then have kind of a tangible document, you know, as an outcome that hopefully can contribute to more concrete actions in the future. So, Chris, if I can turn to you. The document’s working title is the Accra Call, but would you be able to tell us a little bit more about the document in the shaping that will then be the basis for the discussions and inputs that we will hopefully hear from all of you here? Thank you.

Christopher Painter:
Yeah, certainly, Tereza. And just building off the prior comments to give a little context, we just launched the Pacific Hub. What the GFCE does is it tries to do this exact coordination. So as Pua said, and I saw this in Melbourne at the conference where we helped have a session just before the pandemic, many of the island countries are saying, we get lots of offers for help, but sometimes they’re the same offers for help and sometimes we can’t actually deal with them. And so one of the reasons for being of the GFCE is to take donors and implementers and recipients and try to make more sense out of this, given we don’t have a lot of resources. And that really builds on another thing that was mentioned, which is the overall purpose of this conference: as Tereza said, to highlight and to promote this idea of cyber capacity building, which is often lost, as important as it is, but also to bring together these often disparate communities that don’t talk to each other very well, the cybersecurity capacity building community, which we know very well, and the traditional development community, and not just digital development, but indeed development projects around the world. And if you think about the SDGs, almost all of them are undergirded by both digital and having strong cybersecurity. If you think about development projects like water and power, we saw this in the video, they’re often controlled by cyber means and therefore cybersecurity is a foundational thing, but the communities don’t really interact that much. So one of the big outcomes from this is to really promote that integration between these two communities and dialogue. 
So, you know, obviously bringing people together, having those conversations, having that program is gonna be important, but even more important is this is meant to be a process and a call for future action. As Pua said, it’s great to have all these, like, oh, let’s do this, but it’s not that great if you actually don’t have the actions that follow it. So the ACRA call, which is the working title right now, instead of a declaration, declarations are like, we’re gonna declare this, you know, but a call is a call for action, much like the Christchurch call or the Paris call or some of the other ones that are out there, meant to be sort of a living document. And the idea is really to elevate a mainstream cyber resilience and in the development agenda and vice versa with actionable items. So going to the next slide. Okay, so that, so it’s meant to be an action framework drawing on from existing commitments, but also some new commitments in a few different areas and really a blueprint for motivation and work in this area for both the development and the cyber communities. And I should be clear, it’s not that, you know, we’re not saying the development community has to understand cyber and the cyber community doesn’t have to understand development. We both have to understand and work with each other. I think that both communities have been a little with blinders on. Now there are exceptions, the World Bank, USAID, the British Development Organization, a number of them are doing more of this. And I think that’s good, but it’s still kind of in its infancy. So it’s gonna, this is a blueprint, a call to action with the aim to elevate cyber resilience. And you may wonder why we use cyber resilience. Well, not surprisingly, when you say cybersecurity, the development community says, oh, that’s a military thing. That’s a security thing. Why are we dealing with it? 
Cyber resilience, it really resonates with both communities, both the cyber community and the development community. I think it is really what our overall goal is, resilience. So, it’s to elevate that, promote capacity building that supports larger development goals. Go to the next slide. So, and I should say that this document is still in development. We hope to circulate a somewhat mature draft at the end of October for comment, for community comment, and welcome your comments then. But today, we want to kind of give the conceptual framework and get some thoughts from you. We think it matters now, as Tereza said, because we’re at an inflection point. We’re at this point where these development projects are getting more dependent on cyber technologies and digital technologies. And we really need, we can’t afford any longer to be in these separate communities. We can’t afford in terms of resources to do that either. There’s lots of resources in development. There’s not that many in cyber capacity building. But we make each other stronger by working together. And the call is directed to countries, including recipient countries, donor countries, regional organizations, private sector, technical community, really the entire multi-stakeholder framework that we know and love so well here at IGF. Next slide. I basically covered this. The framework is really meant to be more of a call to action with specific items that will be listed under four major categories. It’s voluntary, like most calls are. You can’t really reach a binding agreement, as I think any of you know, in a short period of time. But a voluntary call where people sign on or endorse, I think, is very helpful. So it’s not formal signatories, but people who endorse it. Go to the next. OK, and I mentioned these four major areas, which will have various thoughts or action items under them. 
And the four areas are, one, actions to strengthen the role of cyber resilience as an enabler for sustainable development. So that’s exactly what I was covering, drawing this connection in very clear terms and making recommendations within that bucket in terms of how the development community and the cyber community can work and leverage each other. The second major bucket is actions to advance demand-driven, effective, and sustainable cyber capacity building. These are things like making sure you have the political will in countries to actually not just do one-off trainings, but that they really want this and you have more sustained capacity building. And that is demand driven. You heard Pua talk about this as well, that we’re simply not saying here’s a whole bunch of programs, but we’re listening and talking to people in regions and countries about what they want and what they need, and we’re matching that. Because that, again, leads to sustainability and traction and something that makes our scarce resources more effective. The third bucket is to foster stronger partnerships and better coordination. So the coordination is, again, one of the major elements here, and I mentioned this before. The whole reason we were set up is to promote coordination. There’s much better coordination, I’ll tell you now, than there was seven years ago. Still not perfect, you won’t be surprised, but there’s a lot. I mean, countries are talking to each other. Donors are talking to each other. The platform we created has allowed a lot of this to happen. It’s also happening organically in other venues too, and that’s great, but we need to amp up that coordination because, again, if we don’t do that, we’re wasting the resources. We’re not actually meeting the needs of the countries and the others who need this help. And then finally, the last bucket is one that everyone understands, which is resources. 
How can we significantly up the game in terms of resource commitments to this area? No one has enough money; we all understand that. No one has enough people to work on these issues. But I think the SDGs have been very successful in focusing political attention and getting some resources, and there have been a lot of resources devoted to those. We’re not trying to, as they say, rob Peter to pay Paul. We’re trying to leverage each other’s resources. This is not “give it to us and not to them.” This is using the resources to achieve the things that the SDGs and the development community are trying to do, because we share the same vision of how this is done, and it also allows us to learn from each other in terms of implementation models. So those are the four major buckets, and we’d love to get input from you on those. The conference agenda will reflect those four buckets, but the content of the declaration, or the call, is meant to actually drive actions after the conference. It’s a set piece, but it’s really about the process I talked about, and I mentioned consultation. We started with a small group of co-organizers: us at the GFCE, the World Bank, the CyberPeace Institute, and the World Economic Forum on the steering committee, who have helped fund the conference, then a larger group of friends and the community, and we’re in that process now. That’s one of the reasons we’re here today. So I don’t need to go through all of these public consultations. We’re here; we’re doing one now. We’re going to do one at the Paris Peace Forum. We’re going to do one at Singapore International Cyber Week next week. Any chance we have to engage with the community, we’re going to take, and as I say, we’re going to circulate a mature draft, but we’re certainly willing to take input. So the question we have for all of you is really about those four buckets that I talked about.
Does that cover everything? We think it does, but we don’t know everything, and the people we’re working with don’t know everything. So we want input from you: do those four broad buckets cover the major concerns we’re talking about? And are there particular barriers that we need to overcome in better connecting cyber capacity building with development goals, and elevating the role of cyber resilience in development and vice versa, that would lend themselves to particular action items you would like to talk about today? And this is an ongoing thing. If you leave this room and say, oh, I should have mentioned this, let us know. We want to hear about it. So that’s really the setup for what we’re trying to do today. We really would like to hear from you about where you think progress could be made, about the overall idea, about this kind of structure, and whether this makes sense.

Tereza Horejsova:
Thank you very much, Chris. I think that was quite clear, and now really is the time that we want to hear from you. You come from different backgrounds, different perspectives. You might have come across some complications stemming from the fact that the cyber and development communities don’t interact with each other. You might have been involved in various cyber capacity building projects. So I would suggest, Alan, that we actually keep the slide with the four areas up, and at this point I would really like to encourage you to share your views with us, either online or here in the room. And don’t be shy. We really want your views. Yes, and I know there are a few people in the room who are not shy. No, Michael, you will disappoint us if you don’t. May we ask you? But we need to get you on the microphone. No, no, hold on. Otherwise, the online participants won’t hear you. Either take this one or go there. But let me give you this one. Sorry. No worries.

Audience:
Mike Nelson with the Carnegie Endowment for International Peace. And I’ve worked with these people. Really simple question. Why Ghana? And what were the other things considered? And how much of it will be virtual? I mean, you don’t have to be there to be part of it, right?

Christopher Painter:
I think we’re working on the connection details, so there will be a good virtual option. But I’d say it’s a long and storied history. Originally, we were hoping to have it at the World Bank in Washington, which also would have posed some challenges for people with visas, et cetera. Because of various COVID restrictions and other things, that wasn’t going to work. And then we thought about a number of places, frankly. But as Tereza said, every region has unique needs. We partner with the OAS in the Americas region. We have a Pacific hub we just launched. We have an ASEAN liaison. We’re doing a lot of work in Africa. And the government of Ghana very much wanted to do this. Given all the work that we’ve been doing in Africa, setting up an African experts group, et cetera, it seemed like an important place to have it. It was also important, I think, to have the first one somewhere in the Global South. Rather than have it in, you know, there are lots of nice places in the north you could have it, but that doesn’t really send the right message. Then it’s a conference of the Global North talking about what they’re going to do, where this really needs to be a conversation. So that was the rationale, and we’re quite happy about that. We are very grateful, and we want to get this one under our belt, but for future ones we’ll have to figure out where the next one will be, the way these things always work. We want them to be representative, and so we certainly want to get people from all over the world. As I said, this is in Africa, but

Tereza Horejsova:
it’s not just an African conference. Thank you very much, Chris, also for addressing the question. More questions please, hopefully including on the substance of the document in the making. Any takers? Please go ahead, Sparky. And yes, if you can also introduce yourself and your institutional affiliation. Thank you. Thank you, this is Sparky from JP. Thank you for your presentation. My question is: obviously, what is written in the Accra Call cannot be achieved in, like, six months or

Audience:
two years; it may take more than a few years to reach the level you would like to achieve. So my question is: other than your short-term goal of launching the call, is there any plan for, say, a year after, or a goal for the next three years? Thank you. Yeah, so that’s why I said for each of these categories the idea is to have several more specific goals, and although they’re

Christopher Painter:
not going to have, I don’t envision them having, strict time frames saying this is going to be done in, like, 90 days or something like that, we are going to monitor them. We’re going to look at them after six months, after a year, and see what progress is being made. When there is a second one of these conferences, as we said this is the inaugural one, that’s often also a stocktaking, and there are lots of opportunities for stocktaking. Very much the idea of a call, unlike a more general declaration, is to make sure we’re making progress. You know, we don’t want this to become shelfware, as many things become, where you never look at it again. So that’s the thought process. Now, people are going to make progress at different rates in different parts of the community and will implement these in different ways. That’s why it’s voluntary. But we want to track progress and even go back to the parties who support these efforts and say, OK, what have you done? Not in an accusatory way, but in a way that just asks, are we making progress? Thank you, Sparky.

Tereza Horejsova:
Thank you, Chris. Others? Liesl, yes? There you go. Stereo.

Liesyl Franz:
I think, well, first of all, I would say that the US government has had some input as part of the concentric circles that Chris was talking about, as far as the consultation about the conference and the substance of the call. So I’ll say this in my own capacity and not necessarily prejudge or undermine anything we’ve said in the process. But one of the things that I think comes under action B, the second bucket on effective capacity building, is looking at the ability of any particular country to absorb a certain amount of capacity building at any given time. Do they have the institutions in place before they get a deluge of funding for something that’s sort of amorphous or doesn’t quite fit the need? So demand-driven, but also tailored enough to the recipient so that it can be effective and, I think, also sustainable. The other thing that we have been grappling with, to Sparky’s point, is the fact that foreign assistance and capacity building is often a long-term investment, and it takes time for the knowledge, skills, and institutions to develop before they can have the full impact that you want. But we have also been grappling with more emergent or urgent response in some of the crises, for lack of a better word, that we’ve seen in Ukraine and Albania and Costa Rica. So that might be an element of “effective” as well in the second bucket, although I would also think it could be captured in the third bucket as far as partnerships and coordination. Of course, I think everything relies on D, which is the financial resources. Even if those aren’t spelled out in the text of the Accra Call, those are two things that we in the United States are looking at these days as far as our strategic approach and outlook for some of the capacity building that we’re trying to do now.

Audience:
Thanks, Liesl. I’m all choked up. Others, please? Online, too, if people have comments. Yes, please go ahead, yes. Okay, my name’s Casey Rout, and Ms. Franz is actually my boss, so I’ll pose this to Chris so I don’t put her on the hot seat. We had a conversation yesterday, and I’ve been thinking about this a little bit more, and it goes to the idea of sustainable cyber capacity building. After the donors and trainers leave, the countries need budgetary resources to continue: the hardware, the software, the knowledge, the training. So what’s your view on involving legislators, training them, having them understand the value of this, so that they create the budgetary resources we need to really have sustainable cybersecurity capacity in governments? And how do we better integrate them, whether through the GC3B or other ways?

Christopher Painter:
Yeah, look, I think that’s a big issue, and it goes to the political will and sustainability points. There are two aspects to it. One is getting country buy-in at the legislative and leadership level, and I agree those are things we can work into this. Another is, under that last bucket, unlocking the financial resources. There are a lot of financial streams that are available and used in the development community, and there are models the development community uses to measure sustainability, to make sure that their dollars and pounds and pesos are actually well spent and not just one-off. So I think there’s a lot we can learn from the development community in terms of the tools they use. For example, one of the action items we’re thinking of having is to identify and employ the full range of financial streams available for financing national cyber resilience activities, including international development financing, domestic resource mobilization, which really goes to your point, the private sector, and the incorporation of cyber resilience into integrated national financing frameworks. And that’s exactly your point, I think. So it’s not just an add-on or some boutique little bubble over here; it’s actually part of the larger plan. So that’s the kind of wording we’re thinking about now,

Tereza Horejsova:
but I think that helps put that into some relief. So thanks for that. And I’m really glad that we are talking about the practicalities connected to budgets and money, because, as Chris pointed out, no one has enough money or budget. No one.

Christopher Painter:
If you could all leave a check on the way out, that would be helpful. No one has enough people. Mike, you just leave your credit card and your pin and we’ll be fine.

Tereza Horejsova:
Yeah, and that also makes the situation a little inefficient. We should make sure that the resources are used efficiently, which wouldn’t necessarily happen if we do not connect these two communities, and also if we do not do more to coordinate cyber capacity building support globally, which is the main raison d’être of the GFCE. Because we do have a speaker online, and because she is online I don’t want to leave her in the shadow: Pua, please give us a sign if you want to chip in. Otherwise, we will continue the discussion in the room. And I also know we have, okay. So, yes? No? Sorry. I know we have one comment online from Alan on Southeast Asia. So, please go ahead now. Okay, we cannot hear you. Hello, good morning. Can you hear me now? Yeah, and maybe let’s remove the slides so that we can see you properly, Alan. Thank you. Yes, good morning, everyone. Yes, the GC3B Conference on Cyber Capacity Building.

Allan Cabanlong:
This will inspire other regions and leave us with a renewed commitment to global cybersecurity cooperation. So it’s very important for Southeast Asia, and not just the Pacific but other regions as well, so that they will be inspired to engage globally with other regions on capacity building efforts and share the insights, ideas, and good practices that they can learn at the GC3B. And I would also take this opportunity to invite everyone to the GFCE regional meeting next week in Singapore during Singapore International Cyber Week, where this will be discussed again.

Tereza Horejsova:
Thank you so much. Thank you very much, Alan. And you know, if any of you are traveling to Singapore for Singapore International Cyber Week, please let us know so that we can make sure you’re also part of the conversation. Any other reflections, either online or here in the room, please? Please go ahead. Hello everyone. Sorry.

Audience:
My name is Guus van Zwolle, and I’m with the Dutch government. We are, of course, very supportive of the GFCE; it’s run by colleagues on the same team. As the Netherlands, we recently published our new international cyber strategy, where a foundational layer of our strategy is cyber capacity building. But we tie it to also supporting the countries receiving the cyber support in adapting their regulatory frameworks, to make sure that these cyber capacities are run in a framework that respects the rule of law and international human rights standards. I was wondering what your perspective on that would be, and whether that would also be a part of the GC3B conference. Thank you very much. Yeah, I think, you know, there are parts of any kind of declaration or call, and one of them is the preamble that sets it out, and certainly respect for human rights.

Christopher Painter:
And, you know, we don’t really get into the regulatory framework as much, but rule of law, yes. The action items are, I think, more tailored to other things, though some of that is mentioned there too. But that is certainly a goal. We want to integrate better governance and respect for human rights, and this is foundational to the GFCE too, going forward as we do this. And that’s indeed what the development community does too. So that’s another place where there’s a good nexus, I think. Thank you. Also, Tereza, if you have thoughts, or Pua, if you have thoughts. And Pua has thoughts, actually. So, a second attempt to connect you, Pua, please.

Pua Hunter:
Thank you, Tereza. Sorry. I actually wanted to circle back to the comment earlier from one of our participants here about sustainability. Sometimes donors come and assist us with something and then they leave and there’s no continuity, so we need to look at how we can resource ourselves properly so there is sustainability attached to it. Also, at the meeting last week in Nadi, participants were talking about bringing these trainers into the country, so that more of us can be trained at one given time, rather than one or two people from each country going to a regional venue where the trainer can train many countries. So the idea is to bring the expertise in and train more people on the ground, rather than one or two going out to be trained. Because the other issue with that is that the knowledge learned from these trainings overseas may not be transferred back, or not appropriately transferred back, in country. So again, those things need to be looked at appropriately.

Tereza Horejsova:
Thank you so much. Thank you. Thank you, that’s a very concrete suggestion there. Any other reflections, comments?

Christopher Painter:
I just want to say that I totally agree with that, and we’ve seen it. I’m not sure which bucket it fits under, it fits under several, but one place we’re trying to reflect it now is under the third bucket of fostering stronger partnerships and better coordination. One of the things we’re considering under that is a bullet that would say: fostering the leadership of developing countries in coordinating CCB efforts in close cooperation with donors and others, so that it’s more locally owned as well. I completely agree that having a whole group of people descend on a country and then leave again doesn’t actually help in the long term. There’s another part in the last bucket where we talk about systematizing South-South and triangular cooperation. So again, it’s not just a whole bunch of people landing on your shores and then leaving again, but really building this in more permanently. Kind of the train-the-trainers logic. No, thank you very much, Chris. Any other reflections? Please, go ahead.

Tereza Horejsova:
Thank you.

Audience:
Hi, Linda Maisels from the State Department. I’m interested in de-conflicting between donor countries and how we can use the GC3B and GFCE as mechanisms for doing that. That would also involve not reinventing the wheel: if there are tools that already exist, there’s no reason to build them again. How do we find them? How do we, for instance, use an existing tool like the Cybil Portal to make sure that we are not doing the same work over and over again? And how do we get donor countries to speak to each other?

Christopher Painter:
Thank you. And that’s the raison d’être for why we were created, for that very purpose. Indeed, when we launched the Pacific Hub, we had a side meeting, which we do every couple of months, of the donor countries. There is a core group of donor countries, and this has been at their request, so that they can share information with each other. Now, it’s never going to be perfect, because countries have their own priorities, and that’s the way the world works, and that’s fine. But I think they welcome the ability to share that information and find out what someone else is doing, because sometimes it’s, well, we don’t need to do that, or we can join your efforts, right? And we don’t want the Accra Call to create giant new structures; that, I think, is not helpful. We leverage the structures we have. Many of you who’ve been following some of the debates in the OEWG know there’s this debate: should we create a new ecosystem? Well, why would you do that with the scarce resources you have, when you need to leverage what’s there? So, for instance, under coordination, the third bucket, one of the things we specifically say is to utilize existing coordination platforms like ours to better coordinate, de-conflict, and have the kind of donor dialogue you’re talking about, and to strengthen them: make them more participatory, get more people involved in them. So that’s what we’re trying to do: take what we have, make it stronger, and be more effective with the resources we have.

Tereza Horejsova:
Thank you very much, Chris. And maybe just to add, because Linda also mentioned the Cybil Portal: it’s available at cybilportal.org, and it’s a resource where we try to map various cyber capacity building projects globally. It’s possible to filter by specific regions or a specific country and get information on already implemented or currently ongoing projects. And why is that important? Because to plan, let’s say, a new activity in a specific country, it is a good idea to build on what others have done, so that there is, to the extent possible, a little less duplication of effort and, ultimately, more efficient use of the limited resources available for these activities. Any other comments or inputs? Susan? Please. Susan Garoé from our Pacific Hub, please go ahead.

Audience:
Thank you. Thank you so much. I’d like to add to what Linda said about de-conflicting interests when it comes to donors. What we notice is that we’re living in an era where collaboration and cooperation are a strength going forward. Going solo, as an individual, is not effective anymore, and there are many reasons for that. In the Pacific, one of the things I’ve noted is that we are at different stages when it comes to cybersecurity. Some of us are more advanced; some of us are just taking baby steps. With a well-coordinated effort, we ensure that no one is left behind, and we make use of all the resources that we have. So that’s a plus of these types of platforms.

Tereza Horejsova:
Thank you. Thank you very much, Susan.

Liesyl Franz:
Definitely a good comment there, if there are no other comments. So yes, Liesyl, and then Chris, yes? Thanks. So this conversation has actually spurred a couple of things for me that might add to the thought process going forward. One is that we talked about it in the video, and a lot of the conversations here have been about cybersecurity efforts to address the risks, and the fact that there is a cost to doing so. But there are also, I think, many benefits to building cybersecurity into the processes and the digitization and digital transformation efforts that countries are going through right now. I think we should find a way to emphasize the positive, because when we talk about funding or political will, those are pretty important, and sometimes it’s hard to say, well, we have to make this huge investment in cybersecurity for something that may never happen. Perhaps changing some of that rhetoric to providing cybersecurity for the betterment of economies, digitization, and investment in the economy might help. If a development bank builds a bridge, it’s a positive, right? So maybe think about what the analogy is there for cyber. And secondly, I really appreciated Pua’s comment about wanting the ability to do training in country and on site, so that it is well integrated into whatever phase of development the institution or country agency is in. But perhaps we can also talk about the various types of training or capacity building that can happen, maybe stealing a page from the legal community: continuous learning or continuing education, so that there are fundamentals and then things that will help individuals even if they have to go somewhere else to get them.
We know that not every CERT, for example, can send all their people to a training outside the country at any given time, but perhaps there are ways that individuals who have been trained in country can then take advantage of continuous learning opportunities going forward. Anyway, that’s just a reaction to a couple of things that people have said here, and if it can be captured and there’s interest from others, then perhaps it’s something to think about.

Christopher Painter:
So thanks for that. There are a couple of things that, as I heard some of the comments, I’d just like to note. First of all, we welcome your continued feedback. Does the structure make sense? Are we missing a whole group of things that we should be addressing? I don’t think we are, and I think the comments I’ve heard would fit into those four buckets at some level. To the question asked by our Dutch colleague earlier, one of the proposed items we’re thinking of putting under the second bucket is professionalizing the cyber capacity building community of practice with tools and guides to help stakeholders put established principles into practice, including human rights-based and gender-sensitive approaches to CCB. So that is built into our thinking right now. It has to be put on paper and wordsmithed and negotiated among folks, but it’s certainly there. There’s also the idea of doing a better job of creating tools with which we can measure results. That’s where the development community is pretty good, or at least I think they’re pretty good: they have tools to measure the results of a project, because that then helps them decide where to invest. The other thing they do well, which I think we need to figure out how to do, is prioritize. One way is to link it to critical national resources: big projects that are going to make a big difference, where cyber is going to be critically important. Figuring out how to prioritize, and learning from each other on that, will help too. So those are some of the areas. But I’d say, again, we have a couple more minutes left. Do you have any input or things you think should be in there, or thoughts? Also, we welcome input afterwards. So you have four minutes, right? Four minutes, so take advantage of it. Structure, any one of these, any comment, any suggestion you’d like to see.

Tereza Horejsova:
And yes, it’s before lunch, so we respect that. And anyway, it’s time to wrap. But thank you very much, Pua and Alan online for your support, Liesl and Chris here in the room, and in particular all of you online and on site. For those of you here in the room, on your way out we have prepared some more resources on the GC3B and some goodies as well, so you might take them home with you. And as this is the last day of the IGF, let me also wish you very safe travels back home, and see you around. Thank you very much. And I should thank you all for being here, and give a shout-out to Tereza for organizing this. Tereza has also been on the Multistakeholder Advisory Group for the last several years, and she’s rotating off, so thank her for all her efforts in the IGF too. Thank you.

Allan Cabanlong

Speech speed

167 words per minute

Speech length

595 words

Speech time

213 secs

Audience

Speech speed

176 words per minute

Speech length

599 words

Speech time

204 secs

Christopher Painter

Speech speed

212 words per minute

Speech length

4039 words

Speech time

1141 secs

Liesyl Franz

Speech speed

147 words per minute

Speech length

1619 words

Speech time

659 secs

Pua Hunter

Speech speed

146 words per minute

Speech length

872 words

Speech time

358 secs

Tereza Horejsova

Speech speed

170 words per minute

Speech length

2004 words

Speech time

706 secs

DC-Blockchain Implementation of the DAO Model Law: Challenges & Way Forward | IGF 2023



Full session report

Morshed Mannan

The discussion surrounding regulations for Decentralized Autonomous Organizations (DAOs) encompasses various aspects. Incorporation fees, perceived as a form of taxation, pose challenges in establishing regulatory equivalence. These fees have been a recurring topic during the transposition process, evoking a negative sentiment.

Another area of contention is the verification of formation requirements in the DAO Model law. Different jurisdictions hold differing views on who should conduct the accreditation, leading to ongoing debates and a neutral sentiment on the matter.

Regulators face the task of applying existing laws to DAOs, which presents potential risks and unintended consequences. As regulators endeavor to enforce these laws, cases pertaining to DAOs may reach appellate courts, resulting in the emergence of case law related to this technology. This neutral sentiment underscores the uncertainty surrounding the outcome.

A key argument posits that decisions made in appellate courts may establish legal precedents that restrict innovation. This negative sentiment highlights the potential risks and unintended consequences of this approach.

To mitigate these risks, proponents advocate for a proactive regulatory approach, including the use of regulatory sandboxes. This proactive stance is seen as a means to shape laws in a manner that fosters innovation without impeding progress. A positive sentiment surrounds this argument, emphasizing the need to prevent harm and anticipate future risks.

Furthermore, regulators are encouraged to educate themselves about DAO technology prior to implementing laws. This sentiment stems from the belief that a thorough understanding of the intricacies of this technology is necessary for crafting effective regulations. Educating regulators would facilitate a smoother implementation process and contribute to the overall success of DAO regulation.

Jarrell James

The analysis explores the introduction of the DAO Model Law and the coalition group COALA. COALA is a multidisciplinary research and collaboration group that brings together professionals from various fields such as law, academia, computer science, and entrepreneurship. They aim to understand the challenges and opportunities presented by decentralized technologies and their impact on the legal system and society.

Jarrell James acknowledges and appreciates the efforts of the coalition and panelists. Notably, Rick Dudley is praised for establishing cross-field technical standards, Fatemeh Panazera is recognized as the leading counsel, and Silke is commended for facilitating the DAO Model Law presentation.

The analysis delves into the legal recognition and interaction challenges faced by the decentralized technology space. Obtaining legal recognition is a major hurdle for entities operating in the decentralized space. The DAO Model Law offers a meaningful legal pathway to address these challenges and allows decentralized technology entities to interact with municipal authorities, corporations, and international coalitions. The development of the DAO Model Law took three years, highlighting its importance and thoroughness.

The analysis also explores regulatory challenges and privacy concerns related to Decentralized Autonomous Organizations (DAOs). Governments exhibit major hesitation in interacting with single entities or individuals in the decentralized space, and there are concerns beyond just liability. Additionally, regulatory challenges are likely to arise during the process of recognizing DAOs due to their unique characteristics.

The transparency of DAOs is discussed, with all transactions and payments being visible to participants. However, this transparency undermines privacy, leading to a quest to restore privacy within DAOs. Regulatory authorities lack understanding regarding the extent of transparency provided by DAOs and the tracking capabilities they offer.
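To make concrete what "all transactions and payments being visible to participants" means in practice, a DAO treasury behaves roughly like an append-only public log that any participant, or regulator, can replay and audit. The following is a deliberately simplified, hypothetical sketch; the class and method names are our own illustration, not anything presented in the session:

```python
# Toy illustration (not a real blockchain): every transfer from a DAO
# treasury is appended to a world-readable, hash-chained log that any
# observer can iterate and audit.
import hashlib
import json

class TransparentTreasury:
    """Minimal append-only ledger; all entries are public."""

    def __init__(self):
        self.entries = []          # public: anyone may iterate this list
        self._tip = "genesis"      # running hash chaining entries together

    def pay(self, sender, recipient, amount):
        entry = {"from": sender, "to": recipient, "amount": amount,
                 "prev": self._tip}
        # Hash the canonical JSON form so the log is tamper-evident.
        self._tip = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return self._tip

    def audit_total(self, member):
        """Any observer can compute how much a member has received."""
        return sum(e["amount"] for e in self.entries if e["to"] == member)

treasury = TransparentTreasury()
treasury.pay("dao-treasury", "alice", 100)
treasury.pay("dao-treasury", "bob", 40)
treasury.pay("dao-treasury", "alice", 60)
print(treasury.audit_total("alice"))  # prints 160
```

The point of the sketch is that auditability comes for free: the same data structure that lets participants coordinate also lets any outsider total up every member's payments, which is exactly the privacy tension the panel describes.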

Privacy is perceived as being anti-state in the digital space, creating conflicts with state objectives. The analysis emphasizes the importance of reconciling these ideological differences.

Furthermore, the analysis recognizes the significance of coordination and innovative solutions in the field of regulation. The efforts of COALA in developing novel solutions, along with the implementation of the DAO Model Law, are appreciated.

Lastly, the analysis highlights the role of civil societies and coalitions in effecting change in different jurisdictions. The frustration faced in movement and change across various legal systems is acknowledged.

Overall, the analysis provides a comprehensive overview of the DAO Model Law and the challenges faced by decentralized technologies. It emphasizes the importance of legal recognition, addresses regulatory challenges, privacy concerns, and advocates for the reconciliation of ideological differences. The role of civil societies and coalitions in effecting change and innovation is also emphasized.

Silke Noa Elrifai

Decentralized Autonomous Organizations (DAOs) offer a new way of organizing and coordinating global collaborations. They present large-scale coordination opportunities that align with the needs of an increasingly multipolar world. However, DAOs currently face significant legal uncertainties that impede their development. To address this, the proposed Model Law aims to grant DAOs legal personality and capacity, enabling effective interaction with the off-chain world. The Model Law is based on the principles of functional equivalence and regulatory equivalence.

While it seeks to provide solutions for taxation issues and legal certainty, the requirement for DAOs to register as global entities poses challenges. Some jurisdictions, such as Utah, have not adopted this approach, hindering the implementation of the DAO Model Law. Efforts are needed to improve the Model Law, especially regarding registration requirements, to make it more feasible for jurisdictions to adopt.

Taxation concerns are also an obstacle in the interaction between jurisdictions and DAOs, with jurisdictions emphasizing potential tax benefits. Silke Noa Elrifai suggests an innovative approach to addressing taxation rules for DAOs. Additionally, the transparency of DAOs undermines privacy, and efforts should be made to reintegrate privacy into the model while maintaining transparency for regulatory purposes.

The Model Law may not be suitable for all jurisdictions due to its issues, and the default characterization of DAOs as general partnerships or unincorporated associations with joint and several liabilities needs to be addressed. Overall, addressing legal uncertainties, registration requirements, taxation concerns, privacy issues, and the default characterization will support the growth and development of DAOs while respecting legal standards and protecting individual rights.

Rick Dudley

Regulators have faced criticism for a perceived lack of understanding regarding internet communication and cryptographically signed messages. It is argued that regulators misunderstand the unique properties of these mediums, resulting in a negative sentiment towards their treatment. However, proponents argue that existing laws, protections, and regulations can be applied to satisfy the requirements of both the online community and regulators, advocating for a positive approach without the need for special treatment.
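To ground the discussion of cryptographically signed messages, here is a small, self-contained sketch (our illustration, not something presented in the session) of a Lamport one-time signature built only from hash functions. It demonstrates the two properties regulators are asked to reason about: a verifier needs only the public key to check authorship, and any alteration of the message invalidates the signature:

```python
# Toy Lamport one-time signature (stdlib only). A real deployment would
# use ECDSA or Ed25519, and a Lamport key must never sign two messages,
# but the verification property illustrated here is the same.
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(a), H(b)] for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    # Hash the message and expand the digest into its 256 bits.
    digest = H(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret from each pair, chosen by the message's bits.
    return [sk[i][bit] for i, bit in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    # Re-derive the bit pattern and check every revealed secret hashes
    # to the matching half of the public key.
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
msg = b"DAO proposal #7: fund the audit"
sig = sign(sk, msg)
assert verify(pk, msg, sig)              # authentic message verifies
assert not verify(pk, b"tampered", sig)  # any change breaks the signature
```

In other words, a cryptographically signed message already carries the attribution and integrity guarantees that e-signature law was written to recognize, which is the proponents' argument for applying existing frameworks.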

There is also a negative sentiment towards compromising privacy due to technical limitations or engineering practicality. Privacy is considered a constitutional guarantee, and individuals are not willing to sacrifice their privacy due to these limitations. The importance of protecting privacy rights and implementing privacy-enhancing measures in technological advancements is emphasized.

Privacy issues related to blockchain are seen as similar to traditional internet privacy concerns, generating a neutral sentiment. The argument is that privacy compromises in blockchain operations, particularly decentralized autonomous organizations (DAOs), are not distinct from the compromises observed in regular surveillance practices. This suggests that similar privacy concerns apply to both traditional internet activities and blockchain technologies.

Rick Dudley, a business owner in the United States, supports the use of Tornado Cash to provide financial privacy to his employees who receive payments through blockchain foundations. This positive sentiment reflects the value he sees in granting financial privacy and the enhanced security it offers. This indicates that privacy-enhancing technologies like Tornado Cash are being recognized as beneficial in the context of blockchain finance.

Privacy pools are considered a technology that restores a basic level of privacy. These pools allow for transaction privacy that can be revealed upon request, and they can demonstrate the legitimacy of funds by ensuring they were never sourced from regulated or restricted entities. The positive sentiment towards privacy pools suggests they are valued as a tool for individuals to regain control over their privacy rights in the digital realm.
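A deliberately simplified sketch may help make the privacy-pool idea concrete. Production designs rely on zero-knowledge Merkle proofs; this stand-in (all helper names are ours, not from any real implementation) only shows the data flow of committing a deposit, checking it against an "association set" of legitimate funds, and selectively disclosing it on request:

```python
# Toy stand-in for a privacy pool: only a hash commitment is published,
# the pool advertises a root over the set of legitimate deposits, and
# the depositor can voluntarily open the commitment to an auditor.
import hashlib
import secrets

def commit(secret: bytes) -> str:
    """Publish only a hash of the deposit secret, not the secret itself."""
    return hashlib.sha256(secret).hexdigest()

def association_root(commitments) -> str:
    """Stand-in for a Merkle root over the legitimate-deposit set."""
    return hashlib.sha256("".join(sorted(commitments)).encode()).hexdigest()

# Depositor side: only the commitment would go on-chain.
my_secret = secrets.token_bytes(32)
my_commitment = commit(my_secret)

# The pool's association set: deposits never sourced from restricted
# entities (the other members here are made up for the example).
legit_set = {my_commitment, commit(b"other-deposit-1"), commit(b"other-deposit-2")}
root = association_root(legit_set)

# Public check: my commitment belongs to the set behind the advertised
# root, without the checker learning my secret. (A real scheme proves
# this in zero knowledge instead of exposing the whole set.)
assert my_commitment in legit_set
assert association_root(legit_set) == root

# Selective disclosure: on a regulator's request, the depositor opens
# the commitment, tying exactly one deposit to themselves.
assert commit(my_secret) == my_commitment
```

The design choice worth noting is that disclosure is opt-in and per-deposit: legitimacy can be demonstrated to an authority on request without making every participant's transaction history world-readable by default.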

There is an argument for regulators needing to educate themselves and gain a better understanding of these technologies, affirming the importance of keeping pace with technological advancements to make informed decisions and regulations.

Regarding regulation, there is a negative sentiment towards creating numerous new laws. Advocates argue that existing laws and regulations can be effectively applied, eliminating the need for extensive legislation. This preference for using and adapting current legal frameworks aims to avoid burdening the industry with unnecessary regulatory complexities.

In summary, regulators face criticism for their perceived lack of understanding of internet communication and cryptographically signed messages. However, proponents support the application of existing laws and regulations to meet the needs of both the online community and regulators. Privacy is considered a fundamental right, and compromising it due to technical limitations is met with a negative sentiment. Blockchain privacy issues are seen as similar to traditional internet privacy concerns, and privacy-enhancing technologies are recognized and valued. Regulators are encouraged to educate themselves, and a preference for using existing laws over creating excessive new regulations is evident.

Fatemeh Fannizadeh

The discussion surrounding the model law for Decentralised Autonomous Organisations (DAOs) covered several important aspects, including legal personality, limited liability, and internal governance. The model law aims to address all the necessary considerations in corporate formations, such as rights, obligations, and the entity’s purpose. It specifically emphasises that there is no implicit fiduciary duty for any one decision-maker. In addition, provisions were made to accommodate the unique nature of blockchain technology, including the possibility of forking.

Various jurisdictions have considered the model law for DAOs, including Australia, the UK, St. Helena, and New Hampshire. However, the state of Utah in the US is the only one that has adapted and implemented the model law thus far. Other jurisdictions, such as Vermont, Wyoming, and the Marshall Islands, have attempted to regulate DAOs through incorporation. Utah’s adaptation of the model law has drawn criticism, particularly regarding the requirement for DAOs to register within the jurisdiction and nominate a registered agent. Critics argue that this requirement deviates from the original model law and is an attempt to exert control over DAO entities.

The discussion also highlighted privacy as a constitutional right. It was emphasised that privacy is not against the state, but rather, it is recognised as a fundamental right in most places. The false dichotomy between protecting privacy and preventing terrorism financing and money laundering was also disputed. It was argued that individuals should not be forced to choose between their privacy and preventing illicit activities, as both can be upheld simultaneously.

When it comes to regulating the DAO Model Law, caution was advised against rushing the process. The technology underlying DAOs is still in its organic growth phase, and it was suggested that hasty regulation could hinder its development. Instead, the establishment of a regulatory sandbox was proposed as a way to allow the technology to mature more effectively. A regulatory sandbox would provide a controlled environment for experimentation and refinement.

The future of the internet was also discussed, with a prediction that it would become more decentralised in the next 20 years. This implies a shift towards a less centralised structure, where power and control are distributed among various entities and individuals. Such a transformation could have implications for internet governance, privacy, and overall functionality.

In conclusion, the discussion on the model law for DAOs covered various important aspects, including legal personality, limited liability, and internal governance. While Utah’s adaptation of the model law has faced criticism for imposing additional requirements on DAOs, other jurisdictions have pursued different approaches, such as incorporation. Privacy was recognised as a constitutional right that should not be compromised. Caution was urged in the regulation of the DAO Model Law, with the suggestion of implementing a regulatory sandbox. The future of the internet was predicted to involve a more decentralised structure, potentially impacting governance and functionality.

Session transcript

Jarrell James:
and like presentations, but we want to also be able to see our notes at the same time so that we don’t just ramble on and waste your guys’ day. So just bear with us real quick, but I can go ahead and introduce the general theme and concept and also the panelists involved. Hello, hi, welcome to the last day of the IGF. We all made it, round of applause for yourselves. I think we’re going to be very brief on technical terms here, but we’re just going to be introducing something called the DAO Model Law and a group coalition called COALA. And COALA is a multidisciplinary research and collaboration firm. It gathers lawyers, academics, computer scientists, and entrepreneurs with a collaborative mindset. We’ll be researching together the challenges and opportunities of decentralized technologies and their impact on specifically law and society and creating actionable steps forward for legal recognition of entities that are known as DAOs, which are called Decentralized Autonomous Organizations. So a little bit about the COALA panelists. I’m not sure I can see everyone that’s online, but assuming that we’re all here. Okay, so online somewhere is my good friend, Rick Dudley, and he’s in New York City, and he is a, honestly, Rick’s pretty freaking cool. He’s got like 20 years of just building the craziest intercommunication technologies and helping design standards across a number of different technical fields. And he is a founder of a project called Laconic and Vulcanize. And he’s just, you know, he’s a really real guy, so it’s gonna be a little bit blunt stuff from him. I’m excited for it. Over there, doing a little coordination, we have Fatemeh Fannizadeh, and she is a badass lawyer and has worked on a number of different decentralized technologies as a general counsel, leading counsel, and is also one of the authors of the paper that we’re gonna be discussing today, the Model Law. This right here, this person is Silke.
They’re going to be facilitating the presentation, walking you guys through, as we can, the DAO Model Law, and we’ve scaled it down and made it very applicable to just specifically what we wanna focus on with legality in society. And I believe we have, oh, hey, everybody, how’s it going? Then we have Morshed, and Morshed, actually, now that you’re up here, do you wanna introduce yourselves, Rick and Morshed? I’ll start with you, Morshed.

Morshed Mannan:
Hello, everyone. My name is Morshed, I’m a lawyer and a legal academic. I’m currently based at the European University Institute in Florence, working on the BlockchainGov project. I’m also a member of COALA and had the great pleasure of co-authoring the DAO Model Law with Silke, Fatemeh, Rick, and I think I see Greg as well. Greg, it’s a pleasure to be able to be here today,

Jarrell James:
even if it’s only online. Well, the point of this conference is that the online participation is just as important as in-person participation. So, Rick, do you wanna give a quick and better update about who you are and what you believe and how your life is going, or do you want me to continue to do that? Unmute, unmute. Unmute, Rick. Hi, yeah, I’m Rick. I think the very good intro, thank you. I’ll just sort of add, primarily a mechanism designer

Rick Dudley:
in the blockchain space, and that’s sort of how I ended up working with COALA on the DAO Model Law. And I work with a lot of DAOs as well, just as part of my professional capacities. Awesome, and myself, I am Jarrell James. I’ll be moderating this panel and hopefully not messing it up.

Jarrell James:
So, I come from a space of decentralized technologies as well, also have a history in computational chemistry. And I am co-founder of a project called Internet Alliance with Fatemeh Fannizadeh over there. And we’re focused on internet resiliency, infrastructure, and different strategies with which to achieve that for various populations, depending on what their needs may be. So, without further ado, which I don’t actually know what that sentence means, let’s just start off with a quick question. Does anyone, I think we already asked, has anyone heard of the DAO Model Law? And the answer is no, right? There’s been words, for instance, perhaps the service was paying. I’ve seen some nods. There’s some nods, okay. So, that’s a little bit where we come from in Europe. I think maybe we could just highlight some main points on this. And the DAO Model Law is a lot of work towards, I think, something that’s been frustrating for a good portion of society inside of the hyper-technical space that’s trying to push forward decentralized technologies. You know very well that Europe is one of the. There is overwhelmingly a wall that people run into, which is being recognized as legal entities or finding some kind of meaningful legal pathway to interact with, whether it be municipal authorities, corporations, and international coalitions. So, this is maybe a good way to go into why it took three years and a little bit about what it is from Silke.

Silke Noa Elrifai:
Hi guys. I think most of you have, if you are here in this presentation, you have heard about DAOs. They represent a new form of coordination and collaboration that has not existed until very recently. And it’s an opportunity, especially for people, for global organizations, and many of you are from global organizations, to actually look into this coordination form that goes beyond companies. It’s an opportunity for large-scale coordination and it’s desperately needed, as you can see from the recent geopolitical changes for this increasingly multipolar world. They do face, as Jarrell just said, they do face significant legal uncertainties that can be very detrimental to their development. And Rick, if you could just move to the next slide. So, why do we need the model law? It’s, there’s a need for these organizations to have a legal personality and capacity. Just click it through, if you could. So that DAOs can actually interact and interface with the off-chain world. Companies do have limited liability and that’s why they are so successful. This is not the case for DAOs and, however, they need this to protect the contributors, and you might have seen, there has been quite a few, there has been a lot, not a lot, but some jurisprudence on the topic. And this has been going on for the last three years and we need to continue to look into this. There is a need for legal personality so that DAOs actually have standing, for example, to sign contracts or to sue in court. There is also a need to resolve taxation issues because at this moment in time, a lot of those DAOs, they’re not registered anywhere. That means they do not pay taxes anywhere. That needs to find a solution. And then there’s also overall generally a need for legal certainty and predictability. Again, to interface with the off-chain world. People have asked us, and several jurisdictions around the world, what they’ve done, they have actually been trying to make their jurisdiction more hospitable to DAOs.
And they, or people ask us, why do we not just incorporate in a company, in an LLC, just a corporation? Shouldn’t that be enough? Why are we actually pushing for something new? And the reason for that is, and Rick, if you just move one further. Next slide, please. We can summarize that, and I’ve just said that already. It’s basically that DAOs are transnational, pseudo-anonymous, autonomous, and actually they are incorporated and they are incorporated on a blockchain, which is decentralized, secure, and tamper-resistant. And the question then becomes, why do they need to be incorporated in a company’s register? To address all these points, what we did is, over the last three years, we worked on this model law. And the model law has, we should start with that. Actually, we want to go into problems we have faced since then. But the model law is premised on two concepts, or two principles. The first one is functional equivalence. And Rick, if you move over to the next slide, would be great. It’s functional equivalence and regulatory equivalence. These get mixed up quite a lot. They’re two different concepts, but very similar. So the first one, functional equivalence, is between the tools and the tech, to comply with specific legal rules. So, what are the tools that are available to actually fulfill whatever the tech says? So, for example, you have wet signatures and you have e-signatures. And then, the even more important concept is regulatory equivalence, which is between the means used to achieve a regulatory objective. So, as an example, the deployment of a smart contract on the blockchain with all the relevant data about the DAO might not be functionally equivalent. In fact, it is not functionally equivalent to registration in a corporate register, but its regulatory policy objectives of publicity and certainty are fully achieved, or we at least think that it fully achieves this goal.
And based on those principles, we came up with this model law, and Fatemeh is going to continue on this. Hello, yes.

Jarrell James:
Rick, if you can follow up on the next slide, I’m still gonna present on this slideshow. So, what you’re gonna see soon on the slide is, thank you.

Fatemeh Fannizadeh:
So, this is the structure of the model law that we drafted, which is itself like a 50-page document with the commentary. And I’m not gonna really enter into the details of the various chapters, but you can see that it basically tackles all of the points that we traditionally pay attention to in corporate formation. So, it being rights and obligations and the purpose or activity of the entity, the governance requirements that would lead to fulfilling the minimum conditions to have legal personality and limited liability, some exceptions to that, as we also have in the corporate world, some other rules about internal governance. We highlighted the absence of implicit fiduciary duty for any one decision maker within that novel form of organization, because this is one of the big risks that people who are involved with DAOs are concerned about whether or not they will be considered as a fiduciary and then bear the responsibility for whatever the activity of that entity is. But if the DAO has been granted legal personality and limited liability, then there is this absence of fiduciary as well within the scope of that model law. And then we further went on to discuss particular provisions about the nature of the blockchain itself, which if you’re familiar with, like can be, for instance, forked. So, people can move away from a blockchain into another version of it and so on. So, these are very technical possibilities that exist in very different forms in the corporate world. So, we had to tackle these problematics there. And then we have some other provisions and briefly deal with tax, which is something that we couldn’t really satisfyingly cover within the DAO model law because it’s so jurisdiction specific. Rick, please, the next slide. And now, so we wrote this a few years ago, published it, and then what happened since? And usually, this guy moves around and he’s like lost and looks for an answer for where is the adoption.
And so, the adoption has to be put in the global context of the fact that when we wrote that model law and published it, it was quite early, even in the technical space, for DAOs to mature and also for legal space, like the jurisdictions, to grasp this novel form of entity and understanding and decide how they want to regulate it if even they want to regulate it. And should they want to regulate it, then whether this should be through, for instance, implementing the model law or just finding other ways. And since the publication of the model law, we’ve seen many developments in various jurisdictions. So, the three first ones that are listed are Vermont, Wyoming, and Marshall Islands, have decided to tackle the question of their relationship or their jurisdiction’s relationship with this novel form of entities through a vision that is not the one of the model law but is very important to pay attention to. So, they, for instance, decided that this DAO, in order to interact with the legal system and other corporations and just the bureaucracy overall, of their jurisdictions, and then globally through their jurisdictions, then they have to, for instance, incorporate an entity there or somehow incorporate their DAO entity within that jurisdiction through some novel form they came up with. Then, this obviously has drawbacks. So, to understand the attempts here is that, I like to give this example that blockchain and DAOs, they speak their own language. It’s a novel form of association between individuals who decide that to pool together some form of treasury or asset and govern it in a global way that is novel in comparison to what we’ve been doing so far, that association is usually amongst people done within a geographical zone, so a country, a jurisdiction, and internet and our hyper-connectivity and whatever opportunities that the blockchain technology offers allows people to interconnect and join within a purpose in a more global scale.
And this language is not actually spoken by the language of our legal system yet. So, for these two systems to interact, we need to somehow bridge this interaction and Vermont, Wyoming, and so on, try to do it through this incorporation method. But then, the model law has also been considered by other jurisdictions and implemented, adapted and implemented only in one so far. Here, you can see that Australia has analyzed it, the United Kingdom, it appeared in one of their works. St. Helena is considering it, New Hampshire as well, but the state of Utah in the United States decided to actually adopt and adapt the model law approach. But it did make some modifications to that. And if Rick, you go to the next slide, please. So Utah, what they did is that they took the model law and tried to fit it in within their own jurisdiction and system that is currently existing. And for that, they had to make adaptations, of course. But one of the features of the model law that is very core to the whole exercise that Utah parted with is the one where the model law, we do not require registration of the DAO. So the sole fact that it exists and fulfills the conditions of the model law should suffice for it to be recognized and granted legal personality. While in the state of Utah, they said, yes, but it also needs to register within our jurisdiction. Rick, please, the next slide. So here is just like a screenshot of the bill if any of you want to look further and read the bill. Next slide, please. And this is a screenshot of that registration provision where it says that it has to nominate one registered agent within the state of Utah. So I think that what they were attempting was to have a point of connection within the jurisdiction in order to speak to that DAO entity. So I can break here if any of you have questions so far

Jarrell James:
or otherwise we move to. Hello? Yeah, I think we should stop here for a second and maybe help everyone, what did we all just hear? What was this as a full review? So we are talking about decentralized autonomous organizations. These organizations can be collectives of people, but I think maybe more relevantly for the IGF, it could also be coalitions of companies or orgs that are all coming together under a shared mission and that shared mission would require them to have some kind of legal interaction with various bodies around the planet. And I think what you guys have just done really well is explain all of that, but also the issues with where this philosophy tends to run up against a wall, and that is oftentimes these jurisdictions. If anybody does have any thoughts, I would encourage you to start maybe thinking about your own governments and maybe your own jurisdictional issues that you’ve considered. And yeah, get your questions ready for us for later. But yeah, I think if we wanna move into really explaining the first challenge, might be good. And we can, like the first challenge of the DAO Model Law going forward. And from there, I think we can also take into account audience participation of maybe other challenges that you may think could be propping up in your own places or could pop up in examples you’ve seen in the past. So I’ll hand it over.

Silke Noa Elrifai:
I think one important factor, we have prepared two challenges. The reason we come with this challenge here is because here at IGF, you have a lot of government officials and governments that consider DAO legislation. What we face with Utah is basically, they adopted the Model Law wholesale with a few exceptions but for this registration requirement. But the core of the Model Law is actually not to force a DAO to have to register, because they are global entities. We’ve ourselves wondered why this is the case and what we can do to improve the Model Law because we’re working on a version two. The registration requirement, basically what we said, we had earlier the equivalence, the functional equivalence and the regulatory equivalence principles. We feel that the publication on the blockchain, the registration on the blockchain, the publication requirements and all the requirements we put into the Model Law in relation to that is regulatorily equivalent to registration in a jurisdiction but it seems that this is not the case for Utah and also other jurisdictions we have talked to.

Jarrell James:
So actually, I wanna just do a little bit of audience participation on this. It’s like, what would somebody think is one of the biggest hesitations of why they would want to interact with a single person or have someone, a single entity, a single person registered in their jurisdictions? Just raise your hand and answer that question if you want but we have, I think a lot of people tend to tell us it’s because of liability, it’s because of liability, it’s because of liability and I don’t believe that that is exactly where it actually comes from from these governing bodies. That’s not what their concern is.

Silke Noa Elrifai:
I mean, one other issue that usually is the elephant in the room is taxation. Like, without registration, for jurisdictions that currently consider making a hospitable environment for DAOs, this is, I wonder, this is basically because they wanna have tax money, they wanna have a benefit from it and they feel that there might not be a benefit if they do not require the registered agents or general other formation requirements in their own jurisdiction. I’m very much interested to hear your opinion on that because obviously within the model law also, this could be dealt with through new, different taxation rules on DAOs.

Jarrell James:
Yeah, and I just wanna give quick our participants online a chance to tune in here. Murshad is very well versed on the equivalence issue and I would love to just give you the floor, Murshad and just discuss some of your own insights around these tensions. Thank you. I won’t take up too much time, especially given the very comprehensive and thorough presentation that’s been given as well as the interventions that have already been made.

Morshed Mannan:
But I think in addition to taxation, one of the issues that we found as a challenge when it comes to establishing regulatory equivalence has been what are considered to be like incorporation fees which is, I guess, a type of tax or is a type of levy that a state expects an entity to pay when they’re filing. And we anticipated that the revenue implications would be something that they would take into account but in the transposition process, this was quite an eye-opening aspect of it that this came up again and again as a discussion point. So I think going forward, when we look at different jurisdictions to work with, with respect to the model law, the issue of how like regulatory equivalence cannot just take into account trying to meet a policy objective, but has to hands-on take into account these sorts of financial considerations as well and whether some other way of trying to meet these considerations, whether that is having a pool of assets that is kept to pay for these sorts of fees, it could be something that’s done individually, it could be done by unregistered DAOs as a group. There are many creative approaches that can be taken to do this, but basically that just trying to satisfy a policy objective wouldn’t be sufficient. The other point that I want to add is that in addition to this issue of registration, registered agents and so on, there’s also been a discussion about the role of accreditation. So who is going to actually verify that the different points that are mentioned for formation in the DAO Model Law, who gets to actually accredit that this is happening, who gets to audit it and so on? And we found that different jurisdictions have different views about who this should be.
Some have said that, okay, a private accreditation body that sort of does this assessment is fine, while others have said, no, we would want to have some entity in our state, something that the state authorities trust to be able to do this so that they know that these formation requirements have been met. And again, so this will be an issue to consider when we try to establish regulatory equivalence in other contexts. And yeah, I’ll hand it over to Rick or anyone else who would like to add to this. Or back to you.

Jarrell James:
I think that was a really solid overview. And I want to keep us moving forward here, because this regulatory question is going to come up again in this next bit. Because I think what we’re not discussing is: yes, we may not be fulfilling the philosophical background and ideologies of these municipal authorities or governing bodies, but there is also a moment where they’re not fulfilling the actual decentralized ideology and the purpose of having these decentralized organizations be able to collaborate, both in a mathematically ledgered way on a blockchain and in a way that allows a number of different stakeholders to combine themselves under one coalition and demand recognition on that basis. And why I wanted to bring up the conflicting philosophies around this for DAOs is that a really important part of the DAO ideology is that people want to be able to make movements in a private and secure environment. And privacy, by its nature, I think we’re learning, is kind of anti-state in some ways. I think there’s a desire to kind of eradicate true privacy in the digital sphere. And DAOs represent a collective movement that is also trying to maintain the privacy of some of its members and not put them in positions of compromise. So I wanted to hand it over to you to start off on challenge two and discuss where privacy fits into all of this. Fatemeh. Hello, yes, now it works. Thank you for this.

Fatemeh Fannizadeh:
I was gonna comment actually on the previous challenge just to cite some case law, but your prompt actually requires a response, because you said that privacy is anti-state. And I think this is fundamentally not the case. Actually, privacy, our right to privacy, is a constitutional right in most places, which is why it’s very aligned with the state’s mission. So privacy is not anti-state, but this is part of the current narrative that we are hearing more and more: that privacy threatens some of our other rights. So privacy needs to be compromised in order to protect and sustain anti-money-laundering rules, for instance. Or privacy should suffer, encryption should suffer, in the context of messaging apps, to protect the rights of other populations against some form of harmful content that can go through these apps, and so on. So privacy as a right, I think, is not under question, because it cannot legitimately be questioned. But here there is the question of whether privacy primes over these other rights. I even wonder whether this is a legitimate question in itself. Is this a dichotomy between should we protect privacy or should we protect against terrorism financing and money laundering? I think this is a false dichotomy that forces us to choose one over the other, while I believe that we can protect both, and we should aim to protect both and fulfill all of our rights without harming one for a certain narrative.

Jarrell James:
Yes, Silke, I wanna hear your response. And then Rick online, who is a deep professional in the privacy design space, I’d love to hear just a couple minutes of thoughts following Silke.

Silke Noa Elrifai:
Just give him a one, two. One thing I wanted to add, and we haven’t mentioned this: DAOs are premised on transparency, meaning that everything is transparent right now. And actually, I’m not sure DAOs are advancing privacy at all. It’s the opposite: they are not advancing privacy, they have been undermining it, and we are trying to get it back. The model law as it stands right now is also premised on transparency and how transparent everything is, and that leads to regulatory and functional equivalence. And we’ve seen several bad results come out of that. One is, for example, that DAO workers’ right to privacy of payment is being undermined because they’re getting paid by the DAO, and everyone can see they are paid X amount, whereas anyone who works for a company has this privacy; no one necessarily sees a person’s bank account. So what we’re trying to do is get privacy back into the model law. And that is a challenge, because DAOs are squarely in the cryptocurrency space, and there are KYC rules and anti-money-laundering rules, and regulators would love DAOs to at least stay transparent, while we’re trying to get this back to a certain extent.
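The payment transparency Silke describes can be illustrated with a toy model of a public ledger (a hypothetical sketch; the addresses and amounts are invented for illustration): any third party, not just the parties to a payment, can reconstruct a worker’s full payment history.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Payment:
    sender: str
    recipient: str
    amount: int  # smallest token units


# A public ledger: every entry is readable by anyone.
LEDGER = [
    Payment("dao_treasury", "0xWorkerA", 5_000),
    Payment("dao_treasury", "0xWorkerB", 7_500),
    Payment("0xWorkerA", "0xLandlord", 2_000),
]


def payments_to(address: str) -> list:
    """Anyone can reconstruct an address's full incoming-payment history."""
    return [p for p in LEDGER if p.recipient == address]


# A stranger needs no permission to see what 0xWorkerA is paid.
print(payments_to("0xWorkerA"))
```

A bank’s private database offers no equivalent query to outsiders; on a transparent chain this lookup is available to everyone by design.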

Jarrell James:
Yeah, Rick, I think you’re online. I actually really love a bit of what you were just saying about how DAOs are more transparent than the salaries of CEOs and of all these different corporate entities and their officers. And that transparency does seem to be lost on regulatory authorities. When you tell them that there’s a ledger and there’s this published transparency, I don’t know that there’s a lot of understanding around that. So yeah, Rick, I’d love to hear your thoughts on the design for these sorts of pieces. Yeah, I think there’s a lot of issues around privacy.

Rick Dudley:
I think there is this sort of fundamental misunderstanding when talking, in my experience, to regulators, or, maybe more precisely, hearing the arguments of regulators secondhand. They seem to believe that the medium somehow is special and has these special properties that require special treatment. And I think that’s very misguided. The medium, being internet communication, frankly, and cryptographically signed messages, not even encrypted messages, really should be a simple enough mechanism that we should be able to educate regulators on how existing laws, existing protections and existing regulations can be applied in a way that satisfies both the requirements of the existing online community and the regulators. And so for me, a lot of these privacy conversations are, to Silke’s point: why are we giving up privacy? There’s a technical limitation, a sort of engineering practicality, that’s causing that at the moment, but we shouldn’t expect that to persist in perpetuity. And we should be able to, much like Fatemeh was saying, have the privacy that we’re constitutionally guaranteed. And I think that maybe the bigger issue is that there is an internal struggle within any government that I’m aware of, where they want to know what people are doing in spite of the fact that there is a constitution or some other legal constraint on their ability to do that inspection. And I feel like these are just sort of classic, traditional internet privacy issues. The DAO privacy issues aren’t really that distinct from normal surveillance compromises, I guess you could call them, because surveillance still occurs. We can’t really avoid it. I think just touching on that,

Jarrell James:
you were saying that we shouldn’t give this up and there are ways to reconcile this. I fully agree; I think we all do. And I just would like to clarify that what is striking to me is that as things move towards a digital space, we are starting to see this idea, this perpetuation, that privacy is anti-state. And that’s kind of where I’m coming from on that. I’d like to quickly discuss any ideas on how we would reconcile these kinds of differences, what’s going on in model law number two, or version two, and what approaches are being taken around reconciling these mildly ideological differences. That’s an open question to any of the panelists; just take it. Yeah. Hello? Yeah.

Silke Noa Elrifai:
There have been attempts to recreate the privacy that a normal bank account, or a payment into a bank account by a company, would give you. I’m probably not the right person, because that was developed by the technical team earlier; there’s actually a blog post about that. But it was just about that one little point: how can the workers get paid without everyone knowing how much money they get and on what regular basis they get the funds? It was, and you might have seen this if you’re in the space, in relation to privacy pools. What it does is you send the funds into… am I the right person to talk about this? Maybe Rick wants to talk about that.

Jarrell James:
Yeah, Rick can talk about the privacy pools. Yeah, I’m capable of talking a little bit about privacy pools. I mean, I understand the underlying technology well enough.

Rick Dudley:
So yeah, in fact, I probably should have mentioned this earlier. I run a company. It’s a registered company in the United States; I pay my taxes. We get paid on chain by various blockchain foundations. I put those payments through Tornado Cash, specifically for this reason: to add some financial privacy for my employees, frankly, who get paid this way. I thought it was a bit bizarre and invasive that anyone in the world could see what they’re being paid. There’s also a security issue that I’m always sensitive to, a physical security issue, of people knowing how much you’re getting paid. And so I use Tornado Cash, and I still have all of my notes and what have you; I can prove to any regulator that I was not a terrorist, that I paid myself and what have you. But all of the rigmarole and controversy around that, even prior to some of these other claims about funding terrorism that have also come up, again, it’s just a lot of confusion. It’s a lot of regulators trying to hammer in a screw. Privacy pools are just a technology that is really trying to get you back to the basic level of privacy that you would have had otherwise. So basically being able to say: I sent this transaction in private to someone else, and now some regulator asks me, or somehow requires, that only certain types of transactions actually make it onto the payment rail, for example the fiat payment rail. Privacy pools are a technology that allows you to have privacy when you’re transacting, but then reveal it upon request and demonstrate that your funds were never tainted by funds that are otherwise restricted or regulated.
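The deposit-then-selectively-disclose mechanism Rick describes can be sketched in miniature. Real privacy pools rely on zero-knowledge Merkle-membership proofs, so the secret note never has to be revealed publicly; this simplified sketch (all names and data structures are hypothetical) substitutes direct disclosure of the note to a single auditor.

```python
import hashlib
import secrets


def commit(note: bytes) -> bytes:
    """Hash commitment: reveals nothing about the note until it is disclosed."""
    return hashlib.sha256(note).digest()


# Deposit: publish only the commitment on-chain; keep the note private.
my_note = secrets.token_bytes(32)
my_commitment = commit(my_note)

# The pool's visible state: commitments from many depositors, indistinguishable.
pool = [commit(secrets.token_bytes(32)) for _ in range(3)] + [my_commitment]

# An "association set": the subset of deposits believed to be clean.
# A deposit linked to sanctioned funds would simply be left out of it.
clean_set = set(pool)


def prove_clean(note: bytes, clean: set) -> bool:
    """On request, show that one's own deposit belongs to the clean subset."""
    return commit(note) in clean


print(prove_clean(my_note, clean_set))  # True
```

The point of the design is that ordinary observers see only an undifferentiated pool, while a depositor can still satisfy an auditor that their funds never mixed with restricted ones.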

Jarrell James:
All right, thank you. I think in the last few minutes, I wanna ask the panel, and I think it’ll be more fun if we do a little bit of a hypothetical situation. So let’s imagine each of you is talking to the lead regulator of, let’s say, a major world power. What is something that you would want to get across to them, and what is the call to action to the legal practitioners of that government around this DAO Model Law, if you had five minutes in an elevator?

Fatemeh Fannizadeh:
Thanks, I can go first, maybe. I think that it’s not about a lead power of the world. Actually, this is a global technology that knows no borders, and it should not be primarily regulated by one so-called lead power. What I would wanna tell all of the regulators and practitioners who are interested in this space is not to rush into regulating, or trying to capture or shape this technology through regulation right now, because the technology itself is still growing organically, and we need to give it space to grow. And I believe that all of the regulation that already exists, whether it is anti-fraud regulation or securities regulation and so on, does grasp some of the activities that may be problematic within the technology, and we do not need new forms of regulation right now. What we need is a sandbox. We need to give this technology the possibility to mature. And what we’ve heard often during the past days here is: what will the internet look like in 20 years? I think that the internet will definitely look different in 20 years, and, as I’ve also often heard, which was positively surprising, it will probably be decentralized or have more decentralized components. And for that to materialize in a positive way, I think that we need less capture right now and more sandboxing, in order for this technology to deliver on its promise.

Jarrell James:
Rick and Morshed, just a couple of closing thoughts. Let’s try and keep it to two minutes. Yeah, so just briefly, just to sort of mirror that previous comment, I strongly believe that most of what we’re doing in this space with DAOs fits under existing regulation.

Rick Dudley:
The vast, vast, vast majority; there might be one or two exceptions that I can’t even really think of right now. Frankly, as a taxpayer, I feel it’s the responsibility of the regulators to educate themselves and understand how these technologies work so that they can then apply the law judiciously to new technology, and I’m happy to help them with that. There are plenty of people who are interested in helping and supporting that, but I don’t think that we need to create a lot of excessive new laws. I think it causes more problems than it fixes. Yeah, that’s basically it.

Morshed Mannan:
Morshed, let’s give you an opportunity here at two minutes. I just wanted to add that we’re starting to see case law emerging, as has been alluded to, that starts looking at DAOs, and in some cases this is basically trying to achieve regulation by enforcement of certain existing laws. As we’ve started to see, in some cases this can lead to all manner of unintended consequences, especially if, let’s say, it gets appealed to an appellate court where a decision is made that creates precedent. And I think the issues that we raised in the model law, ranging from whether there should be implicit fiduciary duties, or questions of tort, all the way to how to deal with limited liability or joint and several liability, the risk of this has to be something that we should also try to proactively shape in the types of regulatory sandboxes that Fatemeh mentioned, hopefully with the idea that judges will eventually also come on board to start interpreting the law in a way that doesn’t end up constraining this space and creating new sorts of harms, because while we wait and see, this risk might also emerge.

Silke Noa Elrifai:
My last comment would be that even if regulators or jurisdictions do not wanna implement the model law, because of course there are a lot of issues with it too, at least get rid of the default characterization of DAOs as general partnerships or unincorporated associations, which gives joint and several liability to any of the contributors. This is one of the things we have seen recently which had a very chilling effect on DAO contributions and on the development of code for DAOs. We’ve seen this especially in the Ooki case in the US recently. This really needs to go away. So even if you think the model law is nothing for you, you need to address this in your jurisdiction, because if you don’t, you’re not gonna have much development in the area anymore.

Jarrell James:
Yeah, I completely echo those sentiments. Code by itself is not a crime. And I wanna bring this all together, because I know that legal frameworks and regulation can be very dry, and you might ask how this actually applies to the future. There are a lot of coordination systems that existed before DAOs, and this is just another innovation in the concept of coordination systems. And I think what the model law has done, and what COALA is trying to do, is push forward the field of innovative solutions around coordination, while maintaining a lot of new and exciting technologies such as blockchain and decentralized infrastructure and organizations. And in relation to this event, there are a lot of civil society groups here, a lot of people that could be coming together and making their own coalitions, and for those online that are watching, I’m sure we’ve all seen a lot of frustration in making movements on the planet and trying to make changes inside different jurisdictions. So I’m really excited to see DAO Model Law version two, and yeah, we’ll be around; feel free to discuss this with your governments for us, and, you know, send them our way.

Fatemeh Fannizadeh

Speech speed

165 words per minute

Speech length

1721 words

Speech time

625 secs

Jarrell James

Speech speed

193 words per minute

Speech length

2412 words

Speech time

748 secs

Morshed Mannan

Speech speed

159 words per minute

Speech length

768 words

Speech time

290 secs

Rick Dudley

Speech speed

173 words per minute

Speech length

874 words

Speech time

302 secs

Silke Noa Elrifai

Speech speed

158 words per minute

Speech length

1679 words

Speech time

638 secs

Digital Safety and Cyber Security Curriculum | IGF 2023 Launch / Award Event #71


Full session report

Audience

During a discussion on the ethics of cybersecurity, a student from Nepal studying for a master’s degree raised a question regarding the ethical concentration within the field. The specific focus of inquiry was on issues related to hacking and privacy. The student displayed a neutral sentiment, highlighting the need to consider ethical implications in cybersecurity.

Another individual also expressed concern about the ethical aspect of cybersecurity, displaying a positive sentiment. This person emphasized the importance of addressing the ethical dimension within the industry. Both speakers stressed that cybersecurity professionals should be mindful of the ethical considerations associated with hacking and privacy.

The discussion brought attention to the fact that ethical considerations in cybersecurity, particularly pertaining to hacking and privacy, are becoming increasingly important. It highlighted the need for cybersecurity professionals to operate within a framework that not only protects systems and data but also upholds ethical standards. By addressing these concerns, the industry can ensure that security measures are implemented in a responsible and ethical manner.

Overall, the discussion shed light on the growing recognition of the ethical dimension in cybersecurity and the need to address it within the industry. With cybersecurity playing an increasingly crucial role in our digital society, it is essential to prioritize ethical considerations alongside technical expertise to protect and safeguard individuals’ privacy and security.

Nabeih Abdel-Majid

There is a critical need for cybersecurity education, particularly for children and parents, as a significant percentage of students are using social media and the internet without proper knowledge of potential security risks. Many students believe that social media sites are safe and trustworthy, exposing them to potential dangers. This highlights the importance of educating children and parents about cybersecurity to protect their personal information and online presence.

A proposed curriculum has been developed that covers various aspects of cybersecurity, including social network security skills, file backup, password management, and web browsing skills. The curriculum is designed to be interactive and engaging, using videos and interactive screens to provide a comprehensive learning approach. It is not only targeted at children but also includes modules for educators and parents, emphasizing the need for a community-based approach to increase cybersecurity awareness and action.

Dr. Nabeih has been working on this curriculum for seven years, demonstrating his dedication and expertise in the field. Now, his aim is to share this curriculum with educational institutions to ensure that children and their parents receive proper cybersecurity education. The curriculum has been designed to safeguard children and maintain their self-confidence in the digital world.

Parents play a vital role in implementing this curriculum and protecting their children online. As such, they are encouraged to actively participate in the learning process and be aware of potential digital threats. Nabeih Abdel-Majid’s platform for the curriculum includes a survey for assessing the levels of knowledge and offers courses in multiple languages, catering to a wider audience.

However, there may be some challenges in implementing the curriculum. During the presentation, Nabeih faced trouble connecting to Wi-Fi, which highlights the need for reliable internet access to deliver cybersecurity education effectively. Additionally, structured learning sessions are necessary to ensure that students receive proper guidance and support throughout the learning process.

The curriculum also focuses on preserving privacy and emphasizes the importance of controlling personal data. It teaches students how to safeguard their personal accounts on platforms like WhatsApp and Google to prevent unauthorized access.

Furthermore, the curriculum includes review questions for the purpose of understanding rather than examination. This approach aims to reinforce learning and ensure that students fully comprehend the concepts taught.

A new learning program is being piloted, demonstrating a commitment to continuously improving the curriculum and its educational impact. It places a strong emphasis on cooperation between teachers, students, and program creators. The ultimate goal is to create a secure community where individuals are well-equipped with the necessary cybersecurity knowledge and skills.

The curriculum is targeted at children from grade 5 to grade 11, covering different levels suitable for each grade. Parents are recognized as crucial participants in the educational program and are required to attend group sessions along with their children. This collaborative approach ensures the involvement of parents in protecting their children online.

It is important to note that students can still find ways to circumvent digital restrictions despite cybersecurity measures being in place. Therefore, parents must actively watch their children and offer support and guidance against digital threats.

The project has received accreditation from the KHDA in Dubai and is seeking recommendations from the IGF to expand its implementation to different communities. This indicates recognition of, and confidence in, the curriculum’s effectiveness.

Overall, there is a clear need for cybersecurity education for children and parents. The proposed curriculum developed by Dr. Nabeih addresses this need comprehensively. With the involvement of parents and educators, a community-based approach can be adopted to increase cybersecurity awareness and action. Challenges such as reliable internet access and structured learning sessions need to be overcome to effectively implement the curriculum. The goal is to create a secure community, empower children with cybersecurity knowledge, and ensure their safety online.

Hala Adly Hussain

Blockchain technology is highly regarded for its ability to safeguard valuable data and assets from cyber criminals. It operates as a decentralised system that upholds principles of security, privacy, and trust. The architecture of blockchain allows for monitoring of ledgers, enabling the identification of any unusual or malicious activity. Furthermore, the implementation of smart contract security on the blockchain ensures that payment processes become more convenient and secure.

There are two prominent types of blockchain: public and private. Public blockchains, such as Bitcoin, operate on an open network, allowing anyone to participate in transactions. These transactions are validated using public key encryption, ensuring transparency and accountability. On the other hand, private blockchains offer more control, as entry to the network is restricted and heavily reliant on identity control through digital certificates.
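The tamper-evidence that makes ledger monitoring possible comes from hash-chaining: each block commits to the hash of its predecessor, so altering any historical record invalidates every later link. A minimal illustrative sketch (not any production blockchain’s actual block format):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block's predecessor


def block_hash(block: dict) -> str:
    # Canonical serialization so the hash is deterministic.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def make_chain(records):
    chain, prev = [], GENESIS
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain


def is_valid(chain) -> bool:
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev:
            return False  # link broken: an earlier block was altered
        prev = block_hash(block)
    return True


chain = make_chain(["tx1", "tx2", "tx3"])
print(is_valid(chain))           # True
chain[1]["data"] = "tx2-forged"  # tamper with history
print(is_valid(chain))           # False
```

Because the forged block’s hash no longer matches the `prev` field of the block after it, a monitor recomputing the chain detects the alteration immediately, which is the property that makes unusual or malicious activity identifiable.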

In order to successfully implement blockchain technology, education and regulation play vital roles. It is necessary to conduct blockchain implementations while adhering to regulatory requirements. This ensures that the technology is utilised in a manner that aligns with legal and ethical standards. Additionally, it is crucial to educate individuals about the potential vulnerabilities of blockchain. By increasing awareness and knowledge, stakeholders can proactively mitigate risks and strengthen the security of the technology.

Moreover, Hala Adly Hussain demonstrates a keen interest in obtaining approval from the Ministry of Education in Jordan for their project. This highlights the significance of seeking validation and involvement from relevant authorities in the implementation of blockchain solutions within the education sector. By having regulatory bodies endorse projects, the credibility and potential impact of initiatives aiming to enhance the quality of education can be maximised.

In conclusion, blockchain technology possesses inherent capabilities that make it a powerful tool for protecting valuable data and assets from cyber threats. Its decentralised nature, coupled with principles of security, privacy, and trust, enhances the integrity and resilience of digital transactions. The availability of public and private blockchains offers flexibility and control over network participation. Education and regulation are instrumental in successful blockchain implementation, ensuring compliance with legal requirements and mitigating potential vulnerabilities. Collaboration with relevant authorities, such as the Ministry of Education, strengthens the credibility and impact of projects within targeted sectors. The power and potential that blockchain technology holds can be harnessed when combined with a comprehensive understanding of its intricacies and a commitment to ethical practices.

Moderator

During the discussion, the importance of integrating an AI Curriculum in education was highlighted. It was emphasized that teachers, parents, and children should all be educated in this curriculum. This is because AI is becoming increasingly prevalent in various fields, and it is essential for individuals to have a solid understanding of AI concepts and applications.

The AI Curriculum was seen as vital, and it was suggested that everyone should actively participate in it. By doing so, individuals can develop the necessary skills and knowledge to effectively utilize AI and stay updated with technological advancements. Moreover, integrating AI Curriculum in education can help bridge the gap between technological advancements and traditional teaching methods, leading to a better educational experience for students.

In addition to AI, the discussion also explored the integration of Blockchain technology into the cybersecurity curriculum. It was suggested that blockchain technology could enhance cybersecurity by countering the efficiency and effectiveness with which cyber criminals apply AI and machine learning in their cybercrimes. Blockchain, being a decentralized system built on principles of security, privacy, and trust, offers benefits such as real-time data delivery, cost-effectiveness, and strong encryption practices.

The importance of protecting children online while maintaining their self-confidence was also emphasized. The AI Curriculum aimed to equip educators, parents, and children with the necessary knowledge and tools to ensure their safety in the digital world. It was deemed crucial for parents to attend sessions and understand how to monitor their children’s online activities discreetly, thus ensuring their safety without compromising their privacy and trust.

Furthermore, a program to enhance community security was launched as a pilot test. This program aimed to support different communities and promote a secure environment through the collaboration of Dr. Ahmed Noura, Dr. Nermin, and other stakeholders. It was acknowledged that cooperation from all members of the community is essential in achieving a truly secure community.

The session also highlighted the significance of different roles with varying privileges in educational programs. By featuring roles such as students, managers, teachers, and site administrators, users can effectively monitor student progress and ensure a well-rounded educational experience.

The importance of monitoring and taking care of students was stressed. It was acknowledged that even in the digital age, where technological advancements can pose risks, it remains crucial to protect students from harm and extortion.

The session also delved into the aspects of privacy control, stopping hacking, and maintaining cybersecurity. Methods for controlling privacy on platforms such as iCloud and Android were discussed, as well as detecting if someone has hacked into personal devices using the camera or microphone. The importance of deleting files to ensure they are not retrievable by others was also highlighted.

Overall, the session concluded with the expression of hope for the implementation of the AI Curriculum in educational institutions. This implementation could effectively protect children and promote safe internet usage. The discussion underscored the need for continuous education and adaptability in the face of technological advancements. By equipping individuals with the necessary skills and knowledge, society can navigate the digital landscape securely and confidently.

Video

The analysis explores various arguments and topics related to cybersecurity, online learning, and digital security. One argument highlights the alarming possibility of devices being hacked without the user’s knowledge. This emphasises the need for taking necessary preventive measures. It is stated that many programmes have the ability to breach and sneak into your device, allowing them to start monitoring you without your knowledge. This raises concerns about privacy and the security of personal information.

Another argument discusses the alarming capability of hacking programmes to remotely control a device’s camera. This invasion of privacy poses significant risks and underscores the importance of safeguarding digital devices from potential breaches. The analysis further suggests implementing measures to ensure digital security and privacy, such as using internet-safe browsing and adopting strong personal digital security habits. The evidence provided includes the capability of hacking software to greatly affect a device if preventive measures are not taken.

Furthermore, the analysis highlights the importance of cybersecurity awareness, particularly in the education sector. It mentions that a teacher emphasises the significance of cybersecurity awareness in their classes. This observation reflects the growing recognition of the need to educate individuals about cybersecurity risks and the measures they can take to protect themselves.

In addition to cybersecurity, the analysis touches upon other topics as well. It mentions the creation of interactive screens and the use of QR codes for testing purposes. Creating interactive screens can enhance user engagement, while QR codes can provide a convenient and efficient way for testing.

The analysis also addresses online learning and its benefits. It highlights how online learning allows students to repeat sessions as much as needed and have personalised access to lessons from home through unique usernames and passwords. This flexibility enables students to learn at their own pace, which is a significant advantage of online learning.

Furthermore, it is noted that online learning sessions are structured and divided into different sub-sessions. This provides a systematic approach to learning and enables students to navigate through the content more effectively.

Regarding online privacy, the video suggests verifying that no one else is using your personal accounts, first through the WhatsApp application (by checking linked WhatsApp Web sessions) and then by reviewing the Google account through the browser. This highlights the importance of implementing measures to maintain privacy and protect personal information in the digital realm.

In conclusion, the analysis sheds light on several important aspects of cybersecurity, digital security, and online learning. It underscores the need for preventive measures to safeguard against hacking, the importance of cybersecurity awareness, and the benefits of online learning. It also highlights the significance of implementing measures to maintain online privacy and personal digital security. Overall, these insights provide valuable information for individuals and institutions in navigating the digital landscape safely and securely.

Session transcript

Moderator:
perhaps. If she can launch it again. Esteemed attendees, allow me to welcome you on behalf of the Creators Union of Arab, which holds consultative status with the Economic and Social Council of the United Nations. We are delighted to have the HDTC training centre as our strategic partner, which has greatly contributed to spreading awareness about the importance of this curriculum amid the AI revolution, assuring the need to educate teachers, parents and, most importantly, the children, who are the core of this curriculum. We extend a welcome to all who are attending our session, whether in person or online. Let me also welcome our distinguished speakers in this session. On my right, Dr. Ahmed Noor, president of the Creators Union of Arab and the Arab Media Union, and on my left, Dr. Nabeih, a professor at the Higher Colleges of Technology in the United Arab Emirates and the owner of the intellectual property of the digital safety and cybersecurity curriculum that will be presented in our session today. We also welcome our speaker from Egypt, Dr. Hala Adly Hussain, secretary-general of the Union of Arab Women Leaders, from a member state of the Arab League. The session will address several key points: the mission of this curriculum, its implementation objectives and its impact on achieving safe Internet usage, and we will discuss the role of blockchain in the development of the Internet. Let me give a brief outline of the agenda of this session. We will start with the welcoming word by Dr. Ahmed Noor, then the intervention of Dr. Hala Adly Hussain, who will be joining us online, and last but not least, we will share Dr. Nabeih's journey with the digital safety and cybersecurity curriculum. Welcome to all. Thank you. Thank you, Dr. Ahmed. 
Now we'll listen to the intervention from Dr. Hala Adly Hussain. I think she's joining us online. Dr. Hala, are you here?

Hala Adly Hussain:
Yes, I’m with you. Yes, hello. Good morning. Good morning from Egypt. I don’t know.

Moderator:
Good morning. Here in Japan, it is afternoon. Okay. Please, you have the floor. Thank you.

Hala Adly Hussain:
Okay. Dear Dr. Ahmed Noor, president of the Creators Union of Arab Media Professionals, a member of the United Nations, it is an honour for me to be part of this valuable conference to discuss the importance of protecting and securing all information in the name of the digital safety and cybersecurity curriculum. In my session today, I am going to speak about the role of blockchain and how it will protect our valuable data, our business and our assets. Cybercriminals are increasing the frequency and sophistication of cyberattacks by pooling their knowledge and leveraging new technologies. The use of artificial intelligence and machine learning helps them prepare cybercrimes more efficiently, causing more profound and widespread damage. Traditional solutions alone are often insufficient to meet modern cybersecurity challenges, so we must explore other approaches for improving information security, including blockchain technology. Blockchain technology, according to IBM, is a shared immutable ledger that facilitates the process of recording transactions and tracking assets in a business network. It is a system for tracking anything with value, securely, transparently and cost-efficiently. The name blockchain comes from the fact that each transaction is recorded as a block of data, and this block might record one or more data types, such as quantity, price or location. These blocks become a chain as the asset moves from one owner to another, and the chain contains the details of each transaction, including its time and sequence. So a blockchain can be advantageous to many use cases. 
Any field benefits from a secure, transparent, decentralised network, including healthcare, supply chain management, copyright and royalty protection, the Internet of Things, messaging, voting, charity, even new drug innovation; as I am a pharmacist, it is very valuable for me to protect innovation and know-how from being stolen. Blockchain has two types. One of them is the public type, of which Bitcoin remains the most prominent example. Anyone can join a public blockchain, and can do so anonymously. This blockchain ecosystem uses Internet-connected computers or mobiles to validate transactions and provide the agreed-upon consensus. The consensus here is achieved via Bitcoin mining, using computer resources to solve a cryptographic puzzle and create the proof of work by which each transaction is validated. The public blockchain does not have many identity and access controls, so authentication and verification are largely carried out through public key encryption. In contrast, a private blockchain, which is more relevant for our business, our organisations and our data, relies heavily on identity control, mostly through digital certificates, using them to make the blockchain private through membership and access privileges. Typically, a private blockchain only allows access to known entities and organisations. The consensus here is achieved by selective endorsement: known users with privileged access have permission to verify transactions and maintain the ledger. Due to this tighter control, a private blockchain is more likely to satisfy industry regulatory compliance requirements and, as we said before, to protect drug innovation and other sensitive data, so I think it is especially beneficial for sectors such as defence. It is very important for them. 
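The proof-of-work puzzle described here can be illustrated with a short sketch. This is an illustrative simplification, not Bitcoin's actual mining algorithm; the difficulty of four leading zeros and the transaction string are arbitrary choices for demonstration:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 hash has `difficulty` leading zeros.

    This is the essence of proof of work: finding the nonce is
    computationally expensive, but verifying it takes a single hash.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 5 units")
print(nonce, digest)

# Any participant can verify the completed work with one hash:
assert hashlib.sha256(f"Alice pays Bob 5 units{nonce}".encode()).hexdigest() == digest
```

Raising `difficulty` by one multiplies the expected search time by 16 while leaving verification cost unchanged, which is what makes the consensus "agreed upon" yet cheap to check.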
So cybersecurity is built into blockchain technology because of its inherent nature as a decentralised system built on principles of security, privacy and trust. In addition to transparency, cost efficiency and enhanced security, it is fast: data on a blockchain network is delivered in real time, making it useful to anyone who wants to track assets and see transactions in time, such as payments, orders, accounts, or the know-how behind our drugs. It is important to note, however, that while viewing a transaction may be instant, due to the encryption and serialisation process each record can be slow to upload compared to a typical data network. Blockchain has many advantages, such as decentralisation and collaborative consensus: the collaborative consensus algorithm means that the network can monitor its ledgers for unusual or malicious activity. There are also strong encryption practices and digital signatures, effectively using a public key infrastructure for validating configuration modifications, authenticating devices and securing communication; this infrastructure of asymmetric cryptographic keys and digital signatures is core to blockchain technology, providing verification of data ownership and data integrity. Records are immutable: nobody can modify a record on a blockchain ledger. If a record contains an error, it can only be rectified by making another transaction or another block, in which case both transactions will be visible. Nobody can interfere with the data or change any data on the system. There is also Internet of Things protection: with increasing applications in various industries, devices are often targeted by cybercriminals due to their inherent vulnerability, so blockchain provides additional protection for those using IoT devices. 
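The immutability property described here rests on each block storing the hash of its predecessor, so altering any one record breaks every later link. A minimal sketch (a toy chain for illustration, not any production ledger):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash covers the previous block's hash, chaining the records.
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

def build_chain(records: list[str]) -> list[dict]:
    chain, prev = [], "0" * 64  # the genesis block points at an all-zero hash
    for data in records:
        h = block_hash(prev, data)
        chain.append({"prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def is_valid(chain: list[dict]) -> bool:
    # Recompute every link; any edited record invalidates all later hashes.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["A pays B 5", "B pays C 2", "C pays A 1"])
assert is_valid(chain)
chain[1]["data"] = "B pays C 200"   # tamper with the middle record
assert not is_valid(chain)          # the break is detected on re-validation
```

In a real network the validation step is performed independently by many nodes, which is why a tampering participant cannot simply recompute the later hashes on their own copy and have it accepted.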
Blockchain also helps prevent DDoS attacks. A distributed denial of service attack aims to overload a server with requests, and it requires a focal point to target, typically an Internet Protocol (IP) address or a small group of IP addresses; a blockchain-based domain name system (DNS) can remove that single focal point, neutralising the cyber threat. Data privacy is also very important. While transparency is a prime benefit of using blockchain, with everyone able to see immutable transactions, a blockchain network can allow only trusted participants to view transactions, and this can be achieved with minimal governance. Furthermore, blockchain lacks the traditional targets sought by cybercriminals, making it more challenging for them to achieve unauthorised access. Blockchain will also protect our contracts through smart contract security. Smart contracts are sets of rules stored on the blockchain that trigger transactions when their conditions are met. This automation makes payment more convenient, and it remains secure because its components are tested for authentication, data security, access control and business logic validation. As with the implementation of any business system, risk assessment and subsequent management processes are required to ensure data protection and the safety of business systems, so blockchain matters greatly in risk assessment and management. Heavily regulated industries aim to protect the public and critical infrastructure with clear guidelines regarding information security, and any blockchain implementation should be carried out with a close eye on regulatory requirements. The disaster recovery plan is also important: the minimum security requirements for blockchain participants and organisations implementing a blockchain solution require detailed policies on identity verification and access management. 
This is a critical area for blockchain applications, since it is a potential source of breaches and contributes to a firm's vulnerability. From my point of view, blockchain has no single point of failure. Every chain is immutable, so no participant can break a link to insert a block. It is almost impossible to tamper with one of these cryptographic chains because an agreed consensus mechanism validates the accuracy of every transaction in the chain. However, blockchain also faces some limitations and risks, such as scalability, interoperability, regulation, governance and education. So it is very important, as the doctor will tackle this point with a curriculum in the schools, for all of our students and parents to be aware of this. Thanks for your kind listening, Dr. Ahmed, Dr. Mian.
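The smart contract idea mentioned above, rules stored on a ledger that fire a transaction automatically once their conditions all hold, can be sketched in plain code. This is a conceptual toy, not Solidity or any real contract platform, and the escrow scenario and condition names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    """Toy smart contract: release payment only when every condition is met."""
    amount: int
    conditions: dict = field(default_factory=lambda: {
        "goods_delivered": False,
        "buyer_approved": False,
    })
    settled: bool = False

    def mark(self, condition: str) -> None:
        if condition not in self.conditions:
            raise KeyError(f"unknown condition: {condition}")
        self.conditions[condition] = True
        self._maybe_settle()  # every state change re-checks the trigger

    def _maybe_settle(self) -> None:
        # The "contract logic": fires exactly once, when all conditions hold.
        if not self.settled and all(self.conditions.values()):
            self.settled = True
            print(f"released {self.amount} units to the seller")

contract = EscrowContract(amount=100)
contract.mark("goods_delivered")
assert not contract.settled          # one condition alone is not enough
contract.mark("buyer_approved")
assert contract.settled              # both conditions met: payment triggers
```

On a real blockchain this logic would be replicated and executed by every validating node, which is what removes the need to trust any single counterparty to honour the rules.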

Moderator:
Thank you, Dr. Hala Adly Hussain, for the clarification about the role of blockchain in cybersecurity; it will play a very important role in this matter. I think it is very important to concentrate on blockchain and use it to achieve cybersecurity. Thank you, Dr. Hala. Now it is time for our journey with Dr. Nabeih Abdel-Majid, the owner of the digital safety curriculum as intellectual property. You have the floor, doctor.

Nabeih Abdel-Majid:
Thank you very much. Thank you for giving me this opportunity to speak about this curriculum here at one of the distinguished events. I would like first of all to thank Dr. Ahmed Noor, the Creators Union of Arab and the Arab Media Union. Thank you very much for giving me this opportunity. Thank you also, Dr. Nermeen, the secretary manager of the Creators Union of Arab. And I would also like to thank the Higher Colleges of Technology, United Arab Emirates. Thank you very much for all the support that you give me. Today we are talking about something really critical for everyone, because everyone has a child, and everyone needs to know what is happening with their kids. Are they using their devices in a safe mode or not? So what I did, first of all, was study the culture of more than one country in the Middle East, such as Saudi Arabia, and I also studied the culture of the United Arab Emirates, and I will talk about the numbers that I saw. Let me go through all these numbers and this study, just to show you why it is very important to take care of our children and what they actually do when they are using the Internet. Are they playing? Do they think that they are out of the hackers' reach? Then we will talk about the solution that I am providing: a curriculum through which the students can teach themselves, and through which we can cooperate to prepare the students so that they know exactly what is happening when they are using social media. What motivated me, actually, was what I saw of the rapid development of information technology and the existence of diverse, easy access to information. You know, now everyone can reach whatever they need within a few seconds. It is so easy to be connected to the Internet. 
Although, as we hear, more than 2.6 billion people do not have a connection, I am talking about the other part of the world that does have a connection but does not actually know how to use it in a safe mode. Most people. And there has been a high turnout among students in using social networks; no one can live without the Internet, without the connection. Unfortunately, I have to say that most communities do not prepare themselves, do not teach themselves first, before they start using the Internet. That motivated me to stop here and take care. Let me study what is happening in the market; let me study what is happening among the children in the schools. One of the most striking results of this unconscious turnout is the many misguided security practices, as I told you, and the recent successes of hackers in penetrating users' privacy and in extortion. So let me tell you, first of all, about the study samples and the numbers I have, and then the solution. This is what I am going to show you today. The samples identify the degree of turnout in using social sites: how many hours everyone uses the Internet daily, the students' goals in using these sites, and the percentage of students in the schools who have been hacked. I want to show you the relation between the level of knowledge and how secure you actually are. Note that these hours do not include the hours of studying. When I asked the students how many hours they use their devices daily, most of them said more than six hours, and the hours of studying using the tablet are not included. 
If you add the hours of studying at school or at home, it is going to be more than 12 hours a day. And that actually stopped me. What is happening during these 12 hours? Do we, as parents, know what is happening with our kids? Look at these numbers. I will not read all of this, but I need you to stop with me on what is happening. More than 79% of male and female students have more than one account on these sites. I do not know why; when I asked some of the students why they do this, they said they are just playing. Around 90% of the students say that it is very difficult to stop using these sites, so it is a type of addiction; they do not stop. 15.2% of the students choose fake names when using these sites, and also fake genders: somebody says, I am a woman, when he is a man. When I asked them why they do this, they said they just play, and they do not know exactly how that might affect them. More than 23% indicated that their personal information is not real, which also stopped me. And around 35% of the students confirmed that they had established false relationships through these sites. I am definitely sure that you have seen a lot of cases similar to these. But what actually stopped me is that more than 15% indicated that they are being subjected to extortion continuously. And you know, in the Middle East we have a culture in which women have a special kind of privacy. Of course, privacy is special for everyone, but 15% extortion means that out of every 100 female students, 15 have been extorted. That is a big number, really. And I am sure that many others hide this because they are too shy to say it. Also, 22.2% of the students reported that they had been exposed to penetration during their use of these sites, and 34% had their passwords stolen. 
15.2% of the students had their files and their own pictures stolen. So where are we? That actually stopped me. Look at these numbers. I am sorry to read them out, but it is very important; I need to read all of these numbers because we all need to cooperate to stop what is happening with our kids. 58.8% of the students expressed their acceptance of any friendship request through these sites. They trust everyone, and this is something that made me stop. 38.9% believe all the information presented through these sites can be believed. And around 70% of the students believe that social network sites are safe and trustworthy. These numbers actually stopped me. I could tell you many more numbers from this study, and I hope these slides are going to be shared on the site of the IGF, United Nations. Now, what about the parents? The parents also stopped me. Do you know what is happening with your kids? Can you work with social media and devices as well as your kids can? When I studied this, I found a huge gap between the two. And there is a parallel relation between your level of education and how much you can actually help your kids against any case of extortion. This table shows that if your level of education is high, then your fear will be low, and the opposite is also true: how many cases you find with your kids, and in how many cases you can help them. In around 70% of cases, parents can help their kids if they are educated in how to use social media. That actually made me decide that to solve the problem you should not target only the kids; you have to target different parts of the community. The first is the kids, the second is the parents, and also the educators in the schools. 
If all these parts of the community cooperate, then we will definitely help the world to be more secure, and people will be sure of how to use this social media. So what we need, actually, is a curriculum, and this is the curriculum that I created with the full support of the Creators Union of Arab, HDTC and also the Higher Colleges of Technology: a curriculum connected with a platform of videos and interactive screens. We teach the students step by step how to know whether a website is fake or not, and how to know whether somebody is opening my account or not. We also target the parents, to help them watch over their kids and know exactly whether somebody is trying to attack or extort their kids. We created this curriculum in three different levels, starting from an introduction to information security, moving on to operating systems (why we have to update the operating system), personal account management skills (I have to increase your skills, to build your skills in a way that you can secure yourself), social network security skills, and then early intrusion detection: there are some signs I have to teach the students, so that if you see them, you know you may be under attack, so take care. Then web browsing skills, then preventing electronic extortion, and then external control. This is all for the first level; of course we have more details, and I can share them with you. In the second level we talk about file backup: how do we know that if I delete something, it is really deleted and no one can retrieve it? It is a type of privacy. And how do I keep a backup? There are a lot of details about privacy, controlling application activities, and hiding my movements on the Internet so no one can track me. The last level is how to control your account on iCloud, Android or any other cloud; we start with iCloud management skills. Because I do believe that 
the systems and tools have a lot of features, but people do not know all of them. We know how to use this mobile; we know how to call somebody. But how do I hide my call, how do I hide myself, how can I be sure that no one is observing me? This is the challenge, actually: to build this knowledge in the community. So why these three levels? We need a comprehensive programme from basic to advanced in order to lift up the whole community, and we need to develop leading cybersecurity experts. I am dreaming of the day when students are secure enough before they go to university. I know that no one can say they are 100% secure, but what we are trying to do is decrease the number of hits, so that it is not easy for attackers, for criminals, to attack the kids. And let us combine together, going from theory to real-world application and simulation. What I am looking for involves not only me, not only the Creators Union of Arab; it is a cooperative effort for our community, and that is why we chose this event to announce, or to launch, this curriculum, in order to help the whole community. We are trying to help different communities to be secure by cooperating with all elements of the community. So let us talk about SDSC, an introduction to it. I am happy to show you how we built this curriculum and how easy it is for the students, because it is not only education; it is also career improvement, and it is a good opportunity to bridge the global skills gap. It is a call for everyone to be at the front of cybersecurity training. I do believe that the world has a lot of technologies; the only thing we need is to know how to use them. These resources will be shared as well, along with the study of different cultures. We call the curriculum SDSC: Student Digital Safety Certification. 
By the way, we have a lot of certificates in the same area, like safe schools, and a lot of curricula, but actually they are not working. Do we have Internet? Do we have a connection? Yeah, okay. So this is the website where you can find the Student Digital Safety Certification. First of all, before we go through the curriculum, we start with a survey. We built three types of surveys: the first survey is for the parents, the second for the students, and the third for the educators themselves. I think we have a problem with the speed of the connectivity. What is wrong here? If you can help me to connect, sorry. Can we choose this one, for example? Excuse me, there is a problem with the connection. There is no Internet connection here? Oh. Okay. Never mind.

Moderator:
You can explain it, doctor, at the steps of, uh.

Nabeih Abdel-Majid:
Well, I’m very excited to show you the curriculum and videos and interactive screens. Let them discover it on the site. Let me try it. Yeah. Yes. Yes. I’ll try to do this. Yes. Um. Which one is? This one? Um. Which one is? This one? Yes. We are here to make infrastructure for this all. Okay. Okay. Okay.

Moderator:
We are here to discover and visit our site, doctor.

Nabeih Abdel-Majid:
No, you know, this curriculum is like my kids, you know. Yes, I know. I need to show everyone what I have.

Moderator:
I know.

Nabeih Abdel-Majid:
I'm trying to connect, yeah. Um. Cannot connect to this one. Dr. Nabeih has been working on this curriculum for seven years. Yes. And we got this chance to launch it so that all educational institutions can benefit from this curriculum, because it is very important to save our kids, along with the role of the parents: they learn how to use this curriculum to protect their kids without letting them lose their self-confidence.

Moderator:
This is a very important point that we concentrate on.

Nabeih Abdel-Majid:
I'm sorry, we thought that the Wi-Fi would get online, so sorry. Who is Dinesh here? Can you help us, please? Are you Dinesh? Yeah. Can we use your Internet? Sure. Your password? I can't see your name, but your password is here. Well, this is a good example that you should not use this person's password. Okay, I got it. Connected? Connected? I think so. Let me see. No. Can you request something? Request? Okay. I think there is a connection. There is no connection. There is no connection. Dinesh? How can I reconnect? Where is he now? Are you Dinesh? Yeah. Your password? Your password is in here? Yes. Okay. Connected? Connected. Yes. Can you read? Mm-hmm. Should be connected. Can I use the PC? It's working here now. Sorry for this stop; now we will start again. Okay. I don't know where the problem was. This is the survey; we stopped here. We start by giving a survey to everyone: parents, students and teachers. If you are a parent, we need to check your level; likewise for students or, sorry, teachers, their security level. You just click on the survey and fill in the information, and when you finish the survey, because we have a lot of questions, multiple-choice questions, you will receive an analysed email: these are your strong points and these are your weak points. Then you go to the LMS; the LMS is where our curriculum is, and it allows me to get access. Sorry, sorry. Okay, this is our platform. As you see here, we can have more than one course in different languages; we have Arabic and English now. Then, what is your device, Android or iOS? Because, you know, Android has different screens than iOS. So if your device is iOS, click on iOS, and then start the 
different curriculum here. This is the first level, as I told you; we start from the introduction through to Internet-safe browsing. Each one of them has a lot of assignments, like a group of assignments and exams. You can see here it is a complete school: assignment one and assignment two. Because time is limited, I need to show you how one of them works. If you go to "find who is spying on your device", for example, the video will start with an introduction to the session. Where is the sound? There is no sound. So the sound is telling you, yeah. I don't know, I don't think so. Is this the sound? Yeah, it's okay. Can I, can I, it's a touch screen, sorry.
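The survey-and-feedback flow Dr. Abdel-Majid describes, multiple-choice answers scored per topic and returned as a report of strong and weak points, could be sketched as follows. The topic names, answer key and scoring scheme here are invented for illustration; the real SDSC platform's logic is not described in the session:

```python
# Hypothetical answer key mapping each question to a topic and correct choice.
ANSWER_KEY = {
    "q1": ("passwords", "b"),
    "q2": ("passwords", "a"),
    "q3": ("phishing", "c"),
    "q4": ("phishing", "c"),
    "q5": ("privacy", "a"),
}

def analyse(responses: dict[str, str]) -> dict[str, float]:
    """Score each topic as the fraction of its questions answered correctly."""
    correct: dict[str, int] = {}
    total: dict[str, int] = {}
    for qid, (topic, right) in ANSWER_KEY.items():
        total[topic] = total.get(topic, 0) + 1
        if responses.get(qid) == right:
            correct[topic] = correct.get(topic, 0) + 1
    return {t: correct.get(t, 0) / total[t] for t in total}

def report(scores: dict[str, float]) -> str:
    # Summarise the highest- and lowest-scoring topics for the emailed report.
    strongest = max(scores, key=scores.get)
    weakest = min(scores, key=scores.get)
    return f"Strong point: {strongest}. Weak point: {weakest}."

scores = analyse({"q1": "b", "q2": "a", "q3": "c", "q4": "b", "q5": "b"})
print(report(scores))  # passwords answered 2/2, privacy 0/1
```

Scoring per topic rather than per question is what lets the platform tell a respondent *where* they are weak, which is the point of running the survey before and after the course.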

Video :
Did you know, dear student, that it is possible for your device to be hacked without you even realizing it, and without any change in the performance of the device? Yes, this is real. Let's consider what this means. Many programs have the ability to breach and sneak into your device, and then start monitoring you without your knowledge. These are called hacking programs. Let's consider various examples of how hacking software can affect your device, if, of course, you do not take the necessary steps to prevent it.

Nabeih Abdel-Majid:
This is one of the lessons, and the teacher can stop wherever he needs to stop, can go back and forward, so it is easy to use with the students. Allow me to take just two minutes of your time, and then we'll proceed to the different parts of the class.

Video :
Let’s start with the first example. What if these hacking programs remotely controlled your device’s camera? This means that a hacking program that infiltrated your device without your knowledge has managed to activate your device’s camera. Thus, the hacker who runs this program will be able to look at you and monitor you all the time, without your knowledge. The attacker is controlling your device’s camera, and may be able to see who is sitting with you as well. Do you agree with this?

Nabeih Abdel-Majid:
And so on. This is the first part of theoretical, I mean, media, okay?

Video :
And then the next step, when you finish, is to start the exercises, so it's interactive screens. Using the WhatsApp program, with the following two steps. The student has to click start, okay? Go to the WhatsApp Web or Desktop option in the WhatsApp settings menu, as shown in the following screen.

Nabeih Abdel-Majid:
So step by step we teach the students what to do in order to implement this idea and that idea.

Video :
Go to the web.whatsapp.com website using your laptop.

Nabeih Abdel-Majid:
Okay, click create. It's not only video; there are interactive screens, as I told you.

Video :
And scan the QR code as shown in the following screens. Go back to WhatsApp web or desktop and see the result of the test. The screen shown indicates that there is no tracking through WhatsApp.

Nabeih Abdel-Majid:
Okay, the student can repeat it as much as he needs. At home he also has a username and password, so he can access the website and teach himself. We also have the next training in the same session; we have different sessions, and sub-sessions inside each one of them. Next, you will start exercise number two. I need to go to the third part of the class, where we have the conclusion. So after the sub-sessions inside the session, we go to the conclusion, and then to some questions for the students.

Video :
Preserve our privacy by making sure that no one is using our personal accounts. The first way is by using the WhatsApp application. And the second is by opening the Google account through the browser. Make sure you can do them properly. Then move on to the next section. Wishing you all the best.

Nabeih Abdel-Majid:
Okay, the student clicks next, and then we have three to four questions for each session, just to be sure, between the teacher and the student, that you were with me or not. Let's answer these questions and review some of them together. It's not an exam; it's just reviewing.

Video :
Please answer the following questions.

Nabeih Abdel-Majid:
Yes, start, and then a discussion starts between teacher and student, and this makes a really active class. So when you answer, it will tell you whether you are wrong or correct. Okay?

Video :
Wrong.

Nabeih Abdel-Majid:
Let me have one right, please.

Video :
Correct answer.

Nabeih Abdel-Majid:
Okay, and then: please review this lesson and try again, because of your score, and you can also review the quiz. Then there is a discussion between the teacher and the students: you have to answer this one, and explain why. And allow me to tell you that we have just implemented this program as a pilot, a test, because we are planning to launch it here, at this event, with the support of Dr. Ahmed Noura and Dr. Nermin, so we can launch it for different communities. Next. So it tells you exactly where you failed. Next: please review this lesson. If you stop, you return back to the curriculum. Allow me also to tell you that we have a lot of rubrics, and different privileges for a student, different from a manager, different from a teacher, in a way that you can observe and monitor your kids, your students: where he or she was, and where he or she is exactly now. We start with a survey, and we end with a survey as well. And we have a lot of rubrics; you can find them here, under site administrator, and you can go through the different options. The rubrics are also customisable, so we give you the ability to create your own rubrics. Finally, let me say this: I do believe that if all of us cooperate, we can help the community to be secure. And this is our message, actually: we need to cooperate, all of us. It is not only the teacher, and not only the one who created this program; it is an effort that has to be taken by all the different parts of the community. So let's start from this distinguished place to cooperate, because I have five kids, and everyone has kids, okay? And we need to be secure. So let's cooperate together in order to reach the level of a secure community. This is my message, and this is the message of the Creators Union of Arab, and also the ACT, which supported me, and HDTC. Thank you, everyone.
Thank you for attending my session. And it’s yours. Thank you very much.

Moderator:
Thank you, Dr. Nabeih. It's a really amazing and outstanding program. But I want to ask: how old are the kids that this curriculum targets? Well, we target them from grade five. So they can use it starting from grade five? Grade five to grade 11.

Nabeih Abdel-Majid:
Yes, because we have different levels. Okay, a question I think everyone in the room has: what is the role of the parents of the kids in this curriculum? Oh, thank you very much for this question, because the parents are one of the main cornerstones, let me say. Yes. We also have a group of sessions for parents within the same curriculum. So whenever you participate in this program, your parents have to attend a group of sessions just to show them how to, I don't want to say observe or monitor, but take care of their kids, so they know exactly what is happening with you.

Moderator:
They should observe and monitor, but without their kids knowing, to be safe, because we are in an open digital environment and there are a lot of risks.

Nabeih Abdel-Majid:
Yes. Yes, but what I need to say is that the students can do whatever they want. Yes, sure. Even if you close all the doors, they will definitely find another door. Actually, the best thing is just to watch them and take care of them, and be ready to help them against any extortion, if it happens. This is my message.

Moderator:
Yes, thank you, Dr. Nabeih. I think we have about five minutes left. If anyone in the room has a question, please take the floor, in the middle of the room.

Hala Adly Hussain:
Dr. Nermin, Dr. Ahmed, thank you, doctor, for this valuable session, because it's very, very valuable for me, as I'm targeting education in our project. So is this approved by the Ministry of Education in Jordan, or not yet approved?

Nabeih Abdel-Majid:
No, we are accredited by the KHDA in Dubai, so it is accredited, yes. That's why we are looking for a recommendation from the IGF here, so it can be implemented in different places and different communities.

Hala Adly Hussain:
So far it's just implemented in the Emirates. Dr. Hanna, Dr. Hanna, we hope to share this

Moderator:
curriculum all over the world, so we are making this point a start for all students and all schools. One of the goals of this session, Dr. Hanna, is to share this curriculum with a large number of educational institutions.

Hala Adly Hussain:
I think you have to bring it to the Arab League. Oh, yeah, sure, sure, at the next meeting. Yes, sure.

Moderator:
We are already talking about this point with the Arab League. Thank you for this intervention, Dr. Hala; it will be very useful, so thank you, doctor. Thank you very much. Does anyone else have a question? Okay, I think no one has a question. Oh, yes, please.

Audience:
Thank you for this opportunity. My name is Rohit Prasad, and I am from Nepal. Right now I am a master's degree student. My question is: what are the ethical considerations in cybersecurity, especially regarding hacking and privacy? Thank you.

Nabeih Abdel-Majid:
Yeah, thank you very much. I think most of the sessions are about how to stop the hacking of privacy: how to know exactly if somebody has attacked my PC by watching my cam or my mic, and, if I delete some files, how to be sure that no one can recover them and take my privacy. That is one session. If somebody attacks my iCloud, for example, how would I know, and then how do I stop them? That is another session we have in this curriculum. And also where exactly you can control a lot of the options in iCloud and Android. So we teach the students, across many sessions in the curriculum, how to control what is this and what is that, and how to use this one and that one to control their privacy in their accounts. Basically, this curriculum is built on that. Yes, thank you for your question.

Moderator:
Thank you very much for your question. I think we are here because this is a revolution in all kinds of technology, and we have long-term work ahead to achieve what you describe. I hope that we will be able to have a discussion about cybersecurity across all categories of usage and all users of the Internet. So thank you very much to all attendees, online and in person. Thank you, Dr. Nabih, for this outstanding presentation. Thank you, Dr. Ahmad Noor, for your expertise in this initiative. Thank you, Dr. Hala, for joining us with your presentation. And thank you to all of you for your participation in this initiative. I hope that this initiative will be implemented by the largest number of educational institutions, to protect our children and draw a new map for the safe use of the Internet. Thank you all. See you at the next IGF in Riyadh. Thank you, Dr. Ahmad. Thank you, Dr. Nabih. Thank you, Dr. Hala.

Hala Adly Hussain
Speech speed: 128 words per minute
Speech length: 1505 words
Speech time: 708 secs

Nabeih Abdel-Majid
Speech speed: 157 words per minute
Speech length: 4530 words
Speech time: 1734 secs

Audience
Speech speed: 155 words per minute
Speech length: 50 words
Speech time: 19 secs

Moderator
Speech speed: 180 words per minute
Speech length: 982 words
Speech time: 327 secs

Video
Speech speed: 151 words per minute
Speech length: 398 words
Speech time: 158 secs

Exploring Emerging PETs for Data Governance with Trust | IGF 2023 Open Forum #161


Full session report

Udbhav Tiwari

Mozilla Corporation, owned by Mozilla Foundation, is a unique organization in the technology sector. It operates without the typical incentives for profit maximization and prioritizes user welfare and the public interest. While initially having a strong policy against data collection, Mozilla had to make changes due to limitations in product development. They have since explored privacy-preserving ways of collecting information, separating the “who” from the “what” to protect user privacy.

Privacy-preserving technologies have become increasingly feasible with the proliferation of internet availability, bandwidth, and computational power. Privacy has emerged as a key differentiating factor for products, leading to increased investment in privacy-focused solutions.

Mozilla has taken a critical stance on Google’s Chrome Privacy Sandbox set of technologies, acknowledging improvements but asserting the need for technical validation. They are also exploring the use of privacy-preserving technologies such as the Distributed Aggregation Protocol (DAP) and Oblivious HTTP (OHTTP) for collecting telemetry information.

While recognizing the value of advertising to support internet publishers, Mozilla deems the current state of the advertising ecosystem unsustainable. They have introduced features like Firefox’s “Total Cookie Protection” to enhance user privacy while still allowing essential functionality.

Mozilla has raised concerns about Google’s Privacy Sandbox standards potentially becoming the de facto norms, with the potential to impact privacy and competition. They advocate for responsible implementation of PETs to strike a balance between privacy and data collection.

Human involvement in data collection decisions is crucial to consider the risks to user privacy. Mozilla emphasizes the importance of accountability and responsible practices.

In summary, Mozilla Corporation distinguishes itself in the technology sector with its focus on user welfare and the public interest. They actively explore privacy-preserving technologies, criticize Google’s Privacy Sandbox, and advocate for responsible data collection practices. Through their efforts, Mozilla aims to foster a more privacy-protective and user-centered tech industry.

Wojciech Wiewiórowski

The European Data Protection Supervisor (EDPS) plays an essential role in safeguarding privacy within the European Union (EU). Their key priority is the effective implementation of privacy laws through the use of tools. The EDPS serves as a supervisor for EU institutions and offers advice during the legislative process, ensuring that privacy concerns are integrated into decision-making. Their ultimate goal is to promote a safer digital future by advocating for the use of IT architects and a comprehensive privacy engineering approach.

In line with the EDPS’s efforts, Wojciech Wiewiórowski, a prominent figure in the field, acknowledges and supports the work of non-governmental organizations (NGOs) in enforcing privacy policies. He recognizes the vital role that NGOs play and suggests that their work should have been undertaken by data protection commissions much earlier. This recognition highlights the importance of collaboration between regulatory bodies and NGOs in effectively safeguarding individuals’ privacy rights.

Furthermore, Eurostat, the statistical office of the European Union, has developed privacy-preserving tools such as trusted execution environments and trusted smart surveys. These innovative tools aim to ensure privacy while conducting official statistics. The United Nations has included these tools in their guide on privacy enhancing technologies for official statistics, further validating their importance and effectiveness in maintaining data privacy.

Overall, the European Data Protection Supervisor, Wojciech Wiewiรณrowski, and Eurostat are actively working to uphold privacy rights and create a safer digital environment. Their focus on utilizing tools and collaborating with NGOs demonstrates their commitment to establishing a robust framework for data protection. Embracing these initiatives provides individuals with greater confidence in the privacy of their personal information.

Clara Clark Nevola

Privacy enhancing technologies (PETs) are becoming increasingly important in today’s digital era as they enable data sharing while protecting privacy. The Information Commissioner’s Office (ICO) in the UK has recognised the significance of PETs and has released guidelines that outline how these technologies can support data minimisation, security, and protection.

The ICO’s guidelines highlight the role that PETs play in achieving data minimisation, which refers to the practice of only collecting and retaining the minimum amount of personal data necessary for a specific purpose. By implementing PETs, organisations can ensure that they are processing and sharing data only to the extent required, thereby reducing the risk of potential breaches or misuse.

Furthermore, PETs contribute to data security, addressing concerns about the potential vulnerability of shared data. Different types of PETs, such as homomorphic encryption, secure multi-party computation, and zero-knowledge proofs, offer various solutions for securing data in different sharing scenarios. Homomorphic encryption allows computations to be done on encrypted data without having to decrypt it, while secure multi-party computation enables multiple parties to perform a computation on their data without revealing any sensitive information. Zero-knowledge proofs allow the verification of a claim without revealing the supporting data. These technologies can help protect data integrity while allowing for collaboration and data sharing.
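The core idea behind secure multi-party computation can be illustrated with additive secret sharing, the building block of many MPC protocols. The sketch below is a minimal Python illustration, not a production protocol (real deployments use hardened libraries and authenticated channels), and the organisations and numbers in it are hypothetical.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares; any subset smaller than all of them reveals nothing."""
    return sum(shares) % PRIME

# Two organisations each hold a patient count. Each splits its value into
# shares and distributes them; each party adds the shares it holds and
# publishes only that partial sum, so no raw input is ever revealed.
hospital_shares = share(1200, 3)
council_shares = share(850, 3)
partial_sums = [(a + b) % PRIME for a, b in zip(hospital_shares, council_shares)]
joint_total = reconstruct(partial_sums)  # 2050, with neither input disclosed
```

Because addition commutes with the sharing, the reconstructed value is exactly the sum of the two inputs, while each individual share is indistinguishable from random noise.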

Anonymisation or de-identification is another key aspect of PETs. By applying these techniques, organisations can remove or alter personal identifiers, making it more difficult to link shared data to specific individuals. This helps to protect privacy while still allowing for data analysis and research.
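One common de-identification step, pseudonymisation with a keyed hash, can be sketched as below. The key, field names, and record are illustrative assumptions; note that pseudonymised data can remain personal data wherever the key permits re-identification, so this is weaker than full anonymisation.

```python
import hashlib
import hmac

# Illustrative key only; in practice it would be stored separately from
# the data and rotated under a key-management policy.
SECRET_KEY = b"example-pseudonymisation-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same input always yields the same token, so records can still
    be linked for analysis without exposing the raw value."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "postcode": "AB1 2CD", "visits": 7}
safe_record = {
    "patient_token": pseudonymise(record["name"]),
    "visits": record["visits"],  # keep only the fields the analysis needs
}
```

Dropping the postcode entirely, rather than hashing it, is deliberate: it is an example of collection minimisation applied at the sharing step.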

Despite the clear benefits of PETs, challenges remain. Technical standards for PETs need to be developed to ensure interoperability and ease of implementation. Additionally, the costs associated with implementing PETs can be high, posing a barrier to adoption for some organisations. Awareness and understanding of PETs also need to be improved, particularly among lower-tech organisations that could greatly benefit from them.

Data sharing itself poses challenges beyond legal considerations. Organisational and business barriers, such as concerns about reputation and commercial interests, can hinder data sharing efforts. Stakeholders often express reluctance to share their data due to uncertainties about how it will be used or what the outcomes may be.

To overcome these challenges, the ICO advocates for partnerships and collaborations between PET developers, academics, and traditional organisations like local governments and health bodies. By bringing together experts from different fields, these partnerships can elevate awareness and understanding of PETs and facilitate their adoption by traditional organisations.

In conclusion, privacy enhancing technologies are crucial tools for enabling data sharing and protecting privacy in the digital era. The ICO’s guidelines demonstrate how PETs can support data minimisation, security, and protection. While challenges exist in terms of technical standards, costs, and awareness, partnerships between PET developers and traditional organisations can help overcome these obstacles. By promoting the adoption of PETs, organisations can achieve a balance between data sharing and privacy protection, fostering innovation and collaboration while safeguarding individuals’ personal information.

Suchakra Sharma

The speakers in the discussion present different perspectives on privacy in software development. One speaker argues in favour of considering Privacy Enhancement Technologies (PETs) from the software perspective. This involves examining how software handles data, as it can provide insights into developers’ intentions and identify potential privacy violations. The speaker highlights the importance of evaluating the software in order to predict and prevent privacy breaches. As a solution, Privado is developing a tool that can assess how software handles data.

On the other hand, another speaker focuses on the significance of technically verifiable Privacy Impact Assessments (PIAs) in ensuring proactive privacy. They note that during software development, the necessary information for PIAs is already available. By incorporating PIAs into the development process, privacy regulations can be adhered to right from the design phase to deployment. To facilitate this, a tool has been built to perform verifiable PIAs, identifying potential privacy violations in advance. This approach is seen as a guarantee for proactive privacy.

The third speaker explores the possibility of certifying software for privacy compliance. They highlight the importance of evaluating the data processing and handling intentions of software. By doing so, privacy compliance checks can be conducted before the software is deployed. They suggest that regulatory laws such as GDPR and CCPA can be translated into fine-grained checks and tests for compliance. This certification process is considered a potential solution to ensure privacy in software development.
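As a rough illustration of how a regulatory requirement might be translated into a fine-grained, testable check, the sketch below models data flows (of the kind a code scanner could extract) as plain records and flags one violation pattern. The data model, field names, and rule are hypothetical, not the tooling's actual implementation.

```python
# Hypothetical data-flow records, as might be produced by scanning code.
SENSITIVE = {"email", "phone", "precise_location"}

flows = [
    {"data": "email", "sink": "third_party_sdk", "consent_recorded": False},
    {"data": "page_view", "sink": "internal_analytics", "consent_recorded": True},
]

def check_third_party_consent(flows):
    """Flag sensitive data sent to third-party sinks without a recorded
    consent basis -- one legal requirement expressed as a testable rule."""
    return [
        f for f in flows
        if f["data"] in SENSITIVE
        and f["sink"].startswith("third_party")
        and not f["consent_recorded"]
    ]

violations = check_third_party_consent(flows)  # flags the email flow only
```

A compliance "certification" in this style would be a suite of many such rules run against the extracted flows before deployment.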

In conclusion, the speakers all emphasize the need to evaluate how software handles data and ensure compliance with privacy regulations throughout the entire software development lifecycle. By considering PETs, performing verifiable PIAs, and certifying software for privacy compliance, proactive measures can be taken to protect privacy. These perspectives highlight the increasing importance of addressing privacy concerns in the software development process.

Maximilian Schrems

noyb, the privacy organisation co-founded by Maximilian Schrems, has developed a system that automates the generation and management of complaints about General Data Protection Regulation (GDPR) compliance. This system has proven effective, achieving a 42% compliance rate by proactively sending guidelines to companies.

The system operates by performing an auto-scan of websites to identify potential GDPR violations, which is then followed by manual verification. Once a violation is detected, the system auto-generates a complaint, which is then transferred to the violating company for action. Additionally, a platform is used for companies to provide feedback and declare their compliance.
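The scan, manual-verification, and complaint-template pipeline described above might be sketched as follows. The detection rule, template wording, and names are entirely illustrative assumptions, not the organisation's actual system.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    url: str
    issue: str
    verified: bool = False  # set True only after manual review

def auto_scan(url: str, banner_text: str) -> list:
    """Hypothetical automated rule: flag a site whose first-layer cookie
    banner offers no 'reject' option, a common complaint pattern."""
    findings = []
    if "reject" not in banner_text.lower():
        findings.append(Finding(url, "no first-layer reject option"))
    return findings

def generate_complaint(f: Finding) -> str:
    """Fill a complaint template from a verified finding."""
    return (f"Complaint regarding {f.url}: the consent banner provides no "
            f"equally prominent option to refuse consent ({f.issue}).")

findings = auto_scan("https://example.com", "Accept all | Manage settings")
findings[0].verified = True  # the manual-verification step
complaint = generate_complaint(findings[0])
```

The value of such a pipeline is that the expensive human step (verification) is applied only to pre-filtered candidates, while the complaint text itself comes from a well-proven template.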

Interestingly, the system has observed a domino effect, wherein even companies that were not directly intervened with have shown improved compliance. This suggests that the awareness and actions taken by some companies have influenced others in the industry to improve their GDPR compliance as well.

Data protection authorities recognise the potential for efficiency that new technologies can bring, but they also express concerns and high levels of interest. They acknowledge that utilising new technologies, such as the automated GDPR compliance system, can increase efficiency by eliminating trivial tasks and increasing the quality of work through the use of well-proven templates.

However, implementing new technology poses certain challenges. The adoption of new technology requires technical infrastructures, such as programmers, to support its implementation. Additionally, a culture shift is necessary for organisations to focus on specific tasks related to the new technology and adapt to the changes it brings.

In conclusion, noyb’s automated system for GDPR compliance has achieved a significant compliance rate and has demonstrated the potential for technology to enforce and improve GDPR compliance more efficiently. While there are challenges associated with implementing new technology, the benefits of increased efficiency and quality are substantial. Notably, the system has also influenced compliance improvements among companies that were not directly addressed, highlighting its positive impact on the industry as a whole.

Nicole Stephensen

The analysis explores different perspectives on privacy-enhancing technology and data protection. One argument presented is that privacy-enhancing technology should not replace good decision-making. It is emphasised that governments and organizations have a positive duty to ensure that their information practices accord with relevant privacy and data protection laws and community expectations. This suggests that while privacy-enhancing technology can be beneficial, it should not be solely relied upon to make ethical and responsible decisions regarding data privacy.

Another argument highlighted is the struggle faced by organizations in identifying and mitigating risks, particularly when dealing with large volumes of data or complex vendor relationships. Data leakage is mentioned as a common occurrence that often happens without the organization’s awareness, and it qualifies as a personal data breach. This indicates that organizations may face challenges in effectively managing and protecting data, especially in situations involving extensive data sets or intricate vendor arrangements.

However, the analysis also acknowledges the utility of privacy-enhancing technologies in controlling data leakage. Specifically, the example of Q-Privacy is provided as a tool that allows organizations to audit for data leakage and enforce rules about data usage. This suggests that privacy-enhancing technologies, particularly those focused on data accountability, can play a valuable role in preventing and controlling data leakage incidents.

Furthermore, the importance of prioritizing purpose specification and collection minimization in data protection practices is highlighted. The argument put forward states that these are the building blocks for a culture that limits the use and disclosure of personal data as much as possible. This implies that organizations should be cautious in collecting only necessary data and clearly defining the purposes for which it will be used. By doing so, they can actively contribute to a privacy-conscious environment.

Lastly, the analysis identifies several barriers to the implementation of privacy-enhancing technologies. These include the privacy maturity of the technology suppliers, their geographical location, and the budget of the organization. Additionally, it is noted that decision makers in the privacy domain tend to be more in the legal space and have a less technical focus, which could also be a barrier for adoption. This suggests that a multifaceted approach is necessary to address these barriers and promote the effective adoption and integration of privacy-enhancing technologies.

In conclusion, the analysis provides an overview of various perspectives on privacy-enhancing technology and data protection. It emphasizes the importance of good decision-making, compliance with privacy laws and community expectations, risk identification and mitigation, data accountability tools, purpose specification, and collection minimization in ensuring effective data protection practices. Moreover, the analysis sheds light on the challenges and barriers associated with the implementation of privacy-enhancing technologies, highlighting the need for a comprehensive approach to overcome these obstacles.

Christian Reimsbach Kounatze

In the realm of technology and privacy, it has been established that these two areas can provide scalable solutions to effectively address problems. Maximilian Schrems, a prominent figure in this field, emphasizes the advantages of implementing efficient systems that can eliminate trivial work and enhance the overall quality of work. By using proven templates and carefully selecting cases to work on, these systems greatly improve efficiency and productivity.

Privacy tools, in particular, are seen as indispensable in supporting the work of agencies involved in data protection. These tools enable agencies to effectively navigate the complex landscape of privacy management. However, barriers hinder the widespread adoption of privacy-enhancing technologies. Factors such as low budgets, a lack of technical focus in decision-making teams, and the prioritization of larger organizations impede the adoption and implementation of these technologies. Addressing these issues is crucial to fully benefitting from the advantages offered by privacy-enhancing technologies.

Automation is widely regarded as a crucial component in privacy management. It allows for scaling efforts and addressing the challenges posed by the ever-increasing scale of privacy concerns. However, human involvement should not be replaced entirely. Speakers agree that a balance must be struck between automation and human decision-making. While automation can streamline processes, human oversight and decision-making play an integral role in ensuring ethical and responsible practices. Striking this balance is key to realizing the full potential of automation in privacy management.

In conclusion, the speakers at the event highlighted the significant role that technology, privacy tools, and human involvement play in addressing problems and supporting the work of agencies in the realm of privacy and data protection. Scalable solutions, efficient systems, and the adoption of privacy-enhancing technologies are essential in tackling the challenges at hand. While automation is critical, it should not replace the human touch. By acknowledging these factors and working towards effective implementation, privacy can be ensured in an increasingly digital world.

Session transcript

Christian Reimsbach Kounatze :
Okay, I would say our speaker has arrived, so we can actually start the session. So welcome, everyone, to this IGF session on emerging privacy-enhancing technologies, but also a little bit more. As a short introduction, my name is Christian Reimsbach Kounatze. I’m a member of the OECD Secretariat in charge of privacy and data governance. Today we have an interesting set of speakers who will talk to us about the role of technologies for enhancing privacy and data governance with trust. We will not only talk about classic privacy-enhancing technologies such as, let’s pick one, homomorphic encryption, or federated learning, which has been discussed a lot in the past. We will actually have a broader discussion about the role of digital technologies: going beyond being the problem when it comes to privacy to becoming a solution, or being used as a solution, and the challenges related to that. So we have different speakers who will each make an intervention. We will start, indeed, with the role of privacy-enhancing technologies, but then we will move to broader discussions. And without further ado, I would like to invite our very first speaker, from the UK’s data protection authority, to make her intervention. And maybe I will let each of you briefly introduce yourself, because that is a little bit quicker than my going through each of you individually.
I would say let’s start with the very first presentation, and Clara, the floor is yours. But maybe very briefly, if I may: the idea, in terms of the run of show, is to have a series of interventions by our speakers. They have roughly seven minutes each, and after that we will have a first set of questions and discussions. We will then open the floor to the audience roughly 30 minutes before the end, and we may also have a second round after that. So be prepared. And Clara, the floor is yours. Please introduce yourself very briefly, talk a little bit about the ICO if you want, and then go ahead with the subject matter. Thank you.
Thank you. And can I just check, are the slides showing on your side? It’s showing pretty well. Yes. Okay, great.

Clara Clark Nevola:
So, my name is Clara Clark Nevola, and I’m joining you from the UK this morning. Well, my morning, I guess your afternoon. Thanks very much for having me, and I’m sorry I can’t be there in person. My bit will be to talk about the privacy-enhancing technologies aspect, and in doing so I’ll also introduce my role and maybe the role of the Information Commissioner. The way that the ICO regards privacy-enhancing technologies is basically as a tool to enable data sharing. If you’re not familiar with the Information Commissioner’s Office, we’re the UK’s independent data protection authority. We regulate data protection and wider information rights, and like other data protection authorities we are independent of government but publicly funded. We produce guidance, we take enforcement action, we provide advice and support for organisations and members of the public, and we also engage with governments and other stakeholders on advancing policy positions in this area. I sit within the technology policy team, and our role is to anticipate, understand and shape how emerging technologies and innovation impact people in society. And that’s very much how I’ve approached privacy-enhancing technologies. So maybe the first question would be: well, what are privacy-enhancing technologies? But actually I’m not going to start from there, because although it’s interesting to understand how they work and what they do, I think it’s more interesting to ask what a privacy-enhancing technology actually does for you. It’s quite a vague term; it covers multiple disparate techniques, and I see it more as a sort of toolbox, where each tool in the toolbox can do a different thing. Instead of explaining what a hammer is, it’s more interesting to see what a hammer can do for you. So if you have some furniture that you need to assemble, how can you put this furniture together?
So not so much how do you make a screwdriver, or the technical components of a screwdriver, but that the screwdriver allows you to screw two pieces together. And with that optic is how I’d invite you to approach privacy-enhancing technologies. So instead of focusing straight away on the tools, I’ll start with explaining what the problem is. What is this furniture that we’re trying to assemble? Broadly, the problem statement that we have is that data sharing is difficult. There are lots of different scenarios in which data sharing has challenges. These challenges are sometimes data protection law, but in many cases they’re much broader: reputational, commercial, organisational barriers. So typical scenarios of data sharing involve two or more organisations who are trying to share data between each other. For example, a hospital and the local government might want to share data to see what the overlap of patients or social services is. Another very common scenario is publication of data. This is no longer reciprocal sharing, but outputting data to an audience or to the public at large. Then we have putting multiple databases into one, where one organisation ingests data from multiple sources. You might think of a local government wanting to make road layout improvements: they need to take in data from the police, maybe from hospitals, maybe citizen feedback and campaign data, and bring it all together. And another typical scenario is simply the need to keep data secure. For example, if a government uses an external provider to host data, they may need to be sure that it’s particularly secure. So these are the sort of problem statements we have, with various tasks to be done. And now I’ll move on to explain which tools might be best to use for this, and this is where the privacy-enhancing technology bit fits in.
So what are they and what do they do? In the first scenario, where you need to share data between multiple parties, I’ve grouped together the types of privacy-enhancing technologies that would be useful in that scenario. I won’t dwell on them in detail given the time constraint, and I’m happy to come back to them later if anyone has questions, but I’ll give a brief overview. Homomorphic encryption: the underlying concept is that it allows computations to be performed on encrypted data without the data first being decrypted, which keeps the data much more secure and minimizes the access that anyone has to it. Secure multi-party computation is a relatively similar protocol, but more suitable for large groups. Zero-knowledge proofs are a bit different: they refer to a protocol where one person needs to prove something to somebody else, typically, say, that you’re above a certain age and so eligible for a certain activity, driving a car or purchasing alcohol. Instead of revealing the underlying data, maybe your date of birth, you can just prove that you’re over whatever the threshold is, so it minimizes the data that’s shared. For publication and ingestion, two techniques are applicable. Differential privacy is a way to prevent information about specific individuals being revealed, or inferences about them being made: it adds noise to the data and measures how much information about a certain person is revealed. Synthetic data, meanwhile, is essentially artificial data which replicates the patterns or statistical properties of real data. You take a real data set and generate a synthetic data set that maintains the same properties but is not the real underlying data, so it either anonymizes or significantly de-identifies the data, depending on which route you go down. And then, not finally, but I’ll mention federated learning before trusted execution environments. Federated learning is very useful for ingesting data from multiple sources.
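To make the noise-adding idea behind differential privacy a little more concrete, here is a minimal Python sketch of a Laplace-mechanism count query. All names and parameters here are illustrative, not from any particular product, and a real deployment would also track a privacy budget across repeated queries:

```python
import math
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse transform sampling."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=1.0, sensitivity=1.0):
    """Release a count with Laplace noise calibrated to epsilon.

    Smaller epsilon means more noise and a stronger privacy guarantee;
    sensitivity is how much one person can change the true count (here 1).
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(sensitivity / epsilon)

ages = [34, 29, 41, 52, 38, 45, 61, 27]
# Noisy answer to "how many people are over 40?" (true count: 4).
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5)
```

The published answer is close to the true count of 4, but because of the calibrated noise, no observer can confidently infer whether any single individual is in the data.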
Typically, you would need to move all the data to a central hub. Imagine you are developing a tool for medical imaging: you need to collect medical images from a whole group of hospitals to have a large enough data set to train the model you’re then going to use to analyse these images. With federated learning, you avoid the need to move the data across; instead, you train a model locally and then bring together centrally the improvements in that model. So it really reduces the need to share data. And then, finally, trusted execution environments are essentially a combination of hardware and software that allows data to be isolated within a system. So that’s the whistle-stop tour of the tools. I’ll move on to talk a little bit about our involvement in this area as the ICO. In June this year, we published guidance on privacy enhancing technologies; if you’d like more detail about anything I’ve talked about, I would highly recommend you read it. We focused on the link between these technologies and the benefits they bring under data protection law: how privacy enhancing technologies can support data minimization, data security, and data protection by design and by default. We’ve provided explanations of all the technologies for people who are not familiar with them, and also that mapping between the use of a tool and compliance with the law, to help both decision makers in organizations and developers of these technologies. That’s a flavor of what the guidance contains: this one-to-one mapping of, OK, you’re using a tool; how should you use it and how will it help? And we’ve also provided examples of scenarios in which PETs could be appropriate. I’m sure we’ll come back to talking a bit more about the risks and benefits, but I think it’s important to note that privacy enhancing technologies really, really help with data sharing and data reuse.
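The "bring together centrally the improvements" step of federated learning can be sketched in a few lines of Python. This is a toy federated-averaging illustration, not any hospital's real system; the weight vectors and site names are hypothetical:

```python
# Hypothetical local model updates: each "hospital" trains locally and
# shares only its model weights (a list of floats), never raw images.

def federated_average(local_weights):
    """Average per-parameter weights from several local models (FedAvg-style)."""
    n_models = len(local_weights)
    n_params = len(local_weights[0])
    return [sum(w[i] for w in local_weights) / n_models for i in range(n_params)]

hospital_a = [0.2, 0.4, 0.6]  # weights after local training at site A
hospital_b = [0.4, 0.2, 0.8]  # site B's images never leave site B
hospital_c = [0.3, 0.3, 0.7]

global_model = federated_average([hospital_a, hospital_b, hospital_c])
# global_model is roughly [0.3, 0.3, 0.7]; only weights ever travel.
```

In a real system each round would repeat this cycle (distribute the global model, train locally, average the updates), often weighting each site by how much data it holds, but the central server still only ever sees model parameters.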
But they’re still a relatively emerging field. There are some great examples of them being used already in practice, so they’re not just an academic concept, but they’re still relatively new, so there are still issues with maturity and expertise. I can see Christian’s looking at me, so I’m going to finish up by saying that there are still a few challenges to solve. You know, they’re a great screwdriver, but they’re maybe not yet an electric screwdriver, and there are still questions about how to match up the users of privacy enhancing technologies with the developers: how do you bring the expertise to the people who need it, how can technical standards develop in this area, and how can costs be brought down? So, that’s my introduction to privacy enhancing technologies, and I’ll hand back to Christian.

Christian Reimsbach Kounatze:
Thank you very much, Clara. Maybe before we move on to the next presenter, I just want to provide a little bit of context about why we started with Clara’s presentation, because I realize that I maybe missed clarifying that point. The reason is that privacy enhancing technologies have traditionally been looked at as essentially the first kind of approaches and tools in this space: if you ask people to think about the role of digital technologies in protecting privacy, they would typically point to privacy enhancing technologies. And as Clara’s presentation has highlighted, this has definitely evolved, with the new types of privacy enhancing technologies that she addressed. Maybe, if I may, Clara, one question, because it also opens things up a little for the later discussion: why has the ICO decided to look into this and to publish the guidance? If you could elaborate on that a little before we then move to our next presenter, who is sitting next to me.

Clara Clark Nevola:
Of course. We’ve long been advocates of responsible data sharing, and stakeholders frequently tell us that data sharing is really hard: no matter how much we say that data protection is not a barrier to data sharing, there are always challenges. A lot of the challenges we were seeing were not so much legal; they were more organizational and business-related, in the sense that you might have a data set and not want to share it because you don’t know what’s going to happen to it afterwards, which is a legitimate concern. With privacy enhancing technology, you can massively reduce that risk. I was talking about homomorphic encryption: if you hand over a data set to a third party, you don’t know what they’re going to do with it. You have a contract that says how they can use it, but you don’t have ultimate visibility. Whereas if you implement homomorphic encryption, there’s a technical limit to the queries that can be run, so you have a guarantee that the data is only being queried for a pre-approved set of things. So we thought it was exciting and useful for developing data sharing.

Christian Reimsbach Kounatze:
Yeah, thank you. I think now it’s a good time to move to our next speaker. I will also ask you to introduce yourself, but maybe first, as a kind of context for why you are next: we thought that everybody probably knows the Mozilla Foundation, and they are obviously also using privacy enhancing technologies, as we will hear. So it is a good illustration not only of the potential of privacy enhancing technologies, but also an example where every one of us is potentially interacting with this kind of technology. So again, please introduce yourself, and maybe you want to talk about the Mozilla Foundation as well.

Udbhav Tiwari:
Thank you. So, hi, I’m Udbhav Tiwari. I work with Mozilla’s public policy team, where I’m the head of global product policy. My job is to work with internal technical experts and external regulators and lawmakers, to help them understand the consequences of regulation, as well as ways in which that regulation could be improved to further Mozilla’s mission. Mozilla is a unique organization because, while we’re of course known most for our browser, we’re actually a corporation that’s owned by a foundation: the Mozilla Corporation has a single share that’s owned by the Mozilla Foundation. That means most of the typical incentives that apply in the technology sector, shareholder pressure and the drive for profits, don’t necessarily apply to us, and at some level we believe those incentives are responsible for some of the more egregious practices when it comes to data collection in this space. The reason that context is particularly important for this session is that when we started the Firefox browser, now almost 25 years ago, Mozilla for the first maybe 10 to 15 years had a very strong policy of simply not collecting any data at all. Usually, when organizations say that, they’re actually talking about user data. For example, even today, browsing history synced through Mozilla is end-to-end encrypted, which means that if you have history, say, on your desktop and you’re accessing it on your phone, the only two places where that history exists in an unencrypted format are those two devices; Mozilla does not have access to it. But 15 years ago, we didn’t even collect any telemetry. That came both from our very strong privacy credentials and the idea that we would not collect any data at all, even if it’s not directly about our users or their practices.
But ultimately, we realized as we became a more popular browser that, for a product people used to access hundreds of millions, in fact billions, of websites around the internet, not having any telemetry would mean we would never be able to make a product that actually served our users. That telemetry was used to detect which websites were breaking and which were throwing compatibility errors, so that we could investigate those websites, speak directly to developers to resolve the issues, and make changes in our products to help make sure they don’t happen again. That’s the period when we started exploring privacy-preserving ways of collecting this information, which within Mozilla essentially means separating the who from the what. That separation has been quite a long journey for us, and specifically over the last three to four years it has crystallized around maybe three issues. Those are the three examples I will be talking about, both to explore Mozilla’s thinking and to react to developments taking place in the external world. The first is that there’s definitely been a recognition that the proliferation of internet availability, bandwidth and connectivity, along with computational power, has enabled certain kinds of privacy-preserving technologies today that were not available, or not as feasible, a few years ago. The second is that privacy post-2014 has, both because of laws like the GDPR and because of reputational concerns, actively started to become a differentiator between products: people are choosing products because of privacy, so the net investment coming into these technologies has increased. And finally, and this is related to Mozilla but something we don’t do ourselves, there are the developments taking place in the advertising ecosystem.
Specifically, Google’s Chrome Privacy Sandbox set of technologies, which has garnered a lot of attention over the last couple of years for attempting to do all the parts of the advertising ecosystem, targeting, attribution, remarketing, in a more privacy-preserving manner. Mozilla has arguably been one of the biggest and most vocal critics of some of these technologies, because we think that while they are better than the current practices enabled by the third-party ecosystem, the technical validation of many of the claims they make still requires some work. Those are the three things I’m going to talk about. On the first piece, Mozilla’s own practices, there are, I would say, two standards that people at Mozilla have been integrally involved in that are now almost done at the IETF; one of them is actually done. One of them is Oblivious HTTP.

And the other is DAP, the Distributed Aggregation Protocol. Both of these standards essentially
work by sending data through an intermediary or proxy that separates where the data is coming from from what the actual substance of that data is. For the individuals in the room and on the session: if you use Apple’s Private Relay service, which is available on iOS, it works in a very similar manner, so that even Apple does not know either your DNS lookups or your browsing history, because traffic is first sent to a proxy, where the proxy strips the information about where it’s coming from, and then it’s sent on to the destination. Mozilla is actively exploring ways in which we could use these technologies to collect telemetry information, and we expect to make some announcements in this regard in the coming weeks and months. There’s been a lot of progress, but one of the things that has held us back, I would say, is that the number of players in the ecosystem willing to engage with these technologies is still quite limited, both on the demand side, that is, how many players actually want to collect data in these privacy-preserving ways, and on the supply side. As you can imagine, more suppliers, more customers and more competition would make these services cheaper, and that has definitely not happened yet, despite the fact that, in comparison to some of the more complicated and possibly more promising technologies like homomorphic encryption, these are much, much cheaper. And it’s not actually the technology that is holding the deployment of DAP or Oblivious HTTP back; it’s the fact that there are very few service providers offering the infrastructure needed to utilize these technologies, which are, relatively speaking, much easier to implement than differential privacy or homomorphic encryption.
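The "separate the who from the what" idea behind Oblivious HTTP and DAP can be illustrated with a deliberately simplified Python model. This is a toy, not the real protocol: real OHTTP encrypts the payload to the collector's public key so the relay cannot read it, whereas here we only simulate the split in who sees what, and all class and field names are hypothetical:

```python
# Toy model of the "who vs. what" split: the relay learns the sender's
# identity but forwards the payload opaquely; the collector receives the
# payload but never learns where it came from.

class Collector:
    def __init__(self):
        self.reports = []

    def receive(self, sealed_payload):
        self.reports.append(sealed_payload)  # learns "what", never an IP

class Relay:
    def __init__(self, collector):
        self.collector = collector
        self.seen_senders = []

    def forward(self, sender_ip, sealed_payload):
        self.seen_senders.append(sender_ip)      # learns "who"
        self.collector.receive(sealed_payload)   # forwards "what" untouched

collector = Collector()
relay = Relay(collector)
relay.forward("203.0.113.7", {"metric": "page_load_ms", "value": 420})
# collector.reports now holds the telemetry with no sender identity;
# relay.seen_senders holds the IP with no readable payload (in the real
# protocol the payload would be ciphertext at this hop).
```

The privacy property depends on the relay and collector not colluding, which is why these deployments involve independent operators for the two roles.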
On the second point, Mozilla’s own thinking with regard to developments in this space: when it comes to the evolution around targeted advertising, it’s almost certain now that the only browser in the market that has not yet disabled third-party cookies by default is Google Chrome. The pressure that Google has been subject to on this, by privacy advocates and by regulators, has been quite high. So what Google has done is propose a set of technologies, the Privacy Sandbox technologies, that attempt to do what the current advertising ecosystem does in a more privacy-preserving manner. What Mozilla has said more broadly is that we support the concept and why the idea exists, because Mozilla, for example, does not block ads by default in Firefox; we do believe that advertising is a valid way to support publishers on the internet. However, we think that the current state of the advertising ecosystem is absolutely unsustainable. That’s the reason we block trackers and fingerprinters, and all of the underlying infrastructure that enables that ecosystem, including third-party cookies, is actively harmful to user privacy and security. We’ve done a lot of technical work in the last couple of years to address that. The biggest piece there is TCP, or Total Cookie Protection, which creates separate jars of information. When you visit a website, say newyorktimes.com, and there’s a button on it that lets you like the page on Facebook or share it on Facebook, Facebook gets the ability to drop a cookie onto your computer that will then also note the fact that you’ve been to newyorktimes.com, to instagram.com, to washingtonpost.com, which may also have that button.
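The partitioning idea behind Total Cookie Protection can be sketched as a toy Python model. This is not Firefox's actual implementation, just an illustration of keying third-party cookies by the top-level site, with hypothetical names throughout:

```python
# Toy model of per-site cookie "jars": a third-party cookie is stored
# under the top-level site where it was set, so facebook.com's cookie
# from a visit to newyorktimes.com is invisible while browsing
# washingtonpost.com, even though both embed the same button.

class PartitionedCookieStore:
    def __init__(self):
        self.jars = {}  # top-level site -> {third party -> cookie}

    def set_cookie(self, top_level_site, third_party, cookie):
        self.jars.setdefault(top_level_site, {})[third_party] = cookie

    def get_cookie(self, top_level_site, third_party):
        return self.jars.get(top_level_site, {}).get(third_party)

store = PartitionedCookieStore()
store.set_cookie("newyorktimes.com", "facebook.com", "uid=abc123")

same_site = store.get_cookie("newyorktimes.com", "facebook.com")    # "uid=abc123"
other_site = store.get_cookie("washingtonpost.com", "facebook.com")  # None
```

Because each jar is scoped to one top-level site, the third party can no longer join a user's identity across the sites they visit.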
What Firefox does is create a separate jar for each website you access, into which the cookies for that site and many other identifiers are dropped, and these jars cannot talk to each other. That’s a way of limiting the harm of the ecosystem while still giving users the benefits of third-party cookies, because we also use heuristics to determine whether something is an advertising third-party cookie or a third-party cookie that’s actually enabling single sign-on, which is essentially when you click on “sign in with Google” or “sign in with Apple” on different websites. As we developed these technologies, the one thing we realized is, firstly, that it’s actually possible to give users a good, balanced experience between those two things: not being tracked, but still being able to support publishers if they choose to do so, with the option to go to the Mozilla add-on store and download an ad blocker if that’s what they want. So we think that choice has been very valuable. And finally, because I know I’m at time as well, on the Google Privacy Sandbox piece, what we have said is that right now there’s a very serious
risk that the standards and technologies under the Google Privacy Sandbox will become the de facto way in which large parts of these activities are carried out on the internet. We think that’s both a privacy concern and, more importantly, a competition concern, because of the interplay between privacy and competition: traditional advertisers who are not Google don’t like those technologies, because they say they will make Google’s own technology and first-party data more valuable, and people like us and privacy advocates don’t like those technologies because they don’t go far enough. So it’s definitely a scenario where everyone is quite unhappy with the state of play. But what Mozilla thinks is that if these standards are going to be deployed, and they are, Google has announced that it will stop third-party cookies by the end of next year, then they should happen at standards bodies, because there is a process at standards bodies like the W3C and the IETF that vets and validates these standards, both for their technical capabilities and for their potential for interoperability with other ecosystems. In a world where more than 60% of the individuals who use the internet are running on a variant of Chrome, that is, the Chromium browser engine, these technologies have a very strong ability to shape what the future of the internet, advertising and tracking may look like. And while they are privacy enhancing technologies, if privacy enhancing technologies like them are adopted at the scale at which they will be adopted, they need a lot more scrutiny than they have received so far. That’s why we’ve advocated a lot with the Competition and Markets Authority in the UK, and we’ve also engaged with many other regulators around the world, both privacy and competition, advocating for why processes need to be better; some of that has included conversations with Google as well. So with that I’ll end, and I’m happy
to answer any questions.

Christian Reimsbach Kounatze:
Thank you, thank you very much. I think you raised quite a number of points that we will definitely need to come back to during our discussion. One of them, if I may, because it also opens the door a little for the next intervention, but more broadly for all of us, is the question of the difficulties of validating technical claims, and what that means for selection, and for policymakers and regulators trying to promote the adoption of privacy enhancing technologies, and also the issue of interoperability; that is a topic I would also like us to discuss. What I also found interesting was that you talked about the current state of the advertising ecosystem and highlighted that there are obviously some challenges. Our next speakers, starting with Max and then Stefan, will address exactly that state. What is more interesting, and this is really why I am looking forward to their presentation, is that they will be talking about a different role for technologies in supporting privacy, which is the enforcement side. Interestingly, you mentioned that a lot of these technologies have gained higher adoption thanks to the GDPR; so we have a legal regime in place, but we will now hear what is happening with cookies and how they’re being used. So, without further ado, I will give you the floor, Max. I understand you will co-present with Stefan, so I’ll let you manage that between the two of you. The floor is yours; please introduce yourself and what your NGO does. Thanks a lot.

Maximilian Schrems:
Thanks for the invitation, and an early good morning from Vienna. Stefan is here for the second part, for practicality reasons; I’m just going to do the presentation myself. Stefan is the developer who actually works on a lot of these things, so to get us out of a policy-only discussion and into some hands-on discussion, that is especially what Stefan is here for. I’m going to run through our presentation, trying to be as quick as possible. Fundamentally, we at noyb do different enforcement projects. We do deep dives where there’s really a big legal issue, but we also see that there are just mass violations: cases where the GDPR is simply violated, which I usually compare to speeding. It’s not a big, complicated legal case, not an overly dramatic situation; we just see the law being violated en masse. Typically, in the privacy community, or even the wider digital community, we still work on most of that in a rather analog way; when lawyers work on digital issues, it usually gets about as digital as Word, and that’s it. So the idea was: if we have these hundreds and hundreds of violations, we have to speed up, especially as a small organization funded mainly by donations. You have to be efficient in what you’re doing, which is a similar issue for governments as well, I guess. What we thought about, in terms of how to approach all of that, is a bit like a speed camera. I can tell you from an Austrian perspective: if you speed in Austria, your license plate is typically read automatically by the speed camera, the speed is calculated automatically, and it’s automatically turned into a ticket that is mailed to you, with a code to pay the fine. There is no human intervention in any of these legal procedures anymore.
They’re fully automated, and for standard violations that’s basically what we do in other areas of the law as well, because it’s just inefficient to use people for that. We thought we would take that thinking and apply it to web technologies for now, with future plans to use it for mobile technologies as well, for example. The idea was basically to come up with a multi-step system that allows us to generate complaints automatically, manage them automatically, and also settle cases with companies automatically, without the need to send hundreds of emails back and forth. In the background, for the tech geeks, this is all a big MongoDB, now a PostgreSQL database, plus a file system where all of this lands. I’m going to go roughly through the steps of how it works, to make it a bit practical. We started with OneTrust. It’s the biggest provider of cookie banners, the kind of standard cookie banners you see at least in the European Union. These are typically provided by four or five bigger service providers; websites usually don’t build their own cookie banner, they use one of these services. That also allowed us to scale up, because we know thousands of websites are using exactly the same software for their cookie banner. And OneTrust has a JSON configuration file where most of the banner’s settings are stored, so the computer can read it quite easily. For example, there is a setting like “banner show reject all button: false”, so the banner doesn’t show you a reject button on the first layer, and you can take that right from the JSON file to know whether it’s there or not. The same goes for many other of the configurations of a cookie banner.
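A scanner along these lines might read the banner configuration rather than rendering the page. The following Python sketch shows the idea only; the field names are illustrative, not the real OneTrust schema, and a real scanner would of course fetch the JSON from the live site:

```python
import json

# Hypothetical cookie-banner configuration in the style described above:
# the banner's behaviour is stored as JSON, so a scanner can check the
# settings directly. These field names are made up for illustration.
config_json = """
{
  "BannerShowRejectAllButton": false,
  "BannerCloseButton": true,
  "PreselectedCategories": ["performance", "targeting"]
}
"""

def find_violations(config: dict) -> list[str]:
    """Flag banner settings that a reviewer would treat as violations."""
    violations = []
    if not config.get("BannerShowRejectAllButton", False):
        violations.append("no reject button on the first layer")
    if config.get("PreselectedCategories"):
        violations.append("non-essential categories pre-ticked")
    return violations

violations = find_violations(json.loads(config_json))
# → ['no reject button on the first layer', 'non-essential categories pre-ticked']
```

Because thousands of sites share the same banner software, one rule set like this scales across all of them, which is exactly what makes the approach efficient.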
In the background, OneTrust provides an interface where the admin can change these settings, so we also took screenshots to show companies which button they would have to fix to comply with the GDPR. That was basically the back end: the technological way of saving these settings that we could auto-collect. We then did a first code search. There’s a website called PublicWWW, for example, where, much as you search on Google, you can search for code within websites, to see which software a website is using. That gives you a list of all the websites using, in this case, the OneTrust cookie banner, which allowed us to focus only on websites that actually use it, rather than scraping the whole web for random pages. We then first auto-scan the website to see if there are any violations, and after that we have a manual check where an individual really goes to the website and verifies it. We usually had a two-screen setup, with a test environment on one side, which was a virtual machine (we’re changing that right now), and a management interface on the other where you can manage the case yourself. We need that human step also because, under the law, we need a data subject, someone who is directly concerned, to actually bring a case. All of that gives you a big, fancy list where you can filter all the cases, then take a case and do your assessment. We only filed if both the human and the computer decided it was a violation; this two-have-to-agree system makes sure there’s a low error rate.
Once that’s done, we auto-generate a complaint: text blocks that generate a PDF, with certain elements filled in automatically and certain elements switched on and off, depending on which violations were found on the website, fed from the JSON file and from what was found in the scan. We typically sent that to the individual company first. One of the biggest issues was making sure they didn’t dismiss it as fake, because if people get notice of a legal procedure against them and don’t believe it’s real, most will just throw it away. We even tried using some of the techniques the companies themselves use. They typically use A/B testing, for example, to figure out which type of interaction works best. So we A/B tested as well, and saw that different types of emails sent to a company produced better or worse compliance rates. We thought: if they can manipulate users into clicking the “yes” button with A/B testing, we can probably nudge them into compliance with the law with A/B testing too. We even included a full guideline on how to be fully compliant, so it was served up on a silver platter. If companies decided to comply, they could go to a platform where they could log in with a case number and a password, and let us know that they had fully complied and fixed the problem. We were then able to scan and verify that automatically. And from a lawyer’s perspective, we got all the feedback from the companies in an automated, structured format, so we didn’t have hundreds of emails from law firms sending endless text. Now, what’s super interesting is looking at this from a statistical point of view.
We did a first version, pretty much what I showed you, in a more duct-tape-technology form, as a first test to see how well it worked. Two things were interesting. First of all, we had a 42% compliance rate just by sending companies an email with specific instructions on what’s legal and what’s not, and a note that further action would be taken if they did not comply. That was already a huge number, better than what we get from the data protection authorities when we bring cases there, so it was really interesting that we had such a good compliance rate. The second interesting thing, which I won’t go into, is that compliance differed per violation. As a side note, only about 18% complied fully, because we typically found six or seven violations and they fixed only some of them; the 42% is the overall rate across violations. But the really interesting part was the domino effect that came out of it. Typically in law, we do not go after every single person who is speeding; we intervene often enough that people feel that speeding can actually be a problem. We scanned about 5,000 pages but initially only sent emails to about 500. When we continued with the rest, we suddenly saw that hundreds of the other websites had all fixed their cookie banners, even though we had never contacted them. So what happened? In the background, companies understood that there was now actually enforcement action going on; they heard it from a colleague, or from a software provider that also sent emails around. Suddenly we saw a huge amount of compliance without even intervening, and that’s exactly the idea of general deterrence
that we usually have in other areas of the law, which works well once you can speed things up and be a credible threat, a credible intervention. Now, just to wrap up: we are upgrading this into a long-term project, which is Stefan’s main job right now, getting all of this into a very structured, usable form. We’re also doing it in a way that the authorities could ideally use in the future. We’ve added a bigger admin panel where you can manage all of these cases, and made everything much more modular, so you can go back and forth between steps where it used to be more linear. That adds a lot of options to manage, attribute and filter cases better; we can, for example, decide to only bring certain kinds of cases. The other thing we’re doing is upgrading a lot of the individual functionality: the first version was cookie banners, but you could use this tomorrow for tracking pixels, for some other web technology, for a script, for anything else. These modular parts can basically be plugged into the software and taken back out, and that is fundamentally what will make this version very different. The rest is mainly making the interfaces usable for an average lawyer, so that we don’t need a tech person every time something in a PDF needs changing. For us, this was really one of the most useful projects we’ve done, especially considering the input-output ratio, and it really moves enforcement forward. We’re working in a digital sphere but still running pretty analog procedures, and we could probably learn from a lot of other areas on how to do that better. So thanks, and I hope Stefan can jump in if there are questions, especially technical ones.
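The text-block complaint generation Max described earlier, with blocks switched on and off per violation, might be sketched like this in Python. All block names and wording here are illustrative, not noyb's actual templates, and the real system renders a PDF rather than plain text:

```python
# Sketch of template-based complaint assembly: each detected violation
# maps to a pre-approved text block, and the complaint is built from
# whichever blocks the scan switched on. Wording is illustrative only.
TEXT_BLOCKS = {
    "no_reject_button": "The first layer of the banner offers no 'reject' option.",
    "preticked_boxes": "Non-essential cookie categories are pre-selected.",
}

def generate_complaint(website: str, violations: list[str]) -> str:
    """Assemble a complaint from the text blocks matching the findings."""
    lines = [f"Complaint concerning {website}", ""]
    for v in violations:
        lines.append(f"- {TEXT_BLOCKS[v]}")
    lines.append("")
    lines.append("Please remedy the violations identified above.")
    return "\n".join(lines)

complaint = generate_complaint("example.com", ["no_reject_button"])
```

Because the blocks are written once and reviewed by senior lawyers, every generated complaint inherits that quality, which is the efficiency-plus-quality point Max makes below.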
Okay, thank you.

Christian Reimsbach Kounatze :
Thank you very much, Max. I have probably one brief question, if you could elaborate on that, because it will also be a good transition to the next speaker. You mentioned that you had talked to data protection authorities. Could you briefly say what feedback you received, and how high the interest is among those data protection authorities in implementing this kind of tool in their processes? So I think on a very personal level,

Maximilian Schrems:
if I may say, I think the answer was a mix of fear, because it's too much, a different world they had never seen, and high interest, in the sense of: how can we be efficient in our work and also, let's say, get rid of useless work for employees, because a lot of these tiny things are just very trivial. You don't need a lawyer for a lot of that. One element that I forgot to mention: the quality usually gets better, because if you have a one-time template that was vetted by the more senior people, you know that what you're doing is going to produce good results, while if you have a more junior person doing it for the first time, there's a very good chance that something goes wrong or gets forgotten. So it's efficiency plus quality that you can get through systems that work well. However, the big problem in reality is that you need to implement it: you need programmers, you need people that really understand this, and you need the management skills to find the right cases, because this doesn't work for every case. A big thing for us was also not to get entangled in details anymore, to really tell the lawyers: we're only doing these two things. There may be ten other violations on the website, which we just ignore for now, because that doesn't scale. That is a bit of a culture change that you need as well, even when it's annoying, to say: okay, we just go for this one topic, we do that well and quickly, and next time we do the next topic. That's a very different approach from procedures where we usually do everything.
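The "proven template" point above, generating every notice from one senior-vetted template so quality does not depend on who runs the case, could be sketched like this. The template text and field names are invented for illustration:

```python
# Sketch: render enforcement notices from a single reviewed template,
# so each case only supplies structured facts, never free-form legal text.
from string import Template

NOTICE = Template(
    "Dear $company,\n"
    "our scan of $url found the following issues: $violations.\n"
    "Please remedy these within $days days to avoid further action."
)

def render(case: dict) -> str:
    """Fill the vetted template from a case record."""
    return NOTICE.substitute(
        company=case["company"],
        url=case["url"],
        violations="; ".join(case["violations"]),
        days=case.get("days", 60),  # default deadline is an assumption
    )

print(render({"company": "Example GmbH", "url": "example.org",
              "violations": ["no reject button on first layer"]}))
```

Because the wording lives in one place, a correction by a senior reviewer immediately propagates to every future notice, which is the efficiency-plus-quality effect described above.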

Christian Reimsbach Kounatze :
Cool. Just to highlight, I've noted that, because it's a good topic for the later discussion: you just mentioned the word scale. And that is definitely one of the common themes when it comes to using technologies for addressing privacy problems: we have a potential solution, or let's say a support for a solution, part of the solution, that basically helps us scale with the scale of the problem, so to speak. But we will get to that point, hopefully. Now it's my pleasure to give the floor to the European Data Protection Supervisor. And obviously one particular question, Wojciech, given that you are following Max's presentation, is to what extent these tools are relevant for your agency, but also for your colleagues' agencies. And maybe also talk about the role of privacy technologies, or technologies more broadly, in supporting your work and your cause. So the floor is yours. Thank you very much.

Wojciech Wiewiórowski:
Thank you for having me here. Thank you for being able to talk with you, even at such an early hour here in Brussels. So all the best from Brussels. First, a few words on the institution itself, the European Data Protection Supervisor. I guess most of you are familiar with it, but for those who are hearing for the first time about the very complicated system of privacy governance in Europe: the European Data Protection Supervisor is the supervisor of the EU institutions, bodies, and agencies. So I'm not the super data protection commissioner for all of Europe; I'm the commissioner for the EU bodies and EU institutions. At the same time, we have 27 member state jurisdictions and 27 data protection commissioners, one in each member state. Some of them have an even more complicated structure. Anyway, what is rather more important for today's discussion is not our supervisory role towards the EU institutions, but the fact that we are advisors in the legislative process in the European Union, and also the fact that we provide the Secretariat for the European Data Protection Board, which consists of all these data protection authorities. So I'm not speaking in the name of all these authorities, but I can give you a sense of the approach that we have among the European data protection authorities. Well, it's a good idea to put me just after Max, because I can react to what he said about the resonance that his work creates among the data protection authorities and in the market. It's true that there are a lot of data protection authorities who are interested in the practical deployment of solutions similar to the ones that noyb builds. It's also true that for some data protection authorities it's strange that an NGO, the civil society movement, can do the things which are called enforcement. Actually, it is enforcement. That is the way to make things run.
And I'm saying that also as a person who has always said that what Max did was what the data protection commissioners should have done ten years earlier. And they never asked the right questions. Anyway, coming back to the main point of discussion: it's true that the very tools that are prepared by noyb, including the information retrieval systems connected with them, are things that should exist in most of the data protection authorities, especially those that have an IT structure really independent from other institutions. As data protection authorities, we rather try to deal with things through law and guidelines. But it's true that some of the data protection authorities do have their laboratories and IT teams that prepare such tools as well. We try to do that as the European Data Protection Supervisor too, because we still remember that there is a kind of limit to the legislative actions we can take. Making more law does not necessarily help. The point we have reached in the European Union is that we have the law, and the law is not bad. The thing is that we have to operationalize it, also by promoting the role of IT architects and promoting a comprehensive privacy engineering approach. That is something which lies at the roots of our strategy as the EDPS. And in our strategy for this mandate, shaping a safer digital future, a new strategy for a new decade, we put the tools in as one of the pillars. So we are going to use the tools and we are going to develop new ones. Of course, as I said, it's not that easy for all the data protection authorities to create a laboratory where these tools are really produced. But authorities like the ICO, like the CNIL, like the Canadian authority, like some of the German authorities, are ready to do it and are ready to prepare their own tools.
What we do as the EDPS, apart from smaller things connected with remote control and remote audits, is try to organize the community. We have IPEN, the Internet Privacy Engineering Network, which is a platform for engineers who are preparing the best solutions, to discuss them and to disseminate information about different solutions from different organizations. But we also try to make use of the fact that the European Union institutions are some 70 institutions which have their own achievements in this field. Let me give just two examples of such solutions, both coming from Eurostat, the statistical office and the agency which deals with statistics in the European Union. They are both also given as examples in the current United Nations guide on privacy-enhancing technologies for official statistics. The first one is the processing of longitudinal mobile network operator data, where Eurostat has developed a proof-of-concept solution with a technology provider. The main goal of this project is to explore the feasibility of secure private computing solutions for privacy-preserving processing of mobile network operator data. The technology used for the project was a trusted execution environment with hardware isolation, which has been delivered by the market. So it is not only Eurostat preparing this; Eurostat is deploying it and, let's say, localizing it, but business is involved in it. And the second one, also from Eurostat, is the development of trusted smart surveys. Once again, that's a situation in which Eurostat is trying to localize, on the IT infrastructure of the EU institutions, a solution which was prepared for the market.
So these are the things that we develop, these are the things that we try to promote, and this is the kind of culture which we try to instil among the clerks of the quite byzantine institution that the European Union administration is. Thank you very much, Wojciech. Just a question, because I think

Christian Reimsbach Kounatze :
what I liked about the examples that you pointed out was that you were essentially directing us towards a solution for how to promote the use of those different technologies, and you gave examples of data protection authorities that are leading the way. I was wondering if you could also talk a little bit about the importance of guidance in that particular role; or maybe we can discuss that later on, when we talk about solutions for promoting this, because obviously this is where the UK ICO guidance plays a role. So let's put that to one side, because I just realized that time is running and we need to move on, sorry, to our next speaker. And here I would give you the floor, Suchakra, to introduce yourself and your organization, what it does, and how it relates to the discussion about technology and the role of technology in privacy protection. One of the key elements, at least from my understanding, is that what you are doing helps us scale with the problem and address some of the issues related to privacy. But I let you talk and introduce yourself. So I'll just share my screen as well so

Suchakra Sharma:
everybody can see, and then we can talk. So I am Suchakra. I am chief scientist of this nice little startup called Privado. And what we are trying to do is look at PETs from a very different perspective. The way PETs have been developed so far, and the way privacy itself is generally looked at, is from the perspective of the user. But what we are thinking is that data is not floating in the ether everywhere. It is moved from one system to another system by software. So why not just look at the software itself which is handling the data? It can give you an interesting perspective on what the intention of the developer was when they were developing the software. And then you can track what is happening. So essentially we are trying to catch privacy violations before they even manifest inside the system. Even before you release software, you can actually understand how it's going to handle data. And you can do it at all the points in the chain where the software is handling the data. So, as Max was pointing out, automating everything, a ticketing system: that ticketing system is using software. When it takes a photo of a car, it captures some information, that's private information, and then it translates into a ticket, which goes through five or six systems behind it. Those are all points where data is flowing. How about we understand that whole system itself, and then we can predict what's going to happen to the data? So that's the perspective. So I'm Suchakra Sharma, I'm the Chief Scientist here. I have a PhD in Computer Engineering, have been working in cybersecurity for about six years, and almost two years now in privacy. I'm going to bring all the learnings that I have from the cybersecurity industry into this environment now. Okay, so visiting a doctor, this is how you do it these days.
You fill out a form, you have a lot of private information there, PHI, the doctor looks at it, keeps it safe for some time, and then it gets shredded, hopefully. But now we have something new in this millennium. We have software, and the software is now handling your data. So things have not changed much, but with the advent of the software, what has happened is that this data gets exchanged through multiple hands, goes through logs, gets to an advertiser. You don't even know what that software is doing. You just trust it. You go to a doctor's office, you fill in the details, you just trust it. But what's happening behind, and this is true because we have observed software, we have analyzed it, we know that it's using a lot of technologies that proliferate this data. So essentially what happens is, at development time, the software has no data. It just has the intention of what to do with the data. But as the software gets deployed, some of the data gets put into an analytics service, some goes to a third party, and then databases everywhere; the data expands, you know? So it's nice if we try to look at the software itself, because that's where the intention of what to do with the data is, and you can actually do it. What happens to your data is actually defined in software. So at the time the software is built, or is getting deployed in those locations, we can get information about the data inventory: doctor's name, patient's name, et cetera. We can get a map of the data. The intention of the software is to take the patient's name and put it into this analytics service, based on where the data is going to be stored. For example, if the data that is taken is going to be put in a data center in US East, you can actually get the location of where the data will be. Again, there's no data being processed. It's just the intention of what to do with the data. Or third-party transfers.
So if that doctor software has some weird connection which goes to some other connection, goes to another piece of software, and that is using advertising, you can actually track it all the way. This gives us something which I would like to call technically verifiable PIAs. Every organization tries to do PIAs, Privacy Impact Assessments. But that cycle is too long. There are documents that have to be filled in, and then you go back to the engineers, you go back to the developers, and then the lawyers also get involved; they want to see the document in a specific format. But what if you have all this information very early on in the game? If you try to do it at that stage, it's easy, it's early, and it's proactive privacy. If you try to do it at later stages, trying to understand where the data went using ten other technologies, it's a little bit late by then. So this is one PET that we would like to propose. It's an expansion of PETs, by actually making the software itself secure, making the software itself not leak your private information in many places. One example: in Canada, I'm in Toronto right now, it's pretty late, there is a directive released by the government on privacy impact assessments. And we see that all the organizations have to actually fill in these PIAs and go through a process. Canada had this dental benefit last year, and they created a summary. And there's a small text which fulfills one of the points, which says individuals submit their personal information on the CRA, the Canada Revenue Agency, website. It's using HTTPS, et cetera, et cetera. That's about it. And to get this kind of assessment, they would have been going through multiple places, looking at previous assessments, looking at software. But software changes so rapidly. The moment you introduce a new kind of dental benefit or a vaccination plan, this is rapid. The software gets developed rapidly, and you never know what went inside it.
But you have all this information; we discussed that. It is already there, because when the software was developed, we knew what was supposed to happen to the data. So imagine, before even making that kind of service public, what if you could find out whether it's collecting your PII or PHI and transmitting it to some other weird service that you never imagined? These days it can be open LLMs. And you can actually do it. We have built a tool, which is also open source, you should check it out, which allows you to identify that if a developer decided to collect an address inside the software, it can say: new data element found, at this exact place. If it's a privacy violation, fix it very early on. You don't have to wait for a big assessment and then go back. You can immediately know that, yes, today this developer sat down and decided to collect address information. You have this information right there. You can then see the flow, where it went. You can actually analyze the software. Just as a human wrote it, our tool tries to analyze that software to see the intention of the human. You can see that the data will eventually go to OpenAI, it will go to a MongoDB database somewhere, or it gets leaked to a console, which again is a privacy issue. People don't realize it, but it is a very big privacy issue. You can get this deeper understanding just by looking at code, because code contains the intention of what the developer wanted to do with the data. That's essentially what it is, and having these technically verifiable PIAs opens a new door. You get a chain of trust, so the accountability perspective also comes into the picture here. You get a chain of trust because you have a record of modifications right from design to development to deployment. You have an opportunity to certify software now.
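A toy version of the static-analysis idea described above is to inspect source code (not running data) to find where a personal-data element is collected and which sinks it reaches. Real tools like Privado build full data-flow graphs across a codebase; this sketch only matches identifier names with Python's standard `ast` module, and the PII and sink names are invented for illustration:

```python
# Sketch: flag "new data element found" and data-to-sink flows by
# walking a program's abstract syntax tree, before any data exists.
import ast

PII_NAMES = {"address", "email", "patient_name"}          # illustrative
SINKS = {"send_to_analytics", "log", "openai_call"}       # illustrative

source = """
address = form["address"]
send_to_analytics(address)
log(address)
"""

tree = ast.parse(source)
findings = []
for node in ast.walk(tree):
    # Collection point: a PII-named variable is assigned.
    if isinstance(node, ast.Assign):
        for t in node.targets:
            if isinstance(t, ast.Name) and t.id in PII_NAMES:
                findings.append(
                    f"line {node.lineno}: new data element '{t.id}' collected")
    # Flow point: a PII-named variable is passed to a known sink.
    if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
            and node.func.id in SINKS):
        for arg in node.args:
            if isinstance(arg, ast.Name) and arg.id in PII_NAMES:
                findings.append(
                    f"line {node.lineno}: '{arg.id}' flows to sink '{node.func.id}'")

for f in findings:
    print(f)
```

Even this crude name matching surfaces the console-logging leak mentioned above at review time; a production analyzer would track values through renames, function calls, and third-party SDKs instead of bare identifiers.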
You can have privacy-certified applications, because you know that this application is handling private data in a more secure manner; they have not integrated these weird advertising things inside them. You can try to translate the privacy intentions of legal directives, those big documents we were seeing, into very fine-grained checks, which are then followed. This can open doors to actually understanding high-level laws like the GDPR and CCPA, and the nuances in them, and converting them to really fine checks that can be run on software to say: yes, this is compliant, and this is even before it gets deployed. So it's kind of like automating what Max is trying to do, in a manner, but doing it very, very early, even before the software gets deployed. And then it again opens a paradigm for privacy engineers. They can now proactively help build privacy-respecting apps, because privacy engineering gets involved. It's a new role that should exist, it's very important, and they can help build privacy-respecting apps. But what we have also observed is that it cannot replace human processes. They are absolutely essential. What if there's no policy to share the document? You can do as many nice things as possible on the software side, but that's essentially what it is. Yeah, that's about it. Questions?

Christian Reimsbach Kounatze :
Thank you very much, Suchakra. And thank you for making the connection to the previous presentation by Max. Obviously one of the questions I was wondering about is that this is also an approach that NGOs or privacy advocates could theoretically use to enforce privacy law, and that data protection authorities could use when they are doing in-house screening or impact assessments and the like. But obviously we also have a set of professions that operate within firms. And I would say this is a good link to our next speaker, Nicole. So if you could introduce yourself and explain how your work and experience relate to what the previous speakers have said. The floor is yours.

Nicole Stephensen:
Thank you so much. Hello, everyone. I feel really honored and delighted to be following such a wonderful group of presenters. Thank you so much for having me today. My name is Nicole Stephensen and I'm a partner at IIS Partners, which is an Australian privacy and data protection consultancy, now in our 20th year of operation. You'll also hear from my accent that I am a Canadian; I'm both a Canadian and an Australian citizen, but I've been living here in Australia for 20 years now. I lead our privacy services function at IIS Partners, where my specialism is in privacy program management and culture building. So you can sort of picture how I'm potentially going to wrap up today's session. I'd like to start with the essence of my intervention in mind and put to the group that privacy-enhancing technology should not replace good decision-making at the outset. Our governments and organizations still have a positive duty to ensure that their information practices accord with relevant privacy and data protection laws and community expectations. Now, in my work, there is a large focus on strategic privacy risk management, which is natural, right? Because the work of a privacy consultant often relates to identifying and mitigating risk around decisions that have already been taken. So for example, organizational information policy or practice, projects or programs, and then, of course, technology deployments. And sometimes I find that our governments and organizations can be educated on what their risks are, but particularly where there are large volumes of personal data or complex vendor relationships involved, they might struggle to solve for these using conventional methods.
So as an example, where there's a risk of unauthorized disclosure of personal data into those vendor processing environments, such as through vendor APIs or single sign-on digital handshakes, it can be quite difficult for organizations to test whether a risk exists only in the realm of possibility, right? And we often see those types of risks borne out in privacy impact assessments. Consultants like me say, oh, you might have a risk of unauthorized disclosure here. But is that only in the realm of possibility, or is it actually playing out in reality? Now, unauthorized disclosures to vendors that are processing personal data on an organization's behalf often happen without any real awareness on the part of the organization. We often refer to this as data leakage. But it is also really highly likely to qualify as a personal data breach, depending on the jurisdiction that you're in. And although I'm a huge proponent of administrative controls like contracts, data leakage isn't something that a contract with a vendor is going to eliminate properly, or even control for sufficiently at the outset. As all of you know, when we are remediating data breaches, that is actually a backward-looking exercise. This is where I think privacy-enhancing technologies have a deep potential utility. In the context of controlling for data leakage, so let's use this as our example space, privacy-enhancing technologies are probably going to take the form of data accountability tools. And this is more of a gray-area category for PETs, right? As compared with some of the technologies that have been discussed already here today, where technology can assist an organization to enforce rules about what should or should not happen to personal data. And the rules are going to be found in things like the data protection laws that are applicable to the organization.
Or they might be set out as commitments to the community in the context of an organization's privacy policy, or they might be expressed as contractual provisions between the organization and its various vendors or service providers. All of this said, though, the implementation of privacy-enhancing technologies doesn't remove from our governments or organizations those initial accountabilities associated with things like purpose specification. Why do we need the data in the first place? Do we have a fit and proper purpose for collecting and using it? And then, of course, collection minimization: are we only collecting the personal data that we need to fulfil that proper purpose? Because these are the vital building blocks, right? For enforcing a climate or a culture that limits use and disclosure of personal data to the greatest extent possible, with or without the involvement of privacy-enhancing technologies. Now, all of that said, in my experience the business case for implementing privacy-enhancing technologies, at least as I've seen here in Australia, can be complicated by a number of factors, including whether the PET supplier is a small business or startup, because they themselves might lack the necessary privacy or cyber maturity. I'm not saying that's the case everywhere, but it certainly can be in many cases, particularly where there's not that bucket of venture capital sitting behind the small business or startup. Second is the geographical location of the PET supplier. There are many associated legal requirements or barriers that might impact an organization's or government's ability to engage that PET supplier. There might also be some socio-political biases depending on where that supplier is.
If we look at the Western conceptualization of privacy, and we're looking at a potential PET supplier based somewhere that doesn't have those same socio-political norms or ideals, that might be a barrier. Finally, the budget of the government agency or organization. One thing that we're noticing is that where privacy-enhancing technologies are dealing with large volumes of data, if they are priced based on units or volume of data, sometimes the budget can blow out and really remove from the government agency or organization the ability to use that technology at all. Now, I wanted to share with you that IIS Partners recently established a subsidiary company called TrustWorks 360. That's because we think privacy-enhancing technologies are a thing, and an important thing, in Australia and in the wider global market. TrustWorks 360 is working to bring privacy-enhancing technologies and other privacy and security management solutions to the ANZ and Asia-Pacific market, which is where we play. The feedback so far has been that it's a real challenge. I actually approached one of our privacy-enhancing technology partners when I was considering the comments that I would bring to the group today. They are called Q-Privacy, and they deploy tools that both allow organizations to audit for data leakage, remembering that example I gave you before, and also establish and enforce rules that ensure only the personal data specified for a processing purpose is able to be pulled into those vendor environments. Now, I think that this type of data accountability tool is exciting for the global privacy marketplace, and I think it's got great utility for organizations that deal with large volumes of data that can't possibly be monitored by a person, right?
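The rule-enforcement idea behind such data accountability tools, releasing to a vendor only the fields specified for a processing purpose and surfacing everything else as potential leakage, could be sketched like this. The vendor names, field lists, and function names are invented for illustration and are not Q-Privacy's product:

```python
# Sketch: allow-list filtering of records before they leave for a vendor.
# Anything not on the purpose's allow-list is blocked and reported,
# turning silent data leakage into a visible, auditable event.

ALLOWED = {
    "email-delivery-vendor": {"email", "first_name"},
    "analytics-vendor": {"country"},
}

def release_to_vendor(record: dict, vendor: str):
    """Return (fields actually sent, fields blocked) for one vendor."""
    allowed = ALLOWED.get(vendor, set())
    sent = {k: v for k, v in record.items() if k in allowed}
    blocked = sorted(set(record) - allowed)
    return sent, blocked

record = {"email": "a@example.org", "first_name": "Ana",
          "date_of_birth": "1990-01-01", "country": "AU"}

sent, blocked = release_to_vendor(record, "analytics-vendor")
print(sent)      # only the field specified for this purpose
print(blocked)   # everything the rule stopped, for the audit trail
```

Unlike a contract clause, this control acts at the moment of disclosure, which is why it complements rather than replaces the administrative controls discussed above.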
And in these cases, with my consulting hat on, I would say that automated solutions are much more ideal than relying on, say, the privacy officer or the DPO in an organization to try to get a handle on this. But there are barriers to uptake, and when I asked Q-Privacy to share what, in their experience, those barriers are, they gave me a couple of points to share with the room. The first is that there seems to be a low priority for uptake of PETs in your small to medium organizations or your smaller governments, because there's such a focus on big tech from a regulatory perspective; if everybody's eyes are on big tech, it means that no one sees what we're doing over here, right? So we're sort of risk-managing our decisions in relation to privacy, possibly waiting for a data breach before we take action on anything. Second, there tends to be an avoidance of zero-trust approaches to personal information or personal data management of the kind that Q-Privacy is deploying, and low budgets. So there tends to be more of a focus on those third-party risk assessment tools and on using standard legal contracts, treating those as sufficient. And finally, most decision makers in the domain of privacy tend to be in the legal space, right? We tend to see legal teams, or potentially corporate services teams, dealing with privacy issues for their governments or organizations, and they have a less technical focus. So the lack of privacy engineers, or folks that understand how privacy-enhancing technologies work, is a barrier for uptake. And with that, because I know we want to have at least 15 minutes for questions, I will end my discussion here. And again, thank you to all of you and to the room for attending

Christian Reimsbach Kounatze :
today. Thank you. Thank you very much, Nicole. I think you pointed out a number of questions that I would like us to discuss. I just wanted to invite the audience, in the room but also online, to feel free to raise questions. I have a couple of them myself, so I will take my privilege as moderator to ask a few. One is definitely the question about adoption that was raised: if we all agree that all those technologies are great, why is it that not everyone is using them? Some of these technologies have been around for a long time. So how come it still seems to be something exotic that needs to be discussed at the IGF? That would be my number one question. And another one, if I may, because that's actually the one that strikes me from the discussion: everyone seems to agree that automation is great, that it's needed to scale with the problem, clearly. But at the same time, everyone seems to be saying, or at least I heard this multiple times, that humans should not be replaced; there should be a role for humans in the process. So if you could elaborate on that, because I think that's maybe something that some people may, for different reasons, try to forget or ignore. So I let you intervene. We start with Clara and keep the order of interventions. If you could address some of these points, put the emphasis where you wish. And sorry, before you do, I just wanted to acknowledge and express my appreciation that some of you are joining very early in the morning. In particular you, Suchakra, from Canada. So this is very much appreciated.

Clara Clark Nevola:
Well, it's starting to get light, as you can see in the background. Nearly a normal morning now. I'll take your first one, about why we do not see it ingrained yet. It's something that we're working on at the ICO; it's our next step after the Privacy Enhancing Technologies Guidance. Basically, our explanation for this is that the organisations who would most benefit from privacy-enhancing technologies do not yet know that they exist. So there's real interest in them in the community, and that's where the use case is. Is my sound OK? Your sound is OK. The video is freezing a little bit, but we can hear you well. OK, thank you. So basically, the lower-tech organisations are not yet aware of these technologies. And one of the things we're working on is how we can bring people who are more expert in PETs, so PET developers, academics, organisations who are more technically minded or have already implemented them, together with more traditional organisations, local government, health bodies, to really understand: why would you use a PET? So that's my explanation for question one, and I'll hand over.

Udbhav Tiwari:
Okay. So. I think that on the point of question two and why humans are important, and it’s not just a question of automation. There is a very real risk that we also often discuss within Mozilla, that privacy enhancing technologies may make it so that people just start collecting even more data than they already do because it’s so easy to collect it, and a lot of the risks that are associated with it no longer exist. Independent of the technology that is being deployed or whether you’re using a tool to check code or whether you’re trying to make sure that data leaks don’t take place, I think it’s really important for organizations to first question, should this kind of data be collected in the first place? Is there a real use for it? What use will it be put for? Is it worth the risk of what may happen if this data ends up leaking? As much as they invest in resources and tools in order to protect that information. For me at least, that’s the primary reason why human beings are important, because the decisions of what to collect are obviously made by human beings. If you are collecting more information than you need and it ends up leaking, rather than investing in the tooling around preventing that from happening, maybe you should reconsider whether that data should have been collected in the first place or not. I think that it’s been a very enlightening conversation also because there are two parts. One is privacy enhancing technologies once the data exists within an organization, but there is also the piece of privacy enhancing technologies that allow you to collect data without the parts that actually sometimes even make it private, which are identifiable information. So for example, both of the things that I had mentioned, Oblivious HTTP and DAP, allow you to collect information in a manner that is aggregated, equally useful, but with almost zero consequence to what happens if that entire piece ends up being in the real world. 
Because it’s been collected in a manner where the unique identifiers no longer correspond to the people who would actually operate them. So I think that’s also an interesting point to keep in mind for the first one. Thank you. Max, if you’d like to say a few words.
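The split-and-aggregate model described here, in which aggregators each see only meaningless shares of a user's data and only the combined total is ever reconstructed, can be illustrated with a toy additive secret-sharing sketch. The field modulus, measurement encoding, and two-aggregator setup below are simplified assumptions for illustration, not the actual DAP or Oblivious HTTP protocols:

```python
import random

MODULUS = 2**61 - 1  # a large prime field; an assumption for this sketch

def split_into_shares(value):
    """Secret-share one user's measurement into two additive shares.
    Each share on its own is uniformly random and reveals nothing."""
    share_a = random.randrange(MODULUS)
    share_b = (value - share_a) % MODULUS
    return share_a, share_b

# Each client splits its measurement; no identifiers travel with the shares.
measurements = [1, 0, 1, 1, 0]  # e.g. "did feature X crash for this user?"
shares_a, shares_b = [], []
for m in measurements:
    a, b = split_into_shares(m)
    shares_a.append(a)
    shares_b.append(b)

# Two non-colluding aggregators each sum only the shares they received...
sum_a = sum(shares_a) % MODULUS
sum_b = sum(shares_b) % MODULUS

# ...and only the combined aggregate is reconstructed at the end.
total = (sum_a + sum_b) % MODULUS
print(total)  # 3: the aggregate count, with no per-user value recoverable
```

If either aggregator's database leaked, the attacker would hold only uniformly random numbers, which is the property described above: the collected data is equally useful in aggregate but has almost zero consequence if exposed.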

Nicole Stephensen:
I find privacy enhancing technologies to be quite impenetrable for organizations and governments. Folks who are not technical, who are not engineers, who may not even be policy people, right, often lack an awareness of what privacy enhancing technologies do. Finding a way to capture what they are in plain language, almost like a sales pamphlet, would help: these are the types of privacy enhancing technologies that are out there, this is what they look like, and this is how they can be deployed within an organization or government. That type of stepped approach, I think, would be really, really useful, particularly in jurisdictions like this one.

Christian Reimsbach Kounatze :
Thank you very much to all of you for being here in person and online, at incredible hours. I took note of the different suggestions, and what is also great is that I think with this event we have been able to extend the understanding of what PETs, or the role of digital technologies more broadly, could be, beyond just the almost traditional technologies of today to something that is much, much broader. And with that, thank you very much, and we look forward to continuing the conversation. Thank you. Bye. It was a good discussion. Yes, definitely. Some of the presentations, especially the Privido presentation, covered things that I’d heard about, but I didn’t know it had advanced to some of the things that they are attempting to do. I mean, for me, if I were to deploy that software in a company, I would probably say that the software itself is such a huge privacy risk, because it actually has to have access to everything. Yes, that’s an interesting point. I had a separate conversation with him about that, and obviously one of the challenges is that this cannot be run by a third party unless you really put in place trust mechanisms, because the problem of privacy is not the only one. I could imagine that being the smallest problem in the company. Tristan, your mic is still live, just to let you know. Thank you.

Christian Reimsbach Kounatze

Speech speed

169 words per minute

Speech length

2384 words

Speech time

844 secs

Clara Clark Nevola

Speech speed

178 words per minute

Speech length

2146 words

Speech time

722 secs

Maximilian Schrems

Speech speed

218 words per minute

Speech length

2595 words

Speech time

713 secs

Nicole Stephensen

Speech speed

176 words per minute

Speech length

1786 words

Speech time

609 secs

Suchakra Sharma

Speech speed

188 words per minute

Speech length

1845 words

Speech time

589 secs

Udbhav Tiwari

Speech speed

202 words per minute

Speech length

2672 words

Speech time

795 secs

Wojciech Wiewiórowski

Speech speed

141 words per minute

Speech length

1119 words

Speech time

476 secs

Cybersecurity of Civilian Nuclear Infrastructure | IGF 2023 WS #220


Full session report

Giacomo Persi Paoli

The Open-Ended Working Group (OEWG) was established to ensure greater visibility and broader participation in discussions on international cybersecurity than its predecessor, the Group of Governmental Experts (GGE), which ran for six iterations, each involving approximately 20 countries, including the permanent members of the Security Council. The OEWG, open to all UN member states, is seen as more transparent, as everything is open to the public. Furthermore, if consensus isn’t reached on a report, the chair has the authority to publish a summary.

The OEWG has focused on the protection of critical infrastructure, which has been a prevalent subject of discussion. As part of the framework for responsible state behaviour in cyberspace, critical infrastructure is a focal point of multiple norms. States are called to protect their own critical infrastructure and are encouraged not to target the critical infrastructure of others. International assistance is also encouraged for states whose critical infrastructure is targeted by cyber attacks.

However, the OEWG may not be the right forum for detailed discussions on how general norms apply to specific sectors or types of infrastructure. It is viewed as more suitable for discussions on evolving threats, norm implementation, and international impact. There is a need for a dedicated forum to discuss the implementation of general purpose norms for cyber nuclear security. Discussions within the OEWG have covered various aspects of critical infrastructure, such as medical infrastructure, energy, and financial sectors. However, the limited time available has made it challenging for states to deeply explore any of these topics.

Concerns regarding threats to civilian nuclear infrastructure by cyber operations are growing, as states have flagged their increasing concerns over cyber threats to the Secretary General. Cyber attacks have also been on the rise during the pandemic, affecting all sectors of society, including critical infrastructure.

The private sector can play a significant role in helping states develop cyber resilience. The private sector has capacities and capabilities that can contribute to enhancing cyber resilience efforts. Public-private partnerships have been suggested as a tool to increase cyber resilience and have been flagged as a way forward.

In conclusion, the OEWG serves to enhance visibility and participation in discussions on international cybersecurity. It has addressed the crucial issue of critical infrastructure protection. However, it may not be the ideal platform for discussing specific sectors or types of infrastructure. The need for a dedicated forum for discussing the implementation of general purpose norms for cyber nuclear security has emerged. Concerns about threats to civilian nuclear infrastructure by cyber operations are growing, and the involvement of the private sector in developing cyber resilience is seen as significant. Public-private partnerships are also being considered to increase cyber resilience.

Rowan Wilkinson

Concerns have been raised regarding the security failures within the IT networks of nuclear plants. These concerns arise from the potential harm and disastrous outcomes that could result from such failures. It is imperative to address these shortcomings and take measures to prevent any adverse consequences.

The modernization of cybersecurity and civilian nuclear infrastructure is seen as a high priority in mitigating the risks associated with these security failures. This would involve implementing advanced and robust security measures to safeguard the IT networks of nuclear plants. By prioritising the improvement of cybersecurity, the likelihood of breaches and potential threats can be significantly reduced.

Furthermore, gaining a better understanding of the threat landscape is crucial. This entails identifying potential vulnerabilities and weak points within the IT systems of nuclear plants and staying updated on the latest cyber threats. By doing so, appropriate measures can be taken to prevent any breaches or malicious activities.

It is worth noting that these issues align with various Sustainable Development Goals (SDGs). Specifically, they relate to SDG 9 – Industry, Innovation and Infrastructure, as the modernisation of cybersecurity and civilian nuclear infrastructure falls within the scope of enhancing industry and infrastructure. Additionally, these concerns also relate to SDG 13 – Climate Action, as security failures within nuclear plants can have severe environmental implications given the risk of radiation release.

Moreover, the issues raised have implications for SDG 16 – Peace, Justice, and Strong Institutions. By addressing the security failures in nuclear plant networks, stronger justice systems and institutions can be established to ensure the safety and security of critical infrastructure. This, in turn, contributes to promoting peace and stability.

In conclusion, the concerns surrounding security failures in IT networks of nuclear plants highlight the need for immediate action. Modernizing cybersecurity and civilian nuclear infrastructure is crucial not only for the industry but also for addressing environmental concerns and maintaining peace and justice. By prioritising these areas and adopting proactive measures, the risks posed by security failures can be effectively mitigated.

Priya Urs

The analysis examines the issue of cyber operations targeting civilian nuclear infrastructure within the framework of international law. The first argument highlights the absence of specific rules in international law that directly address cyber operations on civilian nuclear infrastructure. While states recognize the importance of protecting civilian nuclear infrastructure as critical infrastructure against cyber operations, there is a lack of concrete legal protections.

The second argument is that while general rules of international law, including treaties and customary international law, may potentially apply to this context, their specific application presents challenges. These general rules encompass aspects such as the use of force by states, the prohibition of intervention in another state’s affairs, respect for state sovereignty, and the due diligence obligations of states. However, it is important to note that these rules were not designed with cyber operations in mind.

The third and fourth arguments focus on the prohibition of intervention, a principle agreed upon by states, but with variations in the definition of activities that constitute intervention. The generally accepted requirements for intervention to be deemed unlawful are that it must address the internal or external affairs of a state and that it should coerce the targeted state. However, there are disagreements among states regarding the specific activities that fall under this prohibition.

The fifth argument is that a cyber operation that disrupts the production of nuclear energy can be seen as coercive and may therefore constitute unlawful intervention. This reflects the belief that if a state adopts a policy regarding the generation of nuclear energy, a cyber operation that disrupts its production would be deemed coercive and thus unlawful.

On the other hand, the sixth argument is that cyber operations such as surveillance or data breaches may not be perceived as coercive since they do not directly hinder a state’s policy implementation. These types of operations, which do not interrupt the implementation of a state’s policy, may not be considered unlawful intervention.

The analysis also highlights the importance of preventative measures in cybersecurity and the need for legal accountability. It emphasizes the significance of addressing the cybersecurity problem from multiple angles, including proactive measures and holding accountable those responsible for incidents.

In conclusion, the analysis underscores the lack of specific rules in international law regarding cyber operations on civilian nuclear infrastructure. While general rules of international law may have some relevance, applying them in the context of cyber operations poses challenges. The debate surrounding the definition and scope of intervention further complicates the issue. The analysis also emphasizes the complexity of distinguishing between coercive and non-coercive cyber operations. Finally, it underscores the necessity of comprehensive cybersecurity measures and legal accountability in addressing this complex issue.

Talita Dias

Increased cyber and nuclear risks present a significant threat to national security and global stability. Cyber operations are targeting critical sectors such as healthcare and energy, as well as civilian and military nuclear systems worldwide. It is urgently necessary to develop international technical standards, rules, principles, and non-binding norms to ensure the cybersecurity of civilian nuclear infrastructure. This is particularly crucial given the growing use of small modular reactors and artificial intelligence, which could expand the potential targets for cyber operations.

The International Atomic Energy Agency (IAEA) plays a vital role in this area by providing guidance and recommendations for computer security measures. They also conduct ongoing security audits and assessments to detect vulnerabilities and offer training sessions for nuclear facility operators. However, there is some debate surrounding the binding nature of the IAEA’s recommendations.

To enhance cyber resilience, it is essential to foster multi-stakeholderism and public-private partnerships. The private sector’s involvement in assisting states in building their cybersecurity capacities is recognised, and public-private partnerships are seen as a robust strategy for enhancing the cyber resilience of member states.

One area of contention involves determining what constitutes intervention in the cyber landscape regarding civilian nuclear infrastructure. Understanding the threat landscape in both the cyber and nuclear sectors is critical, as accidents within the nuclear sector can have significant consequences.

Improved dialogue between the cyber and nuclear sectors is necessary to effectively address these risks. Through dialogue, stakeholders can exchange knowledge and best practices, identify potential gaps in cybersecurity measures, and collaborate on developing effective strategies to mitigate cyber threats.

The need for specific cyber nuclear norms, rules, or best practices is currently being debated. The current feedback on this issue indicates a score of 6.4, highlighting the ongoing discussions and varying perspectives on the necessity of such measures.

In conclusion, the increasing cyber and nuclear risks pose significant threats to national security and global stability. Developing international technical standards, rules, principles, and non-binding norms is crucial to safeguarding the cybersecurity of civilian nuclear infrastructure. Collaboration between stakeholders, including public-private partnerships, is necessary to enhance cyber resilience. Clarifying the prohibition on intervention in the cyber landscape and understanding the threat landscape in both the cyber and nuclear sectors are key areas of focus. The necessity of cyber nuclear specific norms, rules, or best practices is subject to ongoing debate and discussions.

Tomohiro Mikanagi

The interpretation of sovereignty in relation to cyber attacks varies among different countries. The UK does not see any standalone obligation arising from sovereignty apart from the non-intervention rules, while France views any cyber operation causing an effect within its borders as a violation of sovereignty. The US, Germany, and Japan believe a certain level of harmful effect needs to be caused in their territory for it to be considered a violation of sovereignty.

In terms of cyber attacks targeting nuclear facilities, it is argued that they could have severe effects and are likely to be considered unlawful under international law. Mikanagi believes that there needs to be a consensus on what constitutes a harmful effect in a cyber attack in order to determine if a violation of sovereignty has occurred. Additionally, the due diligence obligation in international law is not clearly defined, leading to uncertainty among states as to whether this obligation applies to cyber operations.

Furthermore, there is no clear application for the territorial state’s due diligence obligation in the area of nuclear security, and discussions on this matter are ongoing.

The existing Convention on the Physical Protection of Nuclear Material could potentially cover sabotage through cyber attacks, despite not explicitly mentioning cybersecurity. Given this, it may be more feasible to discuss cyber security issues related to nuclear facilities within the context of established conventions such as this one.

Overall, the varying interpretations of sovereignty and the lack of consensus, clarity, and application of international laws and conventions contribute to the complexity of addressing cyber security issues effectively.

Michael Karimian

The tech sector plays a central role in providing digital solutions for safety, security, and everyday processes, including nuclear systems. It provides ICT infrastructure that is crucial for these purposes. However, the tech sector’s involvement also increases the risk of cyber threats due to the many entry points into its IT systems. Therefore, it is essential for the tech sector to prioritize cybersecurity by design.

One of the main arguments is the ever-evolving threat landscape. The continuous advancements in technology result in a constantly changing and sophisticated threat landscape. Thus, the tech sector must prioritize cybersecurity measures to effectively combat these threats.

Continuous innovation and transparency in threat sharing are also considered crucial. Actively researching and sharing threat intelligence is essential to stay ahead of cyber threats. By engaging in innovation and sharing information, the tech sector can contribute to creating a safer online environment.

Education and training in cybersecurity are also highlighted. Tech companies can provide guidance on cybersecurity best practices, contributing to the education of individuals and organizations in protecting themselves against cyber threats. This emphasizes the importance of quality education and training for ensuring cybersecurity.

The significance of multi-stakeholder engagement and collaboration in addressing cybersecurity challenges is underscored. Collaboration between the tech sector, governments, civil society, and other companies is seen as essential to effectively tackle cybersecurity issues. By working together and sharing knowledge and resources, it becomes easier to address the complex nature of cyber threats.

Microsoft’s stance is mentioned, as they believe in proactively taking steps to address cybersecurity risks. As part of their commitment, they are involved in initiatives like the Cyber Security Tech Accord, which aims to improve cybersecurity across the industry. Microsoft’s active involvement showcases the importance of industry leaders taking responsibility and actively addressing cybersecurity challenges.

Basic cyber hygiene practices are also highlighted. It is mentioned that good yet basic cyber hygiene can significantly reduce the risk of cyber threats. This includes practices such as protecting user identities, applying updates as soon as possible, using advanced anti-malware, enabling auditing resources, and preparing incident response plans. Following these practices allows individuals and organizations to mitigate many cybersecurity risks.

In terms of technology solutions, cloud-based systems are recommended over on-premises systems for better cyber protection. Cloud-based systems offer holistic, adaptive, and global cyber protection, which is facilitated better compared to on-premises systems.

Lastly, the summary emphasizes the importance of adherence to general guidance for cybersecurity across all sectors, including the nuclear sector. Protecting user identities, applying updates as soon as possible, using advanced anti-malware, enabling auditing resources, and preparing incident response plans are considered essential for all sectors. The International Atomic Energy Agency’s guidelines align with this general guidance, further emphasizing the importance of adherence to cybersecurity measures across sectors.

Overall, the summary highlights the tech sector’s importance in providing digital solutions for safety, security, and everyday processes. It emphasizes the need for prioritizing cybersecurity by design, continuous innovation and transparency in threat sharing, education and training, multi-stakeholder engagement and collaboration, adherence to basic cyber hygiene practices, and the use of cloud-based systems. These measures are crucial to mitigating cyber threats and creating a secure online environment.

Marion Messmer

The analysis explores the topic of cybersecurity risks in nuclear facilities and their potential impact. It highlights that cyber attacks can target civilian nuclear facilities either due to their specific role in nuclear systems or their importance to a country’s power supply. Given that nuclear power plants are a crucial part of a nation’s energy infrastructure, any disruption or compromise can have significant consequences.

The analysis notes that awareness of these risks has evolved over time, indicating a need for improved security measures. It mentions that older nuclear power plants initially believed they were safe from cyber threats due to their bespoke IT infrastructure. However, as plants updated and integrated off-the-shelf IT systems, they also had to incorporate cybersecurity measures. Consequently, new regulations and training procedures were required to address these emerging risks.

Moreover, the addition of cybersecurity concerns to the nuclear energy sector, where physical safety has always been of utmost importance, has changed the game. This realization of cyber threats has caused worry among many individuals and organizations involved in the nuclear energy sector.

The analysis also highlights the risks and opportunities presented by new developments in the nuclear sector, such as small modular reactors and microreactors. While these developments can provide a stable power supply to remote regions, they also increase the risk due to the presence of more reactors. The diversification and length of the supply chain in these systems can introduce cybersecurity vulnerabilities. However, the analysis emphasizes that newer reactors are designed with a focus on safety, and awareness of cybersecurity in these systems is more advanced than before. Advancements in design and operator training contribute to reducing the potential risks associated with these developments.

Notably, the war in Ukraine has brought new risks to civilian nuclear infrastructure. The analysis mentions the Zaporizhzhia power plant in Ukraine, which has been directly affected by the conflict. Regular physical and cyber attacks on the power plant underline the vulnerability of such infrastructure during times of conflict. The analysis also notes that managing these risks requires particular attention to potential disruptions to the cooling system for the reactors. A disconnection from the grid, for example, could interfere with the cooling system, leading to a reactor meltdown. Backup generators have been put in place at the Zaporizhzhia power plant to ensure that cooling can still occur.

The International Atomic Energy Agency (IAEA) has had a positive impact by actively supporting the personnel operating the power plant. Their monitoring and actions have played a crucial role in mitigating risks. It is evident that their involvement is essential in maintaining the security and safety of nuclear facilities.

Additionally, the analysis emphasizes the importance of addressing environmental, health, reputational, and equipment risks associated with nuclear energy. While it may be challenging to determine the exact likelihood of these risks, the potential severe outcomes warrant preventive measures.

Marion Messmer, a noteworthy figure referenced in the analysis, offers insights into the topic. Messmer finds reassurance in the current safety operations and mitigating actions being taken, particularly in the case of the Zaporizhzhia power plant. This implies that efforts are being made to address the risks involved in nuclear facilities caught in conflicts. Furthermore, Messmer highlights the significance of reactor design in reducing the likelihood of a Chernobyl-like incident.

It is essential to consider potential scenarios as nuclear energy becomes more prevalent due to the energy transition. Conflicts involving power plants could increase, necessitating effective management strategies for such situations.

Lastly, the analysis raises concerns about putting reactors underwater, as even small modular reactors can pose severe consequences for the environment in the event of a radiological incident. While the idea of hiding reactors underwater may seem appealing, the potential spread of radiation due to water mixing remains a significant risk.

In conclusion, the analysis provides a comprehensive overview of cybersecurity risks in nuclear facilities. The increasing awareness of these risks has led to improved security measures and regulations. New developments in the nuclear sector offer both opportunities and risks, which are being addressed through advancements in design and operator training. The war in Ukraine and the associated risks to civilian nuclear infrastructure highlight the need for managing potential disruptions to cooling systems. The involvement of organizations such as the IAEA has proven valuable in mitigating these risks. Additionally, the analysis emphasizes the significance of preventive measures to address environmental, health, reputational, and equipment risks in the nuclear energy sector. Marion Messmer’s insights further contribute to the discussion, emphasizing the importance of safety operations, reactor design, and effective management strategies.

Tariq Rauf

The International Atomic Energy Agency (IAEA) has issued more than 30 documents providing guidance and recommendations on nuclear security. These documents primarily focus on the integrity of the control systems, containment and control of nuclear materials, and ensuring the safety of nuclear facilities. The IAEA plays a significant role in promoting nuclear security.

However, the primary responsibility for nuclear security lies with states and operators. While international conventions like the Convention on the Physical Protection of Nuclear Material do exist, states and operators are responsible for ensuring the security of their nuclear facilities. The Convention primarily focuses on nuclear security and aims to protect nuclear material during international transport.

Cybersecurity is a crucial aspect of nuclear security and safety. A malicious cyber attack can lead to serious consequences, including the compromise of the cooling system of a nuclear facility. There have been incidents suspected to be caused by cyber attacks that have resulted in leaks in the cooling system of operating nuclear facilities. It is crucial to implement robust cybersecurity measures to prevent, respond to, and recover from such attacks.

Small modular reactors (SMRs) and sealed reactor units are seen as more secure options compared to larger nuclear power plants. SMRs are compact and have sealed reactor units that do not require frequent refueling. This enhances their security and reduces the risk of accidents or material misuse.

The IAEA plays a pivotal role in providing IT security guidance to nuclear facilities. It collaborates with its member states to produce comprehensive cybersecurity measures, which include defense in depth approaches, risk assessment, security policies and procedures, access controls, network security, and incident detection and response protocols.

Capacity building and international cooperation are essential elements in improving nuclear security. The IAEA facilitates capacity building by conducting training sessions at various locations to enhance the skills of nuclear facility operators. It also encourages participation in security audits and assessments to discover new vulnerabilities.

While the Convention on the Physical Protection of Nuclear Material (CPPNM) is an important international instrument for nuclear security, it is not universally binding. Only countries that have acceded to the CPPNM are subject to its provisions. However, the CPPNM amendment in 2005 extended its scope to cover nuclear materials in peaceful uses, domestic storage, and transport.

There is significant concern regarding the potential risks associated with cyber attacks on nuclear facilities. Fukushima and Chernobyl disasters have highlighted the transboundary effects of nuclear accidents. The release of radiation resulting from cyberattacks on nuclear facilities is a major concern. Balancing the protection of national sovereignty and the prevention of widespread radiation is a challenging task.

It is argued that every nation, especially those with nuclear power plants, should accede to the CPPNM to promote international safety. Iran, for example, operates a nuclear power plant but has not yet acceded to the convention. After the Fukushima accident, there were efforts to make the CPPNM mandatory for all 31 states that operate nuclear facilities.

The involvement of the private sector in nuclear security is increasing. International organizations like the IAEA are interacting more with industry, which provides expertise and technology solutions to enhance overall nuclear security efforts.

However, international organizations like the IAEA face the risk of system penetration by state actors. The IAEA deals with highly classified information about the nuclear activities of more than 180 states. State-originated cyber attacks like Stuxnet and Olympic Games on Iran’s enrichment facilities have underscored the need to address this challenge.

Building trust and cooperation with industry is crucial for the IAEA. While the organization has purchased commercial products for managing big data, its IT experts may not match the expertise and capabilities of states. Strengthening cooperation with industry can help overcome suspicion and further enhance nuclear security efforts.

The conclusion drawn from the analysis suggests that the IAEA should have the authority to regulate nuclear security and cybersecurity. An international, legally binding framework for cybersecurity in nuclear facilities is necessary to address the current reliance on national responsibility. Conventions for liability also need to consider damage resulting from cyber incidents at nuclear facilities.

Overall, the summary highlights the importance of nuclear security, the role of the IAEA and international conventions, the need for robust cybersecurity measures, and the challenges posed by cyber attacks. It emphasizes the significance of trust, cooperation, and capacity building to enhance nuclear security and promote international safety.

Session transcript

Talita Dias:
Today, this afternoon, entitled Cybersecurity of Civilian Nuclear Infrastructure. This session is being co-hosted by Chatham House, as well as Microsoft, and the University of Oxford. It’s a real pleasure to be with you all today, both in person and online. Special thanks to all those of you who are joining us from different time zones, especially six or seven hours behind, especially our online speakers. Thanks so much for joining us, for being with us today. What I’m going to do in two and a half minutes is really go through our run of show, to take you through what we’re going to cover today, and introduce our brilliant, our stellar line-up of speakers for this afternoon. My name is Talita Dias. I am the Senior Research Fellow on the International Programme at Chatham House. This session is being co-hosted with my brilliant colleague, Rowan Wilkinson, who is sitting by my side, who is the Programme Assistant at Chatham House’s Digital Society Initiative, and the International Law Programme, who is an expert in tech policy. What I want to do now is talk a little bit about this topic, to give you a little bit of an overview, to set the scene, and to really speak to the importance of why we are here today. This session is really about the convergence of cyber and nuclear risks. We have, on the one hand, an increasing number of malicious cyber operations of all kinds, all shapes and forms, targeting all types of infrastructure, including critical infrastructure, like the healthcare sector, the energy sector. And at the same time, we have long-standing nuclear risks that have been around since nuclear energy has been around. And so when the two come together, that poses a significant threat to national security, as well as to global stability. And cyber attacks against civilian and military nuclear systems, though our focus is on civilian infrastructure, they have been reported in different parts of the world, both developing and developed countries.
So we all heard about what happened in Iran with Stuxnet or Olympic Games. That was probably the most widely reported cyber attack against a nuclear facility. But there were also different kinds of cyber attacks against different kinds of nuclear facilities in India, North and South Korea, Norway, Germany, the United States, and now Ukraine. And even the International Atomic Energy Agency has been the target of malicious cyber operations. And the actual and the potential risks of these attacks, they include the extraction of sensitive information about nuclear capabilities, malfunctioning of equipment, as was the case with Stuxnet in Iran, disruption of energy supplies or of places that are supplied by nuclear energy, increased radiation levels, which is very concerning, and potentially disastrous consequences of nuclear accidents for human lives, for health, and for the environment. These risks have now been amplified with the push for green energy, with the spread of what we call modular or small modular reactors and micro reactors, the use of nuclear energy, including these small reactors, to power AI, as well as the use of AI to automate and diversify the different types of cyber operations against different targets, including critical infrastructure and potentially nuclear infrastructure. So we’re going to talk about this in more detail during this session, I hope so. But these operations, they include disruptive cyber operations that might affect the operation of software and hardware. They include data surveillance or data gathering operations, as well as information operations like misinformation and disinformation. Now, for many of you, the film Oppenheimer might have sort of resurrected some of those fears of nuclear threats and nuclear holocaust.
For me personally, being in Japan and having had the opportunity to visit Hiroshima has been a real life-changing moment and just highlights for me the importance of what we are discussing today and the kinds of threats that we are facing, that humanity is facing. So Chatham House is worried about these risks, so is Microsoft, so is the University of Oxford. And so Chatham House has done work in the past from an international security perspective on the cybersecurity risks against both civilian and military nuclear infrastructure. We are at the moment carrying out a project on this topic, on this exact topic, focusing on international law and norms. And so this session will explore in more detail these issues, including in particular international technical standards, rules and principles of international law, and non-binding norms of responsible state behavior that protect the cybersecurity of civilian nuclear infrastructure. So the session will be divided into three parts, or we’ll have three segments. The first one will be an in-conversation session with Marion Messmer, who is speaking online from London. She’s a senior research fellow on the International Security Program at Chatham House. She’s an expert in arms control and nuclear weapons policy issues. We’re going to talk about cyber security risks and the consequences facing civilian nuclear facilities. Then in the second part of our session, we’re going to have a discussion with Tariq Rauf, who was head of nuclear verification and security policy coordination at the International Atomic Energy Agency, IAEA, with years of experience in nuclear disarmament, nonproliferation and arms control, as well as Giacomo Persi-Paoli, also joining us online from Geneva, who is head of the security and technology program of UNIDIR, the UN Institute for Disarmament Research, and he’s an expert on the implications of emerging technologies for security and defense.
And we’ll also be joined by Michael Karimian, who is here in person with us, who is director for digital diplomacy at Microsoft in the Asia-Pacific region, with extensive expertise in human rights policy. We’re going to talk about technical and policy approaches to protect civilian nuclear infrastructure from cyber operations. And then we’ll have a final section of our discussion, which will look at the legal and normative aspects of the issue. And for that, we’ll have a chat with Tomohiro Mikanagi, also in person here today, who is legal advisor of the Japanese Ministry of Foreign Affairs, and a partner fellow of the Lauterpacht Centre for International Law at Cambridge University, who has written extensively on cyber and international law. And also joining us online for this discussion is Priya Urs, junior research fellow in law at St. John’s College, Oxford, whose expertise spans across public international law, including cyber operations targeting critical infrastructure. Michael will also join us for this segment of the program. I’m going to turn to Rowan for a few housekeeping announcements. Rowan, over to you.

Rowan Wilkison:
Yeah, so hello. Good morning, afternoon, evening, wherever you are. Thank you so much for coming. So yeah, just some brief housekeeping things. We’re going to be running an interactive survey on Menti alongside this session. So we urge all people online and also in the room to scan the QR code when it comes up and please take part as we go along, because we’d love to hear your thoughts. And then at the end of the session, we’re going to be having the usual Q&A. So for those in the room, we have the mics, so please line up behind them if you have a question; and those online, please just use the chat function. So, to kick us off with the first question.

Talita Dias:
Yeah, so I actually wanted to show a video first, Rowan. So technical team, would you mind putting up our slides so we can actually show a video that Chatham House has produced on this issue, just to give people an idea of what we’re talking about today?

(Video plays:) …to heat water that turns into steam. The steam then drives a turbine to provide electricity that goes into the national grid. An advantage of nuclear energy is that it can reduce the reliance on fossil fuels and can help fight climate change. The energy that is produced by nuclear reactors is controlled for output and safety by sophisticated computers. But cyber attacks can interfere with these networked computers, potentially shutting down power plants or causing other safety issues. A cyber attack using a computer worm called Stuxnet was used to disrupt Iran’s nuclear enrichment program by interfering with the control systems for the centrifuges. So we need to put measures in place to protect nuclear plants against cyber attacks.

Great, thanks. So I’m going to turn over to Rowan, who will kick us off with our survey, actually to give a little bit of background to the topic and also get your views on what worries you the most when it comes to cyber security of civilian nuclear infrastructure. Rowan?

Rowan Wilkison:
Yeah, so you can see on the screen, I’m just going to put up the first question that we have for you all, which is when you think about nuclear cyber security, what risks come to mind? So yeah, please feel free to scan the QR code or you can enter the code to join the room and we’ll see what you all have to say. I’ll give about a minute and a half or so just for people to answer.

Talita Dias:
So really, what do you think about when we talk about this issue? And we really want to get your views on what is most concerning for you, because sometimes it’s not obvious when we talk about… It can be a very technical or sometimes an intimidatingly technical topic. So we want to get your views on what worries you the most when you think about cyber and nuclear. Should we have a look at the responses that we’ve had so far? Cool. Okay. Wow. Okay. So we’ve got radiation, radiation, environmental disaster, significant loss of life in long term radiological fallout.

Rowan Wilkison:
We’ve also got reputational harm to institutions, environmental destruction. We’ve got security failures in IT networks of the nuclear plants, which leads to disastrous outcomes, which I suppose links a bit to the one before about radiation.

Talita Dias:
So yeah, a wide range of harms, as you can see here. So to discuss or to delve deeper into those harms, we’ll have a chat with Marion online, joining us from London. So Marion, let’s talk about cybersecurity risks and consequences facing civilian nuclear facilities. So welcome.

Marion Messmer:
Hi, everyone. Good to be here.

Talita Dias:
Great. Thanks, Marion, for joining us so early for you. So the first question I have for you, Marion, is: what types of cyber operations have targeted or can target civilian nuclear systems?

Marion Messmer:
So broadly speaking, I think it’s really important to remember that there isn’t just one type of cyber operation or cyber harm that could target civilian nuclear facilities. Because as you already mentioned in your introduction, they could become targets for different reasons. So they could be targeted because they are specifically part of a nuclear system or nuclear network, and so perhaps the theft of specific nuclear-related information is the goal of the attack, or they could be targeted because they produce energy and they are an important backbone of the national grid or of a country’s power supply. So, you know, you could imagine a whole range of scenarios in which they are targeted either purposefully or where they actually become collateral damage of some sort of other attack. You already mentioned some of the examples that we’ve seen where nuclear power plants or other aspects of civilian nuclear infrastructure were targeted. And I think what’s really interesting about this conversation about cybersecurity and nuclear infrastructure is that when we first began to worry about cybersecurity, because a lot of existing operating nuclear power plants are older or perhaps have very bespoke, very purpose-designed IT infrastructure, people originally thought that maybe this was a risk that they didn’t have to worry about so much. So there was this idea that maybe nuclear operators would be safe because the IT infrastructure that they’re using is so specific or is so unique. Whereas what we’ve seen over time is that as nuclear power plants have had to evolve, have had to update their systems, or as new nuclear power plants have come online, a lot more of the IT infrastructure is also off the shelf or is the same as that of other systems.

And then you are in an environment where nuclear operators all of a sudden also have to think about cybersecurity and the aspects of IT security that they previously didn’t have to worry about so much. So there was a bit of a catch-up that had to take place in the nuclear energy sector, where operators had to think about new regulations, new training procedures. And that’s really interesting to me because it’s of course the sector where physical safety and security have been so paramount for a long time. So now that we also need to think about cyber security, that can change the game a bit. And I think that worried a few people quite a lot when that first emerged as a possible risk.

Talita Dias:
Thank you, Marion. And there are also the risks to hardware, right, to the physical components of what we call cyberspace. So it’s not just software risks, say a hacker hacking into the system, but also, like, there’s human failure that might lead to, say, a breach in the hardware system of a power plant. So that’s exactly what happened, or allegedly what happened, with Stuxnet, for example. So great. So my second question to you is, to what extent have new developments in the nuclear sector, or in particular in the nuclear energy sector, such as the spread of small modular reactors and microreactors, which I’ve mentioned earlier, to what extent have these developments increased those risks that you have just discussed?

Marion Messmer:
Yeah, so I think these developments pose both a risk and an opportunity. So, you know, if I had to say it simply, then having more systems increases the risk just by virtue of the fact that there are more reactors out there. That’s one aspect where the risk is coming from. Small modular reactors and microreactors are specifically designed to be more accessible. And the hope is that they will be able to help, you know, bring a more stable power supply to perhaps regions of the world where that’s currently not possible, or very remote areas, or areas where it’s really difficult to have a stable infrastructure network for other reasons, perhaps because of remoteness, perhaps because of geography. So that’s, of course, a huge opportunity. But at the same time, that also means that if you multiply the number of reactors that exist around the world, you of course also increase the risk of something going wrong. The other concern about some of these reactors is that because they are developed by many different commercial actors, when I was preparing for this, I tried to figure out what the most accurate number is at the moment. And the IAEA estimates that at the moment, around 80 different reactor designs or small modular reactor designs are being considered by different kinds of commercial actors. There are concerns about the supply chain security. So you of course have a situation where different components for the reactor, or the steering modules, or whatever you need in order to put this together, are being developed by lots of different commercial entities. And in order to ensure the highest standards of cybersecurity, you also need to have quite a good understanding of that supply chain, and where some of those security risks might come in.
So that’s another concern: just because of the length of the supply chain, the diversity of the supply chain, and the numbers of different actors involved, it might be hard in the end to trace where some of those risks might come in. And then the other component, I think, where it introduces some newer risks, or might actually highlight risks that already exist, but just multiply them, is that, as I mentioned when I spoke about the use cases, a lot of these use cases can be in quite difficult operating environments, or they can be perhaps in regions that are already less well off, and therefore perhaps have less money to spend on cybersecurity. So that’s, of course, a risk that, you know, systems in that region would already be facing, but you then just combine that with the additional risk of nuclear energy. And then generally speaking, there is what I mentioned earlier about this tension between sometimes a really bespoke or unique system also being just a tad more secure, because perhaps it has fewer vulnerabilities or the existing vulnerabilities will be less enticing to exploit. These small modular reactors and micro reactors will, of course, have completely up-to-date software solutions and in many cases off-the-shelf software solutions. So if there are any vulnerabilities that we’re not aware of, then they would, of course, be there as well. But as for the advantages and opportunities, these newer reactors are designed differently. So in some cases the nuclear aspect is already safer by design than it would have been in some older power plants. So that’s, of course, one advantage. And the same also goes for the cybersecurity considerations. So because the awareness of cybersecurity in those systems is much more advanced now than it was even five years ago, or certainly 10 years ago, 15 years ago, there are already many more considerations about cybersecurity in the design and then in the training of potential operators.
So, of course, we need to be vigilant, and, you know, that’s in part why we’re having this panel, because the cybersecurity conversation for civilian nuclear infrastructure needs to go further. But at the same time, I think we shouldn’t let that make us forget about the opportunities that come with some of these new developments as well.

Talita Dias:
Great. Thanks, Marion. So there are both challenges and there are opportunities. And one issue that we will touch on in this panel later is the question of regulation, right. And you’ve mentioned the spread of these reactors in different parts of the world. But, of course, we don’t know how, you know, different states regulate the acquisition and the operation of these small modular reactors. So that’s probably also a risk that we need to be aware of. Now, pivoting from peacetime to wartime, and I know that the war in Ukraine is on everyone’s minds at the moment, as it should be, and not just the situation in Gaza; hopefully we haven’t forgotten about that. So I want to talk about the new and existing risks against civilian nuclear infrastructure that have resurfaced in the context of the war. Are there any particular risks that we need to be worried about because of the war, Marion?

Marion Messmer:
I mean, what we’ve seen happen around the Zaporizhzhia power plant in Ukraine has, of course, been horrendous. And I think one of the really new things there, or maybe not new, but a rare occurrence, is a nuclear power plant being caught directly in war and being directly on the front line. So the combination of physical and cyber attacks taking place at the same time is something that I suppose we were worried about, but that luckily doesn’t happen all that often. The personnel at the Zaporizhzhia plant has been incredibly dedicated. Many of them have stayed in place despite the risks to their own lives, but the power plant has had to operate with reduced personnel on site who are, of course, now working under much more stressful conditions and much more uncertain conditions. And so I think the combination of there being physical attacks that are very regular over a prolonged period of time, at certain points in time being quite constant, and then also having to worry about cyber attacks at the same time, which, of course, have taken place all over Ukraine with regularity, has created a particularly difficult-to-manage environment. The results of that could be severe. We’ve not seen that so far, of course,
But five of the six reactors have been in cold shutdown for several months now. And then there is a sixth reactor which has been in hot shutdown because they’ve had to use some aspects of the reactor for safety operations. But the IAEA has monitored all of that and has tried to support the personnel at the power plant. So what we have been worried about, specifically with Sapariza and specifically early on in the war, is that a potential loss of power or disconnection from the grid could interfere with the cooling system for the reactor. So that’s when you could get into a reactor meltdown situation, which could, of course, have devastating consequences. So, yeah, there were various mitigating steps taken to make sure that those risks were managed a little better, such as ensuring that there were plenty of backup generators on site so that cooling could still take place.

Talita Dias:
Thanks. Thanks, Marion. So I’ll pause the chat with Marion for a second because we want to hear your views on this. And so, Rowan, over to you.

Rowan Wilkison:
Yeah, so we have the next question on the Menti. So bearing in mind what you’ve just heard, thinking about both stable and context of instability, why should we be worried about cyber operations targeting civilian nuclear infrastructure? And for those of you that have just joined the session, please feel free to take part in the polls that we’re running, because we’d love to hear your views on this topic. Sorry, we seem to be having a problem with that one. So I think we’ll leave that one for today. Yeah, or maybe go back to it later. Okay, so Marion, back to you.

Talita Dias:
Now, since we’re talking about risks and what we should be worried about, what is the actual likelihood of all these risks that we have been discussing on the environment, on lives of individuals, on health, on reputational harm of international institutions, equipment malfunctioning? What is the actual likelihood of these risks materializing? And a related question is, what would be the consequences? You know, concretely, what would be the consequences? Do you agree with the responses that have been provided in the previous question about the consequences of those risks materializing, for example, health, environment, and the international system?

Marion Messmer:
Yeah, I mean, it’s really hard to say how likely it is that those risks might materialize. I think what’s important is that the consequences could be severe. And so we have to take the risks seriously and we have to do our best to mitigate those risks. As I already mentioned a little in my previous answer, in the Zaporizhzhia case especially, while of course there is still a risk there, for the time being, I’m a little reassured by the accident safety operations that are taking place there, and also by some of the other mitigating steps that have been taken. The other thing I would also say in that regard is that we heard a lot, especially early on in the war, that Zaporizhzhia could lead to the next Chernobyl. But there is a significant difference in how the reactors are designed at Zaporizhzhia versus at Chernobyl that would actually make that outcome less likely. So I’m not trying to say people should be complacent. These risks are very severe. And if something were to happen, then that would have really grave consequences. So we need to be vigilant. But in terms of people being overly worried or seeing another Chernobyl-type situation on the horizon, I think there are reasons why that is less likely than people might have feared. And the other thing I would say is that, you know, as you mentioned in your introduction, we’re hoping that nuclear energy can play a really important role in the energy transition, in moving towards net zero, and in ensuring that we’ve got a more stable energy supply while we are trying to figure out, you know, sustainable and renewable types of energy, so that we can hopefully slow or halt climate change. And what really worries me in that regard is that what we have seen in Ukraine, this combination of nuclear power plants being caught in conflict, could actually happen
more frequently around the globe, because if more countries end up using nuclear energy as an important part of their energy supply, and you also mentioned the increasing frequency of cyber attacks, then I think this unique combination of a power plant or other types of energy infrastructure being caught in conflict might become a much more frequent occurrence. So if we can think about now what we can do to manage that situation for the future, then that’s going to leave all of us much better off.

Talita Dias:
Thank you, Marion. Now, to summarise or to get your views on what we just discussed in this segment of our panel, we will go back to Menti with a survey. This time, hopefully it will work. Rowan?

Rowan Wilkison:
Yeah, fingers crossed. I think we’ve fixed it now. So this one is about the risks that we’ve just heard. So we’re wondering which of these risks worries you the most. So we have some five different options here for you to choose from. So picking your kind of priority option, we’ve got disruptive cyber operations.

Talita Dias:
For example, ransomware attacks, distributed denial of service attacks. We’ve got information operations like disinformation, propaganda, misinformation revolving around nuclear energy, which have occurred in the context of Ukraine, for example. We’ve also got data gathering or surveillance operations, so basically, SolarWinds, for example: operations that try and get access to sensitive nuclear data. We’ve got physical effects of these operations, for example, what happened with Stuxnet in Iran, where lots of centrifuges stopped working. And as Marion said, there is a risk of a new Chernobyl, of a cyber-generated Chernobyl, even though that risk might be more remote now. And we’ve got non-physical effects that we have discussed already, such as effects on the reputation of the international system. And, going back to physical effects, we can’t forget about health and the environment. So just vote there; we want to see what you think. And as Michael just reminded me, there are also the psychological effects of information operations, the fear of nuclear holocaust and war as well. That’s a good one. Ready? Okay, so let’s see what you voted on, the results of this. Okay, so most people are worried the most about physical effects. That’s what I answered, which makes sense, given that at the beginning a lot of people mentioned radiation. Then disruptive cyber operations; I think that’s because they carry the most risk of, you know, interrupting the energy supply, for example, or destroying power plants. Information operations come in third place, data gathering operations in fourth, and the non-physical effects come in fifth place. That’s interesting; keep that in mind. So, moving on. And thank you, Marion, so much again for joining us so early for you; it was great. So let’s move to the second part of our panel, which is about technical and policy approaches to protect civilian nuclear
infrastructure from cyber operations. For that, we have a conversation with Tariq Rauf, formerly of the IAEA. We’ve got Giacomo Persi-Paoli at UNIDIR, and we’ve got Michael Karimian at Microsoft here. So I’m going to start with a question for Tariq. Tariq, do we have any international technical standards on how to mitigate those risks and consequences that we have been talking about?

Tariq Rauf:
Well, yes, at the International Atomic Energy Agency, cybersecurity, which is usually referred to here as computer security of nuclear facilities and nuclear materials, is considered a subset of nuclear security. And nuclear security is the responsibility of the state and the operator. And while there are international conventions, such as the Convention on the Physical Protection of Nuclear Material, as amended, the primary responsibility still remains with the state and the operator. And the IAEA has issued more than 30 documents on guidance, recommendations, and fundamentals of nuclear security, and there is a parallel sub-series of guidance and recommendations on enhancing cybersecurity or computer security. And in the discussions here, cybersecurity or computer security also has implications for nuclear safety. So there are two aspects to it, not only the security of the facility and the material and the integrity of the instrument control system, but also the safety of the nuclear facility, because as we discussed a little bit in the first session, we are dealing with radioactive materials, and containment of reactivity, or of the release of radioactivity from an operating or a shutdown nuclear facility,
is one of the highest objectives of nuclear safety. There’s also consideration of ensuring that the heat removal and the cooling system of a nuclear reactor, whether in operating status or shutdown, is not compromised. And then also there is the confinement and control of nuclear materials, whether in spent fuel bundles in cooling ponds or nuclear fuel bundles stored inside the reactor that are cooling down, and then also the fuel in a reactor itself. One important element here is to ensure that there is no loss of coolant. There has been at least one incident where it is suspected that because of a malicious cyber attack, some coolant was leaked from an operating nuclear facility, but the control room managed to detect it early on and they shut off the pump that was discharging water from the cooling system. Later on, I can give you more details about specific IAEA documentation and guidance.

Talita Dias:
Great, thanks, Tariq. I can see that there are some comments or questions in the chat, and I want this to be as interactive as possible. So maybe we should take the questions now. So apologies for mispronouncing the name in advance. So Tumi is saying he thinks that by reusing old submarines and adding SMRs, I’m not sure what that means, but, okay, small modular reactors, great, to generate electricity permanently under the sea, we’ll be able to isolate ourselves from problems on land. And then Tyrell says it’s a great idea. What do you think, Tariq, or maybe Marion, do you want to? I know it’s bringing the Q&A into the session.

Tariq Rauf:
So we do have a floating reactor that is operating in the Russian north. This is actually a modified reactor from a nuclear propulsion unit of an icebreaker. There are nuclear-powered submarines, but at the moment there is no consideration of using submerged small and medium-sized reactors for power generation. Of all of the designs that were referred to, there are about 80 designs currently under discussion, of which about three are close to maturity for first-of-a-kind testing, but these are all land-based. Now, one advantage of SMRs and MMRs is that these are sealed reactor units, as compared to large nuclear power plants, which need to be refueled partially or completely every year or every few years. So that is one inherent, in-built safety consideration for SMRs and MMRs. But nonetheless, one needs to ensure that the integrity of the instrument control system and regulation of the reactor itself is not compromised. The instances of compromise that have occurred usually have been through back doors, either left open by contractors so that they could do the servicing sitting at home or from their office, or inadvertent back doors that were created through the use of USB sticks that were inserted into some part of the computer system in the facility, although it is strictly prohibited to bring in any outside USB sticks or other data-carrying devices and to insert them into the computer systems of nuclear facilities.
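The USB back doors Tariq describes are, in practice, often countered with removable-media allowlisting, where only facility-issued, individually serialized devices are permitted. The following is a minimal hypothetical sketch of that kind of control; the device IDs and names are invented for illustration, and a real facility would enforce this at the operating-system or endpoint-protection layer, not in application code.

```python
# Hypothetical sketch of a removable-media allowlist check, illustrating the
# kind of technical/administrative control described above. All device IDs
# and serials below are invented for illustration.

# (vendor_id, product_id, serial) tuples for facility-issued media only.
APPROVED_DEVICES = {
    ("0951", "1666", "FACILITY-USB-0001"),
    ("0951", "1666", "FACILITY-USB-0002"),
}

def is_authorized(vendor_id: str, product_id: str, serial: str) -> bool:
    """Return True only for facility-issued, individually serialized media."""
    return (vendor_id, product_id, serial) in APPROVED_DEVICES

def handle_device_event(vendor_id: str, product_id: str, serial: str) -> str:
    """Decide what to do when a removable device is detected on a host."""
    if is_authorized(vendor_id, product_id, serial):
        return "allow"           # facility-issued stick: permit mount, log it
    return "block-and-alert"     # anything else: block access, raise an alert

print(handle_device_event("0951", "1666", "FACILITY-USB-0001"))  # allow
print(handle_device_event("abcd", "1234", "PERSONAL-STICK"))     # block-and-alert
```

The design choice here is deny-by-default: an unknown device is never mounted, and every detection event is a logged decision, which supports the prevent-detect-respond approach discussed later in the session.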

Talita Dias:
At least in theory. Okay, Marion, do you want to comment on that or should we move on?

Marion Messmer:
I can just add one bit, because what I wanted to say is that, you know, even if it seems tempting to, for example, put small modular reactors or other types of reactors underwater to keep them away from land, you have to remember that, of course, the ocean is also part of our ecosystem. So even if there were a radiological incident underwater, it would still have pretty severe consequences for that environment. And the water will, of course, mix, so the radiation would still spread. So while it wouldn’t be the same kind of fallout that we would get if it was in air, it’s not just out of sight, out of mind in that sense. And yeah, we also drink some of the water that comes from the sea.

Talita Dias:
So that’s a very important point, Marion. So I want to go back to Tarek, and I want you, Tarek, if you can, to take us a little bit through the IT security guidance for nuclear facilities that the IAEA has produced for member states that have operational nuclear power plants or nuclear fuel cycle facilities. Can you talk to us a little bit more about these documents, these over 30 documents that the agency has issued?

Tariq Rauf:
So the way the IAEA is approaching this is in cooperation with IAEA member states. It is not just the bureaucracy of the International Atomic Energy Agency that produces this guidance or these recommendations. They do it in concert with technical experts from the IAEA’s 176 member states, those that are interested, and this is an interactive process between the technical experts of member states and the experts of the IAEA Secretariat. Jointly they draft and produce these documents, which, once approved, become the guidance, recommendations or fundamentals. So computer security measures in the context of cybersecurity for nuclear facilities, as discussed and considered at the IAEA, are to prevent, detect, delay and respond to criminal or other intentional or unauthorized attacks; then to mitigate the consequences of such attacks; and to recover from the consequences of such attacks. Computer security measures can be assigned to one of three categories: technical control measures, facility control measures or administrative control measures. The agency has been actively involved in developing these, and they’ve come up with a taxonomy. Number one is defense in depth: having a defense-in-depth approach to cybersecurity, with multiple layers of security controls and measures to protect nuclear facilities, including physical security, network security, access controls and monitoring. Number two is risk assessment: nuclear facilities should conduct a comprehensive cybersecurity risk assessment to identify potential vulnerabilities and threats, and this assessment forms the basis for developing appropriate security measures. Number three is to institute security policies and procedures, which is to establish and implement cybersecurity policies and procedures tailored to the specific needs of particular nuclear facilities. This is called the design basis threat: designing security policies and measures specific to a particular nuclear facility, its technological peculiarities and the risks that that particular facility might face. Then, of course, there are obvious things such as access controls, network security and patch management. Incident detection and response is an increasingly important element. As you mentioned in your introduction, the IAEA is subjected to daily cyber attacks on its systems from different sources. Some are trying to access the highly confidential safeguards information; some are just opportunistic attacks. For my colleagues in the IAEA’s IT sector, this is their biggest challenge: to make sure that there is no intrusion into the IAEA’s computer systems. They are very proud that they have managed to detect and counter all of these attempted attacks on the system. But we say nuclear security is not an end, it’s a journey. Cybersecurity is the same: as the threats evolve, the responses also need to evolve, so to speak. Then there are also, of course, encryption and physical security. An important element is to do security audits and assessments on a continuous basis, to see if new vulnerabilities have come in, including supply chain vulnerabilities. Other important issues are information sharing, international cooperation, training and awareness, and then capacity building. This is one of the IAEA’s biggest activities. Every year, the IAEA holds hundreds of sessions, both at headquarters here in Vienna and in different cities, to build and strengthen the capacity and training of nuclear facility operators.

Talita Dias:
Sorry, Tarek, is there anything else that you want to say about the guidance?

Tariq Rauf:
You sent me some questions, so I will come back when you get to the next question, where I can cite some of the specific IAEA documentation, which is all freely available on the internet. It’s not password-controlled, and people can download the PDFs. A lot of this is quite technical, but it’s all up there.
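[Editorial illustration] The taxonomy described above, with computer security measures grouped into technical, facility and administrative control categories, can be sketched as a toy checklist model. This is purely illustrative and is not an IAEA tool; the control names below are hypothetical examples, not items from the agency’s guidance.

```python
# Illustrative sketch only: a toy model of the three categories of computer
# security measures mentioned above. Control names are hypothetical examples.

REQUIRED_CONTROLS = {
    "technical": {"network_segmentation", "access_control", "endpoint_monitoring"},
    "facility": {"physical_access_control", "portable_media_ban"},
    "administrative": {"security_policy", "incident_response_plan", "staff_training"},
}

def assess(implemented):
    """Return the missing controls per category (a crude 'risk assessment')."""
    return {
        category: sorted(required - implemented.get(category, set()))
        for category, required in REQUIRED_CONTROLS.items()
        if required - implemented.get(category, set())
    }

# A hypothetical facility that has implemented only some of the controls.
facility = {
    "technical": {"network_segmentation", "access_control"},
    "facility": {"physical_access_control", "portable_media_ban"},
    "administrative": {"security_policy"},
}

gaps = assess(facility)
for category, missing in gaps.items():
    print(f"{category}: missing {', '.join(missing)}")
```

The point of the sketch is simply that a gap assessment has to be run per layer: a facility can be fully compliant in one category (here, facility controls) while still exposed in another.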

Talita Dias:
Great, thanks. So you’ve mentioned a lot of guidance, a comprehensive range of best practices covering every step of nuclear cybersecurity, from design to implementation to risk mitigation and so on and so forth. But all of that guidance, as the name suggests, is non-binding; these documents are not mandatory for states, right? So I want to ask you whether any of the measures that have been proposed or recommended by the IAEA have been adopted under the Convention on the Physical Protection of Nuclear Material, which is a binding instrument under international law.

Tariq Rauf:
Well, this unfortunately is the situation when we are dealing with sovereign states. The Convention on the Physical Protection of Nuclear Material, as amended, is only binding on those states that have acceded to it, unfortunately. It is not universal international law that if a country has nuclear material and nuclear facilities, it must be a party to the CPPNM. So the way around it is that for those countries that have signed on to it, it is internationally legally binding. Now, the amendment to the CPPNM, which took place in 2005, was mainly to extend the scope of the CPPNM to cover nuclear material in peaceful uses, in domestic storage, and in international transport. But unfortunately, states parties were not able to agree on the application of the CPPNM to military nuclear material. And as you know, we have had five nuclear security summits. People only remember four of them, the ones that started in 2010 in Washington, but the very first one was in 1996. Some 83% of the world’s dangerous nuclear material, that is, highly enriched uranium and plutonium, is in the custody of the nine countries with nuclear weapons, and it is completely outside of any international accountability or monitoring. Only 17% of the material is under International Atomic Energy Agency safeguards, and as part of a state’s safeguards agreement with the IAEA, physical security and safety is obligatory. And then, as we just mentioned, cybersecurity, being a subset of nuclear security, is also something that the state needs to implement. So even after the Fukushima accident, there were attempts to make the CPPNM mandatory and compulsory for all of the 31 states that operate nuclear facilities. At the moment, only Iran remains outside: a country that has an operating nuclear power plant but has not yet acceded to the CPPNM as amended, nor to the Convention on Nuclear Safety.
So this again is the tussle between, on the one hand, protection of national sovereignty, and on the other hand, protecting against cyber and other malicious attacks, because the effects of those will be transboundary. They will not be limited to the territory of the affected or accident state. As Chernobyl showed, as Fukushima showed, we have transboundary transport of radiation, and that is the biggest concern as regards a disruptive cyber attack on a nuclear facility that results in the release of radioactivity.

Talita Dias:
Thanks, Tarek, and the effects stretch in time as well, because even this year we’ve had issues about the disposal of water from Fukushima. Thanks for clarifying the scope of the convention, and I guess what you said just highlights the importance of international law and strengthening it, and of discussions that might lead to new norms and rules on this issue. I’m going to turn over to Rowan just for another question for everyone here in the audience and online. So based on what we have just heard from Tariq, in your opinion, should there be enhanced interaction and cooperation on cybersecurity between agencies like the IAEA and the tech industry?

Rowan Wilkison:
We’ll give a little bit of time just as people come into it. So I guess that’s clear unanimity here on yes, right?

Talita Dias:
And we’ll come back with Michael to this point about cooperation and the role of the tech industry in tackling all of the issues we’ve been discussing. But moving on: Tariq mentioned state sovereignty. He also talked a little bit about international law and the role of the Convention on the Physical Protection of Nuclear Material, and I’ve mentioned the need for states to discuss this issue more often. So I want to turn to Giacomo, who is joining us from Geneva. Hi, Giacomo. Hi, good morning. And I want to ask you: how has the protection of critical infrastructure been discussed in the context of the Open-Ended Working Group on the security of information and communications technologies, also known as the UN OEWG? Thank you.

Giacomo Persi Paoli:
Thank you, Talita, for the question, and thank you also for inviting me. It’s great to be able to participate in this great panel. So let me give you a 30-second summary of 25 years of history before I get specifically into the question. I think this summary is useful particularly to those in the audience who may not be too familiar with the various UN processes and the jargon associated with them. So states have been discussing international cybersecurity for a long time; in fact, this year is the 25th anniversary of the first draft resolution on this topic, put on the table by the Russian Federation in 1998. Since then, we have had six iterations of a process called the Group of Governmental Experts. Now, this is a closed-door process that on average involves about 20 countries, of which five are always the P5, the five permanent members of the Security Council, and the others are invited to join. The specificity of this process is that the only public trace that exists is the mandate that sets up the process and the report at the end of it, which means that there isn’t really a lot of visibility as to what the discussions actually are. And if states do not agree on a consensus report, the report that we have at the end of the deliberations is a very procedural one that says, you know: we came, we met, we didn’t agree, move on. Now, the situation started to change in 2019, when, in parallel with the last, at least to date, Group of Governmental Experts, another process was set up: the Open-Ended Working Group. The Open-Ended Working Group is open to the entire membership of the UN. It has a multi-stakeholder component to it as well. But most importantly, it’s all public. All statements that are made can be consulted online. All sessions can be followed on UN TV. And the chair has the opportunity, even if there isn’t a consensus report, to publish its own summary.
So there is definitely more visibility into the actual workings of the process. Coming to your question, I think it’s important to realize that one of the most significant achievements that states collectively reached dates to 2015, when a framework for responsible state behavior in cyberspace was adopted. As part of this framework, there are 11 norms, and critical infrastructure is probably the topic that features the most, either directly or indirectly. Three of these 11 norms focus on critical infrastructure, whether by calling on states to protect their own critical infrastructure or by calling on states not to target the critical infrastructure of others. And then there is a dedicated norm focused on ensuring that international assistance is provided to those states whose critical infrastructure is targeted by cyber attacks. Now, these three norms make explicit reference to critical infrastructure, and there is a whole set of others which are more indirectly related, particularly on vulnerability disclosure or supply chain security; you can see how these topics may be indirectly relevant to critical infrastructure protection as well. I’m probably bridging to the next question here, but by design these norms did not, until very recently, go into much detail about which types of critical infrastructure are covered. The OEWG is probably not the correct forum for an in-depth discussion of how each of these general-purpose norms applies to specific sectors or specific types of infrastructure. However, it is definitely a topic that has been discussed quite extensively, both in relation to how the threats are evolving, in relation to how the norms can be implemented, and in relation to what some of the consequences could be from an international perspective.

Talita Dias:
Thanks, Giacomo. So in your opinion, it’s not the best forum to discuss specific risks to particular types of critical infrastructure. But to your mind, and you’ve been deeply involved in this process as part of UNIDIR, has any state specifically raised the issue of cyber nuclear risks within the OEWG, or perhaps in other UN forums? Can you remember if any state has ever raised this issue?

Giacomo Persi Paoli:
So it’s a very interesting question, because if you look at the consensus reports, we couldn’t really find any explicit reference to nuclear issues. However, the discussion is evolving. States individually, in their national submissions to the Secretary-General, who then compiles all of these submissions and releases a report, do flag the nuclear security issue, characterized in different ways: whether by expressing growing concern over the threats that cyber capabilities and cyber operations can pose to civilian nuclear infrastructure, so more on the threat side, or by highlighting some of the efforts they have put in place at the national level to protect their nuclear infrastructure as part of wider interventions. Some states have dedicated national cybersecurity strategies designed specifically to protect their nuclear infrastructure. So there is definitely a lot more going on at the national level that is flagged in the context of the OEWG. But if you look at how the OEWG discussions have been evolving, they went from being very general to, in 2021, partly as a result of the pandemic and the sheer increase in cyber attacks across all sectors of society, including critical infrastructure, a report that did mention critical infrastructure types explicitly, such as medical, energy, or financial infrastructure, et cetera. So we are going down the path of discussing these topics more broadly. But my personal sense is that as long as there isn’t a dedicated forum for states to discuss implementation, not just the normative framework but the actual implementation of these quite general-purpose norms, it’s going to be difficult for states to really go deep into any of these topics, simply because of the limited time they have available.
However, I think it is important to acknowledge that the topic, despite not necessarily being captured in consensus reports, is being flagged by an increasing number of states in their national capacity when they make their interventions.

Talita Dias:
So maybe it’s just a question of some of those states bringing the issue to the general fora that the UN offers for these discussions, and maybe we should take up your idea of a more concrete, implementation-focused forum. Thanks, Giacomo, for your thoughts and your input. I’m now going to turn over to Michael, and perhaps Giacomo and Tarek will want to comment on this point too, which is about the role of the tech industry in addressing those risks. So Michael, what is the role of the tech industry? You work for Microsoft, so what does Microsoft have to say about this?

Michael Karimian:
Thank you, Talita, not just for being our moderator, but of course to yourself and the team at Chatham House for being essential partners in this session and the broader project, and the same to Priya from the University of Oxford. I’d like to underscore a couple of key points in this regard. One is that, as Marion mentioned, there are many companies who supply ICT infrastructure to the industry we’re looking at. The tech sector plays a central role in providing the digital solutions that underpin quite a broad range of operations, safety and security of nuclear systems, but also, to be frank, mundane, everyday processes and applications like payroll or accounts receivable. Because of that, there are many entry points into the IT systems, so the risks are quite broad, and as Marion mentioned, the supply chains are very deep. As we’ve been discussing, there is a convergence of cyber and nuclear risks, which poses quite a serious threat to national security and global stability. So with that in mind, I think it’s important to recognize that, as a provider of these systems, we have quite serious responsibilities accordingly. To address these risks effectively, the tech sector can and should take a number of proactive steps, including but not limited to, of course, cybersecurity by design: prioritizing the cybersecurity of systems from the very inception of products and services and embedding security into the design, development and deployment processes. Doing so will go a long way toward reducing vulnerabilities and strengthening the overall resilience of nuclear systems. Continuous innovation is very important: as we’ve been discussing, the threat landscape is ever-evolving, and therefore continuously innovating to stay ahead of cyber adversaries is essential.
That requires actively researching, but also sharing, threat intelligence to detect and respond to emerging risks, and doing that with governments, international organizations, and other stakeholders. So a degree of transparency and threat sharing from the tech sector is also very important. Equally, education and training play quite an important role. Tech companies can be pivotal in educating and training the end users and administrators of their technologies, so providing guidance on cybersecurity best practices is essential too. And of course, multi-stakeholder engagement has already come up as a topic in this session. Collaboration is key to addressing the complex challenges we’re discussing here today. The tech sector, big and small, should be actively engaging with governments, civil society, and other companies to jointly tackle these cybersecurity issues. We already see initiatives doing that broadly, like the Cybersecurity Tech Accord, which promotes collaboration and the protection of critical infrastructure. That’s a prime example of these efforts, and we can delve into them more in this session.

Talita Dias:
Thanks, Michael. Giacomo and Tarek, do you have any thoughts, comments or reactions to what Michael just said about the role of the private sector in addressing those risks? Tarek, would you like to comment on that?

Tariq Rauf:
Absolutely. I completely agree with what Michael just said. Again, there is the issue of state sovereignty. International organizations like the IAEA are based on interactions with states, not with other actors such as industry. However, this pattern is changing, and more and more, industry is being brought in to provide its expertise and experience in delivering technological solutions to these new problems. But a main challenge for an international organization like the International Atomic Energy Agency, which deals with highly classified information about the nuclear activities of over 180 states, is the risk of penetration into its systems by state actors, not so much non-state actors, given the high politics involved. And Talita, in your introduction you mentioned the cyber attacks on Iran’s enrichment facilities, Stuxnet and Olympic Games. Those were state-originated threats, and they are still continuing because of the high politics here. So I don’t want to name states, but there are no innocent parties, so to speak. Anyone can be a threat to the IAEA’s computer security here in Vienna. And then the IAEA has to buy commercial products. One product that the IAEA bought some time back was Palantir, which is used to manage big data. Palantir was originally developed for intelligence agencies. An international organization’s IT experts will never be able to match the offensive IT capabilities of states if those states choose to deploy them against the IAEA. So there is this in-built suspicion, which is one potential roadblock for the IAEA’s interaction with industry beyond a certain level. And I think we need to overcome this and build more trust and more patterns of cooperation and interactivity.

Talita Dias:
Thanks, Tarek. Giacomo, one or two thoughts very quickly?

Giacomo Persi Paoli:
Yes, conscious of the time, very quickly. I can only agree with both Michael and Tarek here. I think it’s important to note that, even in relation to what we’re discussing, the OEWG covers state behavior: it is discussed by states to regulate or guide their own behavior. It doesn’t deal with threats coming from non-state actors, which can be significant. But I think the private sector here can play a significant role in helping states develop capacities and providing capabilities. Public-private partnerships have been flagged at almost every single session as a way forward that really needs to be investigated as a general-purpose tool to increase the cyber resilience of member states. And this includes the energy sector and, in particular, the nuclear one. So absolutely, it is key that we bring the private sector along on the journey.

Talita Dias:
Great, so multi-stakeholderism is a recurring theme in this Internet Governance Forum, and we also need it to protect civilian nuclear facilities from cybersecurity threats. So that’s the main lesson, the main takeaway from this discussion so far. So Michael, back to you: what best practices or recommendations have been developed by the tech industry operating in the civilian nuclear sector, including Microsoft itself?

Michael Karimian:
So I’ll speak on behalf of Microsoft and say that actually we haven’t developed guidance specific to the nuclear sector. The reason being that although the outcomes of the risks are differentiated, the underlying cybersecurity risks are almost universal. We see the same risks applying to all sectors across the board. And it’s surprising, the sort of gaps that are out there: 80% of incidents can be traced to missing security practices, which can be solved by quite basic modern approaches. Over 90% of accounts that have been compromised by password-based attacks did not have multi-factor authentication or any strong authentication in place. According to one study, 78% of devices are not patched within nine months of a critical patch being released, and the share of users who use multi-factor authentication is actually only around 26%, which is pretty low. But what’s interesting here is that attacks by nation-state actors can be technically sophisticated; however, many of these actors use relatively low-tech means, such as spear phishing, to deliver quite sophisticated malware into systems. And actually, the case in Germany was mentioned earlier: as publicly reported, the entry point there was a user bringing in a USB stick, and the rest is history, so to speak. So a lot of these issues can be mitigated by good, yet basic, cyber hygiene practices that are holistic, adaptive, and global in nature, and a lot of that can happen better in the cloud than on-premises.
So the general guidance, which would apply to all sectors including this one, is to protect the identity of users, apply updates as soon as possible, use extended detection and response, anti-malware and endpoint detection solutions, enable the auditing of key resources, and, quite importantly, prepare incident response plans. That’s actually very much aligned with the IAEA guidelines, which really speaks to the strength of the guidance they have produced.
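[Editorial illustration] The basic hygiene measures listed above, multi-factor authentication, timely patching, endpoint detection, auditing and incident response planning, can be sketched as a simple audit routine. This is a hedged illustration only: the field names and the 30-day patch threshold are assumptions for the example, not Microsoft or IAEA guidance.

```python
from datetime import date

# Illustrative sketch of a basic cyber-hygiene audit over the measures listed
# above. Account/host field names and the 30-day threshold are assumptions.

def audit_account(account, today=date(2023, 10, 1)):
    """Return a list of remediation findings for one account or host."""
    findings = []
    if not account.get("mfa_enabled"):
        findings.append("enable multi-factor authentication")
    patched = account.get("last_patched")
    if patched is None or (today - patched).days > 30:
        findings.append("apply pending security updates")
    if not account.get("edr_agent"):
        findings.append("install endpoint detection and response agent")
    if not account.get("audit_logging"):
        findings.append("enable auditing of key resources")
    return findings

# A hypothetical host that is missing two of the basic controls.
host = {
    "mfa_enabled": False,
    "last_patched": date(2023, 1, 15),
    "edr_agent": True,
    "audit_logging": True,
}

for finding in audit_account(host):
    print("-", finding)
```

The sketch mirrors the statistics quoted above: most findings such an audit would surface are exactly the low-tech gaps (no MFA, stale patches) rather than anything exotic.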

Talita Dias:
Thank you, Michael. The question now is putting all of this together: what the IAEA has already put out, what the private sector has advised operators to do, and what states have agreed, or are willing to agree, to do in this sector. So I want to pivot to the third segment of our workshop today, which is about international law and norms. For this segment we have Priya Urs from Oxford joining us online, and Tomohiro Mikanagi from the Japanese Ministry of Foreign Affairs, and Michael will also join us for this discussion. I’ve noticed that there are a couple of questions about international law, international regimes and agreements, so I’m going to take those questions from the chat later and put them to our panellists in this segment. But I want to start with Priya and Tomohiro with a very general question about the applicability of international law to all of the issues we have been tackling today. To what extent can a cyber operation that targets civilian nuclear infrastructure breach existing rules of international law? I don’t know who wants to start, but maybe Priya, because you’re online and it’s very early for you. Do you want to kick off?

Priya Urs:
Absolutely. Thank you so much. And it’s been a fascinating discussion so far. I think what’s tough when discussing international law in this context is that, unlike the technical and policy guidance we’ve been talking about so far, international law doesn’t yet have specific rules that prohibit or otherwise address cyber operations. And so, even while states are increasingly recognizing civilian nuclear infrastructure as part of what they call their critical infrastructure, which states suggest should be protected against cyber operations, this hasn’t really translated into specific legal protections. What we’re left with, at least for now, is more general rules of international law that could be applicable in this context. This includes not just treaties, such as the one we’ve already discussed with Tariq, but also rules of customary international law, including the rules governing the use of force by states, the rule prohibiting intervention by one state in the affairs of another, any other conduct that could be prohibited as a consequence of a state’s sovereignty over its territory, and, on the other hand, due diligence obligations for states. And so maybe I’ll just say for now that although none of these rules was actually designed with cyber operations in mind, and certainly not with civilian nuclear infrastructure in mind, they can in principle be applicable to this context. But of course, their particular application could raise some challenges. So I’ll leave it there for now. Thanks.

Talita Dias:
So Tommy, do you want to address this question about international law in general, but also a more specific question that I have for you on sovereignty. So I know you’ve written a lot about cyber and international law. So on top of the general sort of like landscape of international law applicable to this phenomenon, can you talk to us a little bit about the threshold for a violation of sovereignty by a cyber operation affecting critical infrastructure in general? And if that threshold for critical infrastructure in general differs for nuclear infrastructure? Thanks.

Tomohiro Mikanagi:
Thank you. Thank you for inviting me to this wonderful panel. This is a good experience for me, to think about the connection between cybersecurity and nuclear security. Actually, in my mind, these two issues had not been well connected before, but this is a great opportunity to think in depth about them. The sovereignty issue is a really difficult one among international lawyers because of the different positions taken by different countries. It is already well known that the United Kingdom takes the rather specific position that there is no stand-alone obligation arising from sovereignty apart from the rule of non-intervention in the internal or external affairs of states. That is not supported by many states, I must say, but it is a very strong position expressed by the United Kingdom. The other extreme is probably the position taken by France. France is saying that any effect caused by a cyber operation in the territory of the country would amount to a violation of sovereignty. In between, there are several other countries, like the US, Germany, and maybe Japan is also part of this group, which set a certain level of harmful effect caused in the territory that would amount to a violation of the sovereignty or territorial integrity of the state. There is no consensus, but I think there is a general tendency toward agreement that the more serious the effect of the cyber operation, the more likely states are to accept that it is unlawful under the rule. So I think cyber operations targeting nuclear facilities are more likely to cause more harmful effects. If that is the case, states should be able to agree on the unlawfulness of that particular kind of cyber operation. But this does not necessarily mean there is a lower threshold for cyber attacks against nuclear facilities.
Rather, nuclear facilities are more vulnerable and more likely to suffer severe, serious physical and other effects, so they should, I think, secure more support from states when we are talking about the application of the rules of sovereignty.

Talita Dias:
So it’s more a question of fact than law, right? So the law would apply a bit differently to the fact of an attack against a civilian nuclear infrastructure than other types of critical infrastructure because of the severity of harms. And because of that factual difference, then maybe states will be driven to agree on the applicability of sovereignty in this space. Thanks Tomo. So I’m now going to go back to Priya and talk a little bit about another important principle of international law that plays out in this context, which is the principle of non-intervention. So what is the relevance of the principle in this context? And in particular, could a cross-border cyber operation against a civilian nuclear infrastructure constitute an unlawful intervention, breaching the principle of non-intervention? Priya?

Priya Urs:
Thank you. I think this is an interesting question alongside the sovereignty issues that Tomo was discussing. The prohibition on intervention is interesting because states widely agree that such intervention is prohibited, but there is a serious lack of agreement as to what kinds of activities are actually prohibited under the rule. There are essentially two requirements for intervention to be unlawful, which will equally apply in the context of cyber operations targeting civilian nuclear infrastructure. The first requirement is that the intervention has to address the internal or external affairs of a state. When we think about civilian nuclear infrastructure, which is responsible for generating energy, I think it is quite easy to satisfy this requirement and to make the case that the intervention does address a matter falling within a state's internal affairs. The second requirement for unlawful intervention is somewhat more tricky, because the intervention needs to coerce the targeted state, or be coercive, in order for it to be unlawful. And there seems to be quite a lot of disagreement still as to what actually amounts to coercion. The general view is that conduct is coercive when it deprives the targeted state of the ability to make a choice or to decide freely with respect to such matters. I think there is also now an emerging view, which could be relevant here, suggesting that if a state deprives another state of its control over the implementation of a policy falling within its internal affairs, that could also be coercive. And I think this is relevant here because if a state adopts a policy with respect to the generation of nuclear energy, a cyber operation that actually disrupts the production of such energy could be coercive and therefore unlawful.
But on the other hand, what this implies is that other kinds of cyber operations, such as those involving surveillance or data breaches, may not be coercive, and therefore may not constitute unlawful intervention, because they are not actually interrupting the implementation of a state's policy. So, just to conclude, a lot of clarity is still needed in the context of the prohibition on intervention, but tentatively, looking at these requirements, it could be that this rule is implicated in the context that we are discussing.

Talita Dias:
Thanks, Priya. I think most would agree that deciding on nuclear policy is part of a state's internal affairs, and insofar as a cyber operation can be deemed coercive, the principle would be violated. Now, bearing in mind a question here in the chat about what happens when someone other than the attacker is to blame: all of the rules we are discussing presume that the cyber operation in question can be attributed to a state. So we are talking about state responsibility, as opposed to the responsibility of individuals. Maybe I should jump straight into that question of individual involvement in cyber operations, because it has come up in the chat. Tomo, can you talk to us a little bit about the rule, or principle, of due diligence, which precisely addresses this question? When a non-state actor is involved in a cyber operation and the operation cannot be linked to a state, what are the obligations of states? What does international law have to say when the operation comes from a non-state actor? Tomo.

Tomohiro Mikanagi:
Thank you. Yeah, the content of the due diligence obligation is probably not really defined by international law, but when we talk about it, we often think about something announced by the International Court of Justice in the 1949 Corfu Channel case, between the UK and Albania. In that judgment, the court mentioned an obligation not to allow knowingly one's territory to be used for acts contrary to the rights of other states. This obligation is interesting because it concerns the territorial state's obligation to prevent or mitigate acts done by non-state actors inside its territory. But because of this unique structure, or feature, of the obligation, there is no clear consensus among states on whether this obligation or principle applies to cyber operations emanating from a state's territory. Again, the UK is probably the most skeptical state in this regard, and the US is also a little bit skeptical about the application of this rule to cyber operations. Japan, Germany, and India are more flexible. But how this principle should apply is not clear yet. In the area of environmental law, there is a more advanced discussion going on. The UN International Law Commission adopted a document called the draft articles on the prevention of transboundary harm from hazardous activities, in 2001, I think. These draft articles are not binding, and they do not specifically talk about cyber operations. But when there is transboundary harm, especially to the environment, there is more agreement among states that a due diligence obligation should apply to the territorial state. So here, again, there is no lower threshold for the due diligence obligation in the area of nuclear security. But I think it is likelier for states to accept the existence of a due diligence obligation in the area of transboundary harm, especially harm close to the environment.

Talita Dias:
Thank you, Tomo. I can see some questions about negligence of the operator, and also questions about accidents. The principle Tomo has been referring to, which is called the no-harm principle and which addresses transboundary harm, also covers non-intentional operations or incidents. So I hope that answers your questions. There is also a very interesting question here in the chat, from Rohana: is it better to develop generic cybersecurity best practices for nuclear plant operators and employees, and to make them aware of those practices? Is there such a global initiative on cybersecurity best practices for nuclear plant operators? Does anyone want to answer that question? Maybe Tariq. And there is also an interesting question about the prospects for a multilateral agreement on the cybersecurity of nuclear facilities. So what do our panelists think? Anyone, feel free to jump in. Tomo?

Tomohiro Mikanagi:
May I respond to the latter question? Since I was given this question about the relationship between nuclear security and cybersecurity, I studied some conventions. Tariq mentioned the Convention on the Physical Protection of Nuclear Material, which was amended in 2005 and now covers nuclear facilities as well. In 2005, we did not discuss the cybersecurity issue in a specific manner. But conceptually, the definition of sabotage in this convention could theoretically cover sabotage through a cyber attack. So I was wondering which path we should take. There is the OEWG and other UN spaces, where states are discussing general norms: can we agree on the existing customary international law rules under the OEWG? Or should we discuss this under the IAEA, especially with reference to this convention, applying or interpreting it to address cybersecurity issues relating to nuclear facilities? There are several paths, but I think this latter path, connected to the existing convention, might be easier, from my personal point of view.

Tariq Rauf:
Could I comment on that? Yes, absolutely, Tariq. So I would suggest that since the IAEA is the internationally designated competent authority to provide regulations for nuclear safety and security and for safeguards, this discussion at one level properly belongs at the IAEA. And in response to the previous question, I will just list some of the guidance that the IAEA has produced that is available to all member states. As I mentioned, nuclear security, including cybersecurity, is still considered by states to be a national responsibility, and they are not willing to have a mandatory, internationally applicable legal framework. This thinking, I believe, needs to change. For example, the IAEA has guidance on computer security techniques for nuclear facilities, on the security of information technology for nuclear facilities, implementing guides for the security of information technology, guidance on the computer security of instrumentation and control systems, approaches to reduce cyber risks in the nuclear supply chain, computer security aspects of the design of instrumentation and control systems at nuclear power plants, incident response planning at nuclear facilities, assistance, and so on. So there is a big body of literature and guidance, but it is up to states and the operators. Nuclear facilities have to be licensed. Most nuclear facilities are state-owned, but some are also privately owned. In order to operate a nuclear reactor or a nuclear facility, the state's regulator provides a license, which is usually valid for one to three years and has to be renewed constantly; otherwise the regulator can shut down operations at the facility. So there is a robust system there, but we need to develop it further to encompass these new and evolving cybersecurity threats.
And my final comment here is that there are also liability conventions, the Paris Convention and the Vienna Convention on nuclear liability. Although these cover accidents, one could also envision the case where an operator has been negligent and their facility suffers a cyber-related incident that causes either nuclear or civil damage: who is liable, and who provides compensation to the affected parties? Great.

Talita Dias:
Thanks. That is a good question for states to take up in their negotiations about future conventions on the topic. Priya, I know you wanted to comment on that as well, but we are running out of time, so we only have three to four minutes, and then I want to end the session with a survey for everyone. Priya?

Priya Urs:
Thanks. Yeah, I will be quite brief, but I just wanted to highlight the importance of getting at the problem from different angles, and I think Tomo put it quite well too. On the one hand, we need to take certain preventative cybersecurity measures, which Michael mentioned as well. But when incidents occur, we also need to be able to address them, including questions of legal accountability. And it probably remains to be seen how useful it will be to apply general rules of international law in this context, and we need to admit where those general rules might not apply and where there may be a need for some sort of further regulation. Whether that actually happens is, as Tariq mentioned, up to states, who at the end of the day must decide whether or not they want to implement certain measures. So I will end it there, thanks.

Talita Dias:
Thanks, Priya. So to end the session, and thanks once again to our brilliant panelists, we have a question for you in the audience online and in person. So in light of everything that we have just discussed, the risks, the initiatives, the approaches that have been developed, Rowan?

Rowan Wilkison:
Yeah, we wanted to ask you, so what else should states, private companies, and all the other stakeholders that we’ve discussed today be doing to address the cyber nuclear risks? So we’ll give just a couple of minutes.

Talita Dias:
Okay, so let's see how you have responded to this survey, what you think we should be doing next. I think the highest priority here, Rowan, is... yeah, we've got modernized

Rowan Wilkison:
cybersecurity in civilian nuclear infrastructure, which scored, oh, it's still moving, 9.1. And then coming in second we've got better understanding the threat landscape, currently at 8.6. I guess that's what Marion spoke to us about at the beginning.

Talita Dias:
We need to better understand the threats, both the types of cyber attacks that are out there and the accidents that might happen, as well as the consequences of those harms. And we also need improved dialogue between the cyber and nuclear sectors; I think that is an important step forward. Now, on law: do we need cyber-nuclear-specific norms, rules or best practices? That got a 6.4, so maybe we should stick to what we already have. Okay everyone, thanks so much for joining us today for this panel. Thanks to our speakers, thanks for your involvement, and thanks for your answers to the survey. It was a real pleasure to be with you today. If you want to know more about our work, just go on our website; we also post things regularly on Twitter. Follow our work and the work of our panelists, and we will keep you informed about future developments in this space. Thanks everyone again, and bye. Greetings from Kyoto. Bye.


Giacomo Persi Paoli — speech speed: 153 words per minute; speech length: 1311 words; speech time: 516 secs

Marion Messmer — speech speed: 171 words per minute; speech length: 2410 words; speech time: 844 secs

Michael Karimian — speech speed: 190 words per minute; speech length: 939 words; speech time: 296 secs

Priya Urs — speech speed: 178 words per minute; speech length: 871 words; speech time: 293 secs

Rowan Wilkison — speech speed: 139 words per minute; speech length: 585 words; speech time: 252 secs

Talita Dias — speech speed: 153 words per minute; speech length: 4668 words; speech time: 1833 secs

Tariq Rauf — speech speed: 144 words per minute; speech length: 2507 words; speech time: 1047 secs

Tomohiro Mikanagi — speech speed: 129 words per minute; speech length: 934 words; speech time: 433 secs

Data Governance in Broadband Satellite Services | IGF 2023 WS #307


Full session report

Uta Meier-Hahn

The analysis explores the topic of internet connectivity and considers various arguments and supporting facts related to its significance for development. It suggests that regions with better internet connectivity tend to progress more rapidly compared to those with limited or no connectivity. This supports the claim that internet connectivity acts as a catalyst for development.

Another important point raised in the analysis is the growing digital divide. As time passes, the gap between regions with adequate connectivity and those without expands further. This emphasizes the urgency to address the issue and find effective solutions to bridge the digital divide.

One potential solution that is highlighted in the analysis is the use of Low Earth Orbit (LEO) satellites. It is argued that LEO satellites require minimal terrestrial infrastructure and can complement the development of fibre and mobile infrastructure. This suggests that LEO satellites have the potential to bridge the digital divide faster than other connectivity solutions.

Furthermore, LEO satellite internet is seen as a valuable resource during times of conflict or natural disasters, when traditional communication networks may become unavailable. This underscores the importance of having alternative means of communication that can remain functional in such challenging circumstances.

The analysis also discusses the benefits of connectivity alternatives. It suggests that offering a range of connectivity solutions can lead to an enlargement of the market and stimulate competition. This variety allows end-users to have more choices, potentially leading to improved services and affordability.

An interesting point made in the analysis is the global nature of the governance of LEO satellite internet. It asserts that all global citizens are stakeholders due to the shared risks associated with the technology, such as potential space debris and environmental costs. This highlights the need for collaboration and cooperation among stakeholders to address these issues effectively.

The analysis concludes by suggesting several recommendations for further action. Countries are encouraged to document and share best practices and explore opportunities to align their interests with providers. This can help in authorizing and licensing LEO systems in a timely manner. Additionally, engaging with financing and investment opportunities is seen as crucial to support the advancement of satellite internet.

Other noteworthy observations from the analysis include the importance of transparency and multi-stakeholder input, as well as the need for research and twinning programmes to further understand and advance satellite internet. The analysis also stresses the significance of quick onboarding and activation of services, and the need for coalition building to foster consumer interest.

Overall, the analysis highlights the positive impact of internet connectivity on development and the potential of LEO satellites in bridging the digital divide. It provides valuable insights and recommendations for countries, stakeholders, and providers to collaborate and work towards achieving better connectivity outcomes.

Akcali Gur Berna

Satellite connectivity and data governance have geopolitical dimensions, especially in Ukraine and Iran. During the Russian invasion of Ukraine, Starlink satellite internet service proved crucial in providing communication support to the war-torn country. However, in Iran, requests for internet restoration were limited due to US restrictions and authorization issues with the Iranian government.

Concerns surrounding data privacy and monopolization have sparked discussions on the need for international treaties to address these issues in the context of satellite broadband. A survey conducted for the ISAAC Foundation-funded research revealed that respondents had concerns about data privacy and suggested an international treaty approach to combat data monopolization. This indicates that global recognition is growing regarding the concerns associated with the data value chain in satellite broadband, and international treaties on data flows and standardization may provide potential solutions.

Certain European Union countries and the UK have licensed Starlink to provide services, but under the condition of compliance with domestic data governance regimes. This shows that countries can employ regulatory measures to address data governance concerns in the use of satellite broadband services. Additionally, major space-faring nations like China and the EU are embarking on their own satellite constellations, citing data governance issues as one of the justifications for these projects.

It is crucial for satellite broadband technology to operate within existing rules and regulations, respecting the importance of the rule of law. This ensures that the deployment and use of satellite broadband services adhere to legal boundaries and prevent potential conflicts. International legal boundaries may restrict broadcasting capabilities in certain countries, and approval is necessary for landing rights and spectrum usage. Turning on satellite services without approval in particular countries would attract international pressure and potentially cause political conflicts.

In terms of domestic regulations, developing countries are advised to reevaluate and update their regulations related to licensing and authorizing satellite broadband services. By reassessing their regulations, these countries can create an environment that promotes the growth and accessibility of satellite broadband while also addressing governance concerns.

In addition, countries are recommended to form regional alliances to enhance the achievement of local policy goals. This collaboration can foster cooperation in addressing common challenges and advancing the benefits of satellite broadband in the region.

Active participation in ITU (International Telecommunication Union) consultations is also encouraged. By engaging in these consultations, countries can contribute to the development of international standards and policies that govern satellite connectivity and data governance.

Countries should also reassess their commitments under trade treaties, ensuring that their satellite broadband initiatives align with international trade agreements and obligations.

Moreover, it is essential for countries to familiarize themselves with space law. Having a comprehensive understanding of space law will ensure that satellite activities are conducted legally and in accordance with international norms.

Finally, a holistic approach is necessary to ensure that satellite broadband initiatives align with sustainable development goals. By considering the environmental, social, and economic impacts of satellite connectivity, countries can maximize the benefits of satellite broadband while minimizing potential negative effects.

In conclusion, the geopolitical dimensions of satellite connectivity and data governance are prominent, particularly in Ukraine and Iran. Addressing data governance concerns through international treaties, regulatory measures, and domestic regulations is crucial for the responsible and effective use of satellite broadband services. Collaboration, active engagement, and adherence to legal frameworks are essential in optimizing the benefits of satellite connectivity and data governance while working towards sustainable development goals.

Dan York

The analysis explores the different aspects of satellite connectivity, specifically focusing on Low Earth Orbit (LEO) satellites and their potential impact on internet accessibility. LEO satellites are seen as a promising solution for providing high-speed and low-latency connectivity, which is crucial for efficient internet access. In comparison, geostationary satellites, which have been providing internet access for many years, have high latency, making them unsuitable for fast connectivity.

The potential of LEO satellites for revolutionizing internet connectivity is highlighted, particularly in terms of their ability to deliver faster and more efficient connections due to their closer proximity to Earth compared to geostationary satellites. Additionally, LEO satellites can be mass-produced and launched in bulk using cost-effective methods, such as reusable rockets, resulting in significantly reduced expenses. However, it is important to note that LEO satellites have a shorter lifespan of around 5 years, requiring continuous deployment to maintain uninterrupted connectivity.
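The latency contrast between geostationary and LEO satellites discussed above comes down mostly to propagation distance. A minimal back-of-the-envelope sketch (the altitudes below are typical published figures used purely for illustration; the 550 km LEO shell is an assumption, not a number from this session, and real round-trip latency adds processing and routing overhead on top of this floor):

```python
# Back-of-the-envelope comparison of the minimum one-way propagation
# delay between a ground station and a satellite directly overhead.
# Altitudes are illustrative: ~35,786 km for geostationary orbit,
# ~550 km for a commonly cited LEO shell (assumption for this sketch).

SPEED_OF_LIGHT_KM_S = 299_792.458  # km/s, in vacuum

def one_way_delay_ms(altitude_km: float) -> float:
    """Minimum one-way ground-to-satellite propagation delay, in ms."""
    return altitude_km / SPEED_OF_LIGHT_KM_S * 1000

GEO_ALTITUDE_KM = 35_786
LEO_ALTITUDE_KM = 550

print(f"GEO one-way delay: {one_way_delay_ms(GEO_ALTITUDE_KM):.1f} ms")
print(f"LEO one-way delay: {one_way_delay_ms(LEO_ALTITUDE_KM):.1f} ms")
```

The roughly 120 ms one-way floor for geostationary orbit (about half a second for a full request-response round trip, which traverses the link four times) is what makes GEO unsuitable for interactive applications, while a 550 km orbit keeps the propagation floor under 2 ms each way.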

Despite the advantages, there are concerns regarding the implementation of LEO satellite networks. One significant concern is the economic, societal, and environmental implications associated with these systems. Affordability and capacity remain major challenges, and the lack of established standards and privacy concerns pose potential issues for future LEO systems. Additionally, there are concerns about data handling through the required infrastructure and the generation of space debris, which can have potential environmental impacts.

The analysis also addresses the issue of regulatory and legal restrictions, which act as significant barriers to the global implementation of satellite internet. Providers must secure landing rights and obtain spectrum approval in each country they seek to operate in. Operating without proper authorization can lead to international pressure and attention, underscoring the need for adherence to legal and regulatory frameworks.

Moreover, the control of satellite internet by a limited number of billionaires, such as Elon Musk and Jeff Bezos, raises concerns about unequal access and power dynamics. The high cost of launching satellites prevents smaller players or community networks from entering the field, potentially exacerbating inequalities in internet access.

The analysis also raises concerns about the potential risks associated with satellite internet, particularly in terms of two-way communication. This vulnerability could make users, especially those in conflict zones, susceptible to targeting or surveillance.

The importance of healthy competition within a regulatory framework is advocated to address potential issues and failures in the LEO sector, as witnessed in the 1990s. Furthermore, the need for regulation is emphasized to ensure equitable access and prevent regulatory capture, which may impede progress or lead to unfavorable outcomes.

While advancements in satellite technology, including mass production capabilities and improved launch capacities, have greatly improved over the past few decades, uncertainties remain regarding the viability and success of proposed systems. Careful evaluation and addressing of these uncertainties are essential to ensure the effectiveness and sustainability of satellite communication networks.

Alternative solutions, such as optical connectivity, are also discussed. Optical connectivity provides a direct and unshared connection, but its infrastructure is still in the early stages of development.

Finally, the analysis highlights the critical role of satellite communication in disaster management, as evidenced by the deployment of communication resources in disaster-stricken areas to provide Wi-Fi connectivity for first responders. Additionally, the potential use cases of LEO satellites are emphasised, and increased conversations with and attention towards the ITU Radiocommunication Sector (ITU-R) are suggested to address the challenges and opportunities presented by LEO satellites.

In conclusion, the analysis provides a comprehensive exploration of the various dimensions of satellite connectivity, with particular emphasis on LEO satellites. While LEO satellites offer promising high-speed and low-latency connectivity, there are concerns regarding environmental impact, data handling, affordability, regulatory restrictions, and broadband inequality. The importance of healthy competition, regulation, and planning ahead to address potential challenges is stressed. Caution and further evaluation are needed before implementing proposed systems, given the uncertainties that exist. Overall, satellite communication, including LEO satellites, holds great potential for improving internet accessibility, and leveraging it effectively requires careful consideration of various factors.

Peter Micek

The analysis examines several significant concerns surrounding the low-Earth-orbit satellite sector. A major apprehension concerns the regulatory risks posed by Starlink, the sector's first mover, whose consolidated control over and dominance of the industry raise particular concerns.

Another worrisome aspect is the heavy reliance of Ukraine on Starlink and its controller. This dependence on a single company creates vulnerability, as any disruption or manipulation of Starlink’s services could have severe consequences for the country.

The analysis also highlights potential security vulnerabilities in low-Earth-orbit satellites. It presents evidence from a live hacking competition at the DEF CON conference, where teams were able to hack into a satellite’s camera and capture pictures of specific locations on Earth. This finding underscores the need for robust security measures to protect these satellites from malicious activities.

Furthermore, the analysis points out the significant dependence of civil society on government in the space sector. The report underscores the substantial funding and procurement efforts made by governments, particularly in defense industries and spending. This heavy reliance on government support poses challenges for civil society to have equal say or influence in shaping sector policies.

Additionally, the analysis identifies an asymmetrical disadvantage in influencing public policy in the space sector. Despite efforts to engage with public policy directors, calls often go unanswered. This lack of responsiveness hampers the ability of concerned parties to have a meaningful impact on policy and regulation development.

On a positive note, the analysis suggests promoting higher standards in government procurement and support for new and emerging technologies. Initiatives like the donor principles on human rights in the digital age launched by the Freedom Online Coalition aim to harmonise and raise standards, addressing challenges in the sector.

Overall, the analysis highlights the need for careful consideration of regulatory risks, security vulnerabilities, and power dynamics in the low-Earth-orbit satellite sector. It emphasizes the importance of inclusivity, human rights, and data protection in policy and regulation development. Promoting higher standards and fostering partnerships in government procurement and emerging technologies are seen as promising approaches going forward.

Larry Press

The analysis explores the topic of optical laser communication between space and the ground, highlighting its potential impact on sustainable development. It is noted that this type of communication is related to SDG 9: Industry, Innovation, and Infrastructure. The technology has gained attention and investment from various smart individuals and organizations.

Optical communication offers several advantages, including faster speed, significant data capacity, wide directional angle, and license-free operation. However, it also faces challenges related to atmospheric conditions, such as clouds and rain, which can distort or weaken the optical signals. Despite these challenges, the overall sentiment towards optical communication is neutral, acknowledging its potential but also recognizing the obstacles it faces.

The involvement of noteworthy organizations, such as NASA and universities, in experimenting with optical communication is highlighted in the analysis. NASA has been working on this technology since 2013 and has achieved transmission rates of up to 200 gigabits per second. The Federal Technical University in Switzerland achieved even higher transmission rates, reaching 0.94 terabits per second using optical communication. This evidence shows that there is active research and development ongoing in this field.

However, there is some skepticism regarding the success of optical to low Earth orbit communication. The president and CEO of KSAT, an established optical ground station company, doubts the viability of this type of communication. The analysis suggests that additional investments and research are needed to overcome the challenges associated with this technology.

In addition to the topic of optical communication, the analysis also examines the criticism directed towards Elon Musk for his political posts on Twitter. Larry Press expresses disappointment and fear towards Elon Musk’s political content. This negative sentiment is further supported by Larry Press’s mention of following Elon Musk on Twitter and disliking the political content.

Another area of discussion revolves around the failures in the past attempts at providing internet connectivity through satellites. The analysis cites the example of Teledesic, a project funded by Bill Gates and a Saudi prince, which failed in the 90s due to technological limitations. It is noted that at that time, the technology and economics did not support internet connectivity via satellites. The limitations in technology made it economically unviable as the internet was primarily text-oriented and had limited technological capacity.

The analysis also includes Larry Press's viewpoint that connectivity should be priced according to what people can afford. He argues that if people in an area or nation cannot afford a service like SpaceX's, the provider likely has excess capacity there. Therefore, he suggests that adjusting prices according to an area's available capacity and income levels would be more feasible.

Furthermore, Larry Press criticizes Elon Musk's initial pricing structure for SpaceX's Starlink service, stating that it was unrealistic. He points out that Musk initially said he would charge the same price everywhere, yet different rates are now used in different countries. This observation highlights a disparity between the initial intentions and the current pricing policies.

In conclusion, the analysis provides an in-depth exploration of optical laser communication, its advantages and challenges, ongoing research and development, as well as potential skepticism towards its success. It also examines the criticism directed towards Elon Musk for his political posts on Twitter and highlights the failures in past attempts at internet connectivity through satellites. Additionally, it presents Larry Press’s viewpoint on affordability and pricing, emphasizing the importance of adjusting prices according to capacity and income levels. These insights contribute to a comprehensive understanding of the subject matter.

Kulesza Joanna

The panel discussion will delve into the intricacies of data governance in broadband satellite services, with a specific focus on satellite infrastructures and internet connectivity. Comprising seasoned experts in the field, the panel boasts a wealth of experience in both low Earth orbit satellites and internet connectivity. They will shed light on the technological aspects of these systems while also examining the regulatory constraints that come into play, including those that apply to providers such as SpaceX.

In addition to exploring the technical and regulatory dimensions, the panel will address the impact of regulations within different jurisdictions. Recognising that various countries may have differing approaches to governing satellite connectivity and internet access, this discussion aims to shed light on the potential consequences of these divergent regulatory frameworks. Civil society feedback, often instrumental in shaping policies and regulations, will also be taken into consideration.

One of the speakers, Kulesza, brings a unique perspective to the table. Working on an ISOC Foundation project, she is deeply involved in comprehending the legal framework underpinning low Earth orbit satellites and internet connectivity. To emphasise the significance of this understanding, Kulesza stresses the need to discuss the regulatory impacts that governments attempt to enforce across different jurisdictions. By examining these impacts with a critical lens, the panel hopes to foster a more comprehensive understanding of the legal dimensions surrounding satellite infrastructures and internet connectivity.

Furthermore, the panel recognises the importance of community engagement in these discussions. To facilitate a fruitful exchange of ideas, the audience will be encouraged to participate by posing questions or sharing comments through the chat function. Alternatively, they can wait until the dedicated Q&A session to provide their feedback. This commitment to fostering dialogue and incorporating diverse perspectives aligns with the broader goal of partnership for the goals, as outlined in SDG 17.

In conclusion, the panel discussion on data governance in broadband satellite services promises to offer valuable insights into the technological, regulatory, and legal aspects of satellite infrastructures and internet connectivity. Through the expertise of the panelists and active audience participation, this discussion seeks to advance our understanding of the challenges and opportunities in this rapidly evolving field.

Session transcript

Kulesza Joanna:
ready and I do not hear an objection, I would be glad to start this off. Welcome to session 307. This time we encourage you to join us to discuss data governance in broadband satellite services. That's the theme we have chosen for this panel. The group of presenters we have managed to put together for this panel has been working on satellite connectivity and internet access for a while. We will go through the introductions in due course, and for this specific session we have decided to focus on data. These new technologies that support internet connectivity all rely on what has been referenced as the new oil, so we are very much looking forward to discussing that specific aspect of internet connectivity and satellite infrastructures. My name is Joanna Kulesza. I work as an assistant professor of international law at the University of Lodz in Poland, and for the past year and a half, together with my co-lead on an ISOC Foundation project, we have been working to better understand the legal framework behind low Earth orbit satellites and internet connectivity, and Berna Akcali Gur is one of the panellists on this project as well. We have managed to put together a panel of excellent speakers whom I'm going to kindly ask to introduce themselves in due course for the purpose of time. Our scoping questions for this session do include the technological aspect of low Earth orbit satellites and internet connectivity, and that is a kind request to our first two speakers to shed some light on that specific theme. We will then move forward to better understand the regulatory constraints behind using technologies like SpaceX, but I'm certain our speakers will emphasize that that is by far not the only company that is offering satellite infrastructures for internet connectivity. 
And then we will look at regulatory impacts that the governments are trying to cause within different jurisdictions, as well as the civil society feedback to the possibility of deploying these new infrastructures and regulating, managing, and processing the data that flows through them. I have kindly asked our panelists to present for seven to 10 minutes. As already said, we have quite a rich agenda. So without further ado, I am going to ask them to take the floor, and then we will move directly into the Q&A. So if our audience members do have questions or comments, they are more than welcome to either post them in the chat, which I will be monitoring, or simply wait until the Q&A session. It will be moderated in the room by Bernhard Salibor and we will give you ample time to share your feedback. With this, I hand the floor over to Dan York, who has been leading a dedicated project within the Internet Society on Low Earth Orbit Satellites, completed with an insightful report. I am certain that we will be provided with a link to that report in due course. Dan has been working for ISOC as the Director for Internet Technology. So we could ask for no better speaker than Dan to give us an introduction into satellite infrastructures and internet connectivity. Dan, thank you so much for joining us. The floor is yours.

Dan York:
Thank you very much, Joanna, and thank you to everybody attending this session, whether you're in the room there in Kyoto or online, wherever you may be. This is a fascinating topic around data governance, and I could go off on any number of topics, but I've been asked to focus on the technology side and set the stage to make sure we're all using the same terms and working in the same kind of space. To begin with, I work for the Internet Society. I've been there for 12 years. I am currently the Director of Internet Technology, and one of my focus areas is connecting the unconnected, and how we do that using low Earth orbit satellites, among other technologies. It's all focused on the internet for everyone, and how we bring those people together. To begin any conversation on satellites, we need to talk about orbits. This is the critical part to understanding what's going on right now and why there's so much energy and excitement. We've had satellites that have been providing internet access for decades now. Almost all of those have been out at what is called geostationary or geosynchronous orbit, around 36,000 kilometers away from the Earth. These are large satellites, typically the size of a large bus or something bigger. They cost many millions of dollars, and they take a long time to get out there. But they provide service for sometimes 15, 20 years or more, and they can provide decent bandwidth. The challenge they have is that they are so far out that the amount of time it takes for a packet to go from the Earth out to the satellite and get back can be 600 milliseconds, 800, 900, a second, or even more. And the challenge with that is that in today's world, when we want to have video conversations like this one, you need something with a much smaller amount of what we call latency or lag. 
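The latency figures above can be sanity-checked with a back-of-the-envelope calculation. The sketch below is illustrative only: it assumes a simple bent-pipe path (user to satellite to ground station and back) at the speed of light, straight up and down, ignoring slant range, processing, and queuing delays.

```python
# Rough minimum propagation delay for a bent-pipe satellite link.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def round_trip_ms(altitude_km: float) -> float:
    """User -> satellite -> ground station -> satellite -> user:
    four traversals of the altitude, straight up and down."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"GEO (~36,000 km): {round_trip_ms(36_000):.0f} ms")
print(f"LEO (~550 km):    {round_trip_ms(550):.1f} ms")
```

Real GEO paths add slant range and terrestrial hops, which is how observed figures reach 600 ms and beyond; the point is that physics alone rules out low latency at geostationary distance, while a 550 km orbit keeps the propagation floor in the single-digit milliseconds.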
And this is where we start to look at the other areas. There is a medium Earth orbit, which is between 2,000 and 36,000 kilometers, and there's a range of things that are in there. There is a provider, SES, which has the O3b satellites that do exist out in that kind of range. They are a little bit closer and have a little bit better latency, but the energy, the excitement, is all down in the space below 2,000 kilometers, which is the low Earth orbit, or LEO, however you want to call it. This is where the space stations are. This is where so many of our satellites are: imaging, sensing, everything else. All of this is happening in this space. Now, part of what goes on and why we're getting into this is that the farther away you are, the bigger the range of the Earth that you can cover. Out at the geosynchronous area, you can have three satellites and cover basically the entire Earth by positioning them in different areas. If you're in the medium Earth orbit, some of the systems there can do it with maybe 20 or so; they're orbiting, they go faster, et cetera. When you get down into the LEO area, you need a lot of satellites because they're constantly in motion around there. OneWeb, which is now Eutelsat OneWeb, is around 1,200 kilometers away from the Earth, and they have about 600 satellites. SpaceX with their Starlink, Amazon Project Kuiper, and others who are in this play are a little bit lower. They're about 500 to 600 kilometers away from the Earth, and they need about 3,000 satellites to cover it. So it's a different scale that you see here. This is the world of LEOs, or low Earth orbit satellites. What's driving this interest in LEOs is the need for high-speed, low-latency connectivity. We wanna have connections like this. 
We wanna be in gaming, we want virtual worlds, we want e-sports, we want fast connectivity to be able to communicate and connect with people. The challenge is that just hasn't worked in the past with GEO, but the thing that's making it possible now is this massive reduction in cost. These LEO satellites might be the size of a car or even smaller in some cases. They can be mass-produced, rolling off production lines, and sent up in rockets with 50 of them at a time. And those rockets can be reusable now, as we've seen with SpaceX. So there's this massive change in the way that we're able to deploy rockets and things that are out there. There are three parts to any of these systems. One is the constellation of satellites. Okay, that's the thing we all think about when it goes up there. Each of them is launched at different altitudes; there are different what they call orbital shells around. There's also the user terminal, which is the language used in satellite speak, or the ground terminal; people out there often just call it an antenna or a dish or that kind of thing. But that's the piece of hardware that you use. The big difference that's happened is that you need a fancier antenna. With a geostationary satellite, you can just put an antenna on the side of your house or on top of the house, point it at the satellite, and it's done, because that satellite is fixed over a certain part of the Earth as it rotates. And so you can just put the dish up there, and that's what you see all over the world. Well, that doesn't work when your satellites are moving at a high pace and might only be in view for five or 10 minutes. So you need these new antennas that are electronically steerable, phased array; there are lots of different words for them. 
But basically, they’re the things that you see if you’ve seen anything with Starlink there, they look like a pizza box or something. Amazon Kuiper has similar ones. OneWeb has some similar kinds of ideas. The companies that are selling direct to consumer often accompany that with a Wi Fi router or something else. And then there’s also ground stations. And these are the receiving end of where that signal goes up to a satellite comes down to a ground station connects out to the internet. Now, these are different for each of the providers. OneWeb’s ground station is different than SpaceX’s, which will be different than Amazon Kuiper’s, which is different from ones used by Intelsat or one of the other geo providers. They’re all their own separate space in there, but they need that ground station to connect to. Now this is something, and Larry’s gonna talk a little bit more about this in a bit, but this is something that’s changed a bit. Historically, you needed to have a ground station in each country for legal reasons and things, but also within a certain range. The satellite had to be able to look down and see the ground station. So you had to have them maybe every 900 kilometers, something, you had to have them spaced out around the earth. And this is why, because you would have this user terminal, the dish, connect up to a satellite, bounce down to a ground station, and go out to the internet. Of course, in the Leo space, it might look a little bit more like this. Some of your packets would go to one satellite, the other ones would come back there. One of the big changes or revolutions in this space is what if you’re not in range to a local ground station? This is what Larry’s gonna talk a little about is this idea around what are called inter-satellite lasers, which allow you to go and connect up to the satellite, bounce across the mesh, and then drop down to a ground station, and then connect out there. 
Now, SpaceX has demonstrated this already. They did some experiments in Antarctica with Starlink dishes that connected up to the Starlink mesh. The traffic went across the constellation and dropped down to a ground station somewhere else; there are no ground stations for this in Antarctica. It was connecting up and across. It was also demonstrated in the Iran protests, when the US government and others asked Starlink to turn on Starlink access in the country of Iran, and they did. There aren't any legal ground stations in Iran. They were taking that data up into the satellite constellation and then dropping it down into some other ground stations. There's a range of different data flow issues we could talk about here: where does the data get dropped down to? Who's in control of that? A lot of different topics around that that I'm not going to get into, but we'll talk more about that. Just quickly, some of the concerns or things that we have to think about are affordability: can these systems really be affordable for the people who need them the most? There are a bunch of different business models being brought in here. Will they have the capacity to support all that we need? Certainly, we've seen that in some areas they provide tremendous capacity for everything you need; when you get into the more densely populated areas, you actually wind up having challenges with some of this. Will there be competition? What are the business models? Right now, one of the biggest challenges is simply deployment. There's a limited number of providers, really only SpaceX right now, who is able to launch satellites into space at the pace that you need, because you've got to get thousands of satellites up into low Earth orbit. And because they only have a five-year lifespan, you need to keep replacing them. 
We’re in a weird spot where a lot of the other launch providers, Arianespace, United Launch Alliance, Jeff Bezos’ new Blue Origin, they’re in between launch vehicles, like the Ariane 5, there’s no more rockets, and the Ariane 6 hasn’t been deployed yet. There’s other pieces like that. We’re in a weird spot. One of the big challenges is just getting the satellites up there in the first place. There are other concerns, security, privacy, standards, what standards are being used. Now, if you use a Starlink connection, it works with all the typical internet standards. Those are all open. It works across there. How they’re routing inside their infrastructure is right now primarily proprietary. There’s issues around space debris, lots of things that come into these kinds of spaces. We don’t fully understand the sustainable business models. There’s questions around the environmental impact of all of this. What will it be? The impact on astronomy. There’s a lot of open questions. And that’s really one of the reasons why we need to have sessions like this at the IGF and other places, is because this is an industry that is still in its infancy. Need to understand a bit of this. I will put a point on the urgency around this. The next several years are going to be very critical because there’s a lot of people launching these systems. Starlink has already launched much of its generation one, its first phase, which will ultimately be about 4,400 satellites. They’re in the process of launching the first part of their second generation, which will be 7,500 satellites, growing to around 30,000 satellites. OneWeb has completed their first phase of around 600, but they’re going to be launching more. They’re on the books to do that. Amazon, just last week, launched its first two demonstration satellites, but it’s on the track to go and launch another 3,200 over the next couple of years. China is proposing their own constellation, which will rival Starlink’s in about 13,000 satellites. 
The European Union is looking to develop its own IRIS² constellation. If you look at the numbers that are filed with the ITU in terms of satellites, it's conceivable that over the next four to five years, we could have 40, 50, 60, maybe even 90,000 satellites orbiting the Earth. And this is just the internet access ones, not even thinking about imaging or sensor networks or other stuff. So it's a very crowded space up there. Data flows are going to be a big part of thinking through how all this works. And with that, I will just say, Joanna's right, we did have a report that we issued last year, and we're still working on that. You can get it at internetsociety.org/leo, which is where we talk about and frame a lot of these kinds of issues. And with that, I'm going to turn it over to Larry to dive into the lasers a little bit more.
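Dan's three-satellites-at-GEO versus thousands-at-LEO comparison follows directly from geometry. The sketch below is a simplification (spherical Earth, zero-degree elevation mask); real constellations need far more satellites than raw visibility suggests, because terminals require the satellite well above the horizon and capacity must be shared:

```python
# Fraction of Earth's surface geometrically visible from altitude h:
# the visible spherical cap works out to h / (2 * (R + h)).
R_E = 6371.0  # mean Earth radius, km

def visible_fraction(altitude_km: float) -> float:
    return altitude_km / (2 * (R_E + altitude_km))

for name, h in [("GEO", 35_786), ("MEO (O3b)", 8_062), ("LEO (Starlink)", 550)]:
    print(f"{name:>15}: {visible_fraction(h):.1%} of Earth's surface")
```

At roughly 42% per GEO satellite, three suffice for near-global coverage (excluding the poles), while a 550 km satellite sees only about 4%, before any elevation mask shrinks its usable footprint further.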

Larry Press:
All right. OK, can you guys see me and hear me? We can hear you and see you. Can see me, OK. Let me, let's see, I've got to figure out how to share my screen and get some slides going too. All right, can you guys see my slides? We do. OK, all right. Yeah, as Dan said, he gave a great overview. I'm going to be very focused on kind of a narrow niche, which is optical laser communication between space and the ground; I'll have just one slide on the inter-satellite links. And the reason I'm doing it is because I think it may have a significant impact on the sustainable development goals, number nine in particular. You can see the picture on the right. It depicts a few satellites in space, and the narrow lines between them are the inter-satellite links that Dan talked about. The thicker lines depict laser links communicating with ground stations or gateways on the ground. I'm going to focus my talk on the links to the ground stations. And here you go, one slide on the inter-satellite links. Dan said SpaceX was the first. They now have about 8,000 optical terminals in orbit, and they've recently begun launching their second generation, which go faster, up to 100 gigabits per second. As you can see, each satellite has three terminals. Two of them point forward and backward in the same orbital plane as the satellite is going. The third one can go left or right, and I'm not sure, who knows, but I think it can perhaps go down and point to the ground. And that's what we're gonna talk about now: communication between the satellite and the ground. Why are we excited about optical communication? Right now, it's radio frequency communication to those ground stations, and optical has many, many advantages. I've listed them there on the left. I'm not gonna read them to you. 
Maybe the most interesting is license-free. There's no problem with getting spectrum or with interference, as there is with radio frequency. It's like a laser pointer, whereas RF is more like a flashlight that spreads out the signal and gets diffused, and there are even some little side lobes that completely don't go to the right place. What's not to like? It's the atmosphere. Things like clouds and rain get in the way of optical signals. They can distort them and cut back their power. So the payoff would be really great, as was just illustrated, and for that reason, many really smart people and business people are working on it. I'm going to run through really quickly five groups. I'm not going to say much about any of them, but I will have links, a lot of links, that you can follow up on all of these. OK, NASA has been doing it since 2013. They've got many projects, many experiments with space-to-ground optical communication. I'll just say this one is 200 gigabits per second from a little CubeSat from space to the ground. That is way fast. That's 1,000 times faster than we're used to. And that's the kind of payoff that will come from this stuff if it works. All right, universities are doing a lot of experiments and research. This one's interesting. It's from the Federal Technical University in Switzerland. They've got a terminal up here on top of a mountain, and they've got a terminal down here at their institute. The whole distance depicted there is about 53 kilometers. And you can see that it's going through some of this stuff, like turbulent air, and it's over a lake with water vapor, the kind of stuff that screws up laser transmission in the atmosphere. And with adaptive optics, they have a little tiny chip with 97 tiny adjustable mirrors that can make adjustments 15,000 times a second. Things like that sound inconceivable, but they exist. 
And they’re also working on modulation schemes, a way to encode the 1’s and 0’s into the signal. And so they’ve been able to achieve 94 terabits per second, 0.94, almost a terabit per second transmission rates. They say they’re working on new modulation schemes. new software, to encode things and make it go faster. And it can be scaled up to 40 channels. So that would be an incredible amount of data coming in from space. And the second university one has to do not with the data transmission rate, but with being able to track the satellites, like Dan says, as they move across the sky. And so what these guys have done is put up a drone. And it goes back and forth at 65 kilometers per hour. But that simulates the sort of one degree per second that a satellite in low Earth orbit would transcend. And in fact, they have no trouble tracking it and transferring data from it. The military, no surprise, is really interested in this stuff. One really interesting thing is the Space Development Agency. It’s part of the Space Force. They have what they call the Transport Layer Constellation. It’s going to have between 300 and more than 500. They haven’t really decided yet satellites. And these will have laser links between the satellites and also space to ground laser links. And a key thing is this. They have a real philosophy of working with commercial suppliers. So that’s really an interesting one to watch. Speaking of commercial suppliers, I think the most interesting one is a company called Elyria. It’s a startup. They acquired their intellectual property for two products from Google. It’s really a bunch of guys that used to work at Google. The products are called Space Time and Tight Beam. And Tight Beam is an optical communication technology. And space time is sort of a network management system. Let me tell you about TightBeam because that’s what we’re talking about. Like the guys in Switzerland, they are working on a hybrid approach and it sounds real similar. 
They have adjustable mirrors and clever software. They also do tests from a mountain near their headquarters, and they say they are now getting test links running at 400 gigabits per second. And if you put four of those channels together, that gives you 1.6 terabits per second. And the reason I wanna bring them up in this context is that on the right-hand side, you see a couple of slides from a demonstration that they've done. I'll tell you a little bit more about it in the next slide. But one of the things that demonstration, or the software, takes cognizance of is the surface temperatures on the Earth and atmospheric conditions. And that enables Spacetime, their other product, which does the routing and whatnot, to route around the kind of bad atmospheric conditions that I spoke of before. Let's look at Spacetime. These are again from the same demo. You can see the scope of this thing. This is a demo of a hypothetical network that reaches from the moon to Earth. And if you zoom in, you can see it's also working on ships at sea and airplanes in the air and, of course, satellites in orbit. So it's a very comprehensive kind of network operating system for controlling both fixed and mobile assets and the links between them, on the Earth and wherever they are, outer space, deep space. They definitely have deep space in their planning. I had a little exchange on Twitter yesterday about how, yeah, they're heading for Mars, not just the moon. This project is super comprehensive, but it's reminiscent to me of the ARPANET back in the old days, and I list some of the reasons here. The software is open source. They're trying to do standards. Networks can federate and access each other's assets. It really sounds both ambitious and like the ARPANET, but 1,000 times more ambitious. But I would strongly advise you to watch the demo that these slides came out of. 
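The "one degree per second" tracking figure that the drone experiment above simulates can be checked with a two-body estimate. This is a rough sketch only, assuming a circular orbit, a pass directly overhead, and ignoring Earth's rotation:

```python
import math

MU_EARTH = 398_600.4  # Earth's gravitational parameter, km^3/s^2
R_E = 6371.0          # mean Earth radius, km

def zenith_angular_rate_deg_s(altitude_km: float) -> float:
    """Apparent angular rate of a satellite as it passes straight overhead."""
    r = R_E + altitude_km
    v = math.sqrt(MU_EARTH / r)           # circular orbital speed, km/s
    return math.degrees(v / altitude_km)  # angular rate at zenith, deg/s
```

For a Starlink-class 550 km orbit this gives a bit under one degree per second, consistent with the rate the tracking experiment was designed to reproduce; lower passes appear faster, higher ones slower.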
OK, another commercial thing. Oh, it says university. It should say commercial. I'm sorry. Another commercial company that's worth paying a little attention to is Intelsat. They're one of the traditional geostationary satellite operators that Dan talked about, but they are doing interesting partnership projects. They are working with SpaceX to test space-to-ground optical communication, and with OneWeb on airline connectivity. And they are going to use the Aalyria operating system. So keep an eye on them. OK, I mentioned China. You have to talk about China these days. Dan mentioned their planned Guowang constellation. That's really something, but they're going to have a hard time launching all those satellites before Elon Musk is sitting on Mars. At any rate, China seems to be behind in this optical communication between space and the Earth. I could only find these two projects just kind of looking around for this talk. I talked to a friend of mine, a colleague who's in China and knows everything about the Chinese internet and space business, and he couldn't add to this, so they don't seem to have much going at present. Okay, and there's bad news, though. That was a lot of good news, with a lot of smart people putting a lot of energy into this. The bad news is there are no optical ground stations anywhere, and so that's gonna take a bunch of investment. Some of it can be done by augmenting the existing RF gateways. If they're in good geographic locations, that might make sense, because they already have the real estate around the ground station, they have power coming in, and, most important, they all have high-speed internet connectivity at their locations. If you look at this map, the green pinpoints are the SpaceX gateways, and in North America, there are 75 of them. 
And you can see, though, that some of these gateways are in the Southwest United States, some are in Northern Mexico, some are in Arizona, in Australia, places that might make suitable locations for an optical gateway. That won't be enough, though; you'll have to construct new gateways. One would try to put them in arid regions, at locations near centers of demand, at locations that already have high-speed terrestrial internet connectivity. Observatories come to mind as likely places; they have a lot of those characteristics. But it's gonna take a lot of money and careful analysis to build that infrastructure out if this stuff takes off. Yeah, to come back to sustainable development goal number nine, I just wanna talk for a second about Africa. Right now in Africa, SpaceX has only two publicly known gateways, and so they could use some connectivity. They have an advantage in that the brown, sort of arid spots on this map tend to be in the north and the south, and I know there are others, and that is an advantage because the satellites have inclined orbits. They don't just go around the equator; they kind of go north and south. Some of them almost go over the poles, and what that means is these inter-satellite links are going to be more efficient for north-south links than for going east and west. So that's looking good for Africa. You can imagine some gateways in the north and some gateways in the south. The other thing is seasonal variation. Obviously the northern hemisphere is different from the southern hemisphere, and having these two areas at the same longitude gives them another advantage: they will have good weather at least somewhere, or maybe in both places, at all times. Now, I'm giving you kind of a really fast, positive view of the whole thing. Here's a reality check. 
Here's the quote: "Personally, I don't think optical to low Earth orbit is really going to go." The guy that said it is the president and CEO of KSAT, which is a Norwegian company, an established optical ground station company. They tried an optical ground station in Greece in 2020, and it failed commercially. So this is not a slam dunk. There are tons of investments needed, and there's tons of research and development that needs to be done. Okay, that's about what I was gonna say. You can see here my email address and a place where I talk about this stuff a lot. And if you'd like to see a copy of those slides, which have tons of links, just send me a request. Oh yeah, here's a frequency terminology cheat sheet for those who would like it. And that is the end.

Kulesza Joanna:
Thank you, thank you so much, Larry. That was a lot of information, and we particularly appreciate the developing-countries focus. That is one of the themes we have been exploring throughout both of the projects, the one that Dan mentioned and the one that our next speaker and myself have been working on. So it's most appreciated that you have provided us with this very broad technological overview, and my sincerest thanks to Dan for his lasting support and yet another great intervention. With that, without further ado, I'm glad to hand the floor over to Professor Berna Akcali Gur from Queen Mary University of London, who is a convenor in outer space law, which brings us to the regulatory component of this panel. Again, with a kind request to our speakers to try and limit their interventions to seven to ten minutes, I hand the floor over to Berna with a kind request for a brief review of whether all of these wonderful novel technologies are actually regulated, and if so, whether there is a data regulation component that we might wish to focus on. Berna, the floor is yours.

Akcali Gur Berna:
Thank you, Joanna. I have a PowerPoint. Well, I'm delighted to be here today to discuss data governance in broadband satellite services. I am joined by an esteemed panel of experts who bring a wealth of knowledge and experience on this topic. And, as you said, my task is to delve into the regulatory aspects of satellite connectivity and hopefully provide you all with some insight. So, the mega-satellite constellations attracted wide-scale global attention on 26 February 2022, two days after the Russian invasion of Ukraine started. Elon Musk, SpaceX founder and CEO, responded to a request from the Ukrainian Deputy Prime Minister, confirming on Twitter that the Starlink satellite internet service had become active in Ukraine. This news came after the cyber-attack by Russia on another satellite system, owned by Viasat. The primary target of the cyber-attack is believed to have been the communication lines of the Ukrainian military, as it came just one hour before Russia launched its major invasion of Ukraine. But the impact was more extensive: it affected thousands of internet users and internet-connected devices, including wind farms in Central Europe. It is unclear whether the spillover was unintentional. Well, the solution for the disruption was another satellite system, Starlink, then a new mega-constellation. Until this time, the provision of broadband internet via satellite had been considered an experimental alternative to undersea and on-the-ground telecommunication services, but suddenly it became the communication lifeline for a war-torn country. As expected, this received a lot of press coverage. The celebrity status of the company's owner also contributed to this. Around this time, we saw it being used in disaster zones, such as the flooding in northern New South Wales and remote villages in Tonga after the volcanic eruption and tsunami. Well, soon after they launched services in Ukraine, an uprising in Iran started.
The government applied restrictions on internet access, so the protesters called on Mr. Musk to help restore their internet connectivity. This time, he wasn't able to help at first, and when he did, he achieved only limited reach. And it wasn't because Starlink services did not technically have coverage over Iran, but primarily for legal reasons: there were U.S. restrictions on providing services to Iran, and the Iranian government had not authorized Starlink to provide services within its borders. So in both of these examples, the company acted in a manner that reflected the preferences of its home state. In the first year that this company provided services, it didn't really shy away from making political choices. And as we all know, the concerns regarding cross-border data transfers and data governance have a geopolitical dimension as well. In that sense, relying on this infrastructure for transferring, storing, or processing data is very much perceived as relying on a U.S. infrastructure for connectivity and data transfers. As one would expect, in the current state of affairs, Russia and China have already declared that they will not allow the provision of satellite broadband by a U.S. service provider, and cited cybersecurity as the main concern. Okay. Confirming the prevalence of data governance concerns, in a survey Joanna and I conducted for our ISOC Foundation-funded research on the global governance of satellite broadband, the respondents chose data privacy as one of their primary concerns. And in another question, they chose an international treaty on data flows and a standards-development approach as the best ways to tackle concerns regarding the global data value chain being monopolized by a small number of LEO broadband companies. This survey was more than a year ago. We are still in the early stages of this technology, so we'll see what the future brings and how the data governance regulations take shape.
Now, so far I have established two things: there's a geopolitical dimension to the use of satellite broadband, and data governance has started to be associated with its use. So what sort of measures can countries employ to address their concerns? I'll go back to this, yes. Well, some EU countries and the UK have already licensed Starlink to provide services, although they have, or plan to have, their own satellite systems. The plan is to create a competitive market, but all licensed service providers are expected to comply with the domestic data governance regimes. On the PowerPoint, you see Starlink's commitment on its website to comply with the GDPR for its customers in the EU. Major space-faring nations have also embarked on projects that will give them their own satellite constellations; good examples are China and the EU. The justification of these ventures goes beyond data governance, but it is a significant factor. So what are the exact contours of domestic jurisdiction over satellite services? Let me go back. Yes. While the provision of satellite services in a particular country is subject to that country's laws and regulations, the framework covers much more than data governance, and the satellite companies need to comply with all of it to be able to provide services in a particular jurisdiction. Take ground stations: for those, the companies will need authorization from each relevant jurisdiction, and even if they do not technically need to establish one, they may be required to. They will also need to obtain a license to use the frequency spectrum. The frequency spectrum is coordinated at the international level by the ITU; however, at the domestic level, it is a national regulatory agency that assigns frequencies, of course in compliance with what is agreed at the ITU. If the companies provide their services directly to consumers, they will also likely need an internet service provider license, which will include a license for the use of terminals by consumers.
The importation of their user terminals will also be subject to the import requirements of the national authorities. The states will want to check the conformity of their new measures with their commitments in their trade treaties. Satellite connectivity is not new, and the fact that it is being provided via megaconstellations does not mean existing regulations do not apply. Regulators are updating the provisions to address the unique challenges of megaconstellations, but essentially, the existing regulatory framework is applicable. I hope this brief explanation gives you an overall idea. Okay, I couldn't find that website. If you would like to read more on the topic, please check our website; I'll provide the link in the chat, where you can find a detailed report on the subject and shorter policy papers for governments and civil society organizations. Thank you.
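The per-jurisdiction steps just outlined (ground-station authorization, spectrum assignment, ISP licensing, terminal import conformity, trade-treaty checks) can be collected into a simple checklist. This is a rough, non-exhaustive sketch distilled from the talk; the step names are illustrative, and none of this is legal guidance:

```python
# Illustrative checklist of per-jurisdiction steps a satellite broadband
# operator faces, as outlined in the talk. Step names are hypothetical.
STEPS = [
    "ground_station_authorization",  # landing rights / gateway authorization
    "spectrum_license",              # ITU-coordinated, nationally assigned
    "isp_license",                   # direct-to-consumer service + terminal use
    "terminal_import_conformity",    # customs / type-approval for user kit
    "trade_treaty_conformity",       # state checks measures against treaties
]

def outstanding(completed: set) -> list:
    """Return the steps still missing before service can launch."""
    return [s for s in STEPS if s not in completed]

print(outstanding({"spectrum_license", "isp_license"}))
# ['ground_station_authorization', 'terminal_import_conformity', 'trade_treaty_conformity']
```

The point of the list form is that every item must clear in every jurisdiction served, which is why market entry scales with the number of countries, not with the constellation's technical coverage.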

Kulesza Joanna:
Thank you, Berna. Wonderful. Thank you very much indeed. There seems to be a lot of regulation on both telecommunications and data. Yet when we look at these new advancements in infrastructure, the question is whether these are sufficient, whether they are relevant, whether we are back to national laws and national regulations, and whether the multi-stakeholder model still matters with regard to internet connectivity. And with that question, in terms of how developmental help should be provided to countries who are still deciding on how to expand internet connectivity in their jurisdictions, I turn the floor to our next speaker, Dr. Uta Meier-Hahn, who is the advisor for digital technologies at the Deutsche Gesellschaft für Internationale Zusammenarbeit. And I'm very much looking forward to Uta discussing the developmental context of new technologies supporting internet connectivity, and LEOs in particular. I know you have been working on these topics, so I'm very curious to hear your perspective. Uta, thank you for being here. The floor is yours.

Uta Meier-Hahn:
Thank you so much. So my name is Uta Meier-Hahn and I'm with GIZ, which is a public benefit federal enterprise. We support the German government and a host of public and private sector clients in achieving their objectives in international cooperation. GIZ, as some may know, works in around 120 countries around the globe on a wide variety of areas, and that also includes digital development. So why do we, as an organization in the field of international cooperation, work on LEO satellite internet, or satellite internet in general? Isn't that this expensive niche technology with limited capacity that will never, ever be the internet for you and me? These arguments I keep hearing, and they may sound and be valid, so I feel like we need to do some clarification about what we can and cannot expect from new satellite internet. And here I would like to make four points. The first point is about time, which we don't have, because internet connectivity is widely recognized as a catalyst for development. This means that regions with access to better internet connectivity are progressing at a relatively rapid pace compared to those without. And this means, in other words, that the digital divide, or divides, grow larger with time. Therefore, it's important not only to increase meaningful connectivity overall, but to do so quickly. This is where LEO satellite, or broadband from space, may come in. It requires minimal terrestrial infrastructure, as we've just heard, infrastructure which is still heavily under development. And because of that very feature, it could bridge digital divides faster than other connectivity solutions. So this, to my mind, is not a discussion about either-or. It is not a choice between fiber and mobile infrastructure development, which we must continue, obviously, and satellite; we can complement those efforts with broadband from space to make speedy advancements in connecting the unconnected.
So I find that there's a sense of urgency that sometimes gets lost in this discussion about connectivity. My second point is about robustness. LEO satellite internet, broadband from space, can provide communications when traditional local networks are not available or may have gone down, as was just mentioned by Berna, due to conflict, due to natural disasters, due to man-made disasters. And having this type of connectivity from space in place can be like a safety net for critical infrastructures. I wish it were not the attack on Ukraine that serves over and over as the example of the criticality of satellite internet for governmental communication in conflict. My third point is about the market, the market for internet connectivity solutions. And that point is very simple: alternatives for connectivity enlarge the market, and depending on the business models of the providers, which vary, as we have heard, choice may arise for end-users. That again can stimulate competition, and if some other factors about the local connectivity situation and the ecosystem on the ground are given as well, the affordability of internet access can increase, and not only for the users of broadband from space. This is a thesis; I encourage us to monitor the development of pricing levels empirically. My fourth point goes more directly to the global dimension of the governance of LEO satellite internet. It has been alluded to in the previous talks. All global citizens can be viewed as stakeholders in broadband from space, because they share the risks that are associated with this technology, like the serious damage that could occur from space debris, the environmental cost of launching rockets, and others. And at the same time, there is, and probably will be, only a handful of space-faring nations that host industries actually operating, or on the verge of operating, their own satellite constellations for broadband from space.
And what does this mean? It means that for the foreseeable future, the shared fate of most countries will be that they will remain customers of only a few providers of broadband from space in a very concentrated market, also due to the limits of natural resources such as orbital space and frequencies, as long as the advancements Larry Press was talking about are not yet reality. So these countries may ask themselves if the connectivity that the providers of broadband from space deliver, together as well as individually, comes at acceptable conditions for them. Think of the digital-policy quality of that type of connectivity. What do I mean by that? For one, every provider can be expected to comply with the rules of their own jurisdiction of origin when it comes to how they treat the traffic, the data that they transmit. Think of varying provisions for data protection, cybersecurity regulation, or, frankly, surveillance. And then, of course, in addition, everything that Berna has just mentioned with regard to national regulation. But the jurisdiction of origin matters as well. And second, how can countries make sure that their connectivity is not terminated involuntarily? For instance, because a provider goes bankrupt, as we have seen in the first wave of industry development, or because of political leanings, as Berna has just pointed out. So I encourage us to think about the qualities of those policy underpinnings for LEO satellite connectivity, and that they matter. Another aspect of this is the ability to switch providers easily, because being dependent on one company or one man puts customers in a difficult position, especially when broadband from space is supposed to safeguard critical infrastructures. That is an issue of global internet governance, because the limited resources in orbital space and frequencies prohibit unlimited growth of this industry. So better policy qualities will not come from growth; there is a privileged position of a few.
And that may give rise to a different notion of responsibility for these providers as well. So far, all providers offer their own proprietary hardware, as we've heard, for base stations and other equipment. So working towards standardization and interoperability of equipment could go a long way towards preventing lock-in effects. From what we hear at this moment, the European Union constellation, IRIS², might be the first one to go in that direction of at least standardizing such hardware. We will see about the degrees of openness. Let me close with a few empirical observations, so we don't only speak at this high level. In order for LEO satellite internet to operate in a given country, as we've just heard, a certain regulatory and institutional setup is favorable. However, putting such a framework in place can be a major undertaking, specifically as the industry is developing so quickly. And that is why it appears beneficial for non-spacefaring nations to, on the one hand, document and share best practices, in order to, on the other hand, possibly identify opportunities to align their interests vis-à-vis providers. To get an initial idea of where we are standing, we have looked at emerging policy environments in 10 of the partner countries, initially on the African continent, really just to get a very rough idea. And I don't have time to go into much detail, so I will keep it very brief. But we found that countries are moving relatively quickly to authorize and license LEO systems. So there is demand. Just to give you some examples: Ghana, Kenya, Mozambique, Nigeria, and Rwanda currently all have commercial LEO services deployed in their countries. Tunisia is considering trialing LEO connectivity. And others are actively deciding what path to take, or what regulatory approach towards making requirements for businesses, etc. These countries are Senegal, South Africa, Tanzania, and Uganda.
One thing that will be important to note is that we also found that all of these countries already participate in international satellite organizations. They are all WTO members. They have experience in negotiating issues at the relevant ITU world radiocommunication conferences. And they also have experience, from previous satellite developments, in introducing other satellite systems into their connectivity ecosystems. And what comes on top of that, with regard to the topic of our session about data governance, is that they are all members of the African Union, which is actively examining issues related to data localization and cross-border data flows, and has just recently put in place a framework that will serve to develop local policies around this. So these experiences will have provided most regulators and policy makers in those countries with years of experience and with skills to handle broadband from space, and I suggest that we build on this to fast-track participation by others. So, to sum up, if asked why LEO satellite internet is important for development, I would answer: LEO satellite internet, broadband from space, can contribute quickly to closing the digital divide, or divides. It can serve to increase the robustness of internet connectivity. It enlarges the market for internet provision. And it is not going to go away for the foreseeable future. So there's a lot of room for dialogue, for coordination, and for mutual capacity building, particularly, though not only, among non-space-faring nations, to shape satellite internet to the benefit of all. Thank you.

Kulesza Joanna:
Wonderful. Thank you very much. That is exactly the intervention we were looking for, with the targeted approach to developing countries and possibly recommendations to governments who are looking into deploying LEOs in their jurisdictions. I will save follow-up questions for the Q&A, and I'm certain there will be questions from the room. But thank you very much for highlighting that specific aspect of new technologies rapidly developing. And last but not least, please let me turn the floor over to Peter Micek, who is the General Counsel and UN Policy Manager at Access Now, an NGO that needs no introduction. But I am certain that in his intervention, Peter will tell us more about why Access Now might have an interest in data governance through low-Earth-orbit satellites. Peter, thank you so much for joining us. The floor is yours.

Peter Micek:
Well, thank you. And yeah, I thank the other panelists for laying out well, I think, the facts as they stand now, and then some of the potential and current regulatory risks and opportunities. I will come in with our perspective as a human rights organization. Access Now always needs an introduction: we're a global organization that defends and extends the digital rights of people and communities at risk. And our team members in more than 35 countries are encountering the emerging low-Earth-orbit satellite sector in a number of different ways, and that is what I hope to present a bit of. So I suppose I could start with some of the risks that we see as a human rights organization. We are very concerned about the consolidated control over this sector as it stands now. Speakers have mentioned Starlink as the first mover. They have that advantage here, but the firm, which right now constitutes the industry of available retail services, is subject to the whims of its founder and controller. And our partners in Ukraine are very concerned that the entire nation, its military, civilians, and civil society, is dependent on this one company and its egotistic owner, who seems to want to decide the outcome of the war. And there's really little that we can do about it. So civil society, again, is desperate for connectivity, eager to reach the sustainable development goals and to access and exercise our fundamental rights like freedom of expression, and of course we'll reach for any opportunities we can. Access Now runs and coordinates the #KeepItOn coalition against internet shutdowns. This is a global coalition of more than 300 civil society organizations fighting intentional disruptions of connectivity. And inevitably, especially during longer-term shutdowns, as we see in Sudan and Kashmir and Myanmar, people look to the skies with hope.
With hope that they can find a connection that will let them tell their story to the world, release the evidence they've collected on human rights abuses and atrocities, tell loved ones that they're still alive or that they need electronic money transfers. All the things we rely on connectivity for become compounded and pressurized in situations of armed conflict and desperation. And of course people are going to look to satellites. Unfortunately, though, as I said, this leaves us in the hands of very few Western companies again. So it's worth noting that the user terminals themselves do put people at risk. Another risk here is that this consolidated control creates single points of vulnerability. And I know we don't want to get too much into cybersecurity, but it was really exciting to see, this summer at the DEF CON conference, a live competition where teams actually hacked into a low earth orbit satellite orbiting the earth in real time. That was, I believe, the first ever such competition where a satellite was hacked in real time for prizes. It was a LEO satellite launched on June 5th. And if someone could put it in the chat: Hack-A-Sat is the website they used. I'll put it there. And a few things were learned from this competition. One really interesting thing to see was that the satellite went dark for four hours as it crossed over Antarctica, I think it was. So the teams didn't know if their hacks were successful; they had to wait until the satellite came back within reach to both deliver their payloads and extract the data. And the winning team was able to hack into the camera on the satellite, which was about this big, and take pictures of specific points on earth, which was pretty cool to see. But it underscores that there is active interest in attacking the cybersecurity of these systems.
And so, to the extent that we're dependent on them with incredibly sensitive data, if we're talking about places where people are vulnerable and at risk, which probably overlaps a bit with those spaces that are currently not covered by terrestrial connectivity, then that highlights and exacerbates the risks. The same goes for these humanitarian contexts. Many operators of aid organizations, providers of humanitarian assistance, are looking at ways to deploy more efficiently after natural disasters or human disasters, and are certainly looking at these solutions. But again, are we sending them into a trap, where there's actually increased vulnerability and dependency on these systems that can be turned off, or deprecated through commercial phase-outs, at a moment's notice? And the last point I want to get at is this pixelated regulatory picture. We've seen the number of different potential frameworks that apply. I've mentioned international humanitarian law. There's, of course, space law. Out here in the Convention Center expo, there's actually a high-altitude platform system, a giant wing, being demonstrated this week. That's not a low-Earth-orbit satellite, but it is meant to fly for six months at a time on solar power at about 62,000 feet. Maybe somebody can do the metric conversion. But it's really exciting to see; people are excited about these. That would bring in yet another framework; I think aviation law would apply there. Telecoms law, too: in various ways, these firms are more akin to the telecoms that we know. In other ways, they're more akin to fly-by-night, top-of-the-stack application and session-layer web startups. And it's interesting to see how these different analogies and different bodies of law might apply,
and how regulation might or might not be adaptable. But as civil society, again, in this pixelated regulatory picture, we don't know where to engage. We don't know how to engage. We don't have access to the International Telecommunication Union, as many companies and governments do. And we are not adept at space law forums; I don't know where the intricacies of space law are open to civil society input. I do want to finish by talking about the data protection and privacy issues here. And the positive is that human rights are universal. These rights are interdependent and indivisible; they've got laser links between all the human rights already set up. This is a framework that we can depend on and that we should utilize. And it's no different for the fundamental right of data protection. The fundamental right inheres in the individual, where they are, where they reside. And to the extent a processor of this data touches and concerns the EU, then the GDPR will apply to any personal data that's flowing, and we can assume that it will. And so I think it behooves this sector to put a foot forward and to engage with civil society organizations like Access Now, like EDRi in Europe, but also across Africa, where data protection is gaining steam through the Malabo Convention. Convention 108 already has a footprint. There is a basis for global protection of our fundamental right to data protection. There's a growing system of regulators to enforce and apply that right, and we are going to be looking to do so. One caveat, sorry, I'll finish on this: with respect to your presentation, these companies do not need to comply with these various laws and regulations. They are currently operating in Iran and in many other places where they are not welcome. They are not in compliance, but they are delivering services to people, including people at risk on the ground who need the services.
And so I think in that sense, they may be more akin to the top layers of the stack, in that they may decide not to establish offices in local countries and not to submit themselves to various jurisdictions if they find that in the companies' interests. And I will assert that users at risk in Myanmar are very keen on gaining access to these tools, in a way that probably will not ever comply with local regulations. So I'll leave it there. Thank you.
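For the metric conversion Peter invites (the roughly 62,000-foot operating altitude quoted for the high-altitude platform), a one-liner suffices, using the exact international-foot definition of 0.3048 m:

```python
# Convert the quoted 62,000 ft operating altitude to metres.
FT_TO_M = 0.3048  # exact, by definition of the international foot
altitude_ft = 62_000
altitude_m = altitude_ft * FT_TO_M
print(f"{altitude_m:.1f} m (~{altitude_m / 1000:.1f} km)")  # 18897.6 m (~18.9 km)
```

So the platform flies at roughly 19 km, in the stratosphere, well above commercial air traffic but far below the several-hundred-kilometre altitudes of LEO satellites.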

Kulesza Joanna:
Thank you. Thank you very much, Peter. There is nothing more comforting to a moderator than speakers who have differing opinions; that is a ready-made discussion. But just to keep us on track: I do note that our panelists likely have direct feedback to each other's interventions. I would like to turn the floor over to Berna and kindly request her assistance with the Q&A. There might be questions in the room which I'm not able to assess while moderating remotely. If there are questions in the chat or from our remote participants, do feel free to raise your virtual hand and you will be granted the floor. Berna, if you could support us here with the Q&A, that would be most appreciated. Thank you. So if any of our guests on the floor have questions, you may come to the microphone.

Akcali Gur Berna:
At the moment, we do not seem to have any questions. So maybe, Joanna, you can start off with your question and give our guests time to think about theirs.

Kulesza Joanna:
Great, thank you. I do note that Dan would like to directly respond. Dan, do feel free to take the floor.

Dan York:
Sure. It's great to hear what Berna said, and Uta and Peter. I think, Peter, I'm with you. When I got involved with the Internet Society projects back at the beginning, in late 2021, I had no exposure to satellite information, so I sort of naively had this idea that, for instance, in Sudan, we could somehow get a terminal into the country and provide it to people so they could have internet access and share information, all that kind of stuff. And my naivete lasted until I got talking to people like Berna and Joanna about the ITU and space law and the regulations around that. And Peter's absolutely right that there is no technical reason why this cannot happen. Starlink can be turned on for every country in the world at some point; on a technical level, that can go on, and it's what we see happening in Iran. The challenge, of course, is the legal side and the reality that it is bounded by borders, based on the fact that, as Berna talked about, they have to go into each and every country and get approval for the landing rights and for the spectrum for the downlink and uplink. They have to get consumer approvals. They have to do all of that for each and every country. And I think you can get away with doing it in Iran because, quite honestly, the rest of the international world is not really going to be too concerned; in fact, they would probably prefer it to be turned on there. However, if you turn it on in other countries and other spaces, you start to get into lots of international pressure, attention, things like that. It's just not something you can go and do. You have some countries, such as China, that have been very clear that if it gets turned on in China, they might take actual action.
They've done war-gaming scenarios around what it would take to go and shoot down satellites. I mean, there are lots of different pieces that sort of keep that in check at the moment, which, to be honest, I was disappointed about, because I was hoping it could be that: get that freedom out there, everywhere. You also raised the other good point, which is that unlike a passive geostationary dish for broadcast TV, which is pointed up at a geostationary satellite and is a one-way downlink, just passively receiving the signals, once you do this for internet access, you're doing two-way communication, and, to Peter's point, you're exposing that transmitter. In Ukraine, I know there have been some groups there that make sure they only turn the transmitters on at certain times, and that they put them away. You see pictures of groups of people placing the terminals at a distance from where the people are, in case signals intelligence homes in on where the terminal is and targets it with a weapon or something. So you are exposing yourself, because it is two-way communication, and that is a critical difference in what we're talking about here. And I also join you, Peter, and others in that concern about the control of billionaires. Right now you see SpaceX with Elon Musk; you see Project Kuiper, which ultimately is Jeff Bezos; you see those kinds of solutions up there. OneWeb has now been purchased by Eutelsat, so it's now a corporate entity, and Eutelsat is a French corporation, so there are different things around that. But it's all these bigger players. We don't have what we had in the early days of the internet, for instance, in the terrestrial world, where you had university networks, hobbyist networks. A large challenge is just the sheer cost of launching all of this.
But there's lots going on in there. I'll defer to others.

Akcali Gur Berna:
I would also like to make a short note. As lawyers, we tend to explain what the law is and how the regulations apply, and that doesn't always represent how we personally think about the matter. If you had asked me a question about the human rights law approach, my answer would have offered a different perspective on the matters we have just discussed. As always, we tend to believe that the rule of law is important, and that if you are going to breach the rules, you are damaging the system as a whole. Taking these into consideration, my talk was more about explaining how the rules and regulations apply to satellite broadband technology as it is. Of course, the civil society approach would be different, the human rights law approach would be different, but I didn't include that in my speech. I just wanted to make a little note of that.

Kulesza Joanna:
Thank you very much, Berna. I have a sense that our other panelists might also have something they would like to add. So I’m going to check first if Peter, Uta, or Larry have anything to immediately respond, for example, to Dan’s comments.

Larry Press:
Well, all kinds of this stuff has been thought-provoking. I'll be upfront: I am disappointed and kind of frightened by Elon Musk. He did amazing things, but if you follow him on Twitter and the stuff that he's starting to post now, it's very political, and it's political in a way that I don't like. So I guess, do the rest of you have concerns about him?

Kulesza Joanna:
Now I can see our other speakers, Peter, Uta, please do feel free to take the floor.

Dan York:
Yes, okay. I mean, let's get to a place where there is meaningful competition, but within a regulatory framework. We appreciate innovation. And Larry, I was thinking of your presentation, because you didn't talk about the 90s, right? Which, my understanding is, was when there was a ton of interest in the low Earth orbit sector and a lot of failures. So I was wondering if you could…

Larry Press:
The one you're probably thinking of is… Telesat. No, not Telesat. What was it called? Well, I mean, Iridium, Globalstar… No, no, before that. Teledesic. Teledesic, yeah. Okay. It was Bill Gates and a Saudi prince and a guy who had, at the time, recently sold a mobile company. They attempted to do this in the 90s, but the technology just wasn't there. I think that's the main reason it failed.

Dan York:
And the other point is it was focused on telecom. It was not necessarily fully focused on providing internet access at that kind of scale. And Iridium is still up there; actually, they're looking at launching a new range of satellites to provide data services and pieces like that. But, you know, we don't know. A lot of the systems being proposed right now may fail in a similar way. You have to figure out: do you have the business product that's there? And the other part is that now, 20 years later, almost 30 years later, I guess, in some ways, you have this enormous change in the capacity of launch systems and the mass production of satellites. That's a lot of what's changed today.

Larry Press:
I think Teledesic was in fact going for internet connectivity. The internet was different in those days. It was mostly text; for me it was text-oriented, only uppercase, because I had a teletype at home. But the technology was not up for it, and it just wasn't economically viable. The satellite technology, the launch technology, it just couldn't have been there at the time.

Kulesza Joanna:
Great. Thank you so much. We do have a question from Mike before I hand the floor over to Uta. Please just let me read out the question. It just might be that you would like to reference that question as well. The question from Mike reads, radio spectrum access is regulated to prevent interference and allow coordinated usage. However, in the optical domain, there is effectively no interference that would warrant regulation. What tensions could we see from governments trying to extract fees from the optical spectrum? If you wish to address that question directly, Uta, do feel free to do so. Do take the floor and then I will ask our other panelists if they wish to address Mike’s question directly. Uta, please, the floor is yours.

Uta Meier-Hahn:
Thank you. I very much appreciate the question, and at the same time I find it very far-reaching and, at this stage of development, a little bit beyond the current level of discussion; but it is something I would want to think about, frankly. I have also been asked: what are possible avenues if we acknowledge, or if we all establish together, that some kind of multi-stakeholder input into the further development of this industry is important, and possibly policy options? What could we be doing? I just wanted to throw a couple of things into the room, so maybe they can be picked up by people listening here. For one, of course, there is the option of listening sessions held by all the providers and future providers of these systems. This, of course, includes the EU, but maybe the other providers could also be interested. It would certainly go a long way towards providing some transparency into their systems, which, as this session exemplifies, could be demanded, and it would give the public an opportunity to have their views heard. Another important thing could be to talk to financing and investment actors and see what the avenues of support are, having, for instance, blended-finance impact investors come in to support satellite internet in the countries that currently cannot afford it or have not afforded it so far. We should and could document best practices in regulatory approaches, also with regard to how the companies that exist and the countries that want to be customers can do quick onboarding and activate the services quickly. There is another aspect of really doing research, and financing research, on this, because, as we have probably all seen in our preparation for the session, there is not much empirical evidence on many of the important questions in this topic.
There may be an opportunity for some countries to think about twinning programs to move forward together on this topic. And specifically with regard to IRIS², I feel it is worth throwing into the room that, depending on the views held by the financiers of this constellation and the populations that stand behind them, there may also be an opportunity to think about connectivity from space as an in-kind development service, if you will. So not only providing countries with the capacity building they need to set up their institutions, etc., but also really directly providing that connectivity. I'm not sure if that has been done much before, but it could certainly be an avenue. And then certainly there is coalition building in general, just to foster the interest of this very large common consumer group. Thank you.

Kulesza Joanna:
Wonderful, thank you very much. I'm curious if any of our speakers might have an answer for Mike as well; that seems a really interesting question. I do agree it is an early stage of development for optical spectrum infrastructure and its governance. Yes, Dan, please go ahead.

Dan York:
I think it's a good question. I mean, the basic point is that if you're doing optical connectivity, it's a direct connection; it's not shared, as Mike said. I think it's really early, and we have to see where these things get proved out. Larry provided a great overview of a lot of the different work happening in space-to-ground connectivity and what's going on in that. But I think we've still got a way to go. To Mike's point, it's probably good to be thinking about that in advance, so that these things don't get trapped into regulatory capture or wind up with great impediments. But I think we're still early.

Kulesza Joanna:
Mike, turn it up a bit. You know, just make sure

Larry Press:
Yeah, I just feel like we're having kind of a bull session here. Actually, I should turn on my… there you go. With respect to how to subsidize it and whatnot, to some extent I think that takes care of itself. If the people in an area, the people in a nation, can't afford connectivity to, say, SpaceX or one of these systems, that will mean there is excess capacity over that nation. I remember when Elon Musk first came out and said, hey, we're going to charge the same price everywhere. And that was crazy, because it makes no sense. You want to charge a price that will use up your entire available capacity. So to some extent, just the economics of it take care of the different income levels of different countries and different regions. Make sense? I mean, it's coming to pass; he definitely charges different rates in different countries.

Kulesza Joanna:
Great. Thank you very much, Larry. I’m just going to quickly check if any of our panelists would like to add anything to the session we are about to wrap up. And before I do so, just going to check if anyone would like to add anything we might have missed, or if there’s any direct feedback from the room. Berna, please go ahead.

Akcali Gur Berna:
Just to add to Uta's points, well, we overlap. But what would we advise the developing countries? I want to refer back to our policy paper and quickly list what we had recommended for them to use this technology effectively. We recommended that they re-evaluate and update the domestic regulations related to licensing and authorizing satellite broadband services, and consider the different business models and the impact on their autonomy when deciding on gateways, for example. We recommended forming regional alliances to enhance the achievement of their local policy goals. We also recommended that they participate actively in the ITU consultations, especially in the ITU-R, which manages frequency spectrum and orbital resources; and again, if this is done through regional alliances, as they are doing now, it will enhance their chances of achieving their desired outcomes. They should also reassess their commitments under trade treaties. These are not set in stone; they could be renegotiated, and they should be reconsidered in light of the renewed interests and priorities associated with this technology. And they should familiarize themselves with space law, which hasn't been of interest to many non-space-faring nations; awareness of the rules is essential to making informed decisions. Holistic coordination of these actions, I think, is necessary to ensure that their initiatives align with their sustainable development goals.

Kulesza Joanna:
Great, thank you very much. And Dan, please go ahead.

Dan York:
Sure. One thing I want to say about the panel: I just want to say to Uta that I loved the points she made, because I think you very succinctly summarized some of the key issues here. I would add one point: the robustness, the resiliency, is something we've seen as a critical part. I'm a volunteer here in the United States for an organization called the ITDRC, the Information Technology Disaster Resource Center. They have been deploying into places like Florida when there was Hurricane Ian, and also into the wildfires going on in the western part of the United States. They can take a satellite dish on a pickup truck, for instance, bring it in, and provide Wi-Fi connectivity for the first responders and the other people in the incident command area. It's a kind of ubiquitous connectivity that we have never had access to before. It's just mind-blowing what it can do and the kinds of spaces it opens up. So I think it's important: for all the challenges, there's an amazing amount it can do in the right ways, and we need to figure out how to get it right. I would also point to what Berna just mentioned. A lot of us in the internet space, if we interact with the ITU, primarily interact with the ITU-T, the telecommunication standardization sector, or the ITU-D, around development. Historically, we don't do as much with the ITU-R, the radiocommunication sector. But that's where all of this happens for satellites, because of the spectrum. And people should pay attention to the World Radiocommunication Conference coming up this November and December, because that is the every-four-years gathering of people to talk about this. And while LEOs aren't directly on the agenda, there are side conversations, there are other venues, there are things that will be playing out. So I would encourage people to pay attention to that.
And my final point would just be: we need to have more of these conversations, because this is a new, emerging field. There are a lot of satellites going to be launched over the next while. And we need to collectively make sure we get it right, to the degree that we can, from a societal point of view. So I encourage everybody to read Berna's document, read our LEOs document, read other documents, and share them. Get people talking about this, because we have to be talking about these questions.

Kulesza Joanna:
Great, thank you. Peter, do go ahead.

Peter Micek:
Quickly, thanks. Yeah, to piggyback on and reinforce Dan's comments: we need to have more conversations, but as civil society we are heavily dependent on governments in this space. Governments are, I think, putting forward a lot of the funding necessary; they're going to be doing a lot of the procurement, including through their defense industries and defense spending; and presumably they're the ones talking to these companies. I'm a very privileged person, a white male in the US. I know the public policy director for SpaceX, and I can't get any of my calls returned. So, just to underscore what an asymmetrical disadvantage we're at when we're trying to influence public policy in this space: we are heavily dependent. And among governments, there seems to be a lot of competition over this sector. But I'm buoyed by things like, yesterday, the Freedom Online Coalition launching its so-called donor principles on human rights in the digital age. I think those are getting at ways to harmonize and raise standards around government procurement and support for new and emerging technologies, and they should urgently be applied to this space. Thanks.

Kulesza Joanna:
Great. Thank you very much, Peter. I could do nothing more than strongly support all the points that have just been made. We do need to have more of these conversations, and I welcome the relatively significant presence of LEOs on the agenda of the IGF. It is a theme that the multi-stakeholder community should pay attention to before it's too late, as our speakers have emphasized during this panel. We are out of time, so I will refrain from summarizing the panel more thoroughly. Thank you very much for joining us. Sincere thanks to our speakers; thank you for all the points you have made. Thank you for being here, both virtually and in person. And to those of you who are in the room or joining us online, do feel free to reach out to the speakers directly and share your feedback, because this is the time to make LEO policy that serves the broader internet community. Thank you, everyone. With this, the session is adjourned. Thank you, Joanna. Yeah, I wish we could keep the bull session going. Thank you, Joanna, for leading us. Thanks a lot. Thank you. It's always a pleasure. Thank you, gentlemen. Have a good afternoon. Thank you. Bye. Bye, everyone.

Akcali Gur Berna

Speech speed

148 words per minute

Speech length

1678 words

Speech time

682 secs

Dan York

Speech speed

204 words per minute

Speech length

4131 words

Speech time

1216 secs

Kulesza Joanna

Speech speed

170 words per minute

Speech length

1934 words

Speech time

682 secs

Larry Press

Speech speed

160 words per minute

Speech length

3010 words

Speech time

1132 secs

Peter Micek

Speech speed

150 words per minute

Speech length

1765 words

Speech time

707 secs

Uta Meier-Hahn

Speech speed

167 words per minute

Speech length

2221 words

Speech time

798 secs

Building a Global Partnership for Responsible Cyber Behavior | IGF 2023 Launch / Award Event #69

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Pablo Castro

Chile’s new national cybersecurity policy places a strong emphasis on promoting international norms and applying international law in cyberspace. This commitment is vital for achieving the goals outlined in SDG 9 (Industry, Innovation and Infrastructure) and SDG 16 (Peace and Justice). The policy reflects Chile’s dedication to upholding principles that respect human rights and international law in cybersecurity operations. Chile began working on cybersecurity in 2017 and released its cyberdefense policy in 2018, which stated that cyber operations would be conducted with respect for international law and human rights. The upcoming national cybersecurity policy reaffirms Chile’s commitment to promoting international norms and law in cyberspace.

In Latin America, there is a need for further discussion on attribution in cyber attacks. Unlike other regions, there is little dialogue about responsibility for cyber attacks. Governments in Latin America must decide whether publicly attributing an attack to a foreign power is beneficial. This highlights the need for comprehensive conversations and analysis on attribution in the region.

Capacity building and international cooperation are crucial for cybersecurity in Latin America. Many countries in the region lack national cybersecurity agencies, with governance instead falling under committees. However, training courses offered by countries such as the US, Canada, Estonia, and the UK are helping to enhance capacity-building efforts. These courses focus on applying international law in cybersecurity and play a critical role in equipping Latin American countries with the skills and knowledge needed to combat cyber threats effectively.

It is stressed that Chile needs to develop a national position on international law in cyberspace. The new cybersecurity policy mandates the establishment of this position. Defining Chile’s stance and approach towards international law in cyberspace is essential to ensure consistency and effectiveness in its cybersecurity efforts.

Regarding cyber attack response, a collective approach in the region is recommended as an effective way to express condemnation without attributing the attack directly to a specific actor. This approach allows for a unified stance against cyber attacks, maintaining diplomatic relations and avoiding unnecessary conflicts.

Pablo Castro, an expert in cybersecurity and related areas, supports discussions taking place in United Nations working groups on emerging threats and technologies such as artificial intelligence and cyber mercenaries. His previous experience in dealing with these issues, particularly in the field of cyber mercenaries, further underscores the importance of these discussions. However, caution is expressed regarding potential difficulties and disagreements in reaching a consensus within the working group. Maintaining a good working relationship among members is prioritised to ensure the effectiveness of the discussions.

In conclusion, Chile’s new national cybersecurity policy highlights the importance of promoting international norms and applying international law in cyberspace. This commitment aligns with the goals of SDG 9 and SDG 16, aiming to foster innovation, ensure infrastructure security, and promote peace and justice. Latin America faces challenges in attributing cyber attacks and requires further discussion. Capacity building and international cooperation are crucial for the region, with training opportunities provided by the US, Canada, Estonia, and the UK. Chile is encouraged to develop a national position on international law in cyberspace to enhance consistency and effectiveness. Furthermore, a collective response to cyber attacks in the region is recommended to express condemnation without directly attributing the attack to a specific actor. Discussions in the United Nations working groups, supported by Pablo Castro, are of vital importance in addressing emerging threats and technologies, while maintaining a good working relationship within the group.

John Hering

The Cybersecurity Tech Accord is a coalition of 168 tech companies from around the world committed to upholding foundational cybersecurity principles. It was established in 2018 with 34 companies and has grown quickly in size and influence. The primary objective of the accord is to give the tech industry a voice on matters of peace and security in the online realm.

One of the driving forces behind the growing interest in joining the Cybersecurity Tech Accord is the pressure from customers, as cyberspace has become an emerging domain of conflict. Companies feel the need to clarify their stance on not weaponising their products and services. This pressure compels companies to actively participate in initiatives like the accord to demonstrate their commitment to cybersecurity principles.

However, a challenge for the accord is getting companies with different capacities on the same page. While some are large multinational corporations with significant resources, others may not have the same level of resources. Bridging this gap is an ongoing challenge.

The accord advocates for coordinated vulnerability disclosure policies. It encourages companies to have these policies in place to address and disclose potential vulnerabilities in a timely and responsible manner. Over 100 coordinated vulnerability disclosure policies from the accord’s signatory base can be reviewed online.

Microsoft, a prominent member of the accord, has played a significant role in the context of the war in Ukraine. The company has prioritised strengthening security for its customers in the region and has responded to multiple generations of wiper malware used in operations targeting Ukrainian data. Microsoft also actively reports its findings in the context of the conflict, providing insights into the activities of broad threat actor groups aligned with military campaigns.

The importance of a robust multi-stakeholder coalition is highlighted, particularly in the context of hybrid warfare. The accord, which includes both private sector companies and public agencies, can provide asymmetric benefits to defenders as hybrid warfare becomes a domain of conflict. The collaborative efforts of the Ukrainian CERT, which had the necessary authorisations and coordinated efforts effectively, have been crucial in thwarting cyber operations in the Ukraine conflict.

Policymakers are urged to carefully consider the impact of their regulations on the security research community. John Hering, a cybersecurity expert, raises concerns about potential negative consequences if regulations do not prioritise fixing vulnerabilities and ensuring customer and user security. Poorly considered policies may inadvertently compromise product security and data safety by creating a race to the bottom.

On a positive note, accountability in cybersecurity is improving. Governments are taking steps to include norms violations in public attribution statements, and the International Criminal Court (ICC) has declared its intention to investigate potential cyber-enabled war crimes. These developments demonstrate progress in holding actors accountable for their actions in the cyber realm.

Overall, the Cybersecurity Tech Accord has garnered significant support and interest from tech companies worldwide. Its commitment to foundational cybersecurity principles and efforts to give the industry a voice in online peace and security are noteworthy. Challenges remain in bringing companies with different capacities together, but the focus on coordinated vulnerability disclosure policies and the active role of Microsoft in securing customer data in the Ukraine conflict show the practical impact of such collaborative initiatives. Policymakers must be cautious in crafting regulations that consider the impact on the security research community. Nevertheless, positive strides in accountability in cybersecurity, with government actions and ICC involvement, indicate progress in creating a safer and more secure online environment.

Koichiro Komiyama

The analysis reveals several important points regarding cybersecurity incident reporting and vulnerability information sharing. In Japan's case, it is highlighted that sharing information with JP CERT (Japan Computer Emergency Response Team) or the National Cybersecurity Centre is crucial for effective incident handling. In the United States, meanwhile, the Securities and Exchange Commission has introduced a new regulation that requires publicly listed companies to disclose cybersecurity incidents they experience.

However, it is noted that the role of the CSIRT has slightly changed. The specific details of this change are not provided, but it suggests there may be adjustments or updates in the way CSIRTs operate when handling cybersecurity incidents.

JP CERT, being a key player in incident reporting and response in Japan, receives around 20,000 incidents per year. This indicates the scale of the cybersecurity challenges faced by the country. Furthermore, JP CERT predominantly communicates with entities in the United States and China, indicating the importance of international cooperation in dealing with cybersecurity issues.

One of the supporting facts provided highlights a negative incident involving a Chinese security researcher. After identifying a vulnerability issue, the researcher promptly shared the information with Log4j developers. However, the researcher was subsequently summoned by Chinese authorities. This incident raises concerns about the potential hindrance to global information sharing and collaboration on cybersecurity matters.

The analysis also suggests that cyberspace is not as global as imagined, with over 80% of JP CERT’s incident engagements involving the US and China. This indicates that despite the interconnected nature of the internet, there are still significant gaps in global information sharing and cooperation in the realm of cybersecurity.

Another significant point raised is the localization of data and vulnerability information. This localization hinders global information sharing and collaboration, resulting in a chilling effect among Chinese security researchers. The introduction of regulations in China has had an impact on the willingness of researchers to share valuable vulnerability information due to potential legal repercussions.

The speakers argue that regulations should not hinder international information sharing and that vulnerability information should not be localized. They emphasize the importance of global cooperation and partnership in addressing cybersecurity challenges effectively. By overcoming barriers to information sharing and collaboration, the international community can collectively work towards a more secure cyberspace.

In conclusion, the analysis highlights the need for effective incident reporting and vulnerability information sharing in cybersecurity. It underscores the significance of international cooperation and the potential implications of regulations on global information sharing. The argument is made for regulations that foster collaboration rather than hinder it, ensuring that vulnerability information is not localized and that the global community can work together to address cybersecurity threats.

Charlotte Lindsey

The Cyber Peace Institute is an organisation dedicated to studying the impact and harms caused by cyber attacks. They recognise the importance of having evidence and data-driven understandings of the harm inflicted by these attacks. They emphasise the need for a context-aware approach to accurately calculate the harms and impacts.

One of the main concerns highlighted by the institute is the increasing targeting of vulnerable communities, specifically humanitarian, human rights, and development organisations, by cyber attacks. To help these organisations respond and enhance their capabilities, the institute has established a humanitarian cybersecurity centre and a cyber peace builders programme. This initiative aims to support these organisations in preventing and responding to cyber attacks effectively.

Understanding the impacts of cyber attacks on vulnerable communities is crucial for policy-makers. The institute believes that lessons learned from data analysis can be injected into policy discussions to develop efficient strategies and measures to address the issue.

During the height of the pandemic, attacks on healthcare infrastructure became a significant concern. Critical healthcare infrastructure experienced an alarming increase in cyber attacks. In response, the Cyber Peace Institute collaborated with the government of the Czech Republic and Microsoft to develop a compendium of best practices aimed at protecting the healthcare sector from cyber harm. This initiative provides guidance and recommendations for safeguarding healthcare facilities and systems from cyber threats and vulnerabilities.

The institute also stresses the need for clear accountability for breaching cybersecurity laws and norms. They are actively monitoring 112 different threat actors related to the Ukraine and Russian conflict. By holding these actors accountable, the institute aims to deter future cyber attacks and ensure a safer cyber environment.

In conclusion, the Cyber Peace Institute’s work revolves around deepening the understanding of cyber attack impacts and harms. They actively support vulnerable communities through their humanitarian cybersecurity centre and cyber peace builders programme. Their collaboration with the government and industry partners highlights the importance of protecting critical healthcare infrastructure from cyber threats. Additionally, the institute advocates for clear accountability to prevent future breaches of cybersecurity laws and norms. Overall, their efforts contribute to creating a more secure and peaceful digital space.

Regine Grienberger

Germany is actively taking steps to strengthen the normative framework for cyber behaviour, committing to the implementation and monitoring of cyber norms, capacity building, and the attribution of cyber incidents. To protect critical infrastructure, Germany is developing national legislation in alignment with the EU directive. This signifies its commitment to safeguarding essential systems and services from cyber threats.

In order to promote transparency and the sharing of best practices, Germany intends to document its progress in implementing cyber norms. By doing so, they hope to contribute to an international dialogue on cybersecurity and encourage other nations to adopt similar measures.

Germany has also established a national attribution procedure, which is coordinated by the Foreign Ministry. This procedure involves conducting comprehensive analyses and making informed political judgments regarding cyber incidents. By attributing cyber attacks, Germany aims to hold perpetrators accountable and deter future malicious activities.

Moreover, Germany regards the attribution of cyber incidents as an essential practice: it is both achievable and necessary in order to respond effectively. The extensive analysis and political judgment involved in the procedure demonstrate Germany’s commitment to accurately identifying and assigning responsibility for cyber attacks.

Furthermore, within the context of the European Union diplomatic toolbox, sanctions are considered an instrument for responding to cyber incidents. This highlights Germany’s support for using sanctions as a means to deter and punish those responsible for cyber attacks. By leveraging sanctions, the EU aims to send a strong message that cyber aggression will not be tolerated.

In conclusion, Germany is actively working towards strengthening the normative framework of cyber behaviour through various means. Their efforts include developing national legislation, establishing a national attribution procedure, documenting progress in implementing cyber norms, and supporting the use of sanctions as a response to cyber incidents. These initiatives showcase Germany’s commitment to promoting cybersecurity, accountability, and international cooperation in tackling cyber threats.

Eugene EG Tan

This comprehensive analysis examines the viewpoints presented by Eugene EG Tan on various aspects of cybersecurity research and responsible behavior. Eugene expresses genuine excitement about a project that takes a broad perspective on cybersecurity, inclusive of diverse stakeholders such as states, industry, civil society, and academia. He believes that the project’s wide consultation and intersectionality greatly contribute to the richness of insights generated.

In terms of academic research in cybersecurity, Eugene argues that it has historically been limited to documenting state actions on an individual or regional level. He identifies a critical need for the development of universal measures of responsibility that can be applied across different contexts. Eugene suggests that this lack of common measurement has impeded progress in defining responsibility in the field of cybersecurity.

Furthermore, Eugene advocates for a collaborative and region-interactive approach within the academic community to enrich cybersecurity research. He highlights that academics often tend to focus on individual contexts or specific topics, but funding opportunities are now emerging, enabling cross-regional interactions. By broadening the conversation and understanding different contexts, this inclusive approach can greatly enhance the overall quality of cybersecurity research.

Controlling for cultural and contextual variables across different regions and states in a global study proves to be a significant challenge. Eugene acknowledges the difficulty in establishing a baseline definition of responsible behavior when conducting research on such a broad scale.

To address this challenge, Eugene suggests that it would be reasonable to identify common aspects of responsible behavior while also acknowledging deviations from the norm. This approach would help establish a baseline definition of responsible behavior and provide valuable insights into how the concept of responsibility varies across different states or businesses.

Eugene also emphasizes the crucial importance of implementing additional measures to ensure responsible behavior in cybersecurity. He believes that it is of utmost importance to determine how these measures can be effectively implemented to mitigate irresponsible behavior, subsequently benefiting the entire cybersecurity community.

Accountability and transparency are highlighted as key concerns in the use of commercial spyware. Eugene points out the lack of transparency surrounding the utilization of such tools and the pressing demand for a systematic focus on providing redress for victims. He argues for a coordinated response that effectively shapes the political and normative environment related to spyware. Furthermore, the ability to attribute responsibility becomes crucial in holding individuals accountable for their actions.

Eugene also supports the notion of state responsibility in protecting human rights and holding violators accountable. He emphasizes that states have a legal obligation to protect and promote human rights. Eugene fervently advocates for individual and collective action by states in bringing perpetrators of abuses, such as abusive surveillance technology, to account. He emphasizes the importance of relying on legal avenues, such as formal investigations and subsequent legal cases against the financiers and commissioners of abusive surveillance technology.

In conclusion, Eugene EG Tan highlights the need for a comprehensive perspective in cybersecurity research, the development of universal measures of responsibility, and a collaborative approach within the academic community. He emphasizes the challenges of controlling cultural and contextual variables in global studies, the critical importance of implementing additional measures to ensure responsible behavior, and the urgent need for accountability and transparency in the use of commercial spyware. Furthermore, Eugene supports state responsibility in protecting human rights and holding violators accountable.

Louise Marie Hurel

The analysis explores various perspectives on responsible cyber behavior and the challenges associated with its implementation. It highlights the importance of understanding different interpretations of responsibility in cyberspace, especially in different contexts. The global partnership, which involves over 70 scholars, aims to map practical understandings of responsible cyber behavior and how it is interpreted by different stakeholders. It emphasizes the need to give a voice to less dominant countries, as their interpretations of responsibility are often overshadowed by larger powers.

In promoting responsible state behavior, capacity building and proper implementation of cyber norms are seen as crucial. Germany, for example, has established a national attribution procedure to hold malicious actors accountable, while Regine Grienberger emphasizes the importance of monitoring and sharing information on the implementation process. However, it is also noted that attribution should be a political decision based on effect-based and responsible analysis, rather than an automatic step towards sanctions. There is a growing desire for sanctions in response to malicious behavior, with the EU having the instrument of sanctions in its diplomatic toolkit.

The analysis also stresses the involvement of other actors, such as the private sector, academia, and civil society, in promoting responsible cyber behaviour. Louise Marie Hurel argues for more space to be given to less dominant countries in the debate, alongside engagement from private sector companies like Microsoft. She also highlights the role of academia and research in the global cybersecurity landscape, emphasising the need to connect researchers with the realities on the ground. Hurel acknowledges the multifaceted nature of cybersecurity, which encompasses statecraft, private sector involvement in conflict situations, and civil society engagement.

Trust-building and better interregional channels are also deemed essential for advancing responsible cyber behavior. Hurel mentions the Point of Contact directory within the Confidence Building Measures at the Organization of American States as an area for development. Furthermore, the analysis highlights the importance of creating a common understanding of responsible behavior in different states and regions, as well as identifying deviating elements in norms across different states to better understand variations in perceptions of responsibility.

The analysis also explores the nuanced implications of state regulations on cybersecurity. While regulations are necessary to ensure vulnerability disclosures and establish necessary procedures, there are concerns about whether these regulations hinder communication channels that are already established. Hurel advocates for careful contemplation and assessment when developing regulations to ensure effective communication channels and feasible job roles.

In conclusion, the analysis underscores the need for understanding different interpretations of responsibility in cyberspace, providing a voice to less dominant countries, capacity building, proper implementation of cyber norms, the role of sanctions and attribution in promoting responsible state behavior, the involvement of the private sector, academia, and civil society, trust-building and interregional communication, and the nuanced implications of state regulations on cybersecurity. It highlights the multifaceted aspect of cybersecurity and the importance of research and academia in connecting with real-world issues. The significance of creating a common understanding of responsible behavior and identifying variations in norms across different states is also emphasized.

Session transcript

Louise Marie Hurel:
Thank you so much for being here. We’re starting the session; just in case you’re checking, the room is Building a Global Partnership for Responsible Cyber Behavior. My name is Louise Marie Hurel, I am a research fellow over at the Royal United Services Institute, which is a think tank based in London. So we work with security and defense and we have a cyber security program over there. And I’m leading a project that’s on responsible cyber behavior. And today I’m very happy to welcome you all to what is the regional launch of an initiative as part of this project, which is called the Global Partnership for Responsible Cyber Behavior. So what is then the global partnership and why is this important, before I turn to our great speakers both here and online. So the focus of the global partnership is really to map practical understandings of what responsible cyber behavior means, how it’s interpreted by different stakeholders. And for this first year, we’re looking specifically at how states see responsibility in practice, what are the regional nuances, what are the contextual and cultural elements that shape the understanding of responsibility. And we have, as part of this global partnership, we have a structure. So we have an advisory board and I see that Chris is over here in the room representing the advisory board. Thank you, Chris. We also have members. So the global partnership consists mostly of researchers and research institutions from across different regions. So we have over 70 scholars and researchers involved. And the idea is that we have working streams for each of the regions and we’ll be producing regional papers out of that, which will become a global compendium on responsible cyber behavior throughout this next year. So it’s quite exciting. Stay tuned. But as part of thinking about the global partnership, I think there’s a bigger question of why is this important, why is this relevant and why now. 
So for those that have been following closely the UN negotiations, the open-ended working group, there are increasing tensions and there are things and tough questions that sometimes it’s very hard to deal from, let’s say, a diplomacy or a geopolitical kind of standpoint. But as a research community, this is something that we can do. We can ask tough questions. We can come together and look at our differences and our commonalities as researchers from across different regions. And I think there are other some challenges that are, let’s say, in the background of this conversation. So first, there’s a lot of understanding or, let’s say, even publication around big powers that often dominate the debate, and that’s fine, I mean, but that leaves little space for other regions and other countries to kind of vocalize their own kind of like understandings and interpretations. So I think it’s important to think about, you know, how do we think the research agenda around that. Second is that international peace and security discussions are the highest level of conversation that one can have when it comes to, let’s say, responsibility in cyberspace, right? And obviously, in the context of the UN, we’re talking about negotiating a document, right? So it’s a place where you actually have an output, which is a consensus document, and you don’t necessarily see the regional nuances in those particular documents. And perhaps you’re just focusing on the highest political angle. So responsibility is potentially not just that. There are other layers that we need to consider. And finally, that there is, you know, of course, a need for a greater, let’s say, contextual or cultural understanding of where the values that come into each country’s way of seeing and perceiving responsibility, in addition to these norms that have been agreed at the international level. 
So to think about that and to reflect, I think there’s nothing better than to do this over at the IGF where we can actually have a multi-stakeholder perspective. So that’s the objective of our conversation here today, is to bring stakeholders from each stakeholder group to reflect on how they see responsibility in cyberspace in practice, to have their views. So we’re going to pick a bit. So it’s a snapshot of each of them because we only have an hour, but definitely and hopefully this is a trigger for food for thought and for future, let’s say, conversations that we can have around each of these topics. So today with me, we have two people online, but I’ll present all of them right now. So we have Regine Grienberger, who is joining us online. She was here. Some of you might have seen her, but she unfortunately had to leave, but she’s very kindly agreed to join us and committed to being online. So thanks, Regine. Regine is the Cyber Ambassador at the German Federal Foreign Office. We also have Pablo Castro over here on my side. He’s the Cybersecurity Coordinator at the Chilean MFA. And you have a crowd cheering for you over there as well. We have on my other side, John Hering, who is the Senior Government Affairs Manager at Microsoft. We also have Charlotte Lindsey, who is joining us online. She is the Chief Public Policy Officer at the Cyber Peace Institute. And we also have Eugene Tan. He is an Associate Research Fellow at the S. Rajaratnam School of International Studies. And hopefully I pronounced that correctly; the shorthand is RSIS. And we also have Koichi Rokomiyama, who is the Director of the Global Coordination Division at JPCERT/CC. So as you see, we have a lineup of government representatives, private sector, academia, and technical community here. But I’ll stop talking now because I think the most interesting bit is for us to have this kind of back and forth. And Regine, I hope you’re here with us in cyberspace and we can see you at any point. 
Is she online? Can you confirm with… Is she online, Regine? Yes? Wonderful. So Regine… I am. I am. Hi. Wonderful. Hi, Regine. Thanks for joining us. So Regine, the idea of this conversation is really to be a conversation, right? So it’s supposed to be dynamic. Regine, I wanted to start with you for us to unpack some of the layers when it comes to what responsible cyber behavior means in practice, right? So while the discussion at the UN has really provided this framework for responsible state behavior, there are still many nuances that we are kind of exploring, right? For some states, for example, responsibility might be seen as calling out bad behavior or irresponsible behavior through public attribution, right? Or sanctions, let’s say. So how has Germany been positioning itself with regards to that? Could you elaborate a bit?

Regine Grienberger:
Thank you, Louise. First of all, congratulations on the creation of this global platform. I think both the past OEWG and the current OEWG and also the Ad Hoc Committee negotiations on cybercrime show that the era when cyber norms were only negotiated by a few capable states is definitely over. We now have the whole UN membership involved in these negotiations. And also a lot more non-governmental stakeholders, which is a good sign. But still, we need more smart people to sort out the complex issues that we have here. So I’m really grateful that you established this platform. Now for your question, I wouldn’t start with attribution. The first thing that I would like to mention, how states can strengthen the normative framework, is, of course, implementation. It sounds a little bit trivial, but it is not. I mean, we in Germany have no problem with the negative norms, the “refrain from”: we would never attack critical infrastructure. But the positive norms, like “protect critical infrastructure”, are much more difficult to implement. We have, for example, at the moment, negotiations about a national law that is going to implement a new directive on the European level. It’s the NIS directive, which is a legislation to protect critical infrastructure. It sets benchmarks and standards for entities of critical infrastructure. And it will require a lot more cybersecurity experts to actually do this. I mean, to do all the jobs that are mentioned in this legislation. So where do we find them? So this is very difficult to implement. The second thing that states can do is, of course, monitor their own implementation and share it with others. In the last OEWG, we had discussions about a national survey. I think it was a Mexican proposal. And I think it’s a very good thing to document also what you are going to do or what you are doing in order to implement cyber norms. It’s also a way to share best practices and get others on board. 
And as we all know, it’s a cross-border endeavor to implement the cyber norms. So this is also a possibility to define the interfaces between national jurisdictions. Then the third element I would like to mention, still before attribution, is capacity building. And this has been defined in the last negotiation round as a two-way street. We had a very nice panel also during IGF describing the challenges to coordination for cyber capacity building measures. And I think we all have to do a lot more work to get this really going. It’s not only a question of money. It’s also a question of, again, human resources that have to be invested, but also coordination to get the right things done. And then the last thing is attribution. Attribution is holding malicious actors accountable. It’s very difficult in practice, but it’s doable. We reject this notion that you cannot properly attribute. I think we can. We have technical possibilities, and we have to use, of course, also political judgment to put the observations that we make on a technical level into an international context. So, we have established in Germany a national attribution procedure. The foreign ministry is the penholder of this procedure, and it works together with other ministries and agencies and intelligence services who might have intelligence or other facts to contribute to this procedure. And we do it in a very thorough, responsible way, so that when we go out with an attribution decision, you can be sure that we have collected the necessary background information and that this is not something that is done lightly. It’s a political attribution because it’s a political decision, but at its basis there is a really effect-based and responsible analysis of what has happened. So, sanctions are still something else. 
Sanctions don’t require attribution, and attribution doesn’t automatically require sanctions, but in the European Union, within the diplomatic toolbox, we also have the instrument of sanctions to use alongside attribution. And this is something that we will probably see more often in the future. There’s a lot of appetite for sanctions out there because malicious behavior is really increasing from different sides. So, I’ll leave it with that.

Louise Marie Hurel:
Thank you. Wonderful. Thank you. Thank you very much, Regine. And I think what we see from your, let’s say, points is that there are positive levers to thinking about responsibility, right? So, a positive understanding of responsibility where you build capacities, where you think about the development of national laws and how do you connect that with the regional level when it comes to the EU, right? I mean, implementing things like the NIS Directive, and also monitoring implementation. But there are also, let’s say, negative, not in the sense of a judgment call on it, but negative in the sense of what it proposes, right? There are also levers such as attribution and then sanctions that are within, let’s say, the statecraft toolbox to think about responsibility as something that’s external, right? So, there’s the internal responsibility of the state to necessarily have the capabilities and the capacities to be held accountable when it comes to its own citizens, but there’s also the external responsibility over there when thinking about if another state is acting or a non-state actor that’s within another state and vice versa that applies that vision of responsibility externally. So, Regine, thanks a lot. Given our time, I’m going to try to do a first round of questions, and if we have time, I’ll do the second round of questions just because I’m mindful of that. So, Pablo, so passing over to you, I know that over in Chile there’s a lot of discussions about the development of national policy right now, and also a national law, right? I mean, focusing on cyber security. How does it work then, and I know that one of the components is trying to connect, let’s say, the domestic institutions development, the principles with, let’s say, the framework for responsible state behavior and the implementation of international law in cyberspace. 
So, can you explain a little bit more and give us a little bit of insight into that process? Because, as I know, it’s still underway, right?

Pablo Castro:
Thank you, Louise, for the timing, to keep us very on time. Well, thanks very much for this invitation. It’s a very important, fascinating topic, and also congratulations on the global partnership. Well, it’s still a challenge because, basically, in Chile we started back in 2017 when we released our first national cybersecurity policy, and in that policy, well, we tried to cover many things in cyber, you know, but we set up, I mean, five goals, and one of them was related to foreign policy, which is very important because, for the first time, the Ministry of Foreign Affairs was really engaged in this process. And basically, what we did was, okay, our foreign policy has a lot of, you know, principles, and we basically said those principles also apply to cyberspace, you know, respect for international law, promotion of human rights, you know, strengthening multilateralism, and so on. So, we said those principles are there, part of the foreign policy, and also part of our view and policy in cyberspace. That’s very important for us because it was quite easy at that moment to start, I mean, this work. And then our cyberdefense policy, released back in 2018, was also very important because it was, I think, one of the first times we basically set a statement that, for example, cyber operations will be conducted under respect for international law, IHL, and international human rights law, and it was actually an initiative coming from the Ministry of Defense as part of this whole process, you know. But even before that, the Ministry of Foreign Affairs started, I mean, to make those sorts of statements. So, of course, in coordination with the Ministry of Foreign Affairs. Unfortunately, maybe this policy is not too well known because it was released, I think, one week before the new administration came in in 2018, but it is in English, so if anyone wants a copy of it, I’m really happy to share it with you. 
And I think there are still a lot of challenges that we would like to address in the new national cybersecurity policy, for which the text is ready. It was approved by the Inter-Ministerial Committee on Cybersecurity in May this year, and we expect it can be released, you know, during 2023. The new policy is actually, I mean, a commitment to promoting, you know, international norms, the application of international law in cyberspace, and CBMs, which are a very important component in our foreign policy. There’s been a lot of work we’re doing at the level of the OAS with the establishment of 11 CBMs in cyberspace. And also, we will have a commitment to work on an international cooperation strategy, you know, in cyberspace, and also on a national position on international law in cyberspace. I mean, it doesn’t mean we were not trying, I mean, to work on this, but now it’s going to be part of the mandate of the new policy, and I think this is going to be very important because it’s basically a commitment, you know, it’s coming from the president, and so we have a mandate, and so we are supposed to work on this. But I think this is still a challenge when it comes to responsible state behavior in our region, because, I mean, Regine mentioned attribution, and there’s not been too much discussion about attribution right now in our region to see, I mean, what other states think about it. In my own experience, it’s sometimes been complicated when you speak and talk with your authorities and say we were maybe under attack by some foreign power or something, and the question is, what is the benefit of making this attribution? I mean, is it something necessary to do, or to make a press release? But I think there are some benefits, and it’s something we still need to discuss more internally at the level of the government and other ministries. 
As you know, in Latin America, you have this problem of governance of cybersecurity, where sometimes you don’t have national cybersecurity agencies in charge of this. You have committees, et cetera. So that discussion is something we still need to improve more, and to exchange views with other states, you know. We’ve been trying, I mean, to promote this sort of dialogue: what do other states think about the application of international law? What is your experience in implementing the 11 OAS CBMs? I would like to echo what Regine said about capacity building, which in our region is critical. It’s very important. The OAS has been playing a very important role with a lot of training courses regarding the application of international law. Because basically, if you want to take some important decisions on this, and to develop a national position, you need people that can really understand what we’re talking about. So I think that could be… I mean, the only lawyer, I think, we have right now, Mr. Ford, who is really good, and it’s thanks to the training that we have, thanks to the OAS. And I want to actually, in that case, highlight the, I mean, outstanding work that’s been done by some states, like Canada, the United States, Estonia, the U.K., that have been actually helping with access to these training courses. I should also mention the Global Emerging Leaders Program on cybersecurity; it’s thanks to that program that I’m right now at the Internet Governance Forum, because basically, one of its main focuses is to promote responsible state behavior. So I think it’s something that’s quite important in terms of promoting this sort of dialogue. And I think the global partnership can play a very good role in our region to try to, you know, create a sort of space for states to come together and exchange points of view. But as I said before, it’s still a challenge. There’s a lot of things we can do. 
My aim is that the next time there could be an attack on one state in our region, like Costa Rica, we can maybe come together and make a collective response to say we really condemn this attack. Maybe not necessarily to say who was behind it, but at least to show a sort of condemnation. And I think it’s something that can be done, you know. Thank you. No, thank you very much. And I think it’s interesting to have two government representatives on this panel, because then you have kind of two ways of thinking about it, right, or already the nuances of thinking about that internal dimension.

Louise Marie Hurel:
And Pablo, you mentioned, you know, the whole development and the history of how Chile arrived where it is right now, and it’s important to have the policy right now, because then there’s the whole conversation of how to better connect, you know, the domestic side of things and how the policies have been developed with international law, and how to advance and to have that mandate, as you said, to be able to do that, which is quite important. And we know that in terms of policymaking in the region, it’s really always about that. And I think your point on attribution is also quite interesting, right? It’s not necessarily that there’s a political interest in naming and shaming, but that on the other hand, this external responsibility is something where, you know, there needs to be further trust-building within the region to think about what the channels are, how we can make the POC directory within the CBMs at the OAS kind of advance in that way and be more implementable. So now I wanted to shift to you, John, because we talked a lot about states, but I think, you know, a huge part of the whole conversation about responsible cyber behavior goes through the private sector, right? Thinking specifically about big companies like Microsoft, right, as we’ve been seeing its engagement. So I wanted to do a very, very quick kind of question, and I think I’ll do a sandwich already with the second question that I was gonna ask you, because I’m quite excited about that one. So the first one is really kind of: as I said, responsible cyber behavior is broader than just thinking about state behavior. So what are the main lessons learned and perhaps the challenges of bringing together the private sector within the Tech Accord? I mean, many people, I imagine some might be familiar, but others might not. So do you wanna just do like a quick reply on that, and then I’ll just go for my second question, because I’m very excited about it. 
Sure, yeah, thank you so much for having us and thanks to IGF for putting on this session.

John Hering:
For those who are unfamiliar, the Cybersecurity Tech Accord is a coalition of now 167, 168 technology companies from around the globe committed to some foundational cybersecurity principles, but really what it is, is trying to be the industry organization that gives the industry a voice on matters of peace and security online. And the group’s been around for five and a half years now, and I’ll tell you what has not been a challenge is getting folks on the same page on that. It’s sort of been remarkable how much interest there’s been in joining the group. We kicked off in 2018 with just 34 companies and we’re pushing 170 now, and I think that reflects a lot of pressure that companies feel across the industry from our customers, as cyberspace continues to emerge as a domain of conflict, to make clear where do we stand, what is our role as the folks who are developing the products and services that are so often weaponized by various actors, increasingly including governments. So it’s been easy to sort of get folks on board to say, hey, we have commitments to good security, protecting our customers, we are not interested in weaponizing our products and services to undermine peace and security or peaceful uses of technology. One of the challenges though is just sort of, I think, getting companies that have such widely different capacities on the same page. Some companies, like you said, are very large multinational firms and have the resources to dedicate to some of these challenges, and for many of the companies that have joined the Cybersecurity Tech Accord, UN processes on peace and security online were beforehand very, very foreign. And so it’s been interesting to sort of bring a broader swath of the industry into the conversation. 
And we've also seen some real, meaningful progress across the industry by virtue of the work of the Tech Accord. Maybe most notably, starting a few years ago, we began encouraging companies to have coordinated vulnerability disclosure (CVD) policies in place as a matter of baseline expectation. When we started calling on companies to do that within the group, there were maybe a dozen or so CVD policies that we could find easily online. Today you can find over 100 coordinated vulnerability disclosure policies across the Tech Accord signatory base that are reviewable online, and they can serve as a proof point of action for that group, but also a point of reference for other companies thinking about what a CVD policy would look like in their particular context. So that's just one example, and I'm gonna cut there.

Louise Marie Hurel:
No, that's fine. I said I was gonna do one round, but because of our time I'm gonna squeeze in the second question here to you, John. You talked about the Tech Accord, and I think it's a really interesting endeavor to bring folks together from industry across, as you said, different levels, not necessarily just strictly tech companies. But when we think about Microsoft's role specifically, and that doesn't apply just to Microsoft but to other companies that have been engaging in contexts of conflict and crisis scenarios, like the Russian-Ukrainian war: what is the role of the private sector in those contexts? What is the responsibility of the private sector in engaging in conflict situations, as we've been seeing right now in Ukraine?

John Hering:
So a lot of that question, I don't think, is my place or Microsoft's place to answer, in terms of what the proper role of industry is as it relates to armed conflict. I will say it's something that's been thrust to the fore in the past year and a half since the war in Ukraine started, and certainly Microsoft has played a very forward-leaning role here. I should say that the Tech Accord, early in the conflict, also came out with a statement on industry responsibilities in times of armed conflict. But for Microsoft in particular, I think we've focused on doing three things as it relates to the conflict in Ukraine. The first is hardening security for our customers in the region: if you're going to be exposed to particularly sophisticated threat actors, making sure we're providing the best security that we can. We did a lot of work to migrate Ukrainian data into secure cloud environments, which made physical data centers in Ukraine redundant as targets. The second thing is a lot of work on the active defense side: we've responded to, I think, upwards of 10 different generations of wiper malware in the context of operations targeting Ukrainian data. And the third, which we've leaned into more over the past year in particular, is regular reporting on what we're seeing in the context of the war in Ukraine. We've redoubled a lot of our efforts around threat context analysis in particular: not just talking about what one cyber event was, but painting a picture of the activities of a broader threat actor group and how they're aligned, oftentimes, with a military campaign. We've often seen missile strikes either immediately preceding or taking place right after cyber operations, frequently against the same targets or the same geographies.
Microsoft obviously can't know the level of coordination or where it takes place within government agencies, but the correlation would seem to suggest it. And Microsoft certainly hasn't been alone in this: a lot of private sector companies have been leaning forward in similar ways, and obviously a lot of the success of the efforts to thwart cyber operations in this conflict is attributable to the work of the Ukrainian CERT, which was prepared to readily provide necessary authorizations, to move quickly, and to coordinate the efforts of a broad multi-stakeholder coalition. This is the first example we've ever seen of large-scale hybrid warfare. It certainly won't be the last, but I think one silver lining and encouraging element here is that it looks like a robust, well-coordinated, and determined multi-stakeholder coalition can at least ensure that, as this emerges as a domain of conflict, there can be asymmetric benefits to defenders.

Louise Marie Hurel:
Wonderful. I think that gives us a lot of food for thought. Of course, there are various types of companies engaged: tech companies, threat intelligence companies, and you can get more and more nuanced in the classification of companies involved in conflict. There are evolving questions of whether they are combatants or not, and of whether the private sector has an extra responsibility because they are infrastructure providers. But I wanted to pass over to Charlotte, because since we're talking about conflict situations, I wanted to also talk about the more, let's say, human element, and the organizations that are sometimes the primary target, or that suffer the spillover of a lot of that geostrategic competition. So Charlotte, I don't know if you can hear us, I just wanted to check. Yes, I can hear you. Can you hear me? Lovely. Thanks so much, Charlotte; it must be so early over there. So Charlotte, I know that the Cyber Peace Institute has been doing really great work in trying to measure the impact of the harms that cyber incidents have on civilians and civil society organizations. And normally, individuals, civil society organizations, and the third sector are left by themselves to figure out how best to respond or to protect themselves and their infrastructure. So could you share a little more about what can be done better to support these groups? Thank you, and good afternoon.

Charlotte Lindsey:
I'm really sorry I can't be there in person, but thank you for inviting me today. So yes, the Cyber Peace Institute has been working to understand the impact and harms of cyber attacks. Firstly, I think it's important to build evidence- and data-driven understandings of the harm inflicted by cyber attacks. There are always a lot of hypotheses, but what we've been trying to do is foster more context-aware approaches to the harms and impacts, so that we can then look at the best ways to support and engage in capacity building and building resilience for particularly vulnerable communities. So I think that's a very good starting point: understanding the evidence- and data-driven impacts and harms. One particularly vulnerable group we've been looking at, who have become more and more impacted and targeted by cyber attacks, are the humanitarian, human rights, and development organizations working to support victims of armed conflict and vulnerable populations in crisis situations. What we have done there is build both a Humanitarian Cybersecurity Center and a very specific CyberPeace Builders program, where we match the needs of individual organizations to cyber resilience and capacity-building support that can be provided free to those organizations, to help them build their capabilities to prevent or respond to attacks. And I think that's a very important point. But then also, on the policy side, it's really important to take the understanding and lessons learned from that and inject them into policy discussions, for example at the Open-Ended Working Group or the Ad Hoc Committee on the Cybercrime Convention, in order to be able to say: look, this is what is happening, and this is what needs to be done to prevent it. Another particularly vulnerable community we saw during the pandemic was the healthcare community.
During the pandemic, particularly its two peak years, we saw increasing attacks against very critical infrastructure: the healthcare infrastructure linked to the pandemic response. One of the things that we did with our partners there, the government of the Czech Republic, Microsoft, and the Cyber Peace Institute, was to build a multi-stakeholder compendium of best practices on protecting the healthcare sector from cyber harm, which looked at really practical recommendations that could improve the resilience and protection of the healthcare sector. So another concrete way is looking at the data and what it tells us about the harms, bringing together the people who are impacted, in this case from the healthcare sector, looking at practical recommendations of what has worked, and then building that into resilience programs. And then lastly, we've been working over the last two years on cyber attacks in times of conflict, particularly related to the Russia-Ukraine conflict. There we are currently monitoring 112 different threat actors who are very loud and proud about the attacks they've been carrying out; they have been self-attributing. So obviously there still needs to be more technical, policy, and legal attribution behind that. But I think that speaks to what Regine and Pablo were talking about at the beginning: being very clear about the responsibility of states, also to make sure that attacks don't happen from their territory, or to then potentially hold persons accountable. And I think those will be very important steps going forward: looking at how those who have breached the laws and norms are going to be held accountable.

Louise Marie Hurel:
Thank you so much, Charlotte. I think that starts to paint for us, let's say, a gradient of complementary understandings of responsibility, right? We discussed the domestic and the external notions of responsibility when talking about statecraft, and what that means for the applicability of the norms. We talked about the private sector and the evolving understanding of what it means to engage in conflict situations as a company; not that the private sector has not been involved in conflict before, because when we look at other contexts, it's not new. But when we're talking about the tech sector engaging in protecting and providing support and assistance, then maybe we're talking about new dimensions of responsibility there. And now, looking at the third sector, at civil society organizations and what the Cyber Peace Institute has been doing, I think there's an extra layer of responsibility there, which is thinking about how civil society organizations can feed back to governments and say: these are the harms, be very thorough about the data we collect, and be able to hold them accountable for the actions and the spillovers of many of these activities, right? And Charlotte, I will definitely get back to you with a second question. So I will now pass it over to Eugene. Eugene, now we're in the sweet spot, because as a person who comes from academia, my heart goes out to you as a fellow person from the same sector. So I was wondering: at the heart of the Global Partnership really lies this commitment to foster research-led dialogue with different views from different countries and regions on the topic. Are we doing enough as a research community to really connect to those realities, or are we stuck in our own silos? How has RSIS worked through those different silos? Thanks, Louise.

Eugene EG Tan:
So let me first say that it's an honor and a privilege for RSIS to be involved in this project. I think this project represents a wonderful opportunity for us to shape and build what responsible behavior in cyberspace looks like from a global, multi-stakeholder perspective, through dialogue and research. For the longest time, academic research has been done on a very individual, regional, case-study basis, where the actions and commitments made by states are documented, and it's from this that we draw what we think is best practice, and maybe implement it in an arbitrary manner. So what I think has been lacking in research is a common measurement of what responsibility actually is, which is what makes this project so exciting. What makes it doubly exciting is how wide the consultation is, and the intersectionality that each individual on this panel, online, or even in this room brings to the whole project. This means the discussions and findings come from a group of people: not just a snapshot from a specific region or an academic perspective, but rather one that considers a wider context of responsibility, with state, industry, civil society, and academic views coming together on a very global scale. So, coming back to your question about the need to connect different realities when doing comparative studies across regions: I think as an academic community we haven't necessarily done enough talking across regions, and academics tend to focus on our individual contexts when talking about cybersecurity. This can be area studies; it can be specific topics you're interested in. But I think that has been changing, especially as funding starts to come online so that academics like myself can actually interface with different regions. I mean, I met you first in Mexico. What's an ASEAN person meeting someone based in Europe doing in Mexico, right?
Doing so helps us build that bridge and understand the different contexts we each reside in. And I think this broadens the richness of the conversations that we have, and we're all richer for that, yeah.

Louise Marie Hurel:
Wonderful, and I wanted to follow up on that, actually, Eugene. It's quite interesting, this need to connect to the global research community around this, and it's definitely at the heart of what the GPRCB, the Global Partnership for Responsible Cyber Behavior, seeks to do. But Eugene, what can we do better? You started alluding to some points there, but what can we do better to develop a research agenda that is more attentive to the cultural and contextual elements that might play into defining responsible cyber behavior? So you're asking a fellow academic how to do research design. Yes, absolutely, because this is part of what we can do, right?

Eugene EG Tan:
Yeah, so personally I think, because this is a global study, it's going to be really difficult to control for all the cultural and contextual elements across regions and different states. So what would be reasonable is to pull out the common strands of what constitutes responsible behavior and note the deviations from that norm. This would enable us to put out a document that defines responsible behavior as a baseline, rather than building on existing research, which tends to provide case studies on how states or businesses think they are being responsible. Responsibility is such a nebulous concept; there is no one measurement, like I was saying earlier, and everyone thinks they're responsible, right? So it's how we draw out these extra measures, and how we can inform the community as a whole about how they can actually be implemented, that will bring value to the whole ecosystem.

Louise Marie Hurel:
Absolutely, and I think what I'm hearing is potentially painting a spectrum of responsibility. We already have the norms at the international level; for how they're interpreted, we have the area studies, of course. But I think your point on understanding the deviations is quite fundamental, right? And how do we access those practices to be able to draw that out? That is part of what we'll be doing in the next year, so that's quite exciting. I wanted now to turn to Koichiro. Koichiro, you have been engaged in so many different bits and pieces of the technical community, right, at JPCERT/CC, being part of FIRST's advisory board, and so on and so forth. So I wanted to speak to you particularly about CERTs, which have a really important role. At the UN, among the norms, there's a norm to protect CERTs against being targeted, and they have had a fundamental role in maintaining the security of networks and systems for many years now. But many countries have now established reporting requirements for incidents, and we already discussed that a bit. Is it realistic to expect organizations to report incidents within a sometimes very short time frame, or to have governments require that some vulnerabilities and incidents be reported to them first? I see that there's a responsibility on the side of the CERT community, but is it realistic to expect certain things, especially when it comes to vulnerability reporting and reporting requirements, given your experience in the field?

Koichiro Komiyama:
Thank you, Louise. Hello everyone, my name is Koichiro "Sparky" Komiyama, from the Japan Computer Emergency Response Team (JPCERT/CC). Well, I'm glad that Louise mentioned the role of CSIRTs, or CERTs, in protecting the global Internet, and my contribution is to explain how the role of CSIRTs has changed over the last few years. I have three points. First, of course, we see more rules, regulations, or local legislation requiring anyone to report vulnerabilities and incidents to authorities. This includes, for example, India's case: reporting cybersecurity incidents to CERT-In, the Indian CERT, within a few hours of occurrence. And since I spent a week in the IGF meeting rooms this week, I just learned that Sri Lanka will have a similar regulation in a few months. I would also note that there are many other authorities or government agencies that receive security incident reports. In our case, Japan, if there's a cybersecurity incident, organizations share information with JPCERT/CC or the National Cybersecurity Center; but if the case is associated with a personal information leak, there is another government commission they are mandated to report to. And just recently, the US Securities and Exchange Commission also introduced a new regulation on incident disclosure for US public companies. Now, my second point: you may not be familiar with what we are receiving. At JPCERT/CC, we receive 20,000 cases or incidents per year, and in about half of those cases we need to engage or communicate with someone in the United States: the ISPs, the platform operators, the researchers there. That's half of our received reports. For another 30 to 40 percent, we need to reach out to China. So the US and China combined are more than 80 percent, and from this fact I'd like to suggest to you that cyberspace may not be as global as you imagine. What's crucial on the internet is not very distributed, but rather concentrated in a few places on earth.
And the other thing is that regulators often misunderstand this: they assume that if they get more information, they can make more accurate decisions or assessments. For us, among 20,000 incident cases, what we'd like to see is less than 1%. Only those, fewer than 100 cases, are really beneficial for analyzing what type of APT attack is happening, which specific Japanese critical infrastructure is already compromised, and so on. The rest is not garbage, but it's not very informative or actionable, at least for us. Now, my last point. The worst-case scenario is that local legislation hinders or undermines the international, global information sharing that we have been doing for 10 or 20 years. Log4j is a very good example. It's a common software library used widely everywhere, and the vulnerability was first identified by a Chinese researcher working for an Alibaba subsidiary. They did a great job identifying the issue and then sharing it with the Log4j developers immediately. But far from being praised or rewarded, they were summoned by the Chinese authorities, and since then there has been a chilling effect in the Chinese security researcher community. I do not expect that they will be able to share vulnerability information with, for example, JPCERT/CC or other government agencies in the future. So, just as we see data being localized, we also see vulnerability information being localized, and we're in the middle of that process. Together with you, I'd like to explore how we can fix this issue and make sure vulnerability information is shared among the stakeholders who should know. Thank you. Thank you very much,

Louise Marie Hurel:
Koichiro. I think we see a double-edged dynamic there. On the one hand, and here we go back to Regine and Pablo, as a state you need to develop regulations and national policies to make sure you have vulnerability disclosure and procedures in place; and on the other hand, you need to think more carefully about what those procedures are, and whether they actually hinder communication channels that have already been established, right? And we could see that not just in the case of Log4j; we could talk about the NIS Directive. When all of these regulations first come in, there's always this process of adjusting in many ways: is the timing correct for expecting CERTs to report? Is it responsible? It's an understanding of what CERTs are responsible for, but in the end, is it feasible or not? I think we're always trying to figure that out in one way or another. We have 10 minutes left, so thanks so much to my panelists for really sticking to the time, and I wanted to open the floor to all of you, whoever has questions for the panelists. I definitely have lots of questions, and I imagine, and hope, you have questions for each other too, but I wanted to open up the conversation. Are there any questions or comments from the audience? If we have government representatives in the room who would like to share their views, that would be great. Or are we all just very tired because it's the last day? Absolutely, go ahead.

John Hering:
I'll co-sign a lot of the same concerns, and would advise policymakers to start thinking about what the impact of any policies you're pursuing is going to be, especially on the security research community. Because it's not just some of the ones you were citing, but also the current negotiations around the Cyber Resilience Act in Europe, which would mandate reporting of non-exploited vulnerabilities to central government agencies that are not necessarily in a position to then take action to fix them; we need to make sure that we're reporting in a way that prioritizes getting a fix and keeping customers and users secure. And just to emphasize your point: there are then people who want to replicate that policy. You create a race to the bottom, where different imitators all create similar vulnerability reporting requirements, which may not be in the interest of the best product security or of keeping the most sensitive data secure.

Louise Marie Hurel:
Great. Any other points from the audience? No? Everyone's very tired; it's the last day of the IGF. I get you, it's overwhelming. I wanted to go back to Charlotte. Charlotte, if you're still online, hopefully. Are you there? Yes, I'm here. Lovely. Charlotte, I wanted to follow up on, let's say, this dimension of civil society organizations. When we're talking about state responsibility and private sector responsibility, I think there's an interesting spot, which is the development of commercial hacking tools, or spyware. It's often a very tricky topic for democracies, for those along the spectrum, and even for authoritarian regimes. So, what kinds of accountability measures do we need to put in place to protect citizens from

Charlotte Lindsey:
the misuse of those kinds of technologies? Thank you. It's a great question, and there's probably a very long answer, but I will try to keep it short in view of the time. Firstly, there is the use of commercial spyware and surveillance tools, the associated lack of transparency, and the consequences of their use and abuse for human rights and respect for laws. We see this as a growing and very lucrative market, and I think accountability has, first, to be looked at as a responsibility of all actors. In particular, we also have to focus on how we get redress for victims: if governments are able to hold accountable those who cause violations of human rights, what is the redress for victims? If we look at some of the measures that need to be taken, and we've talked about this here before: public attribution. You have to be able to identify the actor, and to build on, complement, and reinforce the findings of any technical analysis to achieve accountability; you have to be able to hold somebody accountable. So attribution is going to be a very important aspect of this. Then, legal action: we've seen some countries take legal action now. So, formal investigations, and then, if those investigations build enough evidence, legal cases, which focus attention on who commissions, finances, and sanctions such abusive use of surveillance technology; that can support driving accountability. It's also important to remember that states have a legal obligation to protect and promote human rights and to hold those who violate them to account. So looking at state responsibility, and how states are taking up this responsibility, is important. And then there's the question of how you operationalize accountability at the international level, and I think this is very important.
Collectively, governments have to shape the political and normative environment around spyware, particularly where spyware is now being provided as a service and used to abuse human rights. That needs a coordinated response to ensure responsible state behavior at the international level and to promote accountability between states, because obviously there are a lot of cross-border issues that are critical here. So states will have to act on their responsibilities, individually and collectively, to bring perpetrators to account. But accountability also requires transparency, and I think that's one of the very difficult things about the use of offensive surveillance software, or spyware: there has to be a willingness to be much more transparent about what is today a very opaque market, in its supply, its demand, and its use. So transparency is a really important step. And then, as I say, there are a number of laws and norms that can be invoked, and it's going to be very important, where the human rights of individuals have been breached, to hold the perpetrators to account, for example under the International Covenant on Civil and Political Rights or the Covenant on Economic, Social and Cultural Rights. So there are a number of ways forward. I would just like to conclude by saying that there is actually a collaboration ongoing between a number of civil society organizations at the moment, co-chaired by the Paris Call and the Cyber Peace Institute, where we're working on a multi-stakeholder agreement for transparency around this spyware and cyber mercenaries market. The first iteration of this will be brought to the Paris Peace Forum in November.

Louise Marie Hurel:
Wonderful. I see that you want to chip in.

John Hering:
Yeah, just two quick points, also on accountability, because I saw a colleague earlier today, on the other side of the IGF, who was saying: oh my goodness, we're having the exact same conversation on cybersecurity that we were having when I left cybersecurity five years ago. But I want to assure folks that things are moving forward, especially as it relates to accountability. First, on accountability via attribution statements: one thing that's been really exciting over the past year, year and a half, has been to see governments start, really for the first time, to include norms violations explicitly in the attribution statements they release publicly, which is the first innovation in public attribution statements that I've seen in a while. My jaw dropped when I saw it, so I hope yours can now too. The second piece has to do with the use of cyber operations in the context of armed conflict. Probably six weeks ago now, the ICC prosecutor came out and said publicly that his office has a mandate to, and will be, investigating potential cyber-enabled war crimes for the first time. When you think about what it would mean to uphold expectations for responsible behavior, both in peacetime and, really importantly, in the context of warfare, that's a really important innovation, or evolution, as well. So just one more thing to add.

Louise Marie Hurel:
Absolutely. Would any of the other panelists like to chime in, or have a tweet of a last remark? No? Then I'll put Regine and Pablo on the spot very quickly, if they want to respond to this. On the last point about transparency measures and accountability: over at the OEWG, there have been a lot of discussions about whether to include actors like cyber mercenaries, or to include spyware as something more explicitly defined or recognized in the emerging-threats discussion there. How can we evolve that particular discussion? Is it ripe for inclusion, or for further elaboration, when it comes to these kinds of emerging threats? Because I know this was a key point of contention. So I don't know if we can get, again, a tweet from either Regine, if you're still online and can hear us, or from you, Pablo, put on the spot at the end.

Pablo Castro:
So, Regine? No? Okay. It's a good question. In my view, maybe just a personal point of view: when it comes to our conversation about how to move forward at the working group, in its different sections, we sometimes have to be very careful about exactly what we want to put there, because we have to agree by consensus. So that's the point: how you can start a conversation and discussion about things that could definitely be important there. But the other point is, if we start some conversations, they may not create the consensus we want; they may make our conversation more difficult in the end. So it's a difficult balance. Now, it is true that, especially in the threats section, we were including, for example, artificial intelligence and new techniques, but I still want to be a little careful, because, especially with AI, for example, we may just start other conversations, other discussions, and I think that's probably one of the challenges we have with emerging technologies: deciding exactly where we have to discuss one thing or another. But it's still up to the states to try to see how we can address this point. Cyber mercenaries can be something really challenging. I used to be in charge of the mercenary issue years ago, and this is a concept I had never seen before, but I think that, in a way, it reflects the concern of some states. In that case, of course, it's legitimate to discuss this in that forum, because that is the place we have right now for this conversation. So, in a way, we cannot stop it; but, again, we have to see how we can avoid creating this problem at the very end, especially at the end of a Friday at the United Nations, when everyone really wants to go back home: let's try to get this consensus. Thank you.

Louise Marie Hurel:
Thank you, and thanks for taking that last curveball. Well, I just wanted to thank you all for sticking around. I think having a fairly full room at the end of the IGF is not trivial at all. I hope we can stay in touch. The Global Partnership for Responsible Cyber Behavior has a website where you can access more information on our members and our institutional partners, and please do get in touch if you want to get involved in doing research. I'd like to thank my panelists, including Regine and Charlotte, who are online. Thanks a lot, thanks to all of you, and keep in touch.

Charlotte Lindsey

Speech speed

182 words per minute

Speech length

687 words

Speech time

227 secs

Eugene EG Tan

Speech speed

153 words per minute

Speech length

1285 words

Speech time

504 secs

John Hering

Speech speed

213 words per minute

Speech length

1562 words

Speech time

440 secs

Koichiro Komiyama

Speech speed

111 words per minute

Speech length

694 words

Speech time

375 secs

Louise Marie Hurel

Speech speed

191 words per minute

Speech length

4136 words

Speech time

1300 secs

Pablo Castro

Speech speed

198 words per minute

Speech length

1589 words

Speech time

482 secs

Regine Grienberger

Speech speed

154 words per minute

Speech length

780 words

Speech time

305 secs