Policy Network on Internet Fragmentation | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Bruna Marlins dos Santos

During the session, a comprehensive presentation will be given on the Policy Network’s discussion paper. The paper examines various aspects outlined in the Policy Network framework, and debates will be held to delve further into these topics. The aim of the session is to foster a thorough understanding of the discussion paper and encourage insightful discussions among participants.

The presentation and subsequent debates are of significant importance to the Policy Network as they provide an opportunity to seek feedback, gather perspectives, and refine the framework. The Policy Network values the contribution of its volunteers and acknowledges their role in shaping the document. Bruna, in particular, expresses profound gratitude to all the volunteers who helped shape the document with their time and effort. It is heartening to note that some of these volunteers are present during the session, indicating their continued commitment to the Policy Network’s values and goals.

The discussions and presentations align with two Sustainable Development Goals (SDGs): SDG 16 and SDG 17. SDG 16 focuses on promoting peaceful and inclusive societies, providing access to justice for all, and building effective, accountable, and inclusive institutions. The Policy Network’s efforts to facilitate debates and discussions on the various aspects outlined in the framework contribute to these goals. Furthermore, SDG 17 emphasizes the importance of partnerships and collaboration to achieve the SDGs. The Policy Network recognizes the significance of collaboration and appreciates the volunteers who have worked alongside them, highlighting the importance of partnership for the goals.

In conclusion, the upcoming session will involve a detailed presentation of the Policy Network’s discussion paper, as well as debates on the various aspects outlined in the framework. The volunteers of the Policy Network are greatly appreciated and thanked for their invaluable contribution in shaping the document. The discussions and presentations align with SDG 16 and SDG 17, incorporating elements of peace, justice, strong institutions, and partnerships for the goals. By engaging in these activities, the Policy Network aims to further progress towards achieving the SDGs and creating positive change.

Olaf Kolkman

The discussion revolves around the topic of internet fragmentation and its implications on connectivity and global inclusivity. One aspect highlighted is the lack of a clear and operationalized definition for technical fragmentation, resulting in different frameworks for understanding the concept. While fragmentation is often seen as a negative phenomenon, certain types of fragmentation, such as decentralisation, lack of connectivity by choice, or temporary network glitches, are considered to be non-problematic.

However, the evolving nature of the internet and its changing routing behaviour may lead to a different kind of fragmentation, potentially increasing the digital divide. This digital divide could be more prominent in lesser connected parts of the world and could result in a disparity in user experience. Therefore, it is important to address and mitigate these effects to ensure global connectivity.

A key argument presented is the need to protect the critical properties of the internet for global connectivity. Fragmentation in the technical infrastructure is likely to be reflected in the user space, affecting the overall user experience. It is crucial to continually evolve the internet and avoid ossifying it in its current state.

Furthermore, a multi-stakeholder approach is deemed necessary to ensure global connectivity and prevent fragmentation. Stakeholders include the private sector, technical communities, civil society, and governments. By involving various stakeholders, it is believed that a collaborative effort can be made to address global connectivity issues effectively.

One notable observation is the call for a more nuanced understanding of the issues surrounding internet fragmentation. It is suggested that a broader perspective is required to fully comprehend the implications and consequences of different forms of fragmentation.

Another important point raised is the protection of an open internet architecture. This open architecture should be safeguarded to promote common protocols and interoperability. It is argued that an open internet architecture allows for the evolution of the internet and ensures its continued effectiveness and accessibility.

Additionally, the affordability and accessibility of the internet are highlighted as crucial factors in preventing the creation of a digital divide. Issues such as the concept of the “death of transit” and pricing disparities are mentioned, which can hinder individuals’ ability to access the internet. To prevent exclusion, it is important to address these affordability and accessibility challenges, ensuring that everyone who wants to connect can do so.

In conclusion, the analysis emphasises the need for a clear definition of internet fragmentation and a comprehensive understanding of its various forms. Protecting the critical properties of the internet, adopting a multi-stakeholder approach, preserving an open internet architecture, and addressing affordability and accessibility issues are crucial steps towards ensuring global connectivity and preventing the creation of a digital divide. The ultimate goal is to provide equitable access to the internet, ensuring that everyone who desires to connect can do so.

Rosalind Kenny Birch

Fragmentation at the governance layer of internet governance can have negative consequences, such as duplicative discussions and excluding certain groups from the decision-making process. This fragmentation occurs when global internet governance and standards bodies fail to coordinate inclusively. The lack of coordination can lead to redundant conversations and the marginalisation of specific stakeholders.

Furthermore, this fragmentation at the governance layer does not just impact that particular level; it can also have knock-on effects on other layers of the internet user experience and the technical layer. The issues arising from governance fragmentation can trickle down to affect the overall user experience and technical functionalities of the internet. This highlights the interconnectedness of the different layers and the need for holistic approaches to address fragmentation.

To combat fragmentation, inclusivity is considered a central approach. When multi-stakeholder community participation is limited or not fully empowered, fragmentation tends to occur. Therefore, promoting inclusivity becomes crucial in combating governance fragmentation.

Instead of introducing new bodies into the internet governance landscape, it is recommended that existing internet governance bodies focus on improving coordination. Introducing additional bodies may further complicate the already complex governance landscape. Therefore, enhancing coordination among existing bodies is seen as a preferable solution to address fragmentation.

Moreover, it is important to ensure regional nuances and cultural contexts are considered in global internet governance bodies. Internet governance bodies should strive to accommodate the perspectives and voices of all stakeholders, regardless of their cultural or regional background. This can be achieved through better coordination and utilising platforms like National and Regional Initiatives (NRIs) or the Internet Governance Forum (IGF). These platforms provide opportunities to discuss local nuances, regional contexts, and ensure diverse perspectives are heard. For instance, the Africa IGF was identified as a fruitful opportunity to learn about regional perspectives and the importance of cultural and regional inclusions.

In conclusion, fragmentation at the governance layer of internet governance has negative implications, including duplicative discussions and exclusion of certain groups. Inclusivity is crucial to address this fragmentation, and existing internet governance bodies should focus on improving coordination rather than introducing new bodies. Additionally, considering regional nuances and cultural contexts in global internet governance is vital for inclusive decision-making processes. Platforms like NRIs and IGF can play a significant role in fostering regional and cultural inclusivity.

Suresh Krishnan

The internet is a decentralised set of networks that lacks a single point of control. It is a collaborative effort involving multiple individuals who have built this expansive network. This characteristic of decentralisation is a fundamental aspect of the internet, allowing for its widespread connectivity and accessibility.

Technology plays a crucial role in the internet’s functioning by enabling interoperability between these networks. It provides the means to bind different networks together, allowing seamless communication and data exchange. This interoperability is essential for the smooth operation of the internet and facilitates the flow of information across various platforms and devices.

Openness and incremental deployability are critical properties of the internet. The internet constantly evolves with the deployment of new technologies. This adaptability and openness enable the integration of innovative technologies onto the internet, keeping it up to date and capable of supporting new applications and services.

Content filtering is an important consideration in the context of the internet. It is argued that content filtering should occur at higher layers, taking into account the differences in laws across countries, states, and localities worldwide. This approach acknowledges the diverse legal frameworks and ensures that filtering is done in a way that respects local regulations whilst maintaining the internet as an open and inclusive platform.

The multi-stakeholder approach has played a significant role in the development and governance of the internet. This collaborative approach involves stakeholders from various sectors working together to shape policies and decisions regarding the internet’s management. The internet has thrived and evolved due to this inclusive approach, allowing for diverse perspectives and expertise to contribute to its growth and stability.

Efforts in internet measurement are critical for understanding and improving the internet’s performance. There is a need for more measurement points across the globe and a platform for individuals to conduct their own experiments and assessments. By increasing the focus on internet measurement, we can gain valuable insights into the network’s strengths, weaknesses, and overall quality, leading to targeted improvements and advancements.

However, a noteworthy critique is the lack of references in the document. It is important to provide credible sources and citations to support the arguments and claims made. For example, referencing RFC 1958, which discusses the architecture of the internet, would add credibility and depth to the document’s assertions.

In conclusion, the internet’s decentralised nature, enabled by technology’s interoperability, openness, and incremental deployability, has shaped its development. Content filtering should be approached in a way that considers the differences in laws worldwide whilst maintaining the internet’s accessibility. The multi-stakeholder approach has been instrumental in managing and evolving the internet. Finally, efforts in internet measurement are necessary for ongoing improvement, but it is crucial to provide proper references to support the document’s claims and arguments.

Sheetal Kumar

The Policy Network on Internet Fragmentation has spent the year exploring the complexities of Internet fragmentation. They have developed a comprehensive framework that allows them to understand and address fragmentation from different perspectives. The network aims to unpack the elements of the framework, identify priorities, and formulate recommendations for action. They advocate for a multi-stakeholder approach, recognizing the involvement of diverse stakeholders in fragmentation. Seeking feedback from the community, the network wants to align their priorities with the international community and ensure comprehensive recommendations. Their ultimate goal is to provide clarity to the complex and contentious issue of Internet fragmentation, foster ongoing dialogue and engagement, and contribute towards a more connected digital landscape.

Marielza Oliveira

User experience fragmentation refers to the division or segregation of users into different information environments or platforms, resulting in varying levels of access to content and features. This issue has both positive and negative aspects.

On the positive side, user experience fragmentation can include features and content that are specifically designed to benefit the user. For example, certain platforms may tailor recommendations based on the user’s preferences, resulting in a more personalised experience. Additionally, some users may appreciate being able to navigate through smaller, more specialised content ecosystems that align with their interests or values.

However, on the negative side, user experience fragmentation can restrict users’ access to certain content and limit their exposure to diverse perspectives. This can create information bubbles or echo chambers, where users are only exposed to information that supports their existing beliefs or biases. As a result, users may be deprived of opportunities to engage with differing opinions and challenge their own viewpoints. Moreover, this kind of fragmentation can lead to the reinforcement of social, political, or cultural divides, as it inhibits the free flow of information and impedes dialogue and understanding among different groups.

Negative user experience fragmentation affects all users and is a cause for concern. It has significant implications for the rights to access information and freedom of expression. When users are unable to access certain content or are forced into specific information environments, their right to freely seek and impart information is restricted. Additionally, non-targeted users, who may have diverse perspectives, are hindered in their ability to associate with those who are isolated in different information spaces. This ultimately curtails the richness of public discourse and limits the potential for fostering inclusive and diverse dialogue.

Furthermore, user experience fragmentation can be classified as either good or bad. Good fragmentation describes situations where fragmentation is achieved through a multi-stakeholder process and upholds principles of openness and accessibility. On the other hand, bad fragmentation tends to be the result of unilateral decision-making processes, disregarding the interests of users and reducing openness and accessibility.

It is argued that principles regarding user experience should be rooted in human rights standards. Human rights standards are globally accepted and provide a solid jurisprudence foundation for assessing the legitimacy of interfering with the freedom of expression. Adhering to these principles ensures that user experience is guided by ethical considerations and serves the broader goal of promoting peace, justice, and strong institutions.

To mitigate the negative effects of fragmentation, it is suggested that enforcing platform interoperability, data portability, and enhancing users’ media and information literacy can be effective strategies. Platform interoperability allows users to seamlessly navigate between different information environments, fostering exposure to diverse sources and perspectives. Data portability enables users to retain control over their personal information and move it between platforms, preserving their agency and reducing reliance on a single platform. Strengthening users’ media and information literacy empowers individuals to critically evaluate information and navigate the vast amount of content available on the internet in a safe and informed manner. These measures can counteract the negative consequences of fragmentation, such as echo chambers and the spread of misinformation.

In conclusion, user experience fragmentation has both positive and negative dimensions, with its impact extending beyond individual users to society as a whole. While it can provide tailored experiences and niche content, it also limits access to diverse perspectives and contributes to societal divisions. Adhering to human rights standards and implementing measures to mitigate the negative effects are essential in ensuring that user experiences are inclusive, ethical, and conducive to fostering an informed and democratic society.

Jordan Carter

In the analysis of internet governance, several key points were highlighted. Firstly, there was a strong argument for the need for broad-based participation in standards bodies and global internet governance organisations. The analysis acknowledged the Western bias in participation that currently exists and stressed the importance of greater inclusivity to ensure a more equitable representation.

Another critical issue discussed was the definition of governance fragmentation in internet governance. The analysis criticised the current definition, stating that it is too narrow. This suggests that a more comprehensive understanding of fragmentation is required to effectively address the challenges.

Further examination revealed that the narrow mandates of many technical internet governance organisations contribute to governance fragmentation. While these mandates serve important purposes, they can restrict organisations from adopting a systemic view of the internet. This limitation hinders their ability to address the complex governance challenges faced in the digital age.

The analysis also emphasised the need for better coordination between internet governance bodies. It highlighted the potential for meaningful collaboration among the individuals involved in global internet governance bodies, stressing that improved coordination would enhance effectiveness and outcomes.

Lastly, the analysis touched upon the relationship between the multi-stakeholder-driven internet governance system and the multilateral or state-based regulatory and legal system. It argued that these two systems should work together and influence each other positively. By shaping policies and practices collaboratively, a more effective and balanced internet governance framework could be achieved.

Overall, the analysis underscored the importance of broad-based participation, the need for a broader definition of governance fragmentation, and the significance of coordination and collaboration between internet governance bodies. It also highlighted the potential benefits of aligning the multi-stakeholder-driven system with the multilateral or state-based system. These insights bring attention to key areas where improvements are necessary to ensure a more inclusive, effective, and cohesive approach to internet governance.

Roswitharu

The issue of user experience level fragmentation is a complex one, with perspectives depending on one’s geographic and socio-economic context. People in Silicon Valley and the US West Coast express major complaints about actions taken by governments in authoritarian countries or the privacy laws of the European Union. Conversely, Europeans primarily complain about the actions of Silicon Valley platforms.

Maintaining a balance between the global nature of the internet and the preservation of local sovereignty is vital. The original vision of the internet was to unite the planet by enabling unrestricted communication. However, disparities in values, economic systems, and languages have caused tension and division.

Efforts to address these issues should focus on pragmatism and determining the existence of a problem rather than getting caught up in semantics. Rather than engaging in unproductive debates over definitions, it is more constructive to seek agreement on the existence of a problem. This pragmatic approach allows for practical solutions and avoids getting stuck in semantic disputes that do not lead to meaningful progress.

In conclusion, addressing user experience level fragmentation requires considering different perspectives based on geographic and socio-economic contexts. Acknowledging concerns raised by individuals in Silicon Valley and the US West Coast about governments in authoritarian countries or EU privacy laws, as well as addressing European concerns about the actions of Silicon Valley platforms, is essential for improving overall user experience. Striking a balance between the global nature of the internet and the preservation of local sovereignty is crucial. Taking a pragmatic approach that focuses on assessing the existence of a problem rather than getting caught up in semantics will drive progress towards resolving these challenges.

Wim Degezelle

Internet fragmentation is a complex concept without a clear definition, as there are different views on the subject. However, three categories or “baskets” of fragmentation have been identified: fragmentation of Internet user experience, fragmentation of Internet governance and coordination, and fragmentation of the technical layer. The complexity of the topic led to the abandonment of creating a precise definition for Internet fragmentation.

To facilitate discussions and understanding of Internet fragmentation, a framework was developed. This framework aims to provide a structure for discussing the various aspects of Internet fragmentation rather than providing a strict definition. It outlines the three aforementioned categories or “baskets” of fragmentation: fragmentation of Internet user experience, fragmentation of Internet governance and coordination, and fragmentation of the technical layer.

Multi-stakeholder discussions are crucial when addressing Internet fragmentation. These discussions involve various stakeholders, who may differ depending on the specific category of fragmentation being discussed. This highlights the importance of different groups coming together to discuss Internet fragmentation, with each category attracting different stakeholders.

To effectively address Internet fragmentation, it is necessary to have discussions that span across all categories. This is because guidelines for avoiding or addressing fragmentation may not be fully complementary between different categories. By having discussions across the “baskets” or categories, a cross-category approach can be developed to better tackle Internet fragmentation.

In conclusion, Internet fragmentation is a complex issue without a definitive definition. However, through the identification of three categories of fragmentation and the development of a framework for discussions, progress can be made in understanding and addressing this issue. Multi-stakeholder discussions that encompass all categories are essential to effectively navigate the challenges posed by Internet fragmentation.

Audience

The analysis delves into the topic of internet fragmentation and its various implications. It highlights the negative effects of technical fragmentation on the internet’s ability to evolve, innovate, and adapt. The argument is made that when the internet is split into different networks, its potential for growth and development is hindered. The analysis underscores the importance of maintaining the unity and interconnectivity of the internet to enable progress and positive outcomes.

The need for a uniform and unharmful user experience on the internet is also explored. It is noted that elements representing the user experience should be safeguarded to ensure a consistent and positive online environment. Additionally, the significance of interoperability is underscored. It is stated that interoperability is crucial for the smooth functioning of the internet, allowing different systems and devices to communicate effectively with each other.

The harmful effects of fragmentation are examined, particularly in relation to blocking user access to certain sites or content. This type of harmful fragmentation is seen as a significant problem, as it restricts users’ freedom and limits their ability to fully utilize the internet.

The analysis further delves into the impact of fragmentation on democracy and the digital space. It is argued that the integrity of the digital space is crucial for the defense of democracy. The risks associated with fragmenting the digital space are highlighted, bringing attention to the potential negative consequences.

Additional topics discussed include the ownership of IP addresses and the importance of decoupling IP addresses from networks. The analysis suggests that everyone should own their own IP address, allowing for more control and autonomy in the online space.

The involvement of regional or cultural leaders in internet policy formation is explored as a way to mitigate the impact of internet shutdowns and address the needs of specific communities. Engaging these leaders can lead to more inclusive and effective initiatives.

The potential widening of the digital divide due to the availability of satellite internet is also discussed. The rise of satellite and private corporate satellite internet is seen as a concern, as it could lead to the exclusion of certain populations and affect the quality of the online experience for many.

The challenges of implementing recommendations for internet fragmentation and the importance of internet governance are also addressed. The analysis acknowledges the difficulty in implementing recommendations due to the evolving and decentralized nature of the internet. It is concluded that there is a need to create governance to prevent internet fragmentation and ensure a cohesive and inclusive online environment.

Overall, the analysis offers a comprehensive examination of the topic of internet fragmentation, highlighting its negative effects and the importance of maintaining a unified and interconnected internet. It emphasizes the need for a uniform and unharmful user experience, interoperability, and inclusive internet policies.

Session transcript

Bruna Marlins dos Santos:
th th th th th th th th th th th th th th th that we, the Policy Network, just put out. So the session today will be a little bit of a presentation of the discussion paper and also some debates between both the pen holders of the documents and some community commentators on the three aspects that we described in the Policy Network framework. So, and before we move on to that, I just wanted to start with a very big thank to every single volunteer of the Policy Network that helped us shape this document. Some of them are on the stage with us and some of them are also here in this room, so thanks a lot for joining the conversation and helping us construct this debate. I’m gonna hand the floor to you, Wim, right? And then we can move on

Wim Degezelle:
with the agenda. Thank you, and this is on, and as you see we have a presentation. I think the first, Bruna, thank you, you already gave the overview of the agenda, so I will give the brief introduction. My name is Wim de Gezelle. I’m part of the Policy Networks. They are an intersessional activity by the IGF. That means that they also receive support by the IGF Secretariat, and I’m with the Secretariat as consultant to help this Policy Network to start. So, a brief introduction on the Policy Network on Internet Fragmentation. It is an intersessional activity. That means we not only work at this IGF meeting, but we start way earlier. We started working in May, and even before that, to prepare and work to this session and to the IGF. So, it is the Policy Network Internet Fragmentation. There are other Policy Networks also on the agenda, but this one on Internet Fragmentation has the So, what are the objectives of the policy network? The policy network is an open and interconnected network. It’s a network that wants to further the discussion and raise awareness on fragmentation on technical, policy, legal, and regulatory measures that may, and actions that may pose a risk to the open and interconnected, interoperable Internet. So, what are the objectives of the policy network? The first objective is to understand what is actually meant with Internet fragmentation. So, come up with a comprehensive framework and overview of what Internet fragmentation is. We look at case studies, what actually is happening, try to come up with examples or look for examples. And then the third question is what to do about it, how to address or how to address the issue. So, we try to avoid fragmentation. This, looking back to what we did last year, we actually dove into those questions, and as often is the case, you want to find the definition and try to define what you’re working on. Through the webinars, like you see, we had webinars during the year, asking specifically that question, what is the definition? What is the definition of Internet fragmentation? What does it actually mean to people when they talk about Internet fragmentation, and what should and what can be done about it, and who should be doing what? Very quickly, through those webinars and those discussions we had, it became clear that trying to come up with a definition is not really something that is still possible. It might have been possible earlier on, but at this how the people are discussing and trying to squeeze that all in one clear definition is not helpful. What we did instead, or through the work, it became clear there are different views on what fragmentation is, and that’s how last year, as the outcome of last year’s discussion, we came up with a framework, saying, actually, what is the definition of Internet fragmentation? What is the definition of Internet fragmentation? What is the definition of Internet fragmentation? What is the definition of Internet fragmentation? Actually, if we listen to the people, if we listen to the comments we get, we kind of can form three baskets of what people see and understand as fragmentation. What’s in those baskets, we will further discuss and hear from the panelists today. But that, and I think that was the main output of our work last year, that allowed us to come up with a framework. The framework you see in small, and this, I think, is a larger version. So a framework that says, well, we found that when people are talking about fragmentation, we can either form a basket that we can label as fragmentation of Internet user experience, fragmentation of Internet governance and coordination, or people really refer to fragmentation of the technical layer, technical architecture of the Internet. That were the baskets that we could form. With the important comment we got, those baskets are not alone, are not completely separate. There are interactions, there are overlaps between them, and that’s, they shouldn’t be considered as specific, as separate silos. One comment before I hand over to Sheetal to discuss what we actually did this year, is we labeled the framework as it is a framework for discussing Internet fragmentation. We don’t want to come up with a framework to define what it is, but we, from the beginning, say, well, we, this framework should help to discuss and further the discussion. Because I think that’s one of the main evolutions we saw in the work and in our discussions, is that people started to move from, well, we need to define something and then we need to discuss, in kind of an understanding. that it is important to discuss with stakeholders and have these multi-stakeholder discussions on Internet fragmentation. But these stakeholders are not necessarily always the same. It is possible in those three layers of our framework that you need to sit together with different stakeholders or different types of people, different organizations. And I think that’s one of the major or one of the main findings we had last year in our work. Together with, and that probably will become clear out of today’s discussion, the second point is that what those different groups are in those different layers, they come up with actually guidance or guidelines or ideas on how to avoid or address fragmentation, will not necessarily or does not necessarily, is completely complementary with each other. So at the end of the discussion, it will still be necessary to, across those baskets, have discussions on how to actually address ND. So that’s what we did last year. I hope that was clear. So I end here. This was a framework, and that was also the start of our discussions this year. So I hand over to you, Gita.

Sheetal Kumar:
Hi, everyone. Thanks, Wim. It’s great to be here and to be presenting our output for this year. And as Bruna said, we co-facilitate this policy network, and it’s really nice, I think, to now not be doing so much of the work, but to be hearing from you. Once we’ve heard from the drafters and the commentators who will be responding to the drafters of this year’s output, we really want to hear from you. So the work is going to be in the room, and then, as you can see from the agenda as well, we will also be looking for feedback after this session. So what have we done this year? As Wim said, we have been building on the work of last year, where we developed a framework to conceptualize what Internet fragmentation is understood as, as we have just discussed, in many different ways. And so this framework we developed is to support, it’s really a tool to support better understanding and clarification of what Internet fragmentation is. And in that sense, what we were able to do this year is further unpack what the framework is, and those three areas which Wim outlined, and those are the fragmentation of the technical layer, the user experience, and governance and coordination. And what we wanted to do in unpacking these areas was better understand what the priorities should be in each area, so what is actually harmful and negative, and from that, assess what can be done. So develop some recommendations for action, and where we are really, I think, looking forward to hearing from you all and from those who have been so involved already is… Well, really, whether or not you think that these recommendations are helpful, whether anything is missing, and whether you think the way that the different elements of the framework have been unpacked and what has been prioritized, it aligns with your view of what we should be focusing on as an international community when it comes to this issue. So, what we are going to do is take each element of the framework one by one, and I also invite you to go to the PNIF’s webpage and look at the discussion paper as we’re discussing it here, and consider also in the second part of this session how you may want to react to what is being presented. So, we’re going to do, first of all, a presentation of each track or each element, and so we have the very hardworking drafters of the output document here, and we’re going to go one by one, hear from them. They’re going to just present the top-level findings or the top-level points, so what priorities they found need to be addressed, and then some of the recommendations, and then we’ll have a commentator to respond. And so, we’ll do that for each, and then we will open up. So, without further ado, I’d like to hand over to Rosalind Kenny-Birch, who is with the UK government at the Department for Science, Innovation, and Technology. And, Ros, you worked with others to develop the chapter in our document focused on internet governance and coordination and the fragmentation of that. So, in the next three or four minutes, would you be able to just provide an overview of what that chapter says and the recommendations that you have for addressing this element of fragmentation? Thank you.

Rosalind Kenny Birch:
Thanks very much, Sheetal, and great to see everyone. here today. I think one of the points of this panel discussion too is to really provoke a conversation. Our multi-stakeholder working group that worked on this chapter had quite a few different perspectives because fragmentation is such a complex topic it can be to discuss. So it will be really interesting to hear your insights here today from a wider group of perspectives and so I would really invite you to engage in the discussion, offer some of your own insights and challenge afterwards as well. But just to present on what we’ve written up in the preliminary draft chapter on fragmentation at the governance layer of the Internet, I’d first like to lay out a little bit of context. So our multi-stakeholder working group draft wrote that fragmentation at the governance layer primarily relates to the interactions between global Internet governance and standards bodies. When these bodies do not coordinate inclusively, it can and does result in fragmentation. This fragmentation can manifest in siloed or duplicative discussions, an exclusion of specific groups from participation, resulting in decisions being taken without consensus from the global multi-stakeholder community. And it’s important to note too that fragmentation at the governance layer can also create knock-on effects at the other layers of the Internet user experience and the technical layer. So there were a couple of different components to our analysis about where fragmentation can sort of emerge or come from at the governance layer. And one was duplicative mandates. So if part of specific Internet governance bodies mandate is unclear or may have overlapping elements with a different body, this could foster a competition for legitimacy or create confusion between bodies. And therefore that can make it difficult for stakeholders to know where and when they need to engage in a specific conversation. Another point we observed was when mandates are exclusive or don’t fully empower all elements of the multistakeholder community to participate. So we see inclusion as central to combating that so that people can participate on an equal footing. And then finally, taking actions at the right level. So individual governments’ actions can sometimes lead to divergence in the rules applied to the Internet and its management. And in that sense, it’s really important that national governments and Internet governance global bodies are closely conversing about issues, specifically when they’re being developed or discussed through multistakeholder processes already. So with some of those analytical points, we proposed a couple of different recommendations. And again, very eager to get a wide range of perspectives and feedback on this today. But one was not to introduce further bodies into the Internet governance landscape. The Internet governance landscape is already complex. And as we all well know, through all our travels, there are a lot of different conferences, events taking place that we engage with across bodies already. And people only have so much time and only so much financial resource to be able to engage in these. So further perpetuating that complex landscape could end up excluding people from discussions if they don’t have the resources to fully participate in more and more emerging bodies and spaces. However, that being said, another recommendation we made was, therefore, it is important to improve coordination between existing Internet governance bodies to help address perceived or real gaps in these spaces. So coordination between existing Internet governance bodies. governance bodies is needed to help address that as well. Thirdly, and in order to avoid siloed public policy discussions regarding internet governance, all internet governance bodies must be fully inclusive to stakeholders and enable meaningful multi-stakeholder participation on an equal footing. We also believed that that would help address instances of fragmentation at the governance layer. And then finally, we recommend that existing global internet governance bodies must engage more closely with national governments. So this goes back to our point of analysis before. There’s actually a two-way street here. National governments, when looking at proposed legislation, can actually really benefit from talking to global internet governance bodies about their plans and therefore receive important information and feedback. But equally, global internet governance bodies should be on the front foot about engaging with governments and ensure that governments know what activities are going on in the global space to help potentially avoid duplicative measures. So I’ll stop there. And again, an exciting part of this panel is we’ll now receive some challenge and other perspectives on this work. So with that, I hand over to Jordan Carter. Great to have you here.

Jordan Carter:
Thank you, Roz, and good morning, everyone. My name is Jordan Carter. I work for the AU Domain Administration, the ccTLD manager for .au. And it’s a pleasure to offer a few not very provocative provocations to the group to help the conversation happen. I am making some personal remarks. I’m not advancing an outer position here. Overall, I think this is a good start to the discussion around fragmentation. And my congratulations to the volunteers. I should disclose that aside from joining the email list a couple of months ago, I have not been involved in any way in this paper. I was reading it fresh to prepare for this session. And I agree with the analysis so far as it goes. So in the end, my provocation is relatively brief. of broad-based participation is vital, particularly in the standards bodies and in some of the global internet governance organizations like ICANN. The Western bias in participation is undeniable, and meaningful participation from around the globe and from the groups that are not participating is absolutely essential to within whatever framework that we have. When I read the very first box, the definition here, fragmentation of internet governance primarily relates to the interactions between global internet governance and standards bodies. My core thesis might be that that’s too narrow a definition of governance fragmentation, because one of the key agents of governance are governments, and if to not deal with government-driven, policy-driven fragmentation in this section, I think maybe complicates the picture, though I’m sure I can in turn be challenged about that. You know, and part of the challenge there is that the definition of internet governance itself is under challenge. You know, do we think that it’s just about the governance of the internet, which is a distinction that has been made, or is it the governance on the internet, or is it these broader questions of digital governance that get often tacked on to those infrastructure-level discussions today? Another challenge I think it would be worth taking into account in the governance fragmentation is that caused by the narrow mandates of a lot of the technical internet governance organizations. Those narrow mandates are there for good reasons, but sometimes they make it difficult for those organizations to actually deal with a systemic view of what’s going on in the internet. So you can have a situation where each silo is dealing with its narrow mandate, and none of them are prepared to take a view about the system as a whole, and so I think there are some institutional drivers there at the global internet governance level towards fragmentation. The paper talks about the need for better coordination, and I agree, and it suggests further research, and I agree, but quite a lot of the people who are involved in these global internet governance bodies could undertake meaningful coordination together without further research. They just need to start doing it. Some of it is being done, but the challenge not to this paper but to those organizations is get coordinating. Get coordinating in the face of the challenges that the internet is throwing up, and in the challenges to the governance model that we see today, and I really did appreciate the paper calling out the duplication and the risks with some of the proposals in the Secretary-General’s policy brief for a digital cooperation forum, for example. The last thing that we need is duplicative institutions being established with new resources going to fund them instead of the resources that the IGF, for example, is crying out for and could make good use of. And the last point I want to make, I guess, having argued that the governance discussion could use maybe a broader look, is the multistakeholder-driven Internet governance system and the multilateral or state-based regulatory and legal system, I think, need to be much better at working effectively together. The two can and should shape each other, and the multistakeholder dialogues in organizations like the IGF could usefully inform policy if more of the people doing public policy related to the Internet were aware of their work. So I’ll probably wrap it up there. I don’t know if that was provocative enough, but thank you for the chance to comment.

Sheetal Kumar:
Thank you so much, Jordan. And we will be going through each of the elements of the framework first before we open up. And I also wanted to let you know that we had some written feedback from the community when we published the paper and wanted to weave in some of that into this discussion as well. So there was one point of feedback relevant to the Internet governance and coordination chapter. It was really about providing concrete examples of how governance fragmentation causes Internet fragmentation, and just it was checking that what our understanding in the paper that you put out in the paper is of Internet governance and coordination fragmentation is essentially that the existence of multiple uncoordinated international processes is a source of fragmentation. So if so, why is that treated differently than governmental and corporate-sourced fragmentation which are both addressed under user experience, which we’ll come on to. So I think there’s a question there about what is the focus of this chapter? Is it on the existence of multiple uncoordinated processes, which I think you have addressed, and that is the focus. And then, Jordan, you mentioned the importance of ensuring coherence or at least engagement and coordination. And it might be interesting. to hear from you later, but also from everyone here and online, whether you have any ideas for concrete mechanisms or examples that are already existing for how that coordination can effectively take place. So without further ado, we’ll move on then before we open up to the second chapter, and we have here Vittorio Bertola, who was one of the co-drafters of this chapter within the group, and I know Vittorio wear many hats, so I don’t know how you prefer to be introduced, but please do provide your, well, please do choose your hat, and then an overview of the work that you’ve done in this year to assess the priorities in the user experience fragmentation that we we had outlined last year, and then also the recommendations that you put forward. It’s a very hefty chapter of the discussion document, so good luck with summarizing it in three or four minutes. Yes, it’s pretty hard. Well, I don’t know,

Roswitharu:
maybe my head is like having gray hair and having been for too many years in this kind of discussions, almost 25 now, but I work for Open Exchange, which is a German open-source software company, and so, I mean, I was one of the people that tried to tackle this problem of user experience fragmentation, which is, I think, the hardest and most vague one. It’s because the entire discussion of fragmentation started from the technical level, and then multiple stakeholders tried to add more things into it, and user experience things are mostly coming from this kind of approach. So we tried to go for a definition which is completely open and pretty broad, basically by saying that anything that makes two different users of the Internet see different things when they try to access the same service, website, whatever, or do the same thing over the Internet is a form of user level fragmentation. And of course, if you take this very broad approach, then there’s the need to tell between the positive cases and the negative cases, because there are many situations in which which actually this difference in experience is a good thing. It’s made to help the user, to customize content for them, or it’s made to protect the user, to give them rights, for example, through privacy laws in specific countries. Or it’s done, for example, to prevent them from accessing unhealthy, like malware websites or whatever. So you have to then define what is a negative case of fragmentation. There could be another approach, and some people have argued for it, of just finding a definition that covers only negative cases, but then we found this becomes harder and harder. So we’d rather take, then, a case-by-case approach. So by starting from this very broad definition, we identified several priorities in different cases, and then we want to work on them one by one, because they all have a different need and a different view to be taken into account. So we identified the two major sources of this kind of fragmentation, and it’s never the user. Usually, it’s either a government that, for some reason, wants to exert sovereignty and modify the experience for their own citizens only, or it’s a company, usually the global platforms, that wants to build like this kind of ecosystem, or world gardens, how you want to call them, that basically prevents users from going somewhere else, because they want, of course, to exploit them for business reasons. And so through these two opposite pushes, a number of phenomenon emerge. So we identified six priorities, and the three top ones, the ones we would start with, are, well, first of all, internet shutdowns. These are the principles, anyway. The internet shutdowns, we discussed a bit whether it’s a user experience level thing or a technical thing, but in the end, we decided we could discuss it at this level, and we think they are a negative thing. So we already received a comment of someone in the community saying that there’s actually something like a positive internet shutdown. I don’t know what it is, but it will be up for discussion. And then we We, the second priority we identified is the case in which national blocking or law enforcement orders have global effects. So spilling over to other jurisdictions and creating, let’s say, issues, I mean, problems to other countries and other citizens. And then the third case was the walled gardens I mentioned. So basically the building of barriers and the restriction of user choice and competition, both by governments when they have like laws that favor, for example, national problems over the global ones, but also by the global internet platforms. And then there’s more because we also would like to discuss national level censorship when content gets blocked for political reasons. We would like to discuss the violations of network neutrality, which are another issue. And the last one is the geo-blocking for intellectual property reasons. So as you see, there’s a long list of things to do and I encourage people in the community to participate even on specific issues. So we tried, I mean, we don’t think we can make suggestions for everything at the same time together, but we tried to identify five principles that are summarized in the slide. Basically the idea we would like to start with is that there should be a principle of equality, meaning the default should be that everybody should be able to access everything in the same way. And then the second principle is a partial correction to this, it’s a principle of enhancement. So when the differentiation, the customization is done in the interest of the user or asked for by the user, then it’s a good thing. And so we don’t need to worry about it. The problem is when this gets imposed onto users by a third party against their wish, and then in this case, you could have negative effects. So the first suggestion is that we should have an impact assessment whenever you do something that creates a deviation from the global internet, whether it’s a national regulation, national law, or even a business decision. Then there should be harmonization. So the idea is that, especially in regulatory terms, we should rely as far as possible on global agreements on how to tackle the same problem in the same way everywhere. And only go to national regulation when either the harmonization is missing or doesn’t take into account any national needs. But then the last, and maybe. the most important principle is that in the end, there should always be free choice. So the users should be free to choose how they use the internet and where to go. And unless there are very important reasons to make that, I mean, to prevent that from happening, in the end, the user should always be trusted to be able to do the good thing. So thank you. I think we have Mariela as a commentator and I give the floor.

Marielza Oliveira:
Very much. Thank you. I really liked your presentation. Well, let me start by saying Konichiwa, and my name is Mariela Oliveira. I’m the Director for Digital Inclusion, Policies and Transformation in the Communications and Information Sector of UNESCO. And this work for the policy network on fragmentation is particularly important to us because what my team and I do is essentially defend freedom of expression, access to information and privacy. And these are the rights that are most directly impacted by fragmentation. First, I want to say also a big congrats to Bruno, Chih-Tao and Wim who have been steering this work since the last year and it’s shaping up super well. So, well, let me say that to me, the user experience fragmentation is maybe the most interesting type just because it has this positive side when users are served with custom features or content that is set and the negative side when users are actually prevented from accessing certain features and service and content. And the discussion paper is actually concerned very much primarily with the negative side, which is essentially about how these features, these mechanisms actually impose barriers that isolate or trap users into an information environment from which they can’t really escape. And a consequence of isolation and a major source of the harms that happen as a consequence of this type of fragmentation is essentially that it enables serving trap users different world views than are served from other internet users. And that brings a really important point that maybe it’s not quite explicit in the paper yet, but I like that it was mentioned, alluded to in the presentation just made is that negative user experience fragmentation actually affects all users, not just the ones immediately deprived of access to the internet or to a specific content and services. Some of the users that are excluded are prevented from enjoying their human rights to access to information or their freedom of expression and other rights, but, and they may end up being driven to echo chambers and elements like that. But it’s also true that the non-targeted users who are deprived of their rights to freely associate with those who are isolated to seek information from them and impart information to them. And therefore, the consequences that these two groups end up kind of driven apart. There’s an increasing gap in the information and knowledge between them. And that separates people. And many times, especially when it’s done for political purposes, the likely consequences, polarization, which then spews beyond the internet and into the real world and actually may. affect even non-internet users. So I think that this is a particularly important topic. In UNESCO, we work with what we call the Rome Principles for Internet Development, in which the internet should be human rights-based, open to all, accessible by all, most stakeholder-led. And the user experience fragmentation is very much about the explicit decisions that reduce openness and accessibility, which then has consequences for human rights. And when we talk about bad fragmentation, it’s essentially not done by a most stakeholder process. It tends to be a very, you know, kind of a unilateral decision process. One of the things that I really liked about the paper is that it actually laid out principles specifically for fragmentation that were mentioned in the presentation, and particularly this issue of free choice and equality of access and enhancement of experience in others. These are very much in line with the existing principles and particularly with the human rights framework. And the paper actually received a number of comments already, including a suggestion that these principles regarding user experience be explicitly based in human rights standards and processes, which are already, you know, globally acceptable, accepted, and there is like a solid jurisprudence foundation around them. And particularly, it just said that we need to consider the three-part test on legitimacy of interferences with the freedom of expression. And so this is an element that I think it would be important to add to the paper as well. Some of the points that have been already made through comments is that there’s some content that actually is relevant to block, that is legitimate to block, because there’s a law that prescribes their blocking, they pursue a legitimate aim, and they are in line with a democratic society. And content like that, for example, has to do with child pornography, terrorism, incitement of violence, and things like that. And this has not yet been reflected in the paper on how we’re going to be kind of disambiguating between these different types. And the next type in the next draft, I think, would be, you know, should be including some of that, and maybe even making reference to the speech, you know, and debates around what is awful versus what’s lawful. And, you know, maybe just to finalize, I think that the paper would also benefit from bringing up some of the potential mitigation measures, including, for example, talking about enforcing platform interoperability, data portability, strengthen users, media and information literacy that can counteract the effects of the echo chambers and the disinformation and other, you know, that are created by fragmentation. And so, I mean, I’m going to end up here, because I know that you would love to hear the comments from our participants as well. Thank you very much for the chance to comment.

Sheetal Kumar:
Thank you, Marielsa, that was great. And you were very positive about the chapter. and I think you also very helpfully reacted though to some of the feedback that we got online, the written feedback which I have to say was really helpful and constructive, so you can also access it on the web page, but quite a lot of it focused on the need to be more explicit about the use of different terms, the connection between the human rights standards and negative user experience fragmentation and explaining the difference between what is called negative and harmful fragmentation in terms of user experience and as I said being more explicit about that, so it was great to hear you respond to that as well because I think when we come to you on the floor and online, please do sort of pick up some of those points or add your own, but certainly a lot of really helpful feedback already from you Mariel, so thank you for that. So we are going to move now to the chapter that looked at technical layer fragmentation and Olaf Kolbmann is here with us and to present the chapter and really looking forward to hearing from you Olaf and then you’re going to be joined afterwards or we are going to be joined by Suresh Krishnan from the Internet Architecture Board who will respond and then we’ll open up, so please do get ready with your reflections and questions. Without further ado, over to

Olaf Kolkman:
you Olaf. Thank you very much, my name is Olaf Kolbmann, I work with the Internet Society, I’m principal there. Chapter on technical infrastructure. When we speak about technical infrastructure of the Internet, that is the network of networks. that are internetworking to provide global connectivity, 80,000 networks that interconnect to provide global connectivity, and the supporting infrastructure that makes that happen. That’s for us the sort of internet technical infrastructure. Now a few ideas that we had in constructing this chapter, I want to highlight those without going to the details of the chapter itself. But first I want to urge people to review this. This is a work in progress and it becomes stronger when stakeholders engage with the document and provide comments. At this moment I feel that there have been too little eyes on this chapter and we can use help. Anyway, the chapter starts with saying that the technical fragmentation is not something that is clearly defined. There is an operationalized definition of fragmentation around. It’s a work by Baltra and Heidemann, but they have a criterion that says if 50% of the public IP addresses cannot reach the other 50%, then you have a fragmented internet. That’s a very, very fragmented internet. That means that half of the population cannot reach the other half of the population. I think we don’t want to be there. It’s like you’re losing your hair and at some point you’re bald and at that 50% point, that’s true baldness, I would say. So how to prevent getting bald? That’s sort of the question. What we also said is fragmentation is is not necessarily everything where people choose to not interoperate and not internetwork. And there are cases like that. Like my home network, my home automation network, my own home automation network does not need to be on the internet directly. That’s a choice. That’s a choice you can make. Yesterday in a session on fragmentation, somebody said you have good fragmentation and bad fragmentation. I sort of like that idea. Decentralization is not fragmentation. Lack of connectivity because you choose not to connect is not fragmentation. Temporarily having to reroute your traffic because of a network problem, so to speak, not fragmentation. But what is fragmentation? How do we define it then? Well, again, that’s very difficult. But the approach that we took is using the critical properties as one of the frameworks. There are multiple frameworks that we point to the critical properties that the framework that the Internet Society developed that basically defines the critical properties of the internet in non-technical terms. They’re inspired by the network architecture, and I won’t go into the details of them. But that’s one of the frameworks where you can say if you lose these critical properties, if you’re sliding down the scale away from these properties, then you run into the risk of fragmentation. So this is the approach that we took. Another framework you can look at and approach is that of the public core. The public core is a framework that was developed by a think tank in the Netherlands and later further analyzed and defined by the Global Commission on the Stability of Cyberspace. That’s another framework and lens through which you can look at the internet and say, OK, we’re impacting elements of the public core, and that might lead to fragmentation. I think one One of the things that we’ve done in this document is also by doing that, by using this type of non-technical frameworks, frameworks that do not specify exactly the technology that’s being used, we allow for evolution. Because the Internet really is still evolving, and I think that’s important, that we don’t ossify as we usually say, the Internet in its current state. We need to continuously be able to evolve it. Another aspect of fragmentation that we looked at was basically what I would call the evolution of the edge, whereby what we see is that there is a lot of changing in routing behavior. transit or building their own network compared to using transit to get close to the user. That might cause a fragmentation of a different sort, basically the digital divide, increasing the digital divide of users that are close to that type of infrastructure and users that are not. That has impact on the application layer. There might be users that have a very good user experience, and there might be users that do not have a good user experience, and that is due by the way that the Internet evolves in more richer parts of the world versus less connected parts of the world. Hard to catch within those critical frameworks that I just mentioned, but it is a point that we point out in the document. Going to the recommendations. So the recommendations are basically look at these frameworks. Use those frameworks, these critical properties or the public core, and make sure that together we protect these properties. Make sure that we can continue to network and provide a global network to everybody, that brings the opportunities to actually do all this user stuff. If we fragment on the user layer but still have a global network that connects us all, we have a chance to defragment on that user level. But once we have fragmented the internet technical infrastructure, that fragmentation will also be reflected in the user space. So it’s much more important, it’s not much more important, it’s very important to take care that those properties are protected and we have to do that together. There are very few ways to actually understand how that fragmentation is happening, there are very few measurements around that look into, on a longitudinal scale, on what the evolution is that impact fragmentation and how it’s caused and how it evolves. This is really a call for people to set up measurements and think creatively on how you would assess this fragmentation on a technical layer. Once proposals are introduced either on the policy or on the technical layer in standardisation efforts for instance, do assess them against these critical properties, do assess them against frameworks and see if we lose interoperability. See if we lose the ability to connect. If that is the case, perhaps it’s not such a good idea. Of course, we’re into this together and the multi-stakeholder approach is a good thing. in order to make sure that what is being delivered, both by the private sector developing these technologies and the technical communities working on these technologies, as well as by the civil society and the governments, to make sure that we stay globally connected and don’t split up this network of networks. I think that’s the summary.

Sheetal Kumar:
That’s great. Thank you, Olaf. And we have Suresh online. So let me check, actually, do we?

Suresh Krishnan:
Yeah, I do. I’m here. Please go ahead. Thank you, Sheetal. Thanks a lot. Like, thank you for that excellent summary. And there’s like very little fault in there. So I’m just going to go over a few things that I think are important and then kind of give you a little, some minor hints to improve. But I think the key part that this thing got right is that the internet is a decentralized set of networks. There’s no single point of choke point of control over this. There’s like multiple people who, I would say, collaboratively got together and built this large network. And I think that’s a key thing to protect. And that does not mean fragmentation. That’s by design that these networks are like independent and decentralized. And what really holds them together is the technology that offers the interoperability. I think that’s like something that you got like really like well done in the first piece of this, where we talk about the technology being the thing that holds stuff together and not really the administration of them. And I think that’s like a key point to emphasize. And the second thing is like on the critical properties of the internet, I think openness is one of them. And also the incremental deployability of stuff. And it kind of ties into your, I would say, lack of ossification, right? Like new technologies keep getting deployed on the internet. So for example, we had like IPv6 come in, you know, like at some point we ran into a situation where there’s like more than 4 billion people on the internet. And then we had like ways to kind of get around it. And it takes time, but we are able to build newer things on top. And we’ve had technologies on the internet now that the internet pioneers couldn’t have imagined, right? Everything depends on it. So the way in which like, you know, we can put like newer things on the internet and still expect them to work with people around the world is really because of the openness and the connectivity that’s there. So it’s something that we should strive to preserve, like you said. And so the other kind of key thing in there is that the layering principles of the internet as well. So like the internet kind of holds together at like, you know, the layer three and four kind of of the OSI model, like in a very high level. And there’s also applications that like we have a rich variety of applications, but as long as we keep the kind of technologies and the lower layers to like, I would say a globally interoperable minimum, I think like things are going to be good. And that’s what we should also look for. And also try not to push in, I would say. So I think like Mariel talked a little bit about the content in there, right? So the question is, should the content filtering happen in the lower layers or the higher layers? And I think like I would posit it should happen at the higher layers because it’s kind of, we are talking about like transporting, staying connected while enforcing like, you know, millions of laws, like, you know, state laws, like country laws and local laws are like very different around the world. So like, you know, instead of trying to like do this at like a lower layer, which the whole world shares, like, you know, we should kind of try to keep it at the higher layers where that belong. And that’s also alluded to in the document. One of the things is like the messaging was given as an example, Olaf, right? And we have like, you know, something very positive happening recently in the space with the multi-stakeholders architecture is that the Europe came up with this like Digital Markets Act, right? Like which opened us the gatekeepers to open up the communications and the IETF, we started work on something called MIMI, which allows like interoperating at the message layer. So like, you know, this is like a really good, I would say, blueprint to follow where like, you know, the governments and the policy organizations and the technical community, we all work together to have this common goals of increasing the openness of the Internet and people being able to connect. And for the measurement, I think that’s a critical piece, Olaf, and I think we need to put a lot more effort into it. Right. And we need to have a lot more. measurement points across the globe and like kind of be able to have a platform where people can use like you know it’s not just for us to do stuff but also build a platform such as like the right path less is a platform like that that exists today where people can run their own experiments on this platform with the probes that exist. So maybe we should like let other people with ideas to measure things could use the same kind of platform to build their own metrics on how they see the fragmentation as like instead of like us prescribing some metrics. So that’s something that’s actually really good as well. And I’m totally with you on like the multi-stakeholder approach. I think it has worked really well to bring the Internet to this level. And I think we should really continue going down that approach to like work collaboratively and make sure that we learn from the past lessons we’ve had. And that brings me to like my last 20 seconds to critique it. And the critic is really like we kind of need a little bit more references out of this document. And so like you know talk about like when so let’s say like RFC 1958 which talks about the architecture of the Internet and principles and so on. I think it’s like very interesting reading for a lot of people who are coming in from the policy sphere to look at like you know what are the technical things that led to the Internet being the way it is and why it’s like very good for growth. I think that’s probably going to be my only critique on this.

Sheetal Kumar:
Okay. Thank you so much Suresh and thanks for joining us online. That was really really useful to get your feedback on that chapter. But also you made connections to the other chapters as including the user experience one and that’s also key. We do see these different elements of the framework as intersecting of course. The point is to help to provide a lens by which to have this discussion. And so if if you all have comments on that please do do of course share and you also I think you made a point about referencing about clarity of terms but definitions which we also got in written feedback. So that is something we can certainly incorporate. But I’ll turn over now to Bruna and Bruna will be facilitating this part of the discussion which is really to hear from you. And so please do please do get engaged. We’ll also be looking at the online participants for any questions and reflections there. Thanks Bruna.

Bruna Marlins dos Santos:
Thanks so much. Yes as we said this is the feedback moment of the session right. So any questions or comments you might have are very much welcome. We have some microphones in the room so if you want to add some thoughts or just ask questions to the panelists you can come to them. But I guess I’ll start with one remote question who is from Foley Hebert from Togo. And his question is I would like to know how we can reach every how can we reach every citizen in the world. So, I think that’s the first one. The second one is a question about how we can protect the internet in a more secure world, especially how can we overcome language barriers if content can be translated into our local languages, that would be very good. So, and also, he made a comment about the more people are aware of splinter net’s damages and danger, the more they will be ready and prepared to fight against splinter net and to protect the internet in a more secure world. So, I think that’s the first one. So, I will take three rounds of questions, like three questions in one round, and then I will divert back to the panelists, so we can start there.

Audience:
Hello, my name is Mia Kuliv and I’m also a member of the Internet Architecture Board. It’s more a comment than a question. I would like to comment on the technical fragmentation part. Olof talked a lot about interconnectivity. And, you know, I think and I think we are on the way to getting there. So, to get to the point, if you have less than 50 per cent, it means you have the internet and you have another network which is not the internet which is just not connected, right? But at 50 per cent, you actually have two internets, you don’t know which one is real internet anymore, and there’s nothing like two internets, there’s only one internet. So, this is very mathematical, and that’s the point where it actually breaks, where there’s no way to get back to one internet, and we really want to have a more open internet, because if you have two internets and you have a lot of connectivity, it’s not easy to get back to one internet, right? So, yes, then you’re, then it’s too late. But what I wanted to say is that it’s not only about interoperability or interconnectivity, it’s also about the ability to innovate and evolve the internet, right? So, if we put barriers into place where we cannot evolve the internet anymore, we cannot introduce new protocols, because if we put barriers into place where we need to back up our ideas as a vertical group, we can’t do this. So, that’s the challenge here, and the reason why we are doing this is because internet bill expansion isn’t the only reason we are romanticising it, and it I think it’s a very good question. I think the problem is that we are still interconnected, and this is like all Internet protocols are designed this way. You always have to have a way to evolve, to go on, and if you put barriers in the way where we cannot evolve anymore, that’s, I think, where we lead to, which leads to whatever fragmentation or to a very negative outcome, because that means not only that we cannot change the technology anymore, we cannot adapt, we cannot make it more secure, we cannot make it more flexible, we cannot adapt to the Internet. And so, I think that whatever we do on top of the Internet will be limited, because we cannot adapt to it anymore, and then we’re stuck, and then all the benefits we get from the Internet, where we see this positive impact on the society, on our economy, and so on, doesn’t happen anymore. And that’s like, that’s the point where we’re still connected, but the Internet wouldn’t be as useful as it is today.

Bruna Marlins dos Santos:
Thank you. I think that’s a very good point. Right here in the middle, can we get a second question or comment? Thank you.

Audience:
First, I want to thank the policy network that they put the Internet fragmentation into perspective, and we understood today what is meant by Internet fragmentation. Obviously, we have three dimensions, the policies and procedures, we have the user experience, and we have the data, and we have the data, and we have the data. And so, if we think about the policy and procedure, we can understand the subject much better. And the policy and procedure level, first, what gives us comfort is that there is a general consensus and agreement that we don’t want the Internet to be fragmented. So, all our effort is toward not fragmenting the Internet, and this gives us a comfort in this matter. And, so, in the last three decades, we have seen, we thought that the internet on national level, there were, let’s say, treaties or commitments that represents the interest of that regions or represent the national interest and in terms of social and economic. And I wouldn’t say all of them, but whatever represents the interest of the into there. of the socio-economics of this region or national, and it is there. And so these commitments or these frameworks or these agreements or treaties represent the interests of these regions or these people, and there should be a thin line between saying that this represents a fragmentation or represents the interests or the benefit of that group. Maybe this is something that we need or something that needs to be addressed. At least in terms of that, if there are any regional or national arrangements, there is a certain level that they should not conflict with the overall of the unity or the unification of the Internet. Going back to the user experience and Vittorio as an advocate of user experience, he gave us a kind of trust that the indicators or the elements that has been identified, the five elements, represents really truly the user experience, at least principles that we don’t need to be harmed. And actually when it comes to user experience, there is nothing regional or national. Internet users should be all equal. So in that terms, we need to have a global understanding of that this is the minimum of what is known or what should be a user experience. Going to the technical side, thank you that you limited this to the interoperability and thank you that you clarified that decentralization or lack of connectivity or choice is not considered a fragmentation. What gives us assurances is that the industry or even the technical community built all its work toward interoperability. And this is something that at least we feel trusted that it will continue. But again, bringing the matter to digital divide, it means returning this again to the user experience. The third thing is that the third thing is that the third thing is that the third thing is that there is a problem with the user experience, which is now a wide open issue, and this may have implications. Why it has implications? In some parts of the world, this may be a controlled user experience means, let’s say, a negative aspect on the social status of that user, and the social status of that user, and the social status of that user, and so on. So, from all of that, really, while we have some arrangements on policies and procedures, and we have some arrangements on the technical side, we are wide open on the user experience so far, and maybe this makes the start of the dimension of user experience more important than going first to the policies or the technical side. Thank you. Everyone has delta data? Someone was waiting for the next speaker to speak. Farid? , OLAF referred to a comment I made the other day about harmful fragmentation versus the fragmentation that is part of the internet, the way the internet is intended to work, and what I think about it is that it’s a very complex problem. It’s a very complex problem. I’ve come up to comment about this sort of grey area. We think of harmful fragmentation as something where, let’s say, a service provider blocks its competitors, access to its competitors, or a country blocks certain websites at the IP level or DNS resolution level and that kind of thing. I’ve had a conversation with a Facebook user who has a Facebook account, and he has a Facebook page blocking off the content that Facebook users can see from people outside Facebook who can’t get to that content, and, of course, my Meta colleague thinks that’s not fragmentation. That’s just the way an application looks. layered on top of the internet works. And one thing that he says is Facebook is not the internet. The World Wide Web is not the internet. These are application layers that are put on top of the internet. But from the user’s point of view, that often is the internet. So this kind of gets to Vittorio’s area of fragmenting the user experience. So I just sort of want us all to think about that a bit more. These gray areas that change the user experience in other ways that we don’t normally think of as fragmentation. And maybe we should start wondering whether it is and whether it’s good or bad. Just more to think about, I think.

Bruna Marlins dos Santos:
Thank you very much. Next up.

Audience:
Hi there. Thank you. I’m Christopher Tay from Connect Free Corporation and Internet3. We think that the future of the internet is really having everyone own their own IP address. I think that up until now, there’s been huge amounts of cost involved in creating infrastructure that has led to ISPs and others owning blocks of IP addresses and having the difficulty of really getting these IP addresses out on the end. So by allowing everyone to generate their own IP address through cryptographic public key pairs, we can give everyone an internet IP address. And so we think that there’s something kind of really cool going on here in Japan. Because the government has implemented a law against NTT in the 1990s so that they had the entity on the network, but they weren’t able to become an ISP, they have fundamentally created a countrywide layer 2 switching network where all ISPs can enter onto the network. And so what that has allowed us to do is become a ISP of individuals. And what that means is that every computer on the entity network using our software can have an IP address and connect and build a presence on the network. I think there’s kind of an interesting thing about decoupling IP addresses from networks. Obviously it’s very hard to have individuals create networks, but we think that there should be a decoupling between the infrastructure, the actual hardware physical layer, and the layer 3 IP layer. We’ve proven that this is a possible thing. I think that there’s a lot of discussions to be had and we hope to join these discussions, so thank you for your time.

Bruna Marlins dos Santos:
Thanks so much for your comment. I didn’t see a fourth line there, so I’m very sorry. Please go ahead, Laura.

Audience:
Hello. First of all, I’d like to thank you for the panel and the report and the work of the network in general. My name is Laura Pereira. I’m a delegate from the Brazilian Youth Fellowship. We know that the defense of democracy, integrity, and information integrity is one of the main fields to actually adopt a more protective view of the digital space currently, and in that sense, sometimes to cause fragmentation and to risk the integrity of the digital space in general. Actually in the Brazilian chapter of Internet Society, we actually made an experimental application of the proposed concept of user experience fragmentation to collaborate on a public consultation by Brazilian Internet Steering Committee on platform regulation alerting to the unadvertised risks of platform regulation when it does not consider these kinds of harms to the critical properties of the network. However, as mentioned by your presentation, it’s not easy to balance democracy, integrity, defense overall in the general sense and harmful fragmentations. Is it possible to reach this sort of balance by using the concept of user experience fragmentation? Do you intend to advance on this perspective? Is it a goal of the network? how do you see this issue in a more detailed way? Thank you for your presentation.

Bruna Marlins dos Santos:
Thanks a lot, Laura. Just reflect that I’m closing the queue, but we’re going to take the last three comments. So please go ahead.

Audience:
Thank you. Thank you for the panel. And I appreciate the discourse on internet fragmentation, but also just the difficulties surrounding understanding it. And so I can keep it very pointed to the discussion points that were listed. I am curious, as we progress with initiatives like this, do we continue to do so without engaging regional or cultural leaders in areas that experience shutdowns? Or, at the very least, massive hindrances to their freedom of access to open information? There was a point where it was national governments are what we are hoping to interact with, and no new stakeholders to involve in governance. However, there does seem to be valuable parallels between looking at the way that communities who have been oppressed in the past have also taken a stand and helped create legislation and international policy to curtail that from happening to any other group. And furthermore, I would like to raise a discussion point of meaningful connectivity, as is eluded to by the UN development goals. With the rise of satellite availability and private corporate satellite availability internet, and LLM sophistication, do we recognize the potential for not only fragmentation, but quality of online experience? Is this something where we see fragmentation leading from billions of people being priced out of meaningful connectivity? And does this appear to be a perfect storm for exasperating the digital divide, not necessarily closing it? So how does internet fragmentation policy design, like how does it design itself to effectively account for rapid development on these emerging fronts, taking into account their incredible potential to create disparity of access to meaningful connectivity? Thank you.

Bruna Marlins dos Santos:
Thanks a lot. Next comment.

Audience:
Thank you. My name is Michel Lambert, I’m coming from Montreal. I work with an organization called Equality, which is doing technology to support freedom online. This is my first participation to the policy network. I’m particularly interested by it. Hopefully, we will manage to create some governance that will prevent fragmentation. But I come from a background where we tend to believe that these discussions are difficult and sometimes they take more time. And we need to develop alternative technologies. So I’d like to use this floor now just to invite people to join us in Montreal. We are organizing a conference called the SplinterCon. And the idea is really to bring together the people developing new technologies, which are going to allow us eventually to, you know, build bridges or make holes into walls so that people can continue to enjoy the Internet. So if you are interested to be part of that process, please go to splintercon.net and join us in Montreal in December to develop those technologies. Thank you.

Bruna Marlins dos Santos:
Thank you very much. Next comment right here.

Audience:
Thank you. First of all, thank you for the excellent discussion on Internet fragmentation. I have a general question. We want the Internet to be inclusive and open, borderless, global network, which gives equal opportunities to everybody who’s connected and those who are to be connected. But because of a geopolitical situation, trade concerns, and other factors, the response of nation-states, some of the nation-states, is that they take certain actions and enact certain laws that may come under the ambit of digital sovereignty or digital protectionism, and which may result in either technical, commercial, or governmental fragmentation of the Internet. So my question from you would be that how do you… I mean, you’ve given some very good recommendations, but given the governance structure of the Internet, how do you see that how easy or difficult it would be to address this challenge of implementing those recommendations? And especially we see the Internet evolving, I mean, it’s becoming more decentralized, the Web 3.0, so how do you see addressing that particular challenge? And we talked about the five principles, the DFI principles, so if there are certain laws enacted which may compromise any of those principles, so how do you see addressing that challenge? Thank you.

Bruna Marlins dos Santos:
Thanks a lot. Last but not least, Raul.

Audience:
Good morning. My name is Raul Echeverria. I’m from Latin American Internet Association. I think that we are, sometimes we are in a loop trying to define what is Internet fragmentation or not, and it makes me remember when we discussed in the past about network neutrality and somebody introduced the expression, the concept, but we never had an agreed definition on that. So we lost a lot of energy discussing about what is network neutrality instead of discussing what we want to avoid. And if we look at the topic of this event, it’s the Internet we want. So instead of trying to define what is Internet fragmentation, we have to focus on what things we want not to happen. And so I think that the work that the Policy Network is doing is impressive, and it’s very good, and congratulations for that, and I have been part sometimes in the discussions, but we should focus also on more clear recommendations, whole things that, for example, for governments, don’t block apps, don’t adopt policies that create different experiences in Internet for users in the same country, that is, or in the globe. This is the kind of things that we have to recommend. Of course, I heard what the colleague said about when we don’t participate in a platform or in a given space, we don’t have access to the information that is there, but the point is that if I want to be I can’t do it. I can’t do it because I don’t want to be part of that, I can. But in some places, or due to some policies, even if I want to be a TikTok user or buy something on Amazon or whatever, I can’t do it. So this is fragmentation, but we will be in a loop if we try to say, oh, this is fragmentation, this is not. But there are things that we clearly don’t want to happen, because that’s the Internet we don’t want. Thank you.

Bruna Marlins dos Santos:
Thank you very much. I think we have a lot of time, but I would like to ask the panelists to leave the room at 10.30, because we’ve closed the line, and we have a deadline to leave this room at 10.30, right? And at the same time, we also have the process for bringing inputs to the discussion open until the 20th of October, but we really want also the panelists to be able to comment on that. So Olaf, Jordan, Ross, Vittorio, do you guys would like to add anything, since it’s the talks of the day and a time we have that Kevin have already started, Jasper, and then Jason?

Olaf Kolkman:
Yes. Well, not a lot. I think that a number of points were made that are relevant and critical. One of the points that was made was by Mary asking for more nuance, and I think that’s a fair comment. I think that’s a good point, and I think that’s a good point. I think that’s a good point, and I think that’s a good point. Myriam made a good point, the ability to innovate and evolve is one that we should protect. That is, indeed, the idea. I made reference to the critical properties, and one of the critical properties that we have defined, and that we sort of introduced also in this paper, is having an open architecture that consists out of building blocks, and protecting that open architecture whereby we can evolve, I think, is important. That sort of speaks to the general idea that we need to protect that open architecture, and I think that’s a good point. The gentleman that I forgot his name, but I know from the metro advertisement, he invented something new. I don’t know if that works. I don’t know how that will scale across the Internet. And as Mirja also pointed out, we did this transition from v4 to v6. That could have failed. There is technical fragmentation between v4 and v6. And the onus has been on the people who developed and are implementing v6 and give everybody their own IP address, because that was the intention of the v6 address space, to make sure that that interoperability with the v4 Internet continued to exist. And that has been 20 years of hard engineering work. Introducing something new will mean that the onus is on the entities that are introducing something new to make sure that that interoperability exists. The critical properties says there are common protocols. They don’t say it’s IPv4, IPv6, yet another protocol. The Internet should be able to continue to evolve. But we have to agree on something to keep that interoperability going. Finally, the comment on meaningful connectivity. When I talked about that evolution of the edge, this is a point that we’re making in the paper under the name of death of transit. The idea is indeed about having meaningful connectivity. If the Internet evolves in haves and have-nots, then there will be fragmentation, too. And being priced out of the market. is indeed a way to be fragmented. And, mind you, we have a fragmented user experience nowadays. There are many people who cannot afford being on the Internet. I think that’s something we all have to work on, making sure that people who want to connect can connect.

Bruna Marlins dos Santos:
Thank you very much, Olaf. Roz?

Rosalind Kenny Birch:
Yeah, thanks so much. And that’s actually a great transition line, Olaf, because I wanted to come in on some of the first comments about local languages, for example, and whatnot. I think this goes back to the broader thematic point we tried to capture in our chapter, talking about the importance of inclusion in global Internet governance bodies. And I think local languages, making sure people can participate despite their cultural or regional background is so important. So I really wanted to pick up on that point in particular. And there were further points about particular regional contexts and whatnot. Absolutely. I think I really wanted to highlight the role of the IGF’s national and regional initiatives in this regard. I think these are great multi-stakeholder spaces where people can come and talk about those local nuances, regional contexts, absolutely. And I think better coordination between Internet governance bodies, as we’ve been talking about, can hopefully help capture those and bring those different voices together as well. So not only just having these regional spaces, the NRIs, I was lucky enough to attend the Africa IGF two weeks ago, which is an absolutely fantastic opportunity to hear about some of these perspectives, but also to make sure that these are captured in the broader global discussions within these global Internet governance bodies themselves as well. So thank you so much. Just to say in general, a big thanks to the audience for the participation here. And please do, if you think of anything else, feel free to grab me on the sidelines throughout the rest of this week. Thank you.

Bruna Marlins dos Santos:
Thanks so much, Roswitharu.

Roswitharu:
Yeah, very quickly, a couple of points. There’s no time for everything. So please join the discussion on the mailing list in the call. Well, first of all, I think some of the comments pointed out what is the problem that we had to deal with when discussing the user experience level, which is that the user experience, as mentioned, fragmentation is really a big elephant as big as the planet. And people only see a very tiny bit of it and believe that that is fragmentation. And so if you talk to people from Silicon Valley, from the US West Coast, mostly they complain about what governments are doing in authoritarian countries, or even in the EU with the privacy laws and whatever. And if you talk to my friends in Europe, they complain about what the Silicon Valley platforms do. And everybody thinks that’s the big problem in terms of user fragmentation. So the first step is agreeing on whether something is a problem and why, and then starting to work together on that in a very pragmatic way. Because if we focus on definitions, we will not go anywhere. And the other thing I wanted to say is that, in the end, what we are facing now is the tension between the original dream of a united planet, borderless, and everybody talking with each other freely, and the reality of differences of values, interests, economies, and whatever, and languages throughout the planet. And so to a certain extent, you do need to preserve the local level, and even the national sovereignty, because that’s also a way to preserve the independence of peoples, something that was often hard fought. And to give them a way, to give each citizen of the world a way to have influence over the network, and not just give it to the people that manage the network globally and have more influence on it. But on the other hand, you have to avoid breaking the globalness of the internet. And so this is what we have to be concerned about, finding a balance. Thank you.

Sheetal Kumar:
Thank you. And I’m sorry that we don’t have time to provide the commentators from the earlier part of the session with the opportunity to respond. But the good news is that there is still time to respond after this session via email. Or indeed, you can come and talk. to us, and we are giving a deadline of the 20th of October, and you of course have time to look in detail at the paper online, and the slides will also be available. I think they really nicely summarize the in-depth work that has been done. So what we wanted to do, what the original mandate and intention of this policy network was to provide some clarity to an incredibly complex and indeed controversial topic. I hope that you agree that we have to some extent done that, but it is not over. It is an evolving, just as the internet is, an evolving framework and an evolving piece of work. Please do join us in continuing that work, and I think that is it apart from thanking you all for being here, for your contributions to the panellists, to the drafters, to the very active members of the network who gave their time to put this paper together. Thank you. And please do continue to be engaged. We will be here during the IGF, but you can also email us. Bruno, is there anything I missed?

Wim Degezelle:
No, thank you. Maybe we can just have the slide, because there was the link to the web page. And there you can see, because on the web page of the PNF there is a link to the discussion paper and there is also explained how you can react. So looking forward to your comments. And the only thing I want to add is thank you to everyone. Thank you.

Audience

Speech speed

192 words per minute

Speech length

3174 words

Speech time

989 secs

Bruna Marlins dos Santos

Speech speed

174 words per minute

Speech length

661 words

Speech time

227 secs

Jordan Carter

Speech speed

152 words per minute

Speech length

782 words

Speech time

308 secs

Marielza Oliveira

Speech speed

168 words per minute

Speech length

1049 words

Speech time

374 secs

Olaf Kolkman

Speech speed

136 words per minute

Speech length

1741 words

Speech time

769 secs

Rosalind Kenny Birch

Speech speed

151 words per minute

Speech length

1162 words

Speech time

462 secs

Roswitharu

Speech speed

215 words per minute

Speech length

1540 words

Speech time

429 secs

Sheetal Kumar

Speech speed

144 words per minute

Speech length

1756 words

Speech time

731 secs

Suresh Krishnan

Speech speed

217 words per minute

Speech length

1148 words

Speech time

317 secs

Wim Degezelle

Speech speed

169 words per minute

Speech length

1150 words

Speech time

408 secs

Protect people and elections, not Big Tech! | IGF 2023 Town Hall #117

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Daniel Arnaudo

In 2024, several countries, including Bangladesh, Indonesia, India, Pakistan, and Taiwan, are set to hold elections, making it a significant year for democracy. However, smaller countries often do not receive the same level of attention and support when it comes to content moderation, policies, research tools, and data access. This raises concerns about unfair treatment and limited resources for these nations.

Daniel highlights the need for improved data access for third-party researchers and civil society, particularly in smaller countries. Currently, there is a disinvestment in civic integrity, trust, and safety, which further exacerbates the challenges faced by these nations. Platforms are increasingly reducing third-party access to APIs and other forms of data, making it harder for researchers and civil society to gather valuable insights. Large countries often control access systems, resulting in high barriers for smaller nations to access data.

Another pressing issue raised is the insufficient addressing of threats faced by women involved in politics on social media platforms. Research shows that women in politics experience higher levels of online violence and threats. Daniel suggests that platforms establish mechanisms to support women and better comprehend and tackle these threats. Gender equality should be prioritised to ensure that women can participate in politics without fear of harassment or intimidation.

To effectively navigate critical democratic moments, such as elections or protests, social media platforms should collaborate with organisations that possess expertise in these areas. Daniel mentions the retreat from programs like the Trusted Partners at Meta and highlights the potential impacts on elections, democratic institutions, and the bottom lines of these companies. By working alongside knowledgeable organisations, platforms can better understand and respond to the needs and challenges of democratic events.

Algorithmic transparency is a desired outcome, but it proves to be a complex issue. While it has the potential to improve accountability and fairness, there are risks of manipulation or gaming the system. Striking the right balance between transparency and safeguarding against misuse is a delicate task that requires careful consideration.

Smaller political candidates seeking access to reliable and accurate political information need better protections. In order to level the playing field, it is crucial to provide resources and support to candidates who may not have the same resources as their larger counterparts.

The data access revolution is transforming how companies provide access to their systems. This shift enables greater innovation and collaboration, particularly in industries like infrastructure and industry. Companies should embrace this transformation and strive to make their systems more accessible, promoting inclusivity and reducing inequalities.

Deploying company employees in authoritarian contexts poses challenges. Under certain regulations, these employees might become bargaining chips, compromising the companies’ integrity and principles. It is essential to consider the potential risks and implications before making such decisions.

Furthermore, companies should invest in staffing and enhancing their understanding of local languages and contexts. This investment ensures a better response to users’ needs and fosters better cultural understanding, leading to more effective and inclusive collaborations.

In conclusion, 2024 holds significant democratic milestones, but there are concerns about the attention given to smaller countries. Improving data access for researchers and civil society, addressing threats faced by women in politics, working with organisations during critical democratic moments, and promoting algorithmic transparency are crucial steps forward. Protecting smaller political candidates, embracing the data access revolution, considering the risks of deploying employees in authoritarian contexts, and investing in local understanding are additional factors that warrant attention for a more inclusive and balanced democratic landscape.

Audience

The analysis raises a number of concerns regarding digital election systems, global media platforms, data access for research, and the integrity of Russia’s electronic voting systems. It argues that digital election systems are susceptible to cyber threats, citing a disruption in Russian elections caused by a denial of service attack from Ukraine. This highlights the need for improved cybersecurity measures to safeguard the accuracy and integrity of digital voting systems.

Concerns are also raised about the neutrality and transparency of global media platforms. It is alleged that these platforms may show bias by taking sides in conflicts, potentially undermining their neutrality. Secret recommendation algorithms used by these platforms can influence users’ news feeds, and this lack of transparency raises questions about the information users are exposed to and the influence these algorithms can have on public perception. The analysis also notes that in certain African countries, platforms like Facebook serve as the primary source of internet access for many individuals, highlighting the importance of ensuring fair and unbiased information dissemination.

Transparency in global media platforms’ recommendation algorithms is deemed necessary. The analysis argues that platforms like Facebook have the power to ignite revolutions and shape public discourse through these algorithms. However, the lack of understanding about how these algorithms work raises concerns about their impact on democratic processes and the formation of public opinion.

The analysis also highlights the challenges of accessing data for academic and civil society research, without specifying the nature or extent of these challenges. It takes the position that measures need to be taken to fight against data access restrictions in order to promote open access and support research efforts in these fields.

The integrity of Russia’s electronic voting systems is called into question, despite the Russian Central Election Commission not acknowledging any issues. These systems, developed by big tech companies Kaspersky and Rostelecom, lacked transparency and did not comply with the recommendations of the Russian Commission, raising doubts about their reliability and potential for manipulation.

The use of social media platforms, particularly Facebook, for political campaigning in restrictive political climates is also deemed ineffective. The analysis argues that these platforms may not effectively facilitate individual political campaigns. Supporting facts are provided, such as limited reach and targeting capabilities of Facebook’s advertising algorithms and the inability to use traditional media advertisements in restrictive regimes. An audience member with experience managing a political candidate page on Facebook shares their negative experience, further supporting the argument that social media platforms may not be as effective as traditional methods in certain political contexts.

In conclusion, the analysis presents a range of concerns regarding the vulnerabilities of digital election systems, the neutrality and transparency of global media platforms, challenges in data access for research, and the integrity of Russia’s electronic voting systems. It emphasizes the need for enhanced cybersecurity measures, transparency in recommendation algorithms, increased support for data access in research, and scrutiny of electronic voting systems. These issues have significant implications for democracy, public opinion, academic progress, and political campaigning in an increasingly digital and interconnected world.

Ashnah Kalemera

Social media platforms and the internet have the potential to play a significant role in electoral processes. They can support various aspects such as voter registration, remote voting, campaigns, voter awareness, results transmission, and monitoring. These platforms are critical in ensuring that voter registration is complete and accurate, enabling remote voting for excluded communities and remotely based voters, supporting campaigns and canvassing, as well as voter awareness and education, facilitating results transmission and tallying, and monitoring malpractice.

However, technology also poses threats to electoral processes, especially in Africa. Authoritarian governments leverage the power of technology for their self-serving interests. They actively use disinformation and hate speech to manipulate narratives and public opinion during elections. Various actors, including users, governments, platforms themselves, private companies, and PR firms, contribute to this manipulation by spreading disinformation and hate speech.

The thriving of disinformation and hate speech in Africa can be attributed to the increasing penetration of technology on the continent. This provides a platform for spreading false information and inciting hatred. Additionally, the growing youth population, combined with characteristic ethnic, religious, and geopolitical conflicts, creates an environment where disinformation and hate speech can flourish.

To combat the spread of disinformation, it is crucial for big tech companies to collaborate with media and civil society. However, limited collaboration exists between these actors in Africa, and concerns arise regarding the slow processing and response times to reports and complaints, as well as the lack of transparency in moderation measures.

Research, consultation, skill-building, and strategic litigation are identified as potential solutions to address the challenges posed by big tech’s involvement in elections and the spread of disinformation. Evidence-driven advocacy is important, and leveraging norm-setting mechanisms can help raise the visibility of these challenges. Challenging the private sector to uphold responsibilities and ethics, as outlined by the UN guiding principles on business and human rights, is also essential.

Addressing the complex issues surrounding big tech, elections, and disinformation requires a multifaceted approach. While holding big tech accountable is crucial, it is important to recognize that the manifestations of the problem vary from one context to another. Therefore, stakeholder conversations must acknowledge and address the different challenges posed by disinformation.

Data accessibility plays a critical role in addressing these issues. Organizations like CIPESA have leveraged data APIs for sentiment analysis and monitoring elections. However, the lack of access to data limits the ability to highlight challenges related to big tech involvement in elections.

Furthermore, it is important to engage with lesser-known actors, such as electoral bodies and regional economic blocs, to effectively address these issues. Broader conversations that include these stakeholders can lead to a better understanding of the challenges and potential solutions.

In conclusion, social media platforms and the internet offer significant potential to support electoral processes but also pose threats through the spread of disinformation and hate speech. Collaboration between big tech, media, and civil society, as well as research, skill-building, and strategic litigation, are necessary elements in addressing these challenges. Holding big tech accountable and engaging with lesser-known actors are also crucial for effective solutions.

Moderator – Bruna Martins Dos Santos

Digital Action is a global coalition for tech justice that aims to ensure the accountability of big tech companies and safeguard the integrity of elections. Headquartered in Brazil, the coalition has been gaining support from various organizations and academics, indicating a growing momentum for their cause.

Founded in 2019, Digital Action focuses on addressing the impact of social media on democracies and works towards holding tech giants accountable for their actions. Their primary objective is to prevent any negative consequences on elections and foster collaboration by involving social media companies in the conversation.

Moreover, Digital Action seeks to empower individuals who have been adversely affected by tech harms. They prioritize amplifying the voices of those impacted and ensuring that their concerns are heard. Through catalyzing collective action, bridge-building, and facilitating meaningful dialogue, they aim to make a positive difference.

On a different note, the summary also highlights the criticism faced by social media companies for their lack of investment in improving day-to-day lives. This negative sentiment suggests that these companies may not be prioritizing initiatives that directly impact people’s well-being and societal conditions.

In conclusion, Digital Action’s global coalition for tech justice is committed to holding big tech accountable, protecting election integrity, and empowering those affected by tech harms. By involving social media companies and gaining support from diverse stakeholders, they aspire to create a more just and inclusive digital landscape. Additionally, the need for social media companies to invest in initiatives that enhance people’s daily lives is emphasized.

Yasmin Curzi

The legislative scenario in Brazil concerning platform responsibilities is governed by two main legislations. The Brazilian Civil Rights Framework, established in 2014, sets out fundamental principles for internet governance. According to Article 19 of this framework, platforms are only held responsible for illegal user-generated content if they fail to comply with a judicial order. The Code of Consumers Defense also recognises users as being vulnerable in their interactions with businesses.

However, the impact of measures to combat false information remains uncertain. Although platforms have committed to creating reporting channels and labelling content related to elections, there is a lack of detailed metrics to fully understand the effectiveness of these measures. There are concerns about whether content is being removed quickly enough to prevent it from reaching a wide audience. One concerning example is the case of Jovem Pรฃo, which disseminated a fake audio during election day that had already been viewed 1.7 million times before removal.

The analysis indicates that social media and platforms’ content moderation have limited influence on democratic elections. Insufficient data and information exist about platforms’ actions and their effectiveness in combating false information. Content shared through official sources often reaches a wide audience before it is taken down. Despite partnerships with fact-checking agencies, it remains uncertain how effective platform efforts are in combating falsehood.

There is a pressing need for specific legislation and regulation of platforms to establish real accountability. Platforms currently fail to provide fundamental information such as their investment in content moderation. However, there is hope as the Data, Consumer Protection, and Regulation (DCPR) initiative has developed a framework for meaningful and interoperable transparency. This framework could guide lawmakers and regulators in addressing the issue.

Furthermore, platforms should improve their content moderation practices. Journalists in Brazil have requested information from Facebook and YouTube regarding their investment in content moderation but have received no response. Without the ability to assess the harmful content recommended by platforms, it becomes difficult to formulate appropriate public policies.

In conclusion, the legislative framework in Brazil regarding platform responsibilities comprises two main legislations. However, the impact of measures to combat false information remains uncertain, and the influence of social media and platform content moderation on democratic elections is limited. Specific legislation and regulation are needed to establish accountability, and platforms need to enhance their content moderation practices. Providing meaningful transparency information will facilitate accurate assessment and policymaking.

Alexandra Robinson

The vulnerability of online spaces and the ease with which domestic or foreign actors can manipulate and spread falsehoods is a growing concern, especially in terms of the manipulation of democratic processes. The use of new technologies like generative AI further complicates the issue, making it easier for malicious actors to deceive and mislead the public. This highlights the urgent need for stronger protections against online harms.

One significant observation is the glaring inequality between different regions in terms of protections from online harms. The disparity is particularly alarming, emphasizing the need for a more balanced and comprehensive approach to safeguarding online spaces. It is crucial to ensure that individuals worldwide have equitable protection against manipulation and disinformation.

Social media companies play a pivotal role in creating safe online environments for all users. This is particularly important with the upcoming 2024 elections, as these companies must fulfill their responsibilities to protect the integrity of democratic processes. However, concerns arise when examining the allocation of resources by these companies. Despite investing $13 billion in platform safety since 2016, Facebook’s use of its global budget for combating false information appears disproportionately focused on the US market, where only a fraction of its users reside. This skewed allocation raises questions regarding the equal treatment of users globally and the effectiveness of combating disinformation on a worldwide scale.

Furthermore, non-English languages pose a significant challenge for automated content moderation on various platforms, including Facebook, YouTube, and TikTok. Difficulties in moderating content in languages other than English can lead to a substantial gap in combating false information and harmful content in diverse linguistic contexts. Efforts must be made to bridge this gap and ensure that content moderation is effective in all languages, promoting a safer online environment for users regardless of their language.

In conclusion, the vulnerability of online spaces and the potential manipulation of democratic processes through the spread of falsehoods raise concerns that require urgent attention. Social media companies have a responsibility to create safe platforms for users worldwide, with specific emphasis on the upcoming elections. Addressing the inequities in protections against online harms, including the allocation of resources and challenges posed by non-English languages, is crucial for maintaining the integrity of online information and promoting a more secure digital environment.

Lia Hernandez

The speakers engaged in a comprehensive discussion regarding the role of digital platforms in promoting democracy and facilitating access to information. They emphasized the importance of independent tech work to advance digital rights across all Central American countries. Additionally, they highlighted the collaboration between big tech companies and electoral public entities, as the former provide tools to ensure the preservation of fundamental rights during election processes.

The argument put forth was that digital platforms should serve as valuable tools for promoting democracy and facilitating access to information. This aligns with the related United Nations Sustainable Development Goals, including Goal 10: Reduced Inequalities and Goal 16: Peace, Justice, and Strong Institutions.

However, concerns were raised about limitations on freedom of the press, information, and expression. Journalists in Panama faced obstacles and restrictions when attempting to communicate information of public interest. Of particular concern was the fact that the former President, Ricardo Martinelli, known for violating privacy, is a candidate for the next elections. This situation has the potential to lead to cases of corruption.

Furthermore, the speakers emphasized the necessity of empowering citizens, civil society organizations, human rights defenders, and activists. They argued that it is not only important to strengthen the electoral authority but also crucial to empower the aforementioned groups to ensure a robust and accountable democratic system. The positive sentiment surrounding this argument reflects the speakers’ belief in the need for a participatory and inclusive democracy.

However, contrasting viewpoints were also presented. Some argued that digital platforms do not make tools widely available to civil society but instead focus on providing them to the government. This negative sentiment highlights concerns about the control and accessibility of these tools, potentially limiting their efficacy in promoting democracy and access to information.

Additionally, the quality and standardisation of data used for monitoring digital violence were subject to criticism. The negative sentiment regarding this issue suggests that the data being utilised is unclean and lacks adherence to open data standards. Ensuring clean and standardised data is paramount to effectively monitor and address digital violence.

In conclusion, the expanded summary highlights the various perspectives and arguments surrounding the role of digital platforms in promoting democracy and access to information. It underscores the importance of independent tech work, collaboration between big tech companies and electoral entities, and empowering citizens and civil society organisations. However, limitations on freedom of the press, potential corruption, restricted access to tools, and data quality issues represent significant challenges that need to be addressed for the effective promotion of democracy and access to information.

Session transcript

Moderator – Bruna Martins Dos Santos:
So, I’m going to start off with a little bit of background on what is happening at the moment, and then I’m going to turn it over to my colleague, Mariana, to talk a little bit about what’s going on. Good afternoon, everybody. We’re just starting out one last issue with Zoom, but I’m going to start off with this session. Welcome to the town hall that’s called Protect People and Elections, Not Big Tech. We’re here to talk about the global coalition for tech justice, which is a group of people who are working on big tech accountability, how to safeguard elections, and trying to bring in a new conversation or improve the current ones about why should we care about elections and why should we make this conversation even closer to social media companies, right? The global coalition for tech justice is a global organization that is based in Brazil, and we’re here to talk about the global coalition for tech justice, which is a group of people who are with me at this panel, but we do have more and more organizations and academics joining this space to discuss some of the things that we are planning for today. And as for those of you that don’t know digital action, we were founded in 2019 and we have been working for a number of years on digital action. So, we have been working on a number of issues, right, about how social media affects democracies and how the other way around works as well, but our work has been evolving some work, some catalization of collective action, building bridges, and also ensuring those directly impacted by tech harms are those that are actually in power, are those the ones that we are listening to. So, during these four days that I’ve been here, it has been a catalyst for to stop the digital sick seal, to stop the 10 micro SDGs, to stop the data hijacking, stopctioning of justice, Social media companies invest less or much less in in in the day-to-day lives So that’s a little bit of what we want to do. I want to first Bring in Alexandra Pardal. She’s the global campaigns director at digital action and then she’s gonna open this panel for us and Explain a little bit more about the year of democracy campaign and what we’re all about Alex. I think you’re in the room, right?

Alexandra Robinson:
Yes, I am. Thank you Bruna and Wonderful to be with you here, so welcome to all our panelists and participants in Kyoto to get today and Those joining us from from elsewhere remotely This is a global Conversation on how to protect people and elections not big tech So I’m Alexandra Pardal from digital action a globally connected movement building organization With a mission to protect democracy and rights from digital threats in 2024 the year of democracy more than 2 billion people will be entitled to vote as US presidential and European parliamentary elections converge with national polls in India, Indonesia, South Africa, Rwanda, Egypt, Mexico, and some 50 other countries. The largest mega cycle of elections. We’ve seen in our lifetimes, but our information spaces and the ability to maintain the Integrity of information and uphold the truth and a shared understanding of reality are more are more vulnerable than ever From foreign and malign influence in elections the use of new tech like generative AI Making it easier for domestic or foreign actors to manipulate and lie to financially motivated globally active disinfo industries, the threats have never been bigger nor more pervasive. Elections are flashpoints for online harms and their offline consequences. Now, over the past four years, Digital Action has collaborated with hundreds of organisations in every continent, supporting the monitoring of digital threats to elections in the EU and elsewhere, and led large civil society coalitions demanding a strong Digital Services Act in the EU and better policy against hate and extremism from social media companies globally. This experience has taught us that there’s startling inequity between world regions when it comes to protections from harms. From disinformation, hate and incitement to manipulation of democratic processes, online platforms just aren’t safe for most people. We know that the platforms run by the world’s social media giants, Meta, Google, X and TikTok, have the greatest global reach they’ve ever had and are at their most powerful, but safeguarding efforts have been weak to protect information integrity globally. For instance, Facebook says it’s invested $13 billion in its platform safety and security since 2016, but internal documents show that in 2020, the company ploughed 87% of its global budget for time spent on classifying false or misleading information into the US, even though 90% of its users live elsewhere. This means there’s a dearth of moderators with cultural and linguistic expertise, where Facebook has been unable to effectively tackle disinformation at all times and most consequentially during elections where when disinformation and other online harms peak. Similarly, non-English languages have been a stumbling block for automated content moderation on YouTube, Facebook, or TikTok. Algorithms struggle to detect harmful posts in a number of languages in countries at risk of real-world violence and in democratic decline or autocracy. What this means is that the risks on the horizon in 2024 are very serious indeed, at a time when social media companies are cutting costs, laying off staff, and pulling back from their responsibilities to stem the flow of disinformation and protect the information space from bad actors. If some of the world’s largest and most stable democracies, the United States, Brazil, have been rocked by bad actors mobilizing on social media platforms, spreading election disinfo, and organizing violent assaults on the heart of their democracies, imagine next year, where we’ll see democracies under threat, like India, Indonesia, Tunisia, alongside a whole swathe of countries that are unfree or at risk, where citizens hope to hold onto spaces to resist the manipulation of the truth for autocratic purposes. How can online platforms be made safe to uphold information and electoral integrity and protect people’s rights? So the challenge of 2024’s elections megacycle is a calling to all of us to show up, ideate, and innovate, bring our skills, talents, and any power we have to the table and collaborate. As an example of what’s in the works and background to the perspectives we’re going to hear today, together with over 160 organizations now, experts and practitioners from across the world, we’ve convened the Global Coalition for Tech Justice to launch the 2024 Year of Democracy campaign in order to foster collective action, collaborations and coordination across election countries next year. Together with our members, the Global Coalition for Tech Justice will campaign, research, investigate and tell the stories of tech harm in global media, supporting and amplifying the efforts of those on the front lines and building policy solutions to address the global impacts of social media companies. So we’re going to be actively collaborating with stakeholders and this conversation today is an opportunity to further these conversations and get collaborations off the ground with all those who share goals of safe online platforms for all. So I’m delighted to introduce this session for this important global conversation on how we protect 2024’s mega cycle of elections from tech harms and ensure social media companies fulfill their responsibilities to make their products and platforms safe for all. So I’m really happy to hand back to Bruna to introduce our panelists and the discussion this morning. Thank you.

Moderator – Bruna Martins Dos Santos:
Thank you so much, Alex and welcome to the session as well. And as she just brought up, this is really a global conversation, right, that we want to do. We want to spark a discussion on how can we collectively ensure that big tech plays its part in protecting democracy and human rights in 2024 elections. It’s not just one, it’s 60 elections as everybody has been talking about this week. So it’s a rather key year for everyone. So we have two provocative questions, kickoff questions for the panelists and I’m gonna bring you, Ashna, into the conversation first. Ashna is programs coordinator, right, for CIPESA. And the first question for you would be whether, like, if you consider that social media platforms and content moderation or the lack of it are shaping democratic elections, and if so, how?

Ashnah Kalemera:
Thank you, Bruna. Good evening, everyone, or good morning, like Alex said. I guess we’re all in very different time zones at the moment. It’s a pleasure to be here. Thank you for the invitation, Digital Action, and the opportunity to have this very important discussion. Once again, my name is Ashna Kalemira, and I work with CIPESA. CIPESA is the Collaboration on International ICT Policy for East and Southern Africa. We are based out of Kampala, Uganda, but work across Africa promoting effective but inclusive technology policy, but also its implementation as it intersects with governance, good governance, obviously, human rights, upholding human rights, as well as improved livelihoods. So I like to start off these conversations on very light notes. Very often, these panels are dense in terms of spelling doom and gloom. So first, I’d like to emphasize that technology, broadly, including social media platforms and the internet, have huge potential for electoral processes and systems. They are critical in ensuring that voter registration is complete and accurate, enabling remote voting for excluded communities or remotely based voters. They have been critical in supporting campaigns and canvassing, as well as voter awareness and education, results transmission and tallying, monitoring malpractice, all of them critical to electoral processes and lending themselves to promoting legitimacy and inclusion. of elections in states that have democratic deficits, which for most of Africa is many of the states. So I think that light note is very important to highlight as we then go on to the doom and gloom that this conversation will likely take. And now we start the doom and gloom. Unfortunately, despite those opportunities, there are immense threats that technology poses for electoral processes in Africa and I guess for much of the world. Increasingly, we’re seeing states, the authoritarian governments especially, leveraging the power of technology for self-serving interests. A critical example there is network disruptions or shutdowns. I see Kiputon coalition members in the room and they work to push back on that excess. On disinformation and hate speech, users, governments, the platforms themselves as well as private companies, PR firms, actively influencing narratives during elections, undermining all the good stuff that I mentioned in the beginning. And very often we ask ourselves at CIPESA and I imagine everybody in the room, why disinformation thrives, right? Because pretty much everybody’s aware of the challenge that it poses, but in Africa especially, it’s thriving and thriving to very worrying levels. One of them is again something positive. It’s because technology is penetrating and penetrating very well on the continent. Previously unconnected communities now have access to information at the click of a button literally, which again in the context of elections is great, but in the case of disinformation, it’s a significant challenge. Secondly is the youth population on the continent with many of them coming online via social media. There’s always jokes in sessions that I’ve attended where there’s African representation that for many Africans, the internet is social media. And that challenge is enabling disinfo and hate speech to thrive. Third is conflicts. The elections that we’re talking about are happening in very challenging contexts that are characterized by ethnic, religious, and geopolitical conflicts. Again, all the nice stuff I mentioned earlier on is then cast with a really dark shadow. Like Alex mentioned, that context that I’ve just described is going to be a very significant stress test come 2024 and beyond for the continent. And we’re likely to see responses that undermine the potential of the technology to uphold electoral legitimacy, but also for citizens to realize their human rights. One of those reactions we’re likely to see from a state perspective is weaponization of laws to undermine voice or critical opinion online, which again undermines electoral processes and integrity. And unfortunately, given the context around conflicts, we’re likely to see a lot of politically, sorry, fueling politically motivated violence, which restricts access to credible information and ultimately perpetuates divides and hate speech and can lead to offline harms. Now, bringing the conversation back to big tech, on the continent, unfortunately, we’re seeing very limited collaboration between tech actors and media and civil society in, for instance, identifying, debunking or pre-banking, depending on which side of the fence you sit, and moderating disinformation. Also, the processing and response times to reports and complaints are really slow, and this is discouraging reporting and ultimately maximizing, in some cases, circulation of disinformation and hate speech. There are also significant challenges around opaqueness in moderation measures. We’ve seen the case in Uganda during the previous elections where a huge number of. were taken down for otherwise not very clear reasons, and that led to a response from the state, i.e. shutting down access to Facebook, which remains inaccessible to date in Uganda. So, given those pros and cons, and either side of the coins that I’ve just described for the African continent, it’s important to have collaborative actions and movements just like what Digital Action is spearheading and we’re really honored to be a part of. And efforts in that regard should focus on showing up and participating in consultation processes just like this or others, where there are opportunities to challenge or provide feedback and comments. I think that’s really important. Such spaces are not many. We at CIPESA host the annual forum on internet freedom in Africa. We marked 10 years a couple of days ago, and for the second time, we were able to have the meta oversight board present and able to engage. They admitted that cases from the African continent are limited, but spaces like the forum on internet freedom in Africa that CIPESA hosts is providing that opportunity for users and other stakeholders to deliberate on these issues. I cannot not say that research and documentation remains important. Of course, we’re a research think tank and we’re always churning out a lot of pages and pages that are not necessarily always read, but I think it’s important because evidence-driven advocacy is critical to this cause. Skills building, again, digital literacy, fact-checking, and information verification, that remains critical, but also leveraging norm-setting mechanisms and raising the visibility of big tech challenges in new end processes, universal periodic review, the Africa Commission of Human Peoples’ Rights. These conversations are not filtering up as much as they should do, so there should be interventions that are focused on that, and interventions that, of course, promote and challenge private sector. to uphold responsibilities and ethics through application of the UN guiding principles on business and human rights. Lastly, is strategic litigation. I think that’s also an opportunity that’s before us in terms of challenging the excesses that big tech poses for elections in the challenging context that I’ve just described. Thank you. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks, Ashna. Thank you very much. Just speaking on two of the topics you spoke about, which is the weaponization of policymaking processes and politically motivated violence, I think that bridges very well with the recent scenario in Brazil, right? With, unfortunately, the repetition or yet another attack on a capital. And after a lot of discussions on a fake news draft bill and regulation for social media companies. Yasmin, I’m gonna bring you in now. Yasmin is from FGV Rio de Janeiro and also the co-coordinator of the DC on platform responsibility. Welcome.

Yasmin Curzi:
Thank you so much, Bruna. Could you please display the slides? Thank you so much. So addressing the first question that Bruna posed to us here, are social media and platforms content moderation shaping democratic elections? I’m sorry. To answer this question, I’d just like to give a brief context about the elections in Brazil, sorry, about the Brazilian legislative scenario regarding platform responsibilities. There are two main pieces of legislation that deal with content moderation issues. Specifically, since 2014, we have the Brazilian Civil Rights Framework, aka Marco Civil da Internet, probably known by many of you here. It establishes our basic principles for internet governance, such as free speech, net neutrality, protection of privacy and personal data. but also established liability regimes for platforms regarding UGC in its article 19 to 21. To sum up really quickly, article 19 created a general regime in which platforms are only liable for illegal UGC content if they not comply with judicial order asking for the removal of a specific content if it is within the platform’s capabilities to do so. There are only two exceptions to this rule, one for copyrights and one for non-authorized intimate imagery dissemination for which a mere notification of the user or their legal representative is surface. The second one is the Code of Consumers Defense, aka CDC, which considers users as hypo-sufficient and vulnerable in their relations with enterprises. In its article 14, CDC establishes an objective liability regime, a strict liability regime, in which enterprises or service providers are responsible regardless of the existence of fault for repairing damages caused to consumers due to defects or insufficient or inadequate information about their risks. So, in this sense, these two pieces of legislation can give users many protections online regarding harmful activities and illegal content. Nevertheless, users are still unprotected of the many online harms that are not clearly illegal, such as disinformation, or that are not even perceived as harm to them, like algorithmic gatekeeping, shadow banning, micro-targeting of problematic content. Regarding the first issue, given the non-existence of a legislation that deals specifically with coordinated disinformation, our Electoral Superior Court has been enacting resolutions to set standards for political campaigns and else. Also, the Electoral Superior Court established in the scope of its Fighting Disinformation Program partnerships with the main platforms in Brazil, such as Meta, Twitter, TikTok, Kuai, WhatsApp, and Google, that sign official agreements stating what their initiatives would be. In these documents, most of them committed with creating reporting channels, labeling content as electoral-related, and redirecting users to the Electoral Court official website and promoting official sources. Instagram and Facebook also developed cute stickers to support users to vote, in spite of voting being already mandatory in Brazil. Nevertheless, we don’t have enough data to see the real impacts of these measures, just the generic data on how much content was removed in a given platform, also generic data on how they are complying with the legislation. This sort of data is offered by the main platforms in Brazil since the establishment of partnership programs with fact-checking agencies in 2018. I’m not saying that they are not removing enough content. What I want to highlight here is that we don’t have data or metrics to understand what this generic number means, nor do we have knowledge on if the content is being removed fast enough to not reach enough users. Furthermore, in fact, some of these efforts to combat falsehood on YouTube, for example, were themselves a risk for democracy and elections in 2022. By the official sources program, this is the slide that is displayed right now, a hyper-partisan news media channel, Jovem Pรฃo, was being actively recommended to YouTube users. To give an example, the election day, Jovem Pรฃo was disseminating a fake audio. allegedly from a famous Brazilian drug dealer, Marcos Camacho, aka Marcola, in which he was supporting Lula’s election. Justice Alexandre de Moraes from the Brazilian Federal Supreme Court, which was presiding the Superior Electoral Court, ordered for the removal of the content, but not before it had already reached 1.7 million visualizations. Supporters also shared this video at least 38 WhatsApp groups and Telegram groups monitored by the fact-checking agency, Ausfatch. So to Bruna’s question, are social media and platforms content moderation shaping democratic elections, I tend to answer no, or at least not significantly, as either we have not significant data, or we don’t have enough information on their actions and results. That’s it. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks a lot, Jasmine. I’m going to bring it to Leah right now as well. Leah is representing Independent Tech, right, and also a fellow Latin American. Yet another region of the world that’s facing a lot of those discussions, right, in terms of proper resources, deployments, and also policymaking as well. So Leah, welcome to the panel.

Lia Hernandez:
Thank you so much, Bruna. Good afternoon. Well, my name is Leah Hernandez. I’m going to talk about mailing the recent and next electoral process in Central America, because politics is a big part of our conversation. I speak very loud, so no. OK, perfect. Well, because Independent Tech is a digital rights organization based in Panama City, but working in all Central America. So I’m going to refer mainly to the recent electoral process in Guatemala and the next electoral process in Panama that will take place in May 2024. And the first thing is that I want to send all my support to the Guatemalan people where they are mobilizing in the streets because they are demanding democracy in their past elections in the country. In Central America, digital platforms make tools available to our electoral public entities because they try to help them to verify the information and to avoid any violation of our digital rights, our fundamental rights as protests, freedom of expression, freedom of press, privacy. But currently in countries such as Panama, my country, a digital media platform and a journalist were ordered to remove information from their platform by the Tribunal Electoral, it’s the Panamanian electoral public entity, and they got a fine because they were posting information about Ricardo Martinelli Verrocall, I don’t know if you know about Ricardo Martinelli, he’s very famous, he’s so famous as Lula and Bolsonaro in Brazil, well, he was a former president of Panama and he’s a candidate for the next elections in Panama because he wants to be president again, and by the way, he’s the most violator of privacy in the country. So the electoral entity in Panama ordered this journalist to remove information about them because it’s against the democracy and it’s against their privacy, their own image. So the question is, if big techs are giving tools to our electoral public entities to promote democracy, to promote access to information, to promote fundamental rights. why electoral entities put barriers to the citizens, to journalists, and to communicators, who their main fulfill is legitimate the duty to inform, the duty to communicate to the citizen what is happening in the countries, and more in these cases of corruption, because this former president is very corrupt. So freedom of expression, freedom of information, and freedom press are limiting in Panama when journalists try to communicate based on the principle of public interest that we have in knowing the good, the bad, or the ugly of our candidates in our electoral process. Digital platform must match their words because with their actions, because even though they don’t have any autonomy in the country, in the decision of the electoral branch, they should not become like part of the problem, and limitate constitutional warranties, such as freedom of press. So mainly this is a very recent case that we are follow in Panama, and thank you so much, Bruna, for just facing this panel.

Moderator – Bruna Martins Dos Santos:
Thanks so much, Lia, very interesting that this kind of like, right, there is an ongoing line of major interferences with expression, with conversations online, and it’s not just like one or two countries, but it’s often the lack of, sometimes it’s the responsiveness, sometimes it’s the ongoing conversation, or the cooperation that social media platforms should have with authorities, and that should be interesting to be developing that, but there are also downsides to those partnerships when it goes towards the path of further requests for data, and access. or even like privacy violations, right? So it is definitely a hard and deep conversation. I’m gonna go now to Dan, Daniel Arnaldo from NDI. Dan, so welcome to the panel as well, and same question as the others.

Daniel Arnaudo:
Yes, thank you. Thanks for having me. Thanks for everyone for being here, and we’re really pleased to be a part of this coalition. For those who don’t know, I’m from the National Democratic Institute. We’re a non-profit, non-partisan, non-governmental organization that works in partnership with groups around the world to strengthen and safeguard democratic institutions, processes, and values to secure a better quality of life for all. We work globally to serve elections, strengthen elections processes, and my work particularly is to support a more democratic information space. And in this work, we engage with platforms around the world, both through coalitions like this or others, such as the Global Network Initiative, the Design for Democracy Coalition. We help highlight issues for platforms. We perform social media monitoring. We engage in consultations on various issues ranging from online violence against women in politics to data access and crisis coordination. I think as was mentioned, 2024 will be a massive year for democracy. And from our perspective, I think we’re particularly concerned about contexts we work in throughout the global majority and particularly small and medium-sized countries that do not receive the same attention in terms of content moderation, policies, research tools, and data access, and many other issues. This is all in the context of, I think, what is a serious disinvestment in civic integrity, trust and safety, and related teams within these organizations. So just in the region, I think you have Bangladesh, Indonesia, India, Pakistan, and Taiwan that will all hold elections in the coming year. I know there will be some resources devoted to larger countries, but on the other hand, they are massive user bases, and the smaller ones are going to receive very little attention at all. So, I think this is a consistent focus for our work and for considerations around these issues. I think one of the main kind of recommendations that I would have would focus around data access. In the context of this disinvestment, I think we’re seeing a serious pullback from access for third-party researchers. We are very concerned about changes in the APIs and in different forms of access to data on the platforms, as I think some of my other panelists have discussed, for research and other purposes, particularly meta and Twitter or X, and continued restrictions in other places. They are building mechanisms for access to traditional academics in certain cases, but not for researchers or broader civil society that live and work in these contexts. They’re often provisioned through mechanisms that are controlled within large countries in the United States or in Europe, and there aren’t really systems in place both for documentation or understanding those systems, and that there are, you know, huge barriers to that kind of access, even when it’s enabled in that sense. So that’s something that I would really urge companies in the private sector and groups such as ours to coordinate around in terms of figuring out ways of ensuring that access in future to shine a light within those contexts. Secondly, I think they’re ignoring major threats to those who make up half or more of their user base, namely women, and particularly those involved in politics, either as candidates, policymakers, or ordinary voters. Research has shown that they face many more threats online, and platforms need to institute mechanisms that can support them both to protect themselves, to understand threats, to report and escalate issues as necessary. We have conducted research that shows both the scale of the problem, but also look to introduce a series of interventions and suggestions for the companies and others that are working to respond to these issues. But I think this is really a global problem that we see in every context we work in globally. And I think many in the room will understand this threat and this issue. Finally, I think there’s a need to consider critical democratic moments and to work within those specific situations. How they can work with the broader community to manage them, not only elections, but major votes or referenda, and also more critical moments such as coups, authoritarian contexts, protests, really critical situations. If they cannot appropriately resource these contexts and situations that they may not have greater understanding of, they at least need to engage with organizations that understand them and help to react and effectively make decisions in these challenging situations. I think retreat from programs such as the Trusted Partners in the case of Meta and a consistent whittling down of their teams that are addressing these issues will have impacts on these places, on elections, on democratic institutions, and ultimately these companies’ bottom lines. The private sector should understand these are not only moral and political issues, but economic ones that will push people away from these spaces as they become hostile or toxic to them in different ways. We understand the trade-offs in terms of profit and organizing systems that are useful for the general public, but we would encourage companies to reflect that the democratic world is integral to the open and vibrant functioning of these platforms. As with 2016 and 2020, 2024 will be a major election year and also likely represent a concomitant paradigm shift in its moderation and information manipulation campaigns and regulation, which is another kind of threat that companies need to consider, and a host of related themes that will have big implications for their profits as well as democracy. I think they are going to ignore these realities at their peril.

Moderator – Bruna Martins Dos Santos:
Thanks a lot, Dan. And also, thanks for highlighting some of the things that are the year of democracy campaign. We issued a document that’s the campaign ask. So some things we would like to require from social media companies, such as streamlining human rights, or even bringing in more mechanisms to protect users, and addressing the problem at the real scale. We are not just saying, like, issue plans for elections. We are also saying, like, deploy the solutions. Invest the money. It’s not just Brazil that matters, but it’s also Brazil, India, Kenya, Tanzania. So that’s what’s really core and relevant about this conversation, for sure. So thanks a lot, everybody. I would like to ask if anyone has any questions for the panelists, or would like to add any thoughts to the conversation. There is a microphone in the middle of the room, so yes.

Audience:
Thank you for giving me some space and ability to express myself. So I’m from Russia. We have, like, a digital election system in Russia. And we are talking about, like, threats which are posed by global media platforms all around the world. Preferably, it’s meta. It’s, like, Facebook, Instagram, and Google, and et cetera, et cetera. But we didn’t talk about cyber threats to these digital election systems. For example, like, two months ago, we had elections all over Russia. And our digital election system was attacked by a denial of service attack by Ukrainian party to disrupt elections. And elections were disrupted for, like, three or four hours. And citizens were not able to actually vote. So this is not something about, like, harming Russia as a state. It is something about harming Russian citizens as citizens. That’s number one problem. The second problem is, I think you have mentioned it before, but I think it’s a little bit deeper. Because we have talked a lot about global media platform involvement in information manipulation, fakes, and disinformation spread, et cetera. But we didn’t talk about global media platform’s position, which is tend to be neutral, but is not always neutral in terms of conflict. Because there are two sides, and sometimes global media platforms choose sides. And what we see and what we talk about a lot is that global media platforms have some very closed, very secret recommendation algorithms, which basically forms the news feed for users. And the situation is that, for example, in some countries in Africa, Facebook, and I think you can approve me, Facebook actually, represent like internet for some people. And Facebook can do a revolution in a click, just altering users’ news feed with their recommendations algorithms. And nobody knows how these algorithms work. And I think internet society, and global international society, and IGF included, should put more pressure on global media platforms for making these algorithms more transparent. Because people should know why they’re seeing this or this content. That’s all. Thank you so much for giving me some time. Thanks a lot. Any other questions? Hello. Thank you for the panel. My name is Laura. I’m from Brazil. I’m here for the youth delegation, but I’m also a researcher at the School of Communication and Media Information at Getulio Vargas Foundation in Brazil as well. And I’d like to hear more about the issue of data access for academic research and civil society research. As a center specialized in monitoring the public debate in social media, we are very concerned with the recent changes mentioned by Arnaldo and mentioned by Yasmin as well, regarding the data access for us. And I’d like to hear more about what kind of tools and mechanisms can the academic community and the civil society community in general access to fight those restrictions and to face these issues, not only in the regulatory sphere where this debate is present, but also in a more broad way. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks so much, Laura. And the last question?

Audience:
Okay, two points. I’m Alexander from a country in spring of which next year 145 million will elect Vladimir Putin as president. And I have two points. First of all, I would like to thank Timothy about information on the DOS attacks because Russian Central Election Commission didn’t confirm any issues with electronic electoral systems. Unfortunately, such system in Russia was created by Russian big tech Kaspersky, created one system used in Moscow and Rostelecom, which could be considered as a big tech, created another one. Systems completely intransparent, does not comply to the Russian Commission recommendations and other kind of recommendations for digital system. And on my point, a few intended for just faking results. I hope. So if you are interested about such details, please ask me later. But I would like to ask maybe not panel, but everyone. Have somebody participated in elections last time? Thank you. Yeah. Okay. Have you tried to use platforms for your promotion? Okay. Nowadays, I also would like to inform Tim if Facebook is not possible, is not legal to be used in promotions. But before, I’ve created a political activist or political candidate page on Facebook and would like to advertise myself in a constituency with about 20,000 voters. So, I asked Facebook, please make a suggestion, and they suggested me two new contacts for 10 bucks. So, I think in some cases, either platforms don’t understand requirements for candidates, if it’s not presidents, something like, either we need to work with this, either they will want too much money for promotions. Because, okay, if I would create pret-a-porter cakes, maybe two contacts for 10 bucks is reasonable, but not for the one who wants to advertise himself in a constituency. So, I think such work with platforms and platforms helping candidates, especially in restrictive regimes where advertisements on the physical media is no longer possible, is also should

Moderator – Bruna Martins Dos Santos:
be done. Thank you very much. Thanks, Alexander. We have one extra question from the chat that I’m just going to hand out to you guys, and you don’t need to answer all of them, the ones that speak to you the most, I guess. The one that’s on the chat is, what should be done legally when some cross-border digital platforms like Meta refuse to cooperate to national competent authorities regarding cybercrime cases like incitement to violence and promoting pornography for children and private images, and even in serious crimes and refuse to establish official representatives in the country? Rather dense question as well, but… I will give it back to the floor to you guys, and as we’re moving to the very end of the session, you only have 12 more minutes, I would maybe also ask you to, in a tweet, if you could summarize what would be your main recommendation for addressing this so-called global equity crisis in big tech accountability. So I know it’s difficult to summarize that, but if you have a tip, an idea, a pitch for that, it’s very much welcome. I’ll start with you, Ashna.

Ashnah Kalemera:
Thank you, Bruna, and thank you for the very, very rich questions. I think they highlight that this conversation is not limited to elections and misinfo and disinfo or hate speech, but there are very many other aspects around it. The DOS attacks that you pointed out, which speak to tech and the resilience of not just civil society organizations, but even electoral bodies and commissions or entities that are state-owned or run and leverage technology as part of elections, as well as other conversations around accessibility and exclusion, because some of that technology around elections excludes key communities, which brings about apathy and low voter turnout, all of them critical to the conversation around elections. Similarly, the point around positions and the power of these tech companies to literally start revolutions, to borrow your word, I think that, too, is an area that is critical to deliberate more on. The answers are not very immediate. Some of the work that we’ve done in researching how disinfo manifests in varying contexts has highlighted that the agents, the pathways, and the effects vary from one context to another. like I mentioned in the beginning, in contexts where there’s conflicts, religious or border conflict or electoral conflict, the manifestations are always very different, the agents are always very different. So we’re not necessarily pointing a finger only at big tech but I think we are all mindful of the fact that this is a multi-stakeholder conversation that must be had and should be cognizant of all those challenges. There was an issue on research, I think that’s something that we’ve felt on the continent, the inaccessibility of data. Previously at CIPESA we’ve leveraged data APIs, I believe that’s the technical term, to document elections and monitor elections, social media, sentiment analysis and micro-targeting. That capacity is now significantly limited so we’re not able to highlight some of the challenges that emerged during elections around big tech. That’s not to say documentations through stories or humanization would not have the same effect if the access to data is limited. What else did I want to talk about? Now I forget because there were so heavy questions but yes, the conversation is much broader than just elections and big tech alone. We all have a role to play and engaging the least obvious actors like electoral bodies, regional economic blocs and other human rights monitoring or human rights norm-setting mechanisms is also critical to the conversation. Thank you.

Yasmin Curzi:
So, regarding recommendations I think it’s only possible actually to have really real accountability. If we have like specific legislation and regulation of platforms, it’s not possible to have like a multi-stakeholder conversation if we have like the sort of the power symmetries are just too big for us to sit on the same table and discuss with them and talk to them. They set all the rules that are on the table, so it’s not possible to talk to them without regulation. In Brazil, for example, during the elections again, the journalist Patrรญcia Campos Melo and Renata Gauf asked Facebook how much they were investing, not only Facebook, sorry, Facebook and YouTube, how much they were investing in content moderation in Brazil to see how much they were complying with their own memo agreements that they made with, that they signed with the Superior Electoral Court. And they did not answer, they just said that this was sensitive data. And this is like we are talking about aggregated data of how much they were investing financially to improve their content moderation in Portuguese. So if we don’t have this basic information, if we don’t have like how to assess how much harmful content is being recommended by their platforms, it is quite difficult for us to be able to make proper public policies to address these issues. So I’d just like to display the slides again just to do some propaganda. Sorry, can you display the slides again, just a minute? Just to make a brief propaganda, we have at the DCPR, our Dynamic Coalition on Platform Responsibilities. Our outcome last year was a framework. on meaningful transparency, meaningful and interoperable transparency, with some thoughts for policymakers and regulators worldwide if they want to implement, and also platforms if they are able and eager to improve their best practices, so they also can adopt this framework. And this year, our outcome we are going to release tomorrow also focusing on human rights, risk assessments, and else. So this is our title. It’s like a collaborative paper with best cases and also discussing legislation in India, GSA, GMA, the Brazilian legislation. So we are going to release it tomorrow. Our session is at 8.30. So thanks. I’m sorry for doing the propaganda. I just wanted to show the document. So this is what I would recommend for people to.

Daniel Arnaudo:
Yeah, thanks for the questions. I think certainly algorithmic transparency can be a good thing. You just have to be careful about how you do it. And to create systems to understand the algorithms, I think they can also be gamed in different ways if you have a perfect understanding of them. So it’s a tricky business. I think definitely on need for better protections and systems for smaller candidates in different contexts, it’s a part of the system. It’s not just the individual users and what they’re seeing and how these systems or these networks are being manipulated, but also how candidates can have access to information about political advertising or about even basic registration information. I think every country in the world should have access to the same systems that are used by Meta and by other major companies, Google and others, to promote good political information. And I mean. very basic political information about voting processes, about political campaigns, anywhere in the world. I think on data access, certainly, you’re seeing a revolution right now in terms of how the companies are providing access to their systems, and I think it’s focused on X and Twitter. That has changed the way that any sort of research is being done on those platforms. It’s much more expensive, it’s more difficult to get at. I think companies need to reconsider what they’re doing in terms of revising those systems and making them more difficult for different groups. Meta, in particular, I think will be really critical, so I think we need to work collectively to make sure that they make those kinds of systems like APIs available to as many kinds of people as possible. I think, you know, certainly, there are issues around placing company employees in certain countries around the world, and that can be problematic in certain ways because they could also be authoritarian contexts, and then the employees become bargaining chips, potentially, within certain kinds of regulations that they want to enforce, so you have to be careful about that, but I certainly understand the need to enforce regulations around privacy and content moderations and other issues, so I think it’s something that has to be designed carefully. I think, you know, certainly, there’s a huge crisis in terms of how companies are addressing different contexts, and they need, I think, ultimately, to better staff and resource these issues or these different contexts, so to have people that speak local languages, that understand these contexts, that can respond to issues in reporting, that know what they’re doing, but this is expensive, and I don’t think you’re gonna be able to work your way out of it through AI or something like that, as many have proposed, so I just think it’s something that they need to recognize that reality, or they’re gonna continue to suffer, as, unfortunately, we will all.

Lia Hernandez:
Just one minute, well I think that it’s necessary not just to empower the electoral authority, it’s most necessary to empower citizens, civil society organizations, human rights defenders, activists, because we are really working to promote and to conserve the democracy in countries. So this is my recommendation, and regarding your question about the data, for example in our case, we are working in monitoring digital violence based against candidates in the next election in Panama, and everything is very manual, because the digital platforms, they don’t make available the tools to the civil society, they make available the tools to the government. So we are trying to sign an agreement with the electoral authority to maybe have access to that tools, because it’s necessary to finish the work before the elections, and in another case, the data is not clean, they don’t use open data standards, so we have to find, sometimes guess the information that they have, not upgrading in their websites, so it’s a bit difficult for us to work with this kind of data.

Moderator – Bruna Martins Dos Santos:
Thanks a lot to the four of you, and Alex as well, that is following us directly from the UK, thanks everybody for sticking around as well. If any of this conversation stroke a note with you, go to the yearofdemocracy.org, that’s the website for the Global Coalition for Tech Justice campaign, and have a nice rest of the IGF, thanks a lot.

Alexandra Robinson

Speech speed

142 words per minute

Speech length

954 words

Speech time

402 secs

Ashnah Kalemera

Speech speed

162 words per minute

Speech length

1780 words

Speech time

658 secs

Audience

Speech speed

145 words per minute

Speech length

905 words

Speech time

375 secs

Daniel Arnaudo

Speech speed

172 words per minute

Speech length

1578 words

Speech time

549 secs

Lia Hernandez

Speech speed

132 words per minute

Speech length

744 words

Speech time

338 secs

Moderator – Bruna Martins Dos Santos

Speech speed

181 words per minute

Speech length

1397 words

Speech time

463 secs

Yasmin Curzi

Speech speed

141 words per minute

Speech length

1287 words

Speech time

548 secs

Promoting the Digital Emblem | IGF 2023 Open Forum #16

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Koichiro Komiyama

According to a report by the IISS, several Asian countries, including China, Australia, India, Indonesia, Iran, North Korea, and Vietnam, are significantly increasing their cybersecurity capabilities. This development has raised concerns about the escalation of cybersecurity capabilities in Asia.

Ransomware attacks have been on the rise, with damages increasing, and many of these attacks being driven by commercial profit. Over the past year, there have been successful breaches of critical infrastructure, such as hospitals. This highlights the vulnerability of essential services to cyber threats.

Japan, traditionally known for refraining from cyber offense due to its peace constitution, has changed its stance on cyber offense in light of national security concerns. This shift in policy indicates that Japan is recognising the need to enhance its cybersecurity capabilities.

To combat cybercriminal activities, the application of guidelines or emblems is suggested as a method to pressure criminal groups regarding their operations. Such guidelines can establish a framework for acceptable behaviour, discouraging criminal activities in cyberspace.

Koichiro Komiyama, a prominent individual in the field, has expressed concerns about cybersecurity threats specifically targeting hospital and medical systems. He emphasises the need for proactive measures to safeguard vital systems against evolving cyber threats.

Moreover, the implementation of local environment concepts for critical systems is considered crucial. Critical systems, whose offline or disconnected nature makes them less vulnerable to cyber attacks, do not use global IP address spaces or associate with any domain name. Implementing these concepts enhances the security of such systems.

Overall, the increasing cybersecurity capabilities of several Asian countries, coupled with the rise in ransomware attacks and successful breaches of critical infrastructure, highlight the urgent need for robust cybersecurity measures. It is essential to address cybersecurity threats to hospital and medical systems. Furthermore, the adoption of local environment concepts can enhance the security of critical systems.

Audience

During the discussion, concerns were raised about the offensive cyber capabilities that AI is reportedly enhancing. Automation and AI have increased the speed of cyber capabilities, leading to growing apprehension. The feasibility and effectiveness of the digital emblem solution were questioned, specifically regarding its ability to deal with the accelerated speed and wider reach of cyber capabilities. Doubts were expressed regarding whether cyber capabilities would take the time to verify the authenticity of digital emblems.

The discussion emphasized the need for strong interest from states and sub-state organizations in the digital emblem solution. The successful implementation and socialization of the solution require a strong appetite among these entities. Incentives were identified as necessary to encourage their engagement with the digital emblem solution. Additionally, the degree of interest among states and sub-state organizations was discussed, highlighting the importance of incentivizing their involvement.

The issue of incentivizing non-state actors and less organized groups to respect digital emblems was also raised. There was an example of activists in Russia and Ukraine pledging to reduce the scale of their cyber operations, indicating some willingness to comply. However, motivating these actors to fully respect and adhere to digital emblems remains a challenge.

Attribution problems and issues with incentivizing state actors were discussed. It was argued that problems with incentives and attribution could discourage state actors from respecting the digital emblem. This could potentially make emblem violations easier without clear attribution to a specific state.

The visibility of hospital targeting in the Asia-Pacific region was highlighted as evidence of the urgent need for the proposed emblem. Hospitals in this region are targeted by nation-states on a daily basis, underscoring the necessity of finding a solution to prevent such attacks.

The discussion also touched upon the self-regulation within the criminal community. It was mentioned that the criminal community regulates itself against targeting perceived “soft targets.” This suggests that there may be a deterrent effect that discourages criminals from attacking certain entities.

Finally, the potential role of Internet Service Providers (ISPs) in validating adherence to the digital emblem was suggested. ISPs possess the ability to identify operational nation-states and their infrastructure, which could provide insights into whether the emblem rules are being followed.

Overall, the discussions highlighted various challenges and concerns related to offensive cyber capabilities, the feasibility of the digital emblem solution, and the imperative of strong engagement from different actors. The importance of incentivizing compliance and addressing attribution issues was emphasized. The visibility of hospital targeting and the potential role of ISPs were also significant points of discussion.

Felix Linker

The ADEM (Authentic Digital Emblem) system, developed by Felix Linker and his team, is a technological solution designed to address the need for verifiable authenticity and accountability in the digital landscape. It was developed in response to a request from the International Committee of the Red Cross (ICRC) for a digital emblem. The purpose of ADEM is to provide a reliable and tamper-proof method of identification and endorsement for protected parties.

ADEM is designed to be a plug-in to the infrastructure of protected parties, such as the ICRC, allowing for the autonomous distribution of emblems. Prototyping is ongoing with the ICRC, and plans are in place to deploy ADEM within their network. This move is seen as a positive step towards enhancing cybersecurity and supporting the mission of protected parties.

One key aspect highlighted in the discussions is the role of nation-states in endorsing protected parties. ADEM allows nation-states to make sovereign decisions regarding the endorsement of protected parties, and emblems will be accompanied by multiple endorsements from nation-states. This approach empowers nation-states to exercise control and support protected missions according to their individual preferences and policies. It is considered a positive development in promoting digital sovereignty and aligning with the goals of SDG 16 (Peace and Justice) and SDG 9 (Industry, Innovation, and Infrastructure).

However, challenges arise when it comes to verifying endorsement requests. Felix Linker raises concerns about technical organizations that control parts of the internet naming system, such as ICANN. He believes that these organizations may struggle to authenticate requests for endorsement due to their technical nature. This argument carries a negative sentiment as it highlights a potential limitation in the current system.

In light of these challenges, Felix suggests that endorsement of protected parties could be undertaken by nation-states, supranational organizations, or entities with relevant experience and knowledge in the field, such as the ICRC. He emphasizes the importance of not burdening technical organizations with additional responsibilities that may not align with their expertise. This perspective is seen as positive as it suggests a more suitable and effective approach to securing endorsements for protected missions.

ADEM consists of two main components. The first component focuses on protecting entities identified using IP addresses and domain names. This aspect of ADEM aims to provide security and authenticity at the network level. The second component involves granting emblems through mechanisms such as TLS, UDP, and DNS. These mechanisms serve as a means to validate and authenticate the emblems, ensuring their authenticity and reliability. This dual aspect of ADEM showcases its comprehensive approach to safeguarding the integrity and authenticity of protected parties.

Felix’s team is also working on the development of local emblems, which aim to protect against threats at the device level. By addressing vulnerabilities such as malicious email attachments and network penetrations, this extension of ADEM provides an extra layer of security and ensures a holistic approach to safeguarding digital assets and missions.

Moreover, the discussions highlight the benefits of emblems in monitoring and reducing cyber attacks. Emblems serve as a mechanism for verifying the authenticity and legitimacy of actors engaging in cyber activities. By recognizing and respecting emblems, actors can be monitored more effectively to prevent and mitigate potential cyber threats. This observation carries a neutral sentiment as it reflects the potential of emblems in enhancing cybersecurity efforts.

Lastly, the proposition of Internet Service Providers (ISPs) taking on the responsibility of monitoring emblem distribution is viewed positively. Felix suggests that ISPs could play a crucial role in regularly checking whether emblems are being sent out as intended. This proposed role for ISPs aligns with SDG 16 and SDG 9 and potentially enhances the effectiveness of emblem distribution and validation.

In conclusion, the development of the ADEM system presents a promising solution for achieving authenticity and accountability in the digital realm. By allowing the autonomous distribution of emblems within the infrastructure of protected parties, ADEM promotes enhanced cybersecurity and supports protected missions. The involvement of nation-states and the consideration of various endorsement mechanisms further strengthen the system’s reliability and effectiveness. However, challenges exist in verifying endorsement requests, particularly concerning technical organizations’ ability to authenticate requests. The development of local emblems and the potential role of ISPs in monitoring emblem distribution offer additional layers of protection and monitoring. Overall, ADEM holds great potential for advancing digital security, ensuring authenticity, and supporting the goals of SDG 16 and SDG 9.

Moderator – Michael Karimian

The digital emblem is an innovation in humanitarian protection aimed at extending protections into the digital realm. Its purpose is to safeguard medical and humanitarian entities from cyber operations. This concept acknowledges the evolving nature of warfare and conflict, where cyber operations play an increasingly impactful role. By implementing the digital emblem, these entities can continue their work without fear of cyber operations.

Furthermore, the digital emblem represents a collective commitment to protecting the vulnerable from cyber threats. It highlights the intersection of technology, cybersecurity, and humanitarian protection, emphasizing the need for collaboration and advanced measures to ensure a secure digital future. This collective commitment signifies the importance of addressing cyber threats within the broader context of humanitarian efforts.

Applying multi-factor authentication and zero-trust principles can significantly enhance cybersecurity. Studies have shown that 99% of cyber-attacks can be prevented by adopting basic cybersecurity practices, including these two measures. By implementing multi-factor authentication, which requires multiple forms of verification for access, and following the zero-trust approach, which assumes no trust by default and verifies every action, organizations can greatly increase their cybersecurity resilience.

Keeping systems updated and employing data protection measures through encryption are also essential in minimizing the risks posed by cyber attacks. By ensuring that software and patches are up to date, organizations can protect themselves from known vulnerabilities. Additionally, encryption provides an added layer of security by securing sensitive data and making it unreadable to unauthorized parties.

To bolster cybersecurity efforts, it is encouraged for tech and telecommunications companies to join initiatives such as the Cyber Security Tech Accord and the Paris Call for Trust and Security in Cyberspace. The Cyber Security Tech Accord is a coalition of approximately 150 members committed to best practices and principles of responsible behavior in cyberspace. The Paris Call for Trust and Security in Cyberspace is the largest multi-stakeholder initiative aimed at advancing cyber resilience. By becoming part of these initiatives, companies can contribute to collective efforts in maintaining a secure cyber environment.

Engaging with the Cyber Peace Institute can also aid in improving cybersecurity. The Cyber Peace Institute focuses on promoting norms and advocating for responsible behavior in cyberspace. Collaborating with this institute can provide valuable insights and resources to enhance cybersecurity practices.

In the context of protecting medical facilities and humanitarian organizations, a multidimensional approach is required. This includes implementing technical solutions, fostering collaboration among various stakeholders, conducting research, and advocating for enhanced protection. The challenges and potential solutions in safeguarding these facilities and organizations were discussed, emphasizing the importance of research and advocacy in the process.

The significance of audience engagement and the contributions of the speakers were acknowledged in supporting the protection of medical facilities and humanitarian organizations. These discussions underline the critical importance of ensuring the safety of these entities, as the consequences of attacks can be just as devastating as physical assaults.

Overall, the digital emblem represents a critical innovation in humanitarian protection, offering safeguards against cyber operations for medical and humanitarian entities. By promoting the intersection of technology, cybersecurity, and humanitarian protection, advocating for best practices and responsible behavior, and implementing advanced cybersecurity measures, organizations can enhance their resilience against cyber threats. Collaboration, research, and advocacy are also essential in protecting medical facilities and humanitarian organizations. By joining together and adopting comprehensive strategies, we can create a more secure and resilient digital space.

Mauro Vignati

The International Committee of the Red Cross (ICRC) considers the digitalization of the emblem to be crucial and necessary. The digital emblem is used to identify medical personnel, units, and organizations, providing a means of recognition during armed conflicts. The ICRC argues for flexibility in the usage of the digital emblem, limiting its use to selected entities solely during times of armed conflict.

Initiated in response to the need for increased protection during armed conflicts and the COVID-19 pandemic, the ICRC began researching the digitalization of emblems. The digital emblem aims to provide security for medical facilities and Red Cross organizations.

Several technical requirements have been defined to ensure the effectiveness of the digital emblem. Ease of deployment, compatibility with different devices, and the ability to verify authenticity are among the key considerations. It is essential that the emblem can be utilized by both state and non-state actors.

Despite the benefits of the digital emblem, there are various challenges associated with its implementation. Such challenges include the lack of separate internet infrastructure for armed forces and civilians, difficulties in modifying medical devices, and the complex nature of the internet environment.

To develop the digital emblem, the ICRC consulted with 44 experts, initiating the project in 2020. This endeavor holds promise in reducing misuse through technological advancements. However, it is important to note that the authority to authorize the emblem’s use in physical space lies with the state, as stipulated by the Geneva Convention.

Both state and non-state actors are expected to comply with the conventions, including the digital emblem. The Red Cross actively appeals to non-state actors to adhere to International Humanitarian Law (IHL), as violation of IHL could be deemed a war crime.

In conclusion, the digitalization of the emblem is deemed vital in order to enhance protection in both physical and digital realms. The objective is to educate non-state actors on the significance of respecting IHL and the emblem to ensure the safeguarding of humanitarian efforts. Nevertheless, it is imperative to further assess the challenges and potential risks associated with the digital emblem.

Francesca Bosco

The Cyber Peace Institute was established with the goal of mitigating the adverse effects of cyber attacks on people’s lives worldwide. It plays a crucial role in aiding vulnerable communities to stay safe in cyberspace, conducting investigations and analysis on cyber attacks, advocating for improved cybersecurity standards and regulations, and addressing emerging technological challenges.

The healthcare sector is identified as a particularly vulnerable sector to cyber attacks, which often lead to the loss of data and disruption of services. The Cyber Peace Institute has a platform that documents cyber attacks on the health sector, highlighting the breach of over 21 million patient records and significant disruption to healthcare services. This demonstrates the urgent need for improved cybersecurity measures within the healthcare industry.

Cyber attacks during armed conflicts have a significant human impact as they threaten crucial services and spread disinformation. The borderless nature of cyberspace allows cyber operations to extend beyond belligerent countries, hitting critical infrastructures in third countries. This highlights the need for increased international cooperation and measures to protect critical services during armed conflicts.

Risks in the medical and humanitarian sectors include the increasing accessibility of sophisticated malware and ready-to-use cyber tools, as well as the blurring line between state and non-state actors. This presents a challenge as it lowers the barriers to entry for malicious actors and makes it difficult to attribute attacks to a specific entity. Thus, it is essential to develop strategies to effectively address these risks and protect vital infrastructures.

Education is identified as a vital component in understanding the importance of protecting healthcare and humanitarian organizations from cyber attacks. By educating different stakeholders, including professionals and the general public, they can better comprehend the potential consequences of not safeguarding these crucial infrastructures.

Francesca Bosco, an advocate in the field, emphasizes the need for analyzing the human impact of cyber attacks and the long-term consequences in order to underline the importance of protecting vital infrastructures. Efforts are being made to standardize a methodology to measure the societal harm from cyber attacks. The aim is to monitor responsible behavior in cyberspace and assess the societal costs of not adequately protecting vital infrastructure.

Basic cyber hygiene activities and information sharing are identified as critical elements in mitigating cyber attacks and improving cybersecurity. It has been found that 99% of cyber attacks can be stopped by implementing basic cyber hygiene practices. Additionally, full cooperation in terms of information sharing is needed to effectively trace and address cyber incidents, as seen in the case of the healthcare sector.

Civil society organizations are recognized for their close proximity to the people impacted by cyber attacks and their firsthand experiences. These organizations can play active roles in advancing knowledge and efforts in mitigating cyber attacks, working in collaboration with other stakeholders to address the challenges posed by cyber threats.

Sharing defense resources and enhancing cyber capacity building are recommended as important measures for protecting critical infrastructure. This can be achieved through initiatives such as the Global Cyber Capacity Building Conference, which focuses on the protection of critical infrastructure from cyber attacks.

In conclusion, the Cyber Peace Institute is at the forefront of efforts to mitigate the harmful effects of cyber attacks globally. Through its various activities, such as aiding vulnerable communities, investigating cyber attacks, advocating for better cybersecurity standards, and addressing emerging technological challenges, the Institute works to protect vital infrastructures, such as healthcare and humanitarian organizations. It is evident that education, cooperation, and capacity building are essential elements in effectively addressing cyber threats and safeguarding critical services. By understanding the human impact and long-term consequences of cyber attacks, there is a growing recognition of the need to protect vital infrastructure and develop strategies to mitigate cyber risks.

Tony

Tony highlights the necessity of a digital emblem in order to uphold International Humanitarian Law. This emblem should protect the end system data, its processing, and the communications involved. Moreover, it should be visible to those individuals who are committed to complying with international humanitarian law. Significantly, the digital emblem should not burden the operations of humanitarian organizations.

Tony suggests implementing the digital emblem by leveraging existing Internet infrastructure and technology. The internet has the capability to employ cryptographic methods to safeguard fundamental data. Critical data, such as naming and addressing required to operate the internet, can be protected through technology that is already established.

To implement the digital emblem, Tony proposes an implementation approach using secure DNS and secure routing. This approach involves inserting a special text record within the DNS record, which is signed by a trusted entity to validate the emblem. Additionally, visible blocks of address can be segregated to accommodate humanitarian traffic flows.

International cooperation is crucial for the successful implementation of the digital emblem. Nation-states have the responsibility to regulate the use of the emblem, and working through existing organizations like the ICRC can facilitate the process.

Tony argues that regional internet registries should take on more responsibility for verifying the authenticity of humanitarian missions, rather than relying solely on ICANN. This is particularly important because regional internet registries are better equipped to verify humanitarian organizations compared to ICANN, particularly in countries where there is a close coupling between the internet operator and the state, such as Egypt and China.

Coupling the verification of the humanitarian emblems with the operations of the internet can make the system more scalable. Tony suggests using DNS to propagate the emblem, rather than verify it, to make the process manageable. This can be achieved by having a local ISP or an organization like the American Red Cross sign the digital record within the DNS record.

The control of internet operations by the state is not universally applicable, and it varies among countries. In the United States, the government has little involvement in how names and numbers are allocated, whereas in countries like Egypt and China, the internet operator and the state have a close coupling.

There is a concern about the risk of unintended consequences and disruptions to humanitarian missions resulting from cyber attacks. Unintended denial of service attacks can occur if focus is only placed on the attacked entity, and nation-state attacks often focus on the infrastructure rather than individual users.

Protective measures should rely on internet infrastructure for third-party queries, instead of solely relying on potentially attacked endpoints. This proposed solution aims to mitigate the risks of cyber attacks by utilizing the infrastructure of the internet for third-party queries.

While basic cyber hygiene is essential, it is not a complete solution to cyber attacks. Existing technology can mitigate many damaging attacks, but sophisticated adversaries and high-value targets require more comprehensive defense strategies. To address this, authorities, whether legal or ethical, should promote and normalize cyber hygiene practices.

Transparency and collective action can help expose and deter malicious activity. Initiatives tied to scalable internet infrastructure can be repurposed for monitoring and responding to digital threats. Adversarial activities against sensitive institutions like hospitals and public utilities should be observable and provokable.

The current mechanisms and applications for protecting humanitarian operations in conflict zones should be expanded to other environments, even in peacetime. Ransomware attacks on peacetime institutions, such as hospitals, pose significant threats that current cybersecurity measures may not adequately address. Implementing existing security mechanisms sector by sector is challenging and impractical.

In conclusion, Tony emphasises the need for a digital emblem to respect International Humanitarian Law. Implementing this emblem by leveraging existing Internet infrastructure and technology, using secure DNS and secure routing, and ensuring international cooperation are vital for its success. Regional internet registries should play a larger role in verifying humanitarian missions, and coupling the verification process with internet operations can make the system more scalable. Cyberattacks pose a risk to humanitarian missions, and protective measures should rely on internet infrastructure. While basic cyber hygiene is important, more comprehensive defense strategies are needed for sophisticated adversaries. Transparency and collective action can help deter malicious activity, and mechanisms for protecting humanitarian operations should be expanded to other environments.

Session transcript

Moderator – Michael Karimian:
There we go. Hopefully everyone can hear me. So, distinguished guests and esteemed panelists, good morning, good afternoon, good evening, or good night, depending on where you are joining us from. Welcome to this important session on promoting the digital emblem. I am Michael Karamean, Director of Digital Diplomacy for Asia and the Pacific at Microsoft, and I have the privilege to serve as moderator today. In today’s digital age, the concept of the digital emblem represents a critical innovation in humanitarian protection. Much like the Red Cross, Red Crescent, and Red Crystal emblems have safeguarded lives during times of conflict in the physical world, the digital emblem aims to extend these protections into the digital realm. It is intended to be a symbol of hope and security, ensuring that medical and humanitarian entities can continue their life-saving work without the fear of malicious cyber operations. Importantly, the digital emblem concept is an acknowledgment of the evolving nature of warfare and conflict, where cyber operations play an increasingly impactful and harmful role. It emphasizes the criticality of upholding the principles of international humanitarian law in the digital space, where the consequences of attacks on hospitals and humanitarian organizations can be just as devastating as physical assaults. Our esteemed panel of experts today will delve deep into the technical, legal, and humanitarian aspects of the digital emblem. They will explore how it can be developed, deployed, and upheld, ensuring that it becomes a recognized symbol of protection in an increasingly digital yet vulnerable world. As we embark on this discussion, it is important to recognize that the digital emblem has profound importance. It not only signifies a collective commitment to safeguarding the vulnerable, but also highlights the intersection of technology, cybersecurity, and humanitarian protection. Through this dialogue, we aim to advance our understanding, share insights, and collectively work toward a more secure and resilient digital future. So, let us begin this exploration into the digital emblem concept, its significance, and the path forward. Together, we can hopefully promote digital peace and protect those who need it most. To help us achieve that goal, I am pleased to say that we are joined by Felix Linker, researcher at ETH Zurich, who joins us online. Dr. Antonio DeSimone, chief scientist at Johns Hopkins Applied Physics Laboratory, who also joins us online. Francesca Bosca, chief of strategy and partnerships at the Cyberpeace Institute, who is also joining us online. And in person, we are joined by Koichiro Komiyama, director of the Global Coordination Division at JPCERT, and also affiliated with APCERT, and Mauro Vignetti, advisor on digital technologies of warfare at the ICRC. So, to help set the scene, Mauro, please let’s begin with an overview of the digital emblem. Yeah, thank you very much,

Mauro Vignati:
Michael and everyone. So, I’m going to give an overview about the emblem, also the physical one, just to bring everybody at the same speed by discussing the digital emblem. So, the Red Cross, Red Crescent, and more recently, the Red Crystal have been symbols of protection. So, meaning that facilities, people, vehicles showing this emblem should not be attacked, they should be spared by the consequences of armed conflict. So, this is why the international military law requires part of the conflict to ensure the visibility of the emblem, so that combatants can identify the persons and the objects that they must protect and respect. And we’re going to see that this is a very important aspect, also in the digitalization of the emblem. So, the rules on the use of the distinctive emblems, or signals, are governing in the Annex I of the first additional protocol of the Geneva Conventions of 1977. So, and there is an article, it’s an article, it’s the article one of the Annex, that mandates the ICRC to see whether new systems of identification should be adopted. And that’s why we’re here to discuss the project of the digital emblem, because we think it’s fundamental to have a digital version of the emblem. So, the emblem marks medical personnel, medical unit, vehicle, and organization like the Red Cross and the Red Crescent organization. So, and there are two use of the emblem. So, there are, there is the distinctive use of the emblem. So, so to say it’s always on, in the way that organization like the International Committee of the Red Cross and the National Societies can use the emblem at all time. And then there is another use of the emblem that is the protective use. This means that the selected, dedicated entities can use the emblem only during armed conflict. This was a very important point because the emblem was in the digital space, must be flexible in this respect and in use only during armed conflict. So, that said, so it’s a general review about the emblem and, and we’re gonna go into the detail why we need to digitalize the

Moderator – Michael Karimian:
emblem to have a digital version of it. Thank you. Thank you, Mauro. So, today’s session will have three segments. For approximately 30 minutes, our speakers will frame the discussion from their perspectives. We’ll then spend approximately 20 minutes with the speakers having a conversation among themselves on the technical, legal, and humanitarian aspects. And we aim to dedicate 30 minutes for audience Q&A. So, please start to think of your questions now. In terms of framing the discussion, Francesca will turn to you first and it’ll be great to have your overview of the CPI’s role in protecting vulnerable entities in cyberspace. Overview of the trends in healthcare, sorry, cyber attacks against hospitals and medical facilities, including in times of conflict, and also importantly the role of neutral organizations in promoting digital peace. So, Francesca, over to

Francesca Bosco:
you. Thank you so much, Michael, and it’s a pleasure to be here with you all. Can you see my screen? We can, thank you. Great, thank you. So, thanks a lot, Mauro, for the excellent introduction in framing the discussion around the digital emblem. Let me take a step back, or better, to share some reflections on the work that we’ve been doing at the Cyber Peace Institute, specifically to understand the context. of the why it’s so important to protect civilian infrastructure like the healthcare sector and humanitarian organizations, both in peace time and during armed conflict. So let me share some also, some reflections on how the Cyber Peace Institute was created and is operating to try to understand some of the considerations that I hope will help the discussion further. So recognizing that our digitizing societies are particularly vulnerable to cyber attacks and often lack the resources to strengthen their cyber security. The Cyber Peace Institute was founded in 2019 in response to the escalating dangers posed by sophisticated cyber attacks. The mission, the overarching mission of the Institute is to mitigate the adverse effects of cyber attacks on people’s lives worldwide. This is extremely important because this will bring us to the focus of the Institute, which is to understand the human impact of cyber attacks. We accomplish this through key synergistic pillars that you can see here. So first, we aid vulnerable communities to stay safe in cyberspace, focusing especially on vital sectors as mentioned, like healthcare, non-profit and humanitarian organizations. Second, as you might see, we conduct investigation and analysis on cyber attacks. Our cyber threat analysis team has been focusing on cyber attacks against the healthcare since 2020 and since February, 2022, specifically on cyber attacks in the context of armed conflict. Now we are building the same capability to monitor attacks against NGOs, including humanitarian ones. Then we advocate for improved cybersecurity standards and regulations with evidence-based knowledge. And we complete, let’s say, the cycle by proactively addressing those emerging technological challenges and disruption to the work of humanitarian organizations caused, for example, by artificial intelligence or quantum computing. I wanted to explain this to understand also, I mean, how we came about, let’s say, the analysis that I’m going to offer some insight today for further discussion. All the information and specific data are available on our website and our different platform. As mentioned, I mean, when we think about the healthcare sector, what we did at the institute was that amid the pandemic, we focus on our work aimed by supporting the so-called the most vulnerable, specifically on the unique vulnerabilities of the healthcare sector and the real impact of the increasing numbers of cyber attacks against it. And you can see that we created a fairly unique platform that is called the Cyber Incident Tracer Health. And the platform serves to document cyber attacks. And not only to, you will find the, let’s say, the numbers in terms of like data collection, but also try to understand which are the criteria, which are the metrics that are relevant to understand the real impact that they have on people. So you will see how many attacks per week, so the total record breach, how many countries, but also you will find what it means in terms of, for example, how many days of disruption in hospital and medical facilities, how many people could not get the vaccines because a certain facility was attacked, how many people could not get the proper care, how many ambulances redirected. In total, I mean, and just to give an idea, this has led to the breach of over 21 million patient records, which has leaked or exposed in 69% of the incident. Again, the important aspect is that disruption to patient care endanger lives and create the stress and suffering for patients and medical professionals. And on the long term, it also erodes the trust in healthcare providers. We apply the same capability, we’re currently applying the same capability also to assess cyber attacks in terms of what is happening when civilian infrastructures are attacked during armed conflict. Again, no need to stress it again, but cyberspace is borderless, and so cyber operations go well beyond the belligerent countries to hit critical infrastructure and populations, also in third countries. We have to consider the anonymity of the digital world, so the actors involved in cyber warfare are numerous and diverse, and their true intention are even more complex, let’s say, to define and predict. And again, cyber operations have a significant human impact on population living in conflict. They are threatening crucial services, healthcare is a good example, and also other civilian infrastructure areas. And also there are, let’s say, kind of like a very peculiar dimension about the, let’s say, the digital space, and this is why the emblem is so important. For example, the spread of disinformation can make it harder to distinguish between fact and fiction, both inside and outside of countries in conflict. I would like to basically to stop here, maybe sharing these first insights, and we can possibly continue the discussion further. Thank you so much, Michael.

Moderator – Michael Karimian:
Francesca, thank you very much, and absolutely we can come back to more of these topics in the discussion later on. I think if anything, the pandemic showed in a perverse way that with the severe vulnerability of the healthcare sector there is a need for this sort of collective action together and hence the importance of the ICRC’s leadership in this space. Now moving on, Koichiro, it’ll be great to have a presentation from you or to hear your thoughts on the cybersecurity challenges in Asia and the Pacific, and the insights that you might have into the evolving threat landscape, and of course the importance of global coordination.

Koichiro Komiyama:
Thank you, Mike. And good morning, everyone. My name is Koichiro Sparky-Komiyama from Japan CERT and AP CERT. I think in this session I’d like to represent the technical community in this region, Asia-Pacific. I’ve been working for on-the-ground incident response for like dozens of years, and I’m also a scholar for international relation and related area. So. So from my perspective, I’d like to share with you a few things. First of all, in Asia, states are racing for expanding capacity and capability of offensive side of their cyber capabilities. And for instance, UK think tank IISS recently published a report on the cyber power of 20 major states. And quite a few, some of Asian countries are ranked as, for example, Australia, China. They are the tier two countries where we have only one tier one country, United States. So we have two major players in Asia. And for tier three, we have India, Indonesia, Iran, Malaysia, North Korea, Vietnam. They are all, well, by assessment from an independent think tank, they have well-established offensive cyber capabilities. So there’s an urgent need for a country like Japan to de-escalate the group of militarization of cyberspace. And then talking about Japan itself, we have been refraining to go offensive. Mainly due to our peace constitution prohibit us to use the force, except the case it is recognized as a part of collective defense. So historically, we do not have, and we did not try to equip offensive cyber capability. But that has been changed. That was changed December last year with new national security strategy. Japan also seeking to have an offensive. Well, in our wording, it is active cyber defense, not offensive. But, well, there’s a subtle difference. But anyway, it’s not something we haven’t even tried for the last 50 years. And my last point is we see many damages caused by ransomware attack. And most of those, they are mostly driven by a commercial profit. So they hack, they launch ransomware attack for profit. Now, for last 12 months, we see many successful breach to our hospitals, one of our very critical infrastructure. However, they are usually very strong in protecting their own network. And going back to the emblem, of course, I know it’s not for, you know, it doesn’t have any direct effect to criminals in a peacetime, of course. However, having this type of document and guideline, I expect there’s some pretty. can also put some pressures on criminal groups on what they cannot do for their operation. So that’s my initial contribution and I’m happy to discuss with you for further details. Thank you.

Moderator – Michael Karimian:
Koichiro, thank you very much. Interesting to hear you reference the intention for Japan to introduce active cyber defense as part of the new national security strategy. Of course, different actors always define active cyber defense in different ways. It’ll be interesting to see how Japan approaches it in line with responsible behavior and cyberspace norms and the pacifist constitution. Mara, returning to you, it’ll be helpful to hear more on the ICRC’s role in researching and developing the digital emblem, the importance of addressing the need for extending international humanitarian law into cyberspace, and the insights that you might have on the application of the digital emblem in practice. Thank you very much. So,

Mauro Vignati:
Michael, you and Francesca, you mentioned the pandemic. So this is exactly the point in 2020 when we start to think about the digitalization of the emblem by observing what was happening during 2019 in the pandemic time, but also observing what is happening during armed conflict. So that’s the period we start to research the possibility to digitalize the Red Cross and Red Crescent emblem to signal the protection against cyber operations for medical facilities and the Red Cross and Red Crystal organizations. So to start the project, we define some technical aspects that the emblem should have, a potential digital emblem should have. So these are the requirements that we define. So first one was it must be easy to deploy. So we know that during armed conflict, it’s always difficult to find, it’s already difficult the situation in armed conflict, but it’s also difficult to find IT personnel that is able to work in this domain. So the emblem must be very easy to deploy, like the physical one, also the digital one must be easy to deploy. So it must be able to be installed on a number of different devices. That’s a very important aspect because we know that, for instance, medical devices, they cannot be modified because of different reasons, the guarantee, the functioning of medical devices. So we have to find a way to put the emblem on those devices without touching them, without installing anything on those devices. So we do not have to generate costs for the entities that are showing the emblem. So if we think a medical unit, a doctor that has to show the emblem, he has not… to have a relative cost to deploy and show the emblem. And most importantly, he has to be seen and understood. So the logic of the emblem is from the perspective of the attacker. So when we have an operator running a cyber operation, they have to understand that they are confronted with an emblem. And they have to be able to recognize this is the emblem of the Red Cross Red Crescent. And they have to understand this emblem. And they have to be able to also check the authenticity of the emblem. Not that this is a fake emblem, but this is an original one. And another aspect is the emblem should be used by state and non-state actors. So we see many state actors who are involved in conflict. So not only thinking about states able to deploy the emblem, but also by non-state actors. So on that, we are seeing some challenges in deploying this. First of all, and I think it’s one of the most important challenges, that we don’t have an internet for armed forces. And we don’t have an internet only for civilians. So the infrastructure is mixed. The nature of internet is mixed. And that’s why we need a digital element that can go granular on identifying assets on network. Because networks are intermingled. And we cannot divide. So I’m thinking about cloud infrastructure, satellite infrastructure, and so on. So we can have a doctor that has a computer that should be protected with the emblem that is using a military network that is a target. So we have to think in those scenarios. And then, so the challenge is also the medical devices I mentioned before. And then the environment. So it’s very complex, fluid, dynamic field. So we have a very stressful situation in armed conflict. So we have to be aware of this. And that’s why the digital element must adapt to this kind of field. So that’s why we start to talk with John Hawkins University that we’re going to have later on in this panel, and with ATHZ and the University of Bonn Center for Cyber Trust. And we start to talk with them, and they start to develop. a potential way to digitalize the emblem. Then we consulted, during the last year, 44 experts from 16 countries. And we submitted the ideas that have been developed so far. And they identified benefits and risks in digitalizing the emblem of the Red Cross. So among the benefits, logically, the digital emblem will extend the existing protection from the physical space to the digital world. So this is a very positive aspect. And the emblem will make it easy for operators to avoid arming protected entities. So those are the main benefits resulting from the consultation, but also the risks. So we risk, based on the expert consultation, to increase visibility of sensitive and less protected entities, like hospitals. Knowing that all of the experts reflect on that, saying that nowadays there are already multiple, several possibilities to identify less protected entities, scanning the internet and finding out which IPs, which domain names, belongs to hospitals. So in their opinion, we are not aggravating the situation. We are not increasing, because there are already methods and means to identify those. But we have to keep in mind that putting an emblem on something, someone, an object, could be putting a target on a personal object if the parties do not respect the emblem. And then as a second big risk is the possible misuse. So we know in the physical world, there are several cases of misuse of the emblem. We’re going to see, with the presentation from the two universities, that we can reduce in the digital space the possible misuses through the technology that they are developing. So this is a positive development in this respect. So we published the first report in November last year. So if you are interested. on the website of the ICC, you’re going to find the report. So this is generally how the genesis of the project in this time. Thank you very much Mauro, and you mentioned the role, the issues

Moderator – Michael Karimian:
surrounding non-state actors. During the Q&A, perhaps we can discuss the ICC’s recent principles on non-state actors. I know a question has already been posed on the Zoom platform, I encourage more questions as well, and of course encourage the audience to think about their questions when we come to the Q&A portion later on. Felix, turning to you, ETH Zurich, it’ll be tremendous to hear your thoughts on the technical solution of the Center for Cybertrust to implement the digital emblem, your thoughts on the feasibility and design considerations, and any insights that you might have on the role of technology in protecting medical and humanitarian organizations. Felix,

Felix Linker:
over to you. Thank you for a great introduction Michael, and also thank you to the other speakers for setting the flow so well. So as Mauro said, we were contacted by the ICRC in 2020, and in response to their question of how a digital emblem could work, we developed a system that we call ADEM, which stands for an authentic digital emblem. And in the next minutes, I’d like to give you an overview of the key design concepts that went into ADEM. So first, Mauro mentioned it, an emblem must be verifiably authentic. We looked at this problem more generally and asked ourselves the question, when is the digital emblem trustworthy? And we identified three security requirements in response to that. So as I said, an emblem must be verifiably authentic. That means parties who observe an emblem can check that it is legitimate and develop trust in the emblem itself. Second, a digital emblem must provide accountability. As Mauro said, there can be misuse, but we designed our digital emblem in such a way that whenever parties misuse it, they commit to irrefutable evidence that could be admitted to court, for example, to prove that they misbehaved and to hold them accountable for that misbehavior. And finally, attackers must stay undetected when inspecting the emblem. I put attackers in quality because it’s a bit of a funny attacker model. We are thinking about parties here who are willing to engage in offensive cyber operations, but not when their target has a digital emblem on it. These people must feel safe in using the digital emblem and trust that it doesn’t harm the operations. For example, that it would reveal in other cases that they’re about to attack entities. Coming to ATEM itself, we envision our design to be used by three types of parties. First, nation-states who endorse protected parties, then protected parties who send out digital emblems to attackers. With ATEM, nation-states can make sovereign decisions as to who they do or not endorse. Protected parties can distribute emblems autonomously, and this touches on what Mauro said earlier. This is a means for protected parties to decide individually whether or not they want to show the emblem, whether or not they feel safe to showing it. ATEM was also designed as a plug-in to the protected parties infrastructure. You can just add a device into their networks and it will distribute emblems for you. And for attackers, these parties can verify an emblem as authentic while staying undetected. And critically, we designed ATEM so that it also fits the standard workflow of attackers. Looking more at the technical side of ATEM, we identify parties via domain names for countries, for example via their .gov address, and protected parties as well. For example, let’s say pp.org. Governments cryptographically endorse a protected party, and a protected party, for example, would cryptographically endorse a hospital that has some IP address. In practice, these hospitals have multiple protected digital assets, for example, a website, tablets of the medical staff, or general purpose medical devices that cannot be touched, as Mauro explained. With ATEM, you can deploy an emblem server additionally within the hospital that would signal protection via TLS, UDP, and DNS to aforementioned attackers. This emblem server would distribute emblems that have multiple parts. First, the emblem itself in the center that is a cryptographically signed statement of protection. And this emblem would be accompanied by multiple endorsements. Endorsements from all the nation states that endorse the protected party and an endorsement from the protected party itself. An attacker could learn from this emblem that multiple conflicting states endorse the emblem and thus deem it as trustworthy. This reasoning might be simpler for military units who are bound by AHL. For these military units, it might suffice that they see that a nation state they trust, for example, their own nation state or an ally endorse the emblem. In summary, our design, ADEM, provides three security requirements. It’s verifiably authentic, it provides accountability, and it lets attackers stay undetected. Our design is to appear in a top tier security conference and our publication is accompanied by formal mathematical proofs of security. Currently, we have prototyping ongoing with the ICRC and we hope to deploy ADEM within the ICRC’s network, as I just showed for hospitals soon. If you want to learn more about the digital emblem, I encourage you to follow the QR code on the right hand side or reach out to me via my contact details. And I look forward to the discussion later.

Moderator – Michael Karimian:
Felix, thank you very much. And it is important to note that Felix and Francesca are dialing in at approximately 4.30 AM their time. So real kudos to them and thank you for their generosity. Tony, I think has a slightly better time zone, but still up a little bit late. So turning to you, please, Tony, if we can hear your thoughts on similar aspects as Felix’s presentation, but from the perspective of Johns Hopkins APL. Thank you.

Tony:
Yes, happy to do that and happy to be here. Thank you very much for inviting us to this and also to participate in the larger effort. We, the Applied Physics Lab or division of the university, we have a variety of technical efforts, many focused on protecting critical infrastructure. The project we’re discussing here is actually part of a broader set of activities we have, recognizing that while we are a laboratory, major technology activities, if we expect to have significant impact have to be tied into a legal and even a legal policy and even a social framework to be successful. And so that’s what this is about. We’ve had a longstanding effort to look beyond the technology into the other policy, ethical norms based issues associated with critical infrastructure. And when we discussed with ICRC, some of their objectives for the digital emblem, there was a significant overlap, particularly because within the context of international humanitarian law, we had a fairly specific way of thinking about what needed to be done in order to provide that emblem to the parties that needed to be able to implement it and observe it and respect it. So I’ll tell you a little bit about what we envisioned for the technical solution, but I wanna back up a little bit to kind of our thoughts on what is it that a digital emblem has to do. And this is a recapitulating a little bit of what we’ve heard, but I think the important thing to think about here is twofold. Who is it that has to respect the emblem and who is it that has to observe that set of behaviors? And it’s important that we are looking at actors who would desire to comply with international humanitarian law. So there’s a large class of cyber actors, a large class of cyber attacks. There are hacktivists, cyber criminals, script kiddies who are doing it for fun. And then there are nation states or organized militaries or organized combatants who employ cyber in conjunction typically with other means of power. And those are the types of cyber operators we’re focused on. That’s the nature of the emblem for international humanitarian laws. It applies to those types of actors. And one thing we observe is that if you look at how nation states have employed cyber means in conflict, they typically have fairly broad capabilities and will do things like major disruptions to the internet in order to support whatever it is that they would like to do, suppressing activity within their state or limiting the ability of combatants to operate within their domain. So what that means is from a protection point of view, we can’t just think about protecting the end systems, the data, the processing. We also have to be able to protect the communication. Many of the operations that we look to protect rely not just on the ability to process locally, but the ability to reach back and communicate either for logistics purposes, to receive advice, receive supplies. So the emblem needs to protect both the end system, its data and processing and the communications. And it has to do that with a degree of assurance. It has to do that in a way that’s visible to operators. And then to some of Francesca’s points, it also has to be visible to third parties in a way that doesn’t disrupt the operations of the humanitarian mission. So we were looking for a solution that had those kinds of attributes. It needs to be scalable. It needs to be visible globally. And it can’t be a burden on the operations of the humanitarian organization beyond what they need to do in order to operate on the internet. And in order to do that, what we tried to do was look at how we would leverage the infrastructure that is in place in the internet, rather than looking at developing a new capability that would require new infrastructure. And what we were looking at was the way to leverage what is on the internet today in order to secure the internet. The internet technology has grown the capability to employ cryptographic methods to protect the fundamental data that you need to operate the internet. And that is the naming and the addressing that’s used in order to enable communications. So with that infrastructure in place, we have an asset that we can use that doesn’t require us to roll out new capability in support of the emblem. We leverage what’s out there that gives us the global reach and the scale that we think we need. And a lot of these technologies are well understood. What we have to understand is how to adapt it into this mission, into the mission of supporting a digital emblem. And the fundamental problem, you know, in our opinion, isn’t the technology to protect information on the internet or to indicate your presence on the internet, protecting IP addresses, protecting names as established technologies. What needs to be done is adapting it into the model for how international humanitarian law and the emblem are used. And there’s a very strong analogy with what’s done physically, and I think we’ve touched on some of this. The emblem is understood globally through the good work of the International Committee of the Red Cross and the National Societies, but the the emblem itself is regulated under the laws of each state, and so it’s different in each state. And what has to be done then is to tie the assurance that the emblem is valid to that authority that the state has to determine how to regulate the use of the emblem, which is different in different states. In some places, there’s a very close coupling to the National Societies. In other places, there are state agencies that are responsible for regulating the emblem. But that’s the new connection that has to be made from a technology point of view, and that is all about the ability to use the same cryptographic techniques that are used to protect the Internet, but to protect the emblem. Now that’s the premise for what we’re doing. Let me talk specifically about what we think would be a valid implementation of the emblem that has these properties of global visibility and scalability. What we’ve looked at doing is simply leveraging what’s already in place for secure naming, secure DNS, and for routing for securing the BGP system used for global routing. And what that means is that we have cryptographic protection for that information, for names and addresses. How do we now layer on top of that the cryptographic protection for the emblem? Well, to do that, we can leverage what’s available already within DNS, and we have a prototype running where what we have done is taken part of our DNS namespace at JHU, and as part of our demonstration said that that subset of the namespace is for humanitarian missions. Now the name itself isn’t the emblem because the name is not something that can easily be assured, but what we do in addition to assuring the name, which shows that the name is legitimate, we insert within the DNS record a special text record that is signed by a different entity that is trusted to verify that the emblem is being used properly. And that’s what then has to be tied back to the way international humanitarian law is regulated in the different states and in the different jurisdictions. So that’s the first part of what we’ve suggested, that we use the DNS in order to propagate this information, make that available within the DNS record using standard technology, thereby inheriting the scalability and global reach. But it’s not enough to have names. In order to see what’s happening on the internet, you actually have to focus on addresses and you get an address from the namespace. But if you just relied on that, you’d run into the problem of being able to do that at scale. If you are Francesca’s organization, you don’t wanna have to look for each individual name and collect each individual address. What you’d like to do is operate in a way where the addresses used for these protected missions are part of a distinguished part of the IP address space. And again, that’s something that can be done. It is used all the time in order to segregate some of the traffic for the normal users of the internet. Commercial internet operators, nation states that operate the internet, will distinguish how they handle traffic based on what they know about the meaning of that address, but they do that based on local considerations. What we’re seeking to do is make that context by which you determine how to handle an address global and global tied to international humanitarian law. So the suggestion then is to have designated blocks of addresses that are associated with the humanitarian missions and assigned through the normal process to provision internet services tied to the infrastructure in place for secure routing. What that means is, an entity that would like to have a service supporting a humanitarian mission would number that out of the address space that is designated for humanitarian missions and register that within the RPKI, the Resource Public Key Index that exists for routing and thereby gain the global scaling and visibility for the address so that if an entity like the Cyber Peace Institute would like to see if internet traffic disruptions are affecting humanitarian traffic flows, that is done based on aggregated blocks of addresses so that it’s quickly visible to a third party observer that a state action has in fact affected a humanitarian mission. So those are the core technical concepts. Adopt a naming technology and the means to do secure naming in order to provide a distinguished record that serves as the namespace address and rely on blocks of addresses in order to have traffic flows that can be monitored that are associated with humanitarian missions. All of that secured by standard cryptographic techniques that then need to be tied to essentially a route of trust associated with the way that international humanitarian law is implemented. That last piece really is where we see, excuse me a second. That last piece is where we see a great opportunity to work with international organizations on how that would be done. If it’s done country by country, we again have a scalability problem. Every country would have to be able to, every country, not just every country. Everyone interested in participating would have to essentially touch every country. Better would be to work through existing organizations, national societies and the ICRC or the IFRC or perhaps regional associations that countries might use in order to coordinate how they would implement the regulation of the entity that they do under their domestic laws. That piece again is at the intersection of the technical solution that I’ve sketched out here and the legal policy frameworks that are in place to allow cooperation among nations and then a cooperation with third party entities. So that’s where we are. As I mentioned, what we’re doing now is prototyping focused not on showing you can do this. Like I said, most of this is very well established technologies, but showing that if you do it on the operational internet, it will behave the way you expect. It will have the scaling properties, the global visibility. We will have the ability to bring up or take down an emblem. We have to understand what those time constants are given the way the internet works. And that’s an experiment that we hope to do over the next few months with some technical partners. And in parallel to that, as I say, we should be doing some work with the appropriate bodies that would look at how the nations that are responsible for putting in place regulation of the use of the emblem would cooperate in order to make the assurance of the emblem something that also scales globally. And that’s what I have. Thank you.

Moderator – Michael Karimian:
Tony, thank you very much. I think both yourself and Felix, your remarks have highlighted the technical feasibility of the emblem. And of course, that in itself demonstrates the innovative nature of the emblem itself. And also I think speaks to the credit of the ICRC for taking so much time to go through the due diligence to identify and design how this could be rolled out in practice. In the next 15, 20 minutes, we have. the privilege of engaging in what I hope to be a dynamic conversation among the speakers and that will delve into the technical policy, cybersecurity and humanitarian aspects surrounding the digital emblem. This is intended to be a conversation among the speakers so that they all have a chance to react to and build upon each other’s thoughts. If I can please request for the AV team to have Antonio, Felix and Francesca on the screen at the same time so we can see them simultaneously, that’ll be very helpful. Thank you. So let’s start by discussing the mix of technological and policy dimensions of the digital emblem. I think it’s crucial to consider the involvement of international organizations such as ICANN and the ITU in this endeavor. I wonder if any speakers have any thoughts on how these organizations can play a role in the development and implementation of the emblem and what collaborative efforts we can envision on this front. Felix, I think maybe you have some thoughts on this topic. Yeah, this touches a bit on what

Felix Linker:
Tony said last time. So we, in our design of ADEM, we feature a notion of authorities as well and we are deliberately vague in what these authorities are supposed to be because we don’t know which authorities like the world in the end will agree upon which are the good ones to be endorsed by. So one of these authorities could be the ICRC that endorses protected parties to run humanitarian missions. It could also be organizations like ICANN. But what we thought is that organizations that, for example, control parts of the naming system of the internet are not particularly well suited to verify whether someone that reaches out to them and tells them, hey, I run a protected mission, can you please endorse me? Organizations that are more of technical nature would have a hard time verifying these requests as genuine, is what we feared. So we didn’t want to put any legal burdens on technical organizations, so to speak, and rather focused on nation-states or maybe supranational organizations like the Arab League or organizations that know what they’re doing in the space anyways, like the ICRC. Thank you, Felix. Do any other speakers have

Tony:
thoughts on this? I agree with Felix that it’s really the regional registries more than ICANN. They are responsible for operations, but their role is the validity of the information used to run the internet. They are not in general in a position to verify humanitarian organizations. but that’s not true as a blanket statement. And the difference is it is a state responsibility as the ICRC has written to regulate the use of the emblem. And in many states, there is a very close coupling between the internet operator and the state. And so in that world, under the ICANN and the regional registries, there is a state authority that controls names and numbers. And if that’s the case, then there’s a natural place for that to be the authority that controls the use of the emblem, not as the numbering authority, but as the state authority for the use of the internet. Now that’s not global. In the United States, that’s not the way the internet operates. In the United States, the government has very little involvement in how names and numbers are allocated. But in other countries, Egypt, for example, China, the coupling is very close. So the answer, Michael, to your question is not simple. In some places, you’d expect a close coupling. In other places, it really needs to be distinct, but it does need to be tied into the way that the internet itself is operated, or you have to overlay another global scalable system, for example. So we envision using DNS, not to use DNS to verify that the emblem is correct, but to use DNS to propagate the emblem, regardless of who has signed the digital record within the DNS record that says the emblem is valid. That can be an ISP, as I say, in certain countries. In the United States, it almost certainly would not be. It could be the American Red Cross, or it could be the US as part of the supranational organization. But the general technical solution does have to maintain that separation, recognizing that operationally, to make this scalable, it does have to couple to what’s done by the registries and ICANN.

Moderator – Michael Karimian:
Thank you, Tony. Mauro?

Mauro Vignati:
Yeah, just to give a couple more thoughts on the legal and policy perspective. So the use of the emblem is not decided by the ICRC. So this is decided by states on the Geneva Convention. And this is in the annex ones of the additional protocol. So that’s where we have to operate from a legal perspective. So buying to the technological development, we are working on the legal process. And we are presenting the idea to states so that for the international conference in October 2024, where the state’s going to come to Geneva also to discuss about the emblem. States are aware about the project, and national societies too, and then we look for them to give us the mandate to continue to explore this project. Because at the end of the day, we have to amend the Geneva Convention. So we have to amend the additional protocol or to create a new protocol. So this is the basic legal process that we have to go through to be able to have a digital version of the emblem. So that said, in the offline physical space, then are the state’s authority that decide who is able to use the emblem. So the Ministry of Health or other ministries entitled for this, they decide who internally in their nation or in their territory, because we are always talking about a non-state actor that occupy territory and control territory. So these could be also a non-state actor. They are entitled to give the permission to selected entities to display the emblem for protection. So the distinctive use is already in the Geneva Convention, so the ICRC and the national societies. But at the end of the day, the entity that decide who is able to display the emblem in the physical space is the state. So we try to replicate the same. process that we have in the offline or in the online. We’re going to see the difficulties that we can have in this specific domain, but we would like to replicate exactly the same process for the authorization. Then the implementation is another topic. Thank you Mauro, very

Moderator – Michael Karimian:
helpful. Let’s turn to the cybersecurity implications because of course we must recognize that with innovation comes great responsibility, and so let’s examine the risks and benefits associated with this concept. I wonder if any speakers have any thoughts on potential vulnerabilities that we should be vigilant about, and of course conversely how the overall cybersecurity of cybersecurity posture of critical medical and humanitarian organizations can be enhanced by the emblem, but also recognizing that in a world where cyber threats evolve and sometimes in predictable ways, sometimes in unpredictable ways, what proactive and best practices can we put in place to safeguard these vital systems? Would anyone like to start? Kojiro, please.

Koichiro Komiyama:
So I don’t have a clear answer for the question, but to protect the, or talking about the protecting the, for example, infrastructure at the hospital or medical system, so this is more like a question to Felix or Antonio. You mentioned the ADEM or the implementation of digital emblems right now is, it can sign on DNS domain name or IP address or TLS, DNS. Could it be possible to sign like individual files or medical, the physical systems that are used in factory or hospitals?

Felix Linker:
Felix, you have your hand up? Can I jump right in? Yeah, great. Yeah, so we need to distinguish two types of, two parts of ADEM. just talking about my design now, or our design. So there is, for one, what you say that is protected, right? Like how you speak about the entities that are protected, in which direction you point. And what we use in Atom are IP addresses and domain names for that, right? This is how we identify an entity that is protected. And then TLS and UDP and DNS are our mechanisms by which we give someone the emblem, right? And then this emblem includes the pointer, we give it via, for example, UDP, and then this emblem says, ah, this is the protected IP address, this is the protected domain name. Now, a colleague of mine is currently working also on local emblems, where the idea is that malware that infested some device could check whether this device is protected, or whether parts of this device are protected. In the work that I presented, we focused on the network level, and on the network level, we thought it only makes sense to talk about things that you can also see from the network level, right? We found it would be kind of like, what would a verifier do with the information? Oh, like looking at their notes, file f.txt on this computer is protected, allegedly. But I mean, I have no access to this computer, right? What am I supposed to do with this information, yeah? So on the internet, we wanted people to only say, something is protected that they can also recognize as that thing that is protected. But for local emblems, we are looking on future work, yeah. And this, for example, would target especially the devices of medical staff, because not every penetration happens through the network layer, right? It could be malware in a malicious email attachment that just gets sent out en masse, right? And then the malware happens to find itself, wake up within the hospital network. And we also want to cater those designs, or those problems, rather, not designs.

Moderator – Michael Karimian:
Oh, thank you, that’s great. So, I’ll, Rik.

Tony:
Let me make, can I make one comment on some of the risks? We worried a bit about unintended consequences, and what we have to be careful of is not to create an emblem in a way that itself potentially causes a disruption to the humanitarian mission. And really, the important thing here is to think about how a third party, not the cyber actor, how a third party would observe that, the emblem was being respected. What we wanted to avoid was depending on the humanitarian organization itself to field a query from an arbitrary third party in order to avoid the potential for an unintended denial of service attack. The scenario to think about is, you would like to be able to observe a cyber attack in progress. If the only way to do that is to query the attacked entity, what you are doing is focusing traffic on the attacked entity. That’s how unintended denial of service happens. There’s no way to check for malware on a machine without checking the machine. But given what we have seen, that nation-state attacks typically are focused more on the infrastructure than on the individual user. We want to make sure that the observation of attacks on the infrastructure don’t depend on observing the endpoint. I’m talking about a set of mechanisms that have actually manifested many times on the Internet with the loss of certain critical capabilities because of a focused overload on the endpoint. You can imagine that kind of thing happening if all the news organizations in the world or all of the third parties that care to monitor compliance with international humanitarian law, address a endpoint that is intended to be protected. That’s a little aspect of this that is still a concern to me. Our solution tries to mitigate that by relying on Internet infrastructure to query for third parties. But there’s nothing that prevents those third parties from actually, now that they know where the attack is manifesting, from actually focusing their attention on it, unintentionally disabling the humanitarian operation.

Moderator – Michael Karimian:
Thank you, Tony. Kojiro?

Koichiro Komiyama:
Just a very quick comment. But I do agree with, or I strongly believe that the local environment is something. We really need to implement this concept because the more a system is critical is, those system tend to be completely offline or not connected, or doesn’t use the global IP address spaces, do not associate with any domain name and others. That’s something I need to see your future proposal.

Moderator – Michael Karimian:
Thank you, Kojiro. Francesca, if I may put you on the spot at 5 AM your time. I know strategic foresight is a speciality of yours. I wonder if you have any thoughts on where risks to the medical and humanitarian sector might go in the future, and how we can proactively mitigate those risks.

Francesca Bosco:
Actually, can I share a reflection that was, I think, a connection point across different aspects that were mentioned, starting from what Mauro was mentioning in terms of one of the key requirements of the emblem is that it needs to be understandable by the different parties, let’s say. Let me share specifically also to address your point, Michael, in terms of we chart the evolutions in cyberspace that we are seeing. I’m sharing an evolution that we are all aware about, for example, the kind of like civilization of conflict, for example, that we’ve seen and why the emblem is so relevant. More than an evolution in terms of technology, I would like to share an evolution, which is a combination of, let’s say, technological disruption, like, for example, the availability of certain tools. And I’m thinking, for example, about the accessibility of harmful and sophisticated malware, for example, the diffusion of ready-to-use cyber tools that are accessible online, link-to-sold, and so they lower the barriers of entry for malicious actors. One of the key elements that Mauro mentioned before is that the emblem needs to be understandable also by the attackers. And here we’ve been talking more about, let’s say, the technological vulnerability, but let’s also think about the human vulnerabilities, let’s say, in terms of lowering the barrier to entry means also that, again, as we’ve seen, let’s say there is a blurring line between the state and non-state actors, the complexity, clearly, of the, I mean, the attribution of cyber attacks and the increased complexity of having civilians, for example, engaging in cyber operations. This is to say that one of the problems is also understanding the real impact that certain actions might have. What we have observed is, for example, that there is a combination between, for example, state-sponsored actors and activist collectives that usually conduct more basic attacks and focus on disruptive effects, but you can never completely, let’s say, foresee the spillover effect or without fully understanding the consequences that their actions might have, often because they don’t understand the full impact, basically, that they might have with their actions. So I think this is an interesting evolution, let’s say, in cyberspace, where, again, to Mauro’s point in terms of the value of the digital emblem is indeed something to consider. And let me also allow another comment, which was also, I was seeing some of the comments in the chat about the education. I think that the education needs to go in different directions. Again, going back to why it’s important, let’s say, to protect healthcare organisations, institutions and facilities, but also, at the same time, humanitarian organisations. Before understanding, let’s say, why it’s important to protect, often the easier argument is to offer concrete examples of what it means if we’re not protecting them. And we’ve seen this. We have not necessarily learned from that, but this needs to go across, let’s say, the different stakeholders involved. I started with the malicious actors, but then let me go back also to what the, let’s say, which are, let’s say, the ones that need to decide on the emblem, as Mauro was mentioning, are states at the end of the day. Also, in terms of like states, we need to educate in terms of which are the real consequences and the real impact of attacks. And to this end, one of the work that we’re currently doing is also analysing, basically starting exactly with the work that I was mentioning on the healthcare, to understand the real human impact, but also to foresee potential consequences on the long term. We started doing this work by which we are working on a standardised methodology to measure the societal harm from cyber attack and monitor also the responsible behaviour in cyberspace. And to the points that have been made, this needs to be applicable in peacetime, in armed conflict time, and be able to assess which are the costs that we are paying as society if we are not protecting vital infrastructure like healthcare and humanitarian organisations.

Moderator – Michael Karimian:
Francesca, thank you very much. We now have approximately 22 minutes for audience Q&A. For anyone in the room who has a question, if you could please approach the microphone at the stand. I don’t say that to make things awkward, but just it is important for accessibility and so that questions are captioned on the screen as well. But just to help kick things off, there is a question in the Q&A chat on Zoom which I will pose. It is actually a very helpful big picture question. Then we can zoom back in. The question comes from Aliou Shabashi. They ask, can we stop cyber attacks in all sectors by investing a huge amount of funds for developing highly sophisticated software tools slash systems, or are there other means to at least minimise cyber attacks that harm countries? It is a big picture question, not just specific to the digital emblem. It helps us expand the conversation on cyber security more broadly. If any of the other speakers have thoughts on this, I will just quickly mention the Microsoft perspective. At Microsoft, we talk about five specific actions which are recommended that are taken. One, this is true for individuals and systems administrators, is to apply multi-factor authentication. I know that can sometimes seem very annoying, but it does make an enormous difference, as studies have shown. Secondly, apply zero-trust principles, that is specific to systems administrators. Extend detection and anti-malware software and solutions. Keep up to date, in other words, patch systems and use the latest available versions of software and protect data, ideally through encryption. Studies have shown that 99% of cyber attacks can be stopped by those basic cyber hygiene activities. I would also encourage tech and telco companies to join the Cyber Security Tech Accord, which is a coalition of approximately 150 members who have committed to best practices and principles of responsible behaviour in cyberspace, as well as the Paris Call for Trust and Security in cyberspace, which actually applies to all sectors. It is the largest multi-stakeholder initiative to advance cyber resilience. I would encourage anyone to engage with Francesca’s organisation, the Cyber Peace Institute. Does anyone else have any thoughts on this? Francesca, I see your hand is up.

Francesca Bosco:
I was waiting for this moment. Because actually, when we worked on the cyber incident to trace our health, in full transparency, we started receiving many requests. like, can you do it also for the banking sector, for example? Can you do it also for other vital infrastructure? On purpose, we decided to focus on all civilian infrastructure. And so we started looking into that. So I get the point. So I’m talking here more about like understanding the full landscape. I’m not gonna go into the weeds, let’s say, of the definitions and let’s say the landscape of different laws and regulations that apply that are making also difficult, let’s say, to do some proper collection work. But let’s stick to our own experience and to answer to the question, would the funding be enough from a technical standpoint? And I spent all my life in cybersecurity. I would say, no, stopping, let’s say, cyber attacks worldwide, not possible. But on the mitigation side, indeed, there’s work that can be done. You mentioned, you started basically already answering, Michael, in mentioning, I mean, basic cyber hygiene. And to me, this should be kind of like the minimum requirements, let’s say, of all society education. But the sticking more in terms of like what the different stakeholders can do. I think there’s one basic point, which is full cooperation in terms of like information sharing. One of the challenges that we encountered, for example, in the cyber incident trace of health was to collect the data, analyze the data, and also share the data among the different partners. So information sharing is still a challenge. And there is one part which is also related to then how to transform the knowledge into, let’s say, palatable and understandable knowledge that can help the international community to advance the mitigation efforts, notably when it comes to, for example, accountability. But also I’m thinking in terms of like the active role that civil society organization or non-state actors, Michael, you mentioned the tech accord, for example, or civil society organization like us and like many other attendee, for example, the IGF and for sure in the room can play a role because they are the ones that are often either impacted or they are the last mile, let’s say, very close to the people that are impacted by cyber attacks. So to understand, again, the consequences and for potentially advancing knowledge for the mitigation efforts, we need to have this constant dialogue. And then the third part that we have not discussed so much about, but in the end, it’s also, I mean, the framing of the conversation, which is protecting the protectors, meaning sharing also defense resources, because there is one part which is the information sharing when it comes to the attacks, but then there is also, okay, so what we can do about it and therefore how we can mitigate. Enhancing cyber capacity building, there are different efforts in that regard. I would like to mention there is going to be a high level meeting in Ghana at the end of November, the Global Cyber Capacity Building Conference. I’m mentioning this because this goes also into the mitigation effort side and that there will be also one focus specifically on protection of critical infrastructure, both in, let’s say, I would say developed and in developing countries. But then also, again, sharing the knowledge, the good practices, and also sharing active, let’s say, defense initiatives. To this end, and considering the humanitarian context, we launched the Humanitarian Cyber Security Center, which is a sort of like umbrella platform by which we are collaborating with different entities exactly to go, I mean, hopefully to stop cyber attacks, but especially to mitigate the impact of cyber attacks specifically on humanitarian organizations, because they are the ones, again, that they are protecting society as a whole.

Moderator – Michael Karimian:
Thank you, Francesca. Tony, your hand is up.

Tony:
Yeah, I just wanted to, first, Michael, very much endorse your points about the importance of some basic cyber hygiene. Many, many of the kinds of attacks you see that are very damaging, we have the technology to mitigate, and it’s just not done. Having said that, I think we can’t count on a technology solution to these problems because some of the adversaries are so sophisticated, some of the targets are so valuable that there has to be more than a technical solution. And that’s one of the things that got us started down this path. We think there’s a lot of value to exposing malicious behavior and looking for collective action, which is one of the reasons why we’ve tied a lot of the mechanisms we’ve used specifically for the IHL application to general mechanisms available on the internet because IHL is very important but very limited to the humanitarian operations in conflict. So you wanna have a solution that works in that environment, but you’d like to be able to extend it under different authorities into other environments. And authorities could be legal authorities or it could just be ethical or norm-based behavior that says, we will be able to observe that there seems to be hostile activity against a hospital, not in conflict, a hospital or a public utility. And to do that, you have to make, you have to provide some more transparency so those who are interested in watching know what they’re seeing. And again, to do that globally and scalably, you have to tie it to the scalable infrastructure that’s in place. You can’t hope to do that sector by sector and still scale. And that’s one of our motivations to try to tie what we’re doing to the infrastructure that’s in place that can then be repurposed for these purposes. IHL, very good special case, but would not address, for example, ransomware at a hospital in peacetime. That’s not an IHL problem, but it’s very much an important problem that could be solved by looking for those same kinds of bad behaviors.

Moderator – Michael Karimian:
Tony, thank you very much. Again, in terms of questions in the room, please do approach the microphone, which is on that side to my right, if you’re looking at the screen. Yasmin, I believe you have a question, please.

Audience:
Hi, it’s a bit awkward to be standing in front of a microphone, but thank you very much for this very interesting and fascinating panel. I’m Yasmin, I’m a researcher at the UN Institute for Disarmament Research. So I do have a few questions, so I hope you bear with me. First on the question of offensive cyber capabilities that are being enhanced by AI. I know that there’s a lot of hype around it, but fact is that there will be cyber capabilities that are increasing in speed, even without automation and AI. And I was wondering how the digital emblem solutions would deal with issues surrounding the need for the emblem to be verifiable and in an authentic way, but at the same time, how do you deal with the increase of speed of the cyber capabilities that might not even take the time to verify the authenticity of these emblems, or they don’t even care about the emblems in a way. And second is my question of surrounding the appetite of states and sort of sub-state level organizations and agencies for these solutions. So obviously I’ve heard a lot about your efforts at socializing the idea, which I think is great, but at the same time, how much appetite do you see concretely at the moment and what sort of incentivization have worked so far? Because I saw, I think it was just yesterday, a couple of days ago, I saw an article about, for example, the activists in Russia and Ukraine who actually pledged to sort of lower, like de-scale the level of cyber operations that they’re conducting. But at the same time, how would you incentivize, for example, activists that are less organized in these groups to respect solutions such as the digital emblem? And yeah, I think that’s about it, right? Because I’m aware of the limitations.

Moderator – Michael Karimian:
Yasmin, thank you very much. I know we have more questions, and so we can… good if we can have the questions bunched together and then allow the panelists to respond in whatever makes most sense for them. So another question

Audience:
please. Sure, so hello my name is Glyn Glasser. I’m actually with the Syravese Institute. Hi Francesca. But we don’t work directly together so I’m not a plant. My question actually follows on quite well from this last one about incentives. I’m wondering given problems around attribution that Francesca mentioned, would you foresee kind of fewer state actors being motivated to respect the emblem given that there’s maybe an easier or higher probability that they could, the emblem could be violated without the attack being attributed to a state? That’s my question, thank you. Thank you. It looks like we have a third question. Hello, thank you very much everyone. This has been really interesting. I didn’t actually know about this proposal. I’m Jess Woodall and I work in policy and national security for Telstra, which is Australia’s incumbent ISP and telco provider. So it’s been really fascinating and I have a background in international relations so this really hit me. A couple of kind of observations and then a question. I think just to kind of add to what kind of Sparky was saying, I think there’s a real kind of need for this. Like we have excellent kind of visibility on the targeting in the Asia Pacific region given our kind of network and this is a real threat. This is stuff that is happening now. There’s hospitals being hit by nation-states that we can see kind of almost every day. So there’s, you know, from the outset say there’s a case for this and it’s really interesting. I think to kind of answer the question before my question, the first question, what I think you might say is like the malicious kind of criminal community is very self-regulating. So they will go after people who target people that they perceive as soft targets. Like they don’t like that amongst their own community. So whilst this is kind of primarily targeted at nation-states, you might even see that trickle-down impact within the criminal community itself. So yeah, I think that there might be broader kind of impacts than what you’ve even outlined here. On the kind of issue of validating kind of who is adhering to the emblem, because I’m a real kind of, you know, how do we implement this? This is great but what will it look like in reality? Like how do we roll it out? How do we do it? You could even look to ISPs because we can see, we have really good knowledge of who the key nation-states are that are operating in our jurisdictions, what their C2s are, what their infrastructure is. So if you were to implement something like this, you could reach out to kind of those organisations and be like, okay, is this actually being adhered to? Are people following these kind of rules? And we could give you kind of some insight, you know, is that happening or is that not happening? So yeah, my question is like, do you think that there’s kind of a role for, you know, ISPs and that kind of situation to help validate that people are adhering to, you know, an emblem type

Moderator – Michael Karimian:
scenario? Thank you. Thank you Jess, tremendously helpful. So just to briefly summarise there, we’ve had a question on how to deal with the implications of AI empowered attacks but also AI empowered defence, the appetite for states here and similarly how we can ensure that states respect the emblem. How do we think about knock-on consequences of the emblem and the role for ISPs? piece. We have approximately six minutes left. So if I could encourage our speakers to exercise some brevity, that would be great. Who would like to go first? Felix, I see your hand is up.

Felix Linker:
Yes, I hope I can be brief. I’ll do my best. So I actually would like to comment on all of the questions or parts of them. So in the context of the question regarding AI, it was like, how do we then even deal with attackers who might not even verify the emblem as authentic? And here I think it’s important to recontextualize the emblem. So the emblem is a mechanism that aims to reduce cyber attacks, but only by design from those people who verify it and pay respect to it. So I think it’s important in all discussion to focus just on these actors, because otherwise there is no point and there’s nothing we can do. Regarding the last question, I appreciate that the second question was already answered by the person asking the question themselves. A role that we were exploring for our design in general, not regarding ISPs, was because our design is so active, it functions like a heartbeat protocol, right? Emblems are just sent out regularly or not. We were wondering if monitors that regularly, but not too often, check whether these emblems are actually sent out to be able to attest, for example, to other people. I mean, you say you didn’t see the emblem, but look, we saw how it was sent out. It was not dropped. I’ve never thought of ISPs taking this role, but it could be one of the possible

Moderator – Michael Karimian:
roles, yeah. Thank you. Thank you, Felix. Four minutes remaining. Who would like to go next? Mauro?

Mauro Vignati:
Yeah, probably on the non-state actor and the incentive for the state actor to respect the emblem. So from the state’s perspective, there is a legislation that they signed, or other conventions, so they should comply with the Geneva Conventions if they’re going to sign this amendment or the new protocol. So they are bind by law. Knowing that inside the space you can be a little bit more anonymous than the physical one when you do operation, it’s one thing. We have to test the emblem when it’s going to be out there. But we tend to think that countries that are respecting the physical emblem will also be in respect of the digital one. Another story is about the non-state actor. So we published a couple of days ago in the European Journal of International Law an article about eight rules that non-state actors should respect. Those are not new rules. So some newspaper thought that we are doing a new Geneva Convention or new commandments in this respect. Those are just rules based on IHL, so rooted on IHL, and we call non-state actors to respect IHL. We formulate in a little bit new way because of the recent conflicts, but those rules are rooted in IHL. So what is the goal is to talk to, through the publication of this rule, to talk to those non-state actor and to ask them to respect IHL and not to attack civilian objects and not to attack civilian people and so on and so on. So you can find this on our blog and on the European Journal. So through this work we are doing, we are teaching those people what is IHL, what is the respect of IHL, and then an infringement of IHL could be considered as a war crime. So this is what we try to do. We do in the physical space with armed forces and now we try to do also on a digital space, knowing that the people in the digital space are physically somewhere. So that’s the goal. Thank you Mauro. Two minutes

Moderator – Michael Karimian:
remaining. Would anyone like to be the final speaker for this session? If not, then… Sure, I’ll help to wrap up. You don’t need me to reiterate the significance or importance of protecting medical facilities and humanitarian organizations. We know that. I think this session has helped demonstrate how we further help those sectors to be protected. But of course, as we’ve also discussed, technical solutions are not enough. We need a broad range of multidimensional solutions involving many, many actors. And so I hope that those of you here who have joined us in the room or online have found that this has been relevant to your work and that you can also contribute in ways that are necessary. Of course, Mara will be here. And of course, feel free to email or connect to any one of us if it is necessary to do so. I think we clearly need to have more collaboration. But also, there’s a space for more research and more advocacy on these matters as well. This session alone doesn’t achieve all those goals. But with that, I’d like to thank our great speakers for what I hope has been an interesting session and thank our attendees as well for their tremendous engagement and questions. Thank you all very much.

Audience

Speech speed

189 words per minute

Speech length

931 words

Speech time

296 secs

Felix Linker

Speech speed

163 words per minute

Speech length

1710 words

Speech time

631 secs

Francesca Bosco

Speech speed

151 words per minute

Speech length

2487 words

Speech time

989 secs

Koichiro Komiyama

Speech speed

102 words per minute

Speech length

677 words

Speech time

397 secs

Mauro Vignati

Speech speed

165 words per minute

Speech length

2236 words

Speech time

811 secs

Moderator – Michael Karimian

Speech speed

186 words per minute

Speech length

2465 words

Speech time

796 secs

Tony

Speech speed

168 words per minute

Speech length

3499 words

Speech time

1253 secs

Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Martin Wimmer

The analysis explores various perspectives on digital transformation, sustainability, and the environmental impact of technology. One speaker emphasises the need for a human-centric approach to digital transformation, focusing on improving individuals’ lives and preserving the integrity of the Earth. They draw on the metaphor of the Japanese rock garden to describe our relationship with technology. Additionally, they highlight the importance of considering sustainable development goals and respecting human rights in the use of technology.

Another speaker argues that digitalisation and technology should promote sustainable development goals and uphold human rights. They point out that the German development policy supports the realisation of human rights, protection of climate and biodiversity, gender equality, fair supply chains, and other important aspects. They propose that a just transition to sustainable economies requires a nurturing approach rather than exploitative practices, drawing parallels with being a “gardener.”

However, concerns are raised about the environmental damage caused by artificial intelligence (AI). The negative sentiment towards AI’s impact on the environment is highlighted, suggesting that we are currently in a state of repair. Similarly, the negative sentiment towards the industry’s lack of concern for the environmental impact of their activities is expressed. The argument is made that industry needs to consider the environmental impact, aligning with the sustainable development goals related to responsible consumption and production.

The analysis also addresses the lag in legislation and regulation related to technology. The negative sentiment is expressed, stating that legislation and regulation are often implemented too late. The need for learning and better preparedness for future technologies is emphasised, as well as the positive sentiment towards gaining knowledge from the mistakes of the past.

The role of civil society and non-governmental organisations (NGOs) in exerting pressure is highlighted as a means to drive change. The positive sentiment towards the pressure from civil society and NGOs is expressed, suggesting that their involvement is crucial in advancing sustainability and human rights.

The transformation of the internet is discussed, with references to its evolution from interconnected networks to the oldest among digital technological artifacts. The neutral sentiment is expressed towards the internet, implying that it can neither be deemed good nor bad. Instead, the focus is on the internet’s role as a foundation for various digital technologies, with artificial intelligence being considered the most recent incarnation.

Overall, the analysis highlights the importance of considering sustainability, human rights, and the environment in digital transformation and technological advancements. It also underscores the need for a human-centric approach, better industry practices, improved legislation and regulation, preparedness for future technologies, and the involvement of civil society and NGOs in driving positive change. The varying perspectives shed light on the different aspects and challenges associated with digital transformation and its impact on society and the environment.

Audience

The analysis explores different perspectives on technology development, highlighting concerns, and advocates for a proactive approach. The concerns revolve around the necessity and impact of new technologies, with a particular focus on the harms and risks faced by certain communities. It is noted that significant investments are being made in technology development, but there is a need to address the potential negative consequences associated with these advancements.

One argument raised is the need to rethink the ideology and narrative of growth and development. There is a call to move away from the traditional approach and consider alternative ways of achieving progress. The emphasis is on the importance of responsible consumption and production, as well as considering the long-term sustainability of new technologies.

Another perspective suggests that countries from the Global South are not prioritising sustainability and climate protection over digitalisation. It is argued that these nations should focus on addressing environmental concerns and ensure that technological advancements align with sustainable development goals. This observation highlights the need for a balanced approach to technology adoption and an emphasis on considering the environmental impacts.

The analysis also highlights the existing digital divide, with the most advanced centres of research and development and influential companies predominantly located outside the Global South. This observation points to the power dynamics in the technology sector, indicating that decision-making and agenda-setting are often controlled by entities outside the Global South. This imbalance calls for efforts to bridge the digital divide and empower the Global South to have a greater say in shaping the technological landscape.

In conclusion, the analysis presents a range of perspectives on technology development. It underscores concerns regarding the impact of new technologies, calls for a re-evaluation of growth narratives, emphasises the need to prioritise sustainability, and highlights the inequality in the technology sector. The analysis also suggests that a proactive approach is necessary to address the challenges and potential negative consequences associated with technology development. Overall, it provides valuable insights into the complexities of technology’s role in society and the need for a more balanced and responsible approach.

Siriwat Chhem

This analysis examines the challenges and progress of sustainable AI in Cambodia. Cambodia has experienced impressive economic growth, with an annual GDP growth rate of 7% over the past 20 years. The country also benefits from a young population, with two-thirds under the age of 30. The availability of affordable mobile data and Wi-Fi has accelerated digitisation in Cambodia. Moreover, Cambodia has bypassed card payments and adopted mobile payments directly.

However, Cambodia currently lacks specific policies on AI and sustainable AI. The country is learning from regional models and others’ mistakes to develop its own AI framework. Civil society, represented by AVI Asian Vision Institute, plays a crucial role in Cambodia’s sustainable AI development by providing policy research and capacity building in the digital economy. The institute also focuses on Cambodia’s role as a small state in global governance.

Efficiency evaluation of AI tools and platforms is important as the misconception that AI can solve everything comes at a high cost and can create more problems. Long-term partnerships and continuous engagement are essential in addressing global issues related to AI and sustainability. However, there is a challenge of lack of follow-up and building on discussed points after high-level international conferences.

AI and sustainability are long-term journeys that require careful legislation and policy development. Backtracking or catching up from a regulatory standpoint is difficult due to the established nature of AI and sustainability. It is crucial to consider the broader implications of AI beyond just the technology itself.

In conclusion, Cambodia needs comprehensive policies on sustainable AI while capitalising on its progress in digitisation. Civil society, particularly AVI Asian Vision Institute, plays a vital role in advancing the digital economy. Evaluating the efficiency of AI tools, advocating for long-term partnerships, and focusing on sustainable solutions are crucial for sustainable AI in Cambodia.

Robert Opp

Digitalization and climate change are identified as the biggest global mega-trends. Developing countries bear a disproportionate burden of climate change and face challenges in terms of digitalization. Although digitalization presents the opportunity for positive action against climate change, it is also contributing to carbon emissions.

Environmental regulations and governance should not be sidelined in the pursuit of rapid digitalization. It is important that countries prioritize reducing data centre inefficiency and addressing the issue of e-waste. The global north, as a major contributor to technology development, has a responsibility to ensure that the environmental impact of these technologies is minimized.

Forming alliances in global digital governance is crucial. Initiatives such as the Coalition for Digital Environmental Sustainability (COBE) and the AI for the Planet Alliance aim to foster political alignment and promote sustainable approaches in the digital sphere. These alliances recognize the importance of involving diverse stakeholders including the private sector, civil society, and governments.

The value of local digital ecosystems and capacity building is emphasized for addressing sustainability issues. The global pattern of AI systems often lacks representation and diversity, and local innovators may struggle with financing, skillsets, and access to tools for building locally relevant systems. Strengthening local digital ecosystems can lead to fresh ideas and innovative solutions for sustainability.

Concerns are raised about the lack of representation and diversity in AI systems, particularly generative AI. The underlying data, or lack thereof, and the training processes contribute to this issue. It is important to address this lack of diversity to ensure that AI systems are fair, inclusive, and do not perpetuate biases or discrimination.

Developing countries may face challenges in prioritising environmental issues due to limited resources. However, it is important to recognise that the current pattern of environmental issues was created primarily by countries of the global north. It is crucial for these countries to take responsibility and work towards mitigating their impact on the environment.

Advising country partners to consider environmental implications in digitalization is a key recommendation. Technology should serve people and the planet, rather than exploiting or harming them. The process of digital inclusion and transformation should continue while not forgetting the importance of environmental considerations.

In conclusion, the extended analysis highlights the need for a balanced approach to digitalization and climate change. Environmental regulations and governance should not be overlooked, and alliances in global digital governance are crucial for promoting sustainability. The importance of local digital ecosystems, diversity in AI systems, and capacity building is emphasized. Furthermore, the responsibility for environmental issues should be acknowledged and addressed by countries of the global north. Ultimately, technology should be used as a tool to benefit both people and the planet.

Moderator – Karlina Octaviany

The IGF 2023 Open Forum 37 focused on the topic of sustainable development in relation to ICT technologies, with a particular focus on artificial intelligence (AI). The discussion aimed to address the ecological and social risks associated with the rapid digital transformation.

The panel of speakers included representatives from diverse organisations, such as the German Federal Ministry for Economic Cooperation and Development, UNDP, Mozilla Corporation, ITU, and the Asian Vision Institute. These experts shared valuable insights and examples of initiatives aiming to integrate sustainability in ICT technologies and global digital governance, specifically focusing on AI.

One important aspect highlighted during the forum was the need to limit the ecological impact of digital technologies. The panelists emphasised the growing contribution of digital transformation to greenhouse gas emissions and stressed the importance of ensuring sustainable AI development and deployment. They discussed the need for sustainable aspects to be considered in the development and deployment of digital technologies, including AI, and highlighted the role of digital transformation in addressing the planetary limits of AI.

The speakers discussed various options for action to promote the sustainable development of ICT sectors and technologies, with a specific focus on AI. They proposed measures such as the development and adoption of green ICT standards to support governments and stakeholders in developing sustainable and circular ICT systems. Examples were shared to illustrate how these standards could contribute to reducing ecological impacts and fostering sustainable practices.

Another key topic of discussion was the role of civil society and business in promoting sustainable AI. The panelists discussed the specific challenges faced by communities in Africa and Cambodia in adopting and benefiting from AI technologies sustainably. They highlighted the importance of including diverse perspectives and ensuring that the benefits of AI are accessible to all members of society.

Transparency and measurement were also highlighted as crucial factors in achieving sustainable digitalisation. The need to avoid the risk of greenwashing, where companies make false or exaggerated claims about their environmental practices, was emphasised. The discussion emphasised the importance of accurate measurement and reporting frameworks to assess the ecological impact of digital technologies and ensure genuine sustainability efforts.

The forum concluded with closing statements from each of the speakers, summarising the key points raised during the discussion. There was an overall agreement on the significance of integrating sustainability in ICT technologies and global digital governance, particularly in the context of AI. The forum provided a platform for meaningful dialogue and collaboration among stakeholders to drive positive change towards a more sustainable and inclusive digital future.

Noam Kantor

Businesses have a crucial role in sustainable AI by investing in environment-friendly partnerships. This involves seeking out and investing in or partnering with organizations that mitigate the climate emergency. Tech companies should also consider the ethical standpoint of their investments. Making products more efficient and sustainable is another important aspect of sustainability. Mozilla, for example, allows developers using Firefox developer tools to track the carbon emissions of their software. Civil society plays a significant role in educating the public about the climate impacts of technologies like AI. In Africa, sustainable technological development faces challenges such as limited funding and finance. However, initiatives like Mozilla’s Africa Emrati Project aim to address these barriers. Transparency is vital in sustainability, and digital companies should develop a transparent look at their environmental impacts. Tech regulators also have a crucial role in enforcing against deceptive greenwashing claims. Making sustainability part of product development can drive sustainable digitalization. Overall, businesses, civil society, tech regulators, and individuals all have important roles to play in promoting sustainable practices in the digital age.

Atsuko Okuda

The analysis highlights the need for greener AI and ICT development to address their negative impact on the environment. The greenhouse gas emissions generated by top telecom companies were estimated to be 260 million tons of carbon dioxide equivalent in 2021. This calls for urgent action to mitigate the environmental impact of these industries.

However, digital transformation shouldn’t be abandoned; instead, it should take environmental considerations into account. AI can play a crucial role in enhancing green transformation and weather forecasting. For example, AI can improve the predictability of demand and supply for renewable energy across a distributed grid, promoting sustainable energy practices. Additionally, AI can enhance weather forecasting, which has implications for climate action.

Another concerning issue is the significant amount of e-waste generated due to the increase in internet users. It is estimated that over 70 million tons of e-waste will be generated annually by 2023. Efficient e-waste management practices, including recycling to extract critical raw materials and promote a circular economy, are urgently needed.

Standardization and recommendations for environmental performance and e-waste management are crucial to ensure all stakeholders work towards common environmental goals.

Raising awareness among wider societal groups about the environmental impact of AI and ICT is crucial. The International Telecommunication Union (ITU) is implementing an AI project to build capacity and awareness among different stakeholders. This inclusive approach enables diverse perspectives to be considered in finding solutions to environmental challenges.

The ITU is also evaluating the environmental resilience and performance of data centers, aiming to improve their sustainability.

While AI technology offers opportunities, it should be integrated with environmental considerations to minimize negative impacts.

Addressing e-waste management requires collaboration with small and medium-sized enterprises (SMEs). An area office and innovation center in Delhi is working with SMEs and businesses in India to tackle e-waste management challenges.

Policy and regulatory mechanisms play a significant role in addressing the e-waste issue, ensuring producers take responsibility for proper e-waste management, even if they are not located in the same country as end-users.

Furthermore, proper e-waste disposal practices are essential to prevent environmental and ocean pollution.

Digital inclusion and transformation are crucial for global development. However, environmental concerns must be considered alongside these goals. Approximately 2.6 billion people are still unconnected, highlighting the digital divide. Bridging this gap while incorporating environmental considerations is essential.

To summarize, addressing the negative impact of AI and ICT on the environment requires greener development practices. Key areas of concern include greenhouse gas emissions, e-waste generation, and the digital divide. Incorporating environmental considerations into digital transformation, promoting proper e-waste management and recycling, raising awareness, and implementing policy and regulatory mechanisms are vital steps towards a sustainable future.

Session transcript

Moderator – Karlina Octaviany:
Hello, everyone. Thank you for joining the IGF 2023 Open Forum 37, Planetary Limits of AI, Governance for Just Digitalisation. I think all of the speakers are here. Thank you, everyone, for joining on-site and also online. We are broadcasting this event also hybrid, so I think we should gather a lot of insights and also a good balance of questions online and on-site. Thank you for coming here. My name is Karlina Oktaviani. I’m Artificial Intelligence Advisor for Fair Forward Indonesia and also Digital Transformation Centre Indonesia, Global Initiatives Dedicated to Open and Sustainable Development and Applications for Artificial Intelligence in Africa and Asian Countries. On behalf of German Federal Ministry for Economic Corporations and Development, BMZ, implemented by GIZ, I will be your moderator for today. The session will run with discussion led by the moderator and also some speakers will send also a presentation and later we have a question and answer session, so please be prepared with your questions and if you have an opinion or response, we also welcome that and we shall begin the session. The digital transformation increasingly contributes to the greenhouse gas emissions. To ensure sustainable artificial intelligence or AI, there’s a need to limit the ecological and social risk. How can we ensure that sustainable aspects are considered on the development and deployment of digital technology, such as AI, and how we can form the basis of digital transformation? In this open forum, we will discuss options for action for the sustainable development for ICT and of the ICT sectors and technologies, especially for AI. I will introduce the panelists for this session. For impulse statement, we have Martin Wimmer here, Director General Development Policy Issues, German Federal Ministry for Economic Corporation and Development, BMZ. We have on that side Noam Kether, Senior Public Policy and Government Relation Analyst of Mozilla Corporation. We have Robert Opp, Chief Digital Officer of UNDP, and joining online we have Atsuko Okuda, Regional Director, International Telecommunication Union or ITU, Regional Office for Asia and the Pacific. Also joining online, we have Chem Siruwat, Director of Center for Inclusive Digital Economy at the Asian Vision Institute and Advisor to the Council for the Development of Cambodia. To begin, please welcome impulse statement from Mr. Martin Wimmer, Director of General Development Policy Issues, German Federal Ministry for Economic Corporation and Development, BMZ. Please welcome. Give the warm welcome. We can clap your hands.

Martin Wimmer:
Thank you. Yesterday morning, I went to Ryoen-Chi. This World Heritage Site in Kyoto and yours is one of the most inspiring gardens ever built by mankind since the 15th century. It is a rock garden. Basically, it consists of 250 square meters of flat gray gravel and five islands with rocks on it. It is rectangular like a screen. The gravel representing dots. You see, it can mean anything you come up with during meditating there. And it is a metaphor for the technological design, the shaping of nature and the current five hype digital technologies, AI, quantum computing, whatever everyone is crazy about. It is a metaphor for the millions of websites in the internet and the five platforms that stand out. It is a metaphor for all the millions of users and the five founders who get all the attention and money. It encourages thinking out of the box. And whether you think the digital transformation leads to good or bad, the lesson you get from the rock garden at Ryoen-Chi is that the more you focus on the five outstanding highlights, the more you watch the rocks that steal the limelight, the more your attention will shift to the gravel. If you look long enough, if you think deep enough, it’s the gravel that makes the rocks shine. There are only 15 rocks and millions of pebbles there, but the task is to leave no one behind. For our discussion today, this could mean to emphasize the importance of a human-centric perspective. What does the big platform, the new technology, the great solution, the fascinating vision of one of our outstanding speakers mean for the people who do not stand out and do not get all of the attention at first sight? The poor, the children, women, people with disabilities, LGBTI+, people in the global south, oppressed people, indigenous people, victims of terrorism and war. You don’t need to be a sand master. It’s just common sense. Whether you are a gardener or a coder, whether you use a shovel or a server for your work, using technology, data centers, AI to change the world, nature, societies, human interaction should never be for technology’s sake, but for improving the lives of every individual living with us on this planet, and to secure the integrity of this one, our Earth, which translates into safe energy, safe water, safe resources. Don’t believe in growth. Don’t fool consumption. Don’t produce waste, which, to be clear, is the opposite of what the digital economy does most of the time, most of them. If you’re serious about carbon neutrality and a just transition of our economies to sustainable economies, we have to act as a gardener. Respect, tend, nurture, not exploit data centers. Do what the rock garden does. Remain within given boundaries. That’s why the German development policy supports the global realization of human rights, the fight against hunger and poverty, the protection of the climate and biodiversity, health and education, gender equality, fair supply chains, fair working conditions, the democratic, social, ecological, feminist, inclusive use of digitalization and technologies transfer to promote sustainable development goals worldwide. Thank you.

Moderator – Karlina Octaviany:
Give your own welcome. Please clap for Mr. Wimmer. Thank you, Mr. Wimmer. I really like the analogy of making it as an ecosystem that grows everyone’s in the AI. So, let’s begin the session. I will remind you that this is an open forum, so I will encourage people and invite people to prepare your questions, your response, your opinion. If you have any points that you want to discuss, we’re open to that. And first, we’ll go to UNDP, Robert Off, Chief Digital Officer of UNDP. So, let me ask the first question. How can we form broader efforts to integrate sustainability in ICT technologies and global digital governance, including AI? Thank you so much for having me at

Robert Opp:
the panel. Thank you, Martin, for the poetry. I knew you were going to deliver something inspirational, and you’re absolutely right about the boundaries. Couldn’t agree more. I think I would like to start with just a general reflection on the issues that we’re talking about here. Digitalization and climate, these are quite possibly the biggest mega trends that we have globally right now. They are changing everything about the world, but they’re doing it disproportionately. And so, we know that a disproportionate burden of climate change is borne by developing countries. We know that digitalization is happening at different rates globally, and developing countries are at a disadvantage when it comes to the speed of digitalization or the technology generation and things. And then, between these two concepts, there is a tremendous interaction that goes, it’s a bi-directional. So, on the one hand, digitalization presents the possibility to take dramatic, positive action against climate change. On the other hand, we know that digitalization is driving carbon emissions. It’s also contaminating soil for the extractive industries that have grown up around building chips and technology platforms, the rare earth minerals and so on. And it’s even the data center techniques of using cooling water that is not a closed-loop system, can contaminate water sources and things. So, there’s a really interesting, I mean interesting, important interaction, bi-directional interaction between the concepts. I think that one of the things from a UNDP perspective that we really work with, when we work with countries worldwide on their digital transformation, and we’re engaged, well, we have digital programs in 125 countries. We are engaged in about 50 of those countries on questions of national digital transformation. And I think that our partners are in the developing world, but they, like pretty much most countries, tend to put some of the environmental regulatory and important governance discussions on the back burner in the favor of quick digitalization. And so, one of the things that we really try to do in our approach when we work with a country, and we look at their readiness for digital transformation, or the readiness for artificial intelligence, because we do that kind of assessment as well, we try to place those questions centrally. And it’s about putting people and their rights, their right to, for economic and social rights to development, but also the rights for the environment. And we put the questions in front of countries. If you’re using data centers, are you doing that in a green way? Have you looked at optimization for efficiency? Have you looked at the carbon footprint of the digital change you’re making? Are you transparently disclosing environmental impact of technologies that you’re adopting? And like I said, our countries, our partners, are developing countries, but I think every country in the world needs to put this as a central concern, particularly those who are driving the technologies. And I think the last thing I would say is simply that if we look toward what it’s going to take for global action in this, it really is going to be that this has to become the norm, so that these are central questions. And going back to that point about disproportionate impact, I really think that we need to send the signal in the global north that is developing a lot of these technologies, that we must find ways to ensure that the environmental impact, the greenhouse gas footprint of the technologies that are being used, is seen as a priority in terms of data center efficiency, reduced e-waste, reduced contamination, and so on. So those are just a few initial thoughts around that.

Moderator – Karlina Octaviany:
Thank you so much. Give a warm welcome to Israel. And next we’re heading online. Asko Okudad, Regional Director of International Telecommunication Union, or ITU, Regional Office for Asia and the Pacific. Ms. Okudad will present a presentation on examples of green ICT standards, how can they support governments and stakeholders to develop sustainable and circular ICT, including AI. Asko, if you’re ready, you can start. Give a warm welcome to Ms.

Atsuko Okuda:
Asko. Thank you very much for giving… Thank you. First of all, I would like to thank the organizer to invite ITU to this very important meeting, and I believe that Robert also shared that the topic is very timely. We should perhaps think about our action in terms of how to ensure that the AI development, as well as ICT development, are greener. And I have a few statistics that I would like to show from the recent studies. Let me start with the GDPT and rapid rollout of AI solutions globally. And I’m sure that, you know, all the participants have been using or experimented the use of generative AI, such as chatGPT, and the power of the solutions that are in front of us. I just want to share with you, with the participants, that there are many interesting and innovative use of chatGPT. One of our ITU senior officials got married recently, and I believe that he asked chatGPT to write his marriage vow. So I hope that that was successful. But there are increasingly very interesting and widening use of chatGPT in our social life, as well as in our workplace. Now, the question, perhaps, which is very relevant to this session, is the environmental impact of an increasing use of AI, because the tool itself is not material, in a way. And it is very difficult to quantify the environmental impacts. But today, I would like to share with you the two aspects. One is the electricity consumption, and second, the green gas emission in this presentation. So as you see on the screen, I hope that you are seeing the flipping slides. Of course, the increased use of AI is supported by the increasing the transmission of data. And those data are stored, as Robert mentioned earlier, in the data center, carried by different means of telecommunication. Now, that data center, as you know, consumes lots of electricity. And as you may have heard in the other sessions, there are significant progress in making data center energy efficient. However, one study still suggests that the training of AI solutions would require 3.5 million liter of water to cool the facilities itself for the computing. And additionally, there is a study that in terms of the greenhouse gas emission, the top most telecom companies estimated to have produced 260 million tons of carbon dioxide equivalent in 2021. So there are certainly benefits, but there are some environmental impacts that we have to consider. So because of these two aspects of digital transformation and the need to green, some scholars coined the phenomena as the twin green and digital transformation, which means that the digital transformation should take into account the environmental aspects. And AI can certainly enhance many different aspects of the twin green and digital transformation. For example, AI can enhance the predictability of demand supply for renewables across a distributed grid. And of course, as you know, there are benefits to improve weather forecasting by incorporating more of the real world systems in calculations. So the question I believe is the balance that we must find between these two, the green part and the transition part, and to get best out of the twin transition. Now, coming back to the data center, there are two, as I mentioned, the components in terms of the data center operation, there is a cooling part that will be required and these are significant and largest, the energy loss in the facility and the cooling replacement for water includes the refrigerant, which could contain harmful chemicals. But in addition to that, there is globally increasing data traffic and that is generated from low and middle income countries because they are now investing more in the storage and hosting solutions to meet the increasing demand of the internet users, which are increasing in these economies. And that will require more data centers in these locations and that may consume more electricity according to the latest statistics. Now, that’s the reason why we, ITU, have been working with partners such as GIZ to ensure that the green aspects are integrated into this digital transition and digital transformation. And one entry point to ensure that is the public procurement, to make sure that in the process of procurement to establish data center or improve data centers, the green and environmental aspects are considered and taken into account. So another entry point to ensure the environmental aspects is the e-waste management and to manage the critical raw materials. As you know, there are more internet users globally, mainly in middle and low income countries, which means that there are more devices for people to connect to the internet. And by 2023, over 70 million tons of e-waste are expected to be generated annually. And as you may see on the screen, it is estimated that the storage of the expected 2025 global data sphere alone would require up to 80 kilotons of neodymium, which is about 120 times the EU 2020 demand for this material. And at the same time, the critical raw materials can be extracted from this process of recycling if we do it properly. And we hope that the member countries as well as industry, academia, and in partnership with other stakeholders, we can create a circular economy to ensure that the e-waste are discarded safely. At the same time, that will regenerate and that will recover the critical raw materials. And in addition, ITU, as you know, have been working on the standardization and recommendations to ensure that best practices are applied on these critical aspects of environments and the environmental performance. And I hope that today’s discussion will shed light on some of these topics, including the green gas emission as well as the e-waste and data center management. And finally, I just want to highlight one point that we should also perhaps encourage a wider societal groups to be aware and to be exposed to this discussion through their awareness raised on the benefits and challenges of AI solutions in their societies. And I would like to share with you as a second question to my intervention at ITU has been implementing an AI project to build capacity awareness across different stakeholder groups in these four countries supported by the government of Australia. So that was my last slide. Thank you very much. Back to you.

Moderator – Karlina Octaviany:
Thank you so much, Ashikol. Next, we go to Noam Kantor, Senior Public Policy and Government Relations Analyst of Mozilla Corporation. So first question is what’s the role of civil society and business and what are the specific challenges of communities on the Africa continent on sustainable AI?

Noam Kantor:
Thanks so much. Thanks for having me. Thanks everybody for coming. Regarding the role of business, the first thing I like to do is zoom out to consider tech companies as just companies. An example of what I mean is that one thing companies do just as companies is invest in other companies and in financial instruments. So I guess my question zero is, and really a primary question is, is a tech company investing in companies or partnering with companies that exacerbate the climate emergency? I would say that’s the bare minimum before you start thinking about the tech they’re implementing. I think another thing businesses can do is share best practices in terms of how to make products more efficient and sustainable. For example, this year we at Mozilla created a way for developers using our advanced Firefox developer tools to track the carbon emissions of the software they’re developing. So I recommend you go have a look at that if you’re interested. I think civil society can also play a really significant role in education, especially regarding the climate impacts of technologies such as AI. My Mozilla Foundation colleagues wrote a review recently of all the climate, of many of the climate impacts of the internet and AI usage, including how much energy is used when you’re on a Zoom call when the video’s on versus when the video’s off, which maybe you all have seen. And it’s been a really popular article. I think that shows that people wanna know the impacts of the tools that they’re using, but in the case of technology, that information can be really hard to find. As for specific challenges on the African continent, I have to say that I’m not on our Africa team. I do wanna tell you just a little bit about our work there, because I think the team does great work and I’m really proud of it. I just wanna echo also, first of all, the disproportionate impact of the climate emergency on the African continent, which was previously discussed. One thing we’ve done is in 2021, Mozilla partnered with AfriLabs to study the African innovation landscape. Across the continent, the study that we did with them found key innovation barriers, such as access to funding and finance, local policies to protect and enable the ecosystems, lack of access to affordable connectivity internet, which is a big one, and a general need to collaborate across the regions that they studied. Mozilla’s Africa Emrati Project is working to fight these barriers. I think many of the same barriers affect sustainable technological development in the area. But ultimately, we think that communities should be able to speak to and try to solve their own challenges with support from others. That’s why the Mozilla Technology Fund, which supports open source projects with promising approaches to solving pressing issues, recently announced that the theme for this year is AI and environmental justice. The fund will provide $300,000 to open source projects that leverage AI to make a positive impact on the environment and local communities. It includes one year of Mozilla mentorship and support, and awardees will likely be announced in early 2024.

Moderator – Karlina Octaviany:
Interesting. So if you want to, you are welcome. And if anyone wants to explore more, you can ask later about the finding. And we go online again to Tam Sriwat, Director of Center for Inclusive Digital Economy at the Asian Vision Institute and advisor to the Council for the Development of Cambodia. Tam, are you ready? Okay, he’s online. So what’s the role of civil society and business and what are the specific challenges faced by community in Cambodia and specifically on AI?

Siriwat Chhem:
Yes, thank you very much for your question. First of all, I would like to thank the organizers for inviting me on this panel of very esteemed and distinguished panelists. Just to let you know, I’ve been following the IGF for a very long time in my research. It’s always been a dream to come and attend, but unfortunately I wasn’t able to join in person, but at least I’ll be able to join the panel online. And so onto the question. Maybe I’ll start with the second part first, since we’re talking about specific challenges faced by Cambodia on sustainable AI. I think the first thing you think of when you think of Cambodia might not be related to technology or let alone sustainable AI, but maybe I can just share a little bit of the context. And so pre-COVID for the last 20 years, Cambodia had been experiencing 7% GDP growth annually, so developing extremely quickly. And I would say that within the last five years, Cambodia has gone through its own form of digital transformation. If you were to visit around five to 10 years ago, you would see that we predominantly use cash everywhere. Also making it more complicated, we are a dollarized economy, meaning that we are on dual currency, both with the USD and our local currency. And so basic things like going to the market or taking a tuk-tuk for transportation around, you would have to do these things very manually and having complications of converting currencies and so on. And basically what happened throughout the last few years, a very high digital adoption rate. We’ve been able to, let’s say leapfrog the era of using cards, credit cards, debit cards, and move straight into mobile payments, transfers and QR code payments. And so that the main reason I would say is because of Cambodia’s young population with two thirds of the population under the age of 30, a median age of 26. We have quite affordable mobile data and access to Wi-Fi within the urban population. This has allowed us to really move forward in terms of digital transformation. And so now if I can just go back to the question of the theme for today on sustainable AI, we face different types of challenges from the previous ones that were mentioned. Because we could say that we joined late to the game, our focus is really building it from the ground level up. And because we don’t have any legacy technology or any established longstanding institutions in terms of AI, we rely quite significantly on looking at the models of our regional partners, on looking at what’s being done successfully around the world, and also learning from others’ mistakes. So in terms of sustainable AI, we are, let’s say, building a strong foundation from the beginning. We don’t have any existing specific policies on AI or specific to sustainable AI. And so I think looking at regional models, what’s being done around and locally contextualizing to Cambodia’s situation is very important. And so if I could just elaborate a little bit more on the role of civil society, on behalf of our institute, on the AVI Asian Vision Institute, we are an independent think tank. And so what we’ve tried to do over the last four years in Cambodia is to provide policy research and also capacity building and training related to the digital economy. And so over the last four years, we’ve published two books, one of them on Cambodian cyberspace, another one on Cambodia’s emergent cyber diplomacy. So really giving an overview of the digital economy, what kind of role does Cambodia as a small state play in the frame of global governance? I know that will be onto the next theme and question, so I won’t talk too much about it. And so with that, I would like to close my opening remarks. Thank you.

Moderator – Karlina Octaviany:
Thank you so much. Give a warm welcome to Tim. Thank you. Thank you. Thank you. Thank you. Thank you. Okay, thank you. Okay, so I think we can move to the second round of questions. I come back to Mr. Robert Opp, what type of alliance for global digital governance are needed?

Robert Opp:
Okay. Hello, okay. No, thanks for that. I think all of these interventions so far have drawn attention to some form of angles that we’re talking about. There’s sort of private sector, there’s the civil society, the importance, there’s governments and so on, the importance of bringing together the stakeholders can’t be overemphasized. And of course, that’s what IGF is about. I think in this space, the biggest role for alliances is around alignment of purpose, alignment of intention. And I can just give a couple examples in this space of alliances that we’re involved in and that I think have some hope for the directions that we need to set globally with this. The first one is called COBE. which stands for Coalition for Digital Environmental Sustainability, and that is an initiative with the German Environment Agency, with the UN Environment Program, UNDP, the International Science Council and Future Earth. And actually recently Atsuko ITU has also joined CODES as one of the kind of core members. And CODES has engaged with over a thousand stakeholders in the last couple years that it’s existed, and it really is trying to get a few different things. One is around political alignment for the kind of these issues of the twin transitions. Then there’s sort of a set of initiatives around mitigating negative impact, and then there’s accelerating the innovations for efficiency. And so this is a kind of broad-based coalition, I would say, and there’s some action lines that are being developed now. And I think it really just highlights the importance of really coming together under common purpose. The second alliance, which is a little bit more focused on the topic at hand, is called the AI for the Planet Alliance. And that has been created by the Boston Consulting Group, UNDP and UNESCO, plus a coalition of startups called Startup Inside. And it’s a group, a kind of an odd group in a way, of players that are engaged in this issue as well, but specific to artificial intelligence. And it is really also about providing a platform where we can identify and promote innovations that are, again, driving innovations that can help us with environmental action, as well, and scaling them, as well as looking at ways to really encourage the players in the artificial intelligence space to adopt more efficient and more environmentally friendly, more sustainable approaches to their work. And these are, you know, again, things that are very multi-stakeholder in nature, open for participation of many. The organizations I mentioned are just the kind of spearhead organizers, but really open for all to be involved in. And that’s an open call for everyone who’s listening in today, as well. These can be found, I’m not going to give the websites, but they’re both, they can be googled and found online, and encourage everyone to participate. Thank you. And additional resources for our discussion

Moderator – Karlina Octaviany:
later on, you can also share. We go to Noam. So how can we move towards sustainable digitalization? Thanks. I want to talk about transparency first. I

Noam Kantor:
talked about it a little bit with the education bit before, but I do think we need a transparent look at the environmental impacts of tech tools, including AI. So, you know, sustainability reports are often a big tool towards transparency, but as we all know, there’s a spectrum of transparency when it comes to reporting. So I wanted to talk a little bit about what we do in our sustainability report. It’s maybe an example, because we hope that we’re leading the way. So my understanding is that per the greenhouse gas protocol, which is one of the reporting standards, we’re not required to calculate or report the product use emissions associated with using products like Firefox, Mozilla Hubs, and Pocket. But we want to lead by example. We want to support transparency by reporting the optional data. So we started doing it actually in 2019, and we’re hoping that it’ll encourage our peers to do the same. What we had to do, though, was we had to work with an external consultant, and we had to develop a brand new methodology, because no one had really developed a methodology for measuring the impacts of browsers, the environmental impacts of browsers. And, you know, we hope that it accounts for device emissions that can be reasonably attributed to the browser, so that, you know, it captures the work that we’re doing and what we control. So it’s possible and vital that companies report on this aspect of the work. I think the hope is that we’re showing that it’s possible, and we’re encouraging others to do so. And I think the hope is, if the impacts are too high, they should consider changing their product roadmap. Now, as I mentioned before, also related to Mozilla Developer Tools, it’s not just about the products that companies build, but also for customizable or open-source products. It’s about giving developers and users the ability to measure and reduce the emissions in the tools that they build. I also want to say that tech regulators sometimes have an interesting role to play in sustainable digitalization. A good example is the Federal Trade Commission in the United States, which is a primary, one of the primary tech regulators in the United States. But the FTC also enforces against deceptive greenwashing claims. So there’s an interesting nexus there. In fact, the FTC has just begun a once-in-a-decade update to its green guides related to deceptive environmental claims. And some commenters have specifically requested that they bolster their enforcement against certain misleading net-zero claims or sustainability claims. But there are limits to anti-greenwashing policies because they require deceptive representations in the first place. So, you know, they’re just one piece of the puzzle. But I thought that was just an interesting nexus of how different regulators can work together in the

Moderator – Karlina Octaviany:
space. Interesting. So measurement and also risk of greenwashing. Okay, we can go to the Q&A in this open forum. We welcome respectful, diverse questions and opinion. If you have any questions, please kindly raise your hand, introduce your name and organizations, and then mention your questions. Also, for participants online, I will also remind to please type your name, organization, and questions. We will select the questions to be read online. So I will give the opportunity for the people on site first. Is there any questions, opinion, or curiosity that you want to ask?

Audience:
Yeah, thank you everyone. My name is Bushree Badi. My question is around, I think much of the conversation has focused on the impacts after you start adopting these technologies or developing them, and I’m wondering like how much work is now being done to really think critically if these specific types of technologies are needed in the first place. Because it feels like we’re trying to mitigate risks that are already being, or like certain communities are being exposed to those harms and risks, and trying to kind of like put things back into the bag that shouldn’t have necessarily been implemented in the first place. And you see a lot of this type of development in places like Silicon Valley, where there’s a lot of investment that keeps going into the development of these technologies that are presented as solutions to really systematic problems that we’re facing, but fundamentally will fail to do so. And we know this as people who maybe work on this through a systematic lens or framework. So I’m wondering if you could speak to some of the work that’s being done there, because it feels like a lot of this is just responsive instead of being proactive in addressing these issues. Thank you. Interesting. We will go

Moderator – Karlina Octaviany:
around. It’s okay. Well, thank you. Well, first of all, great to be here. My name is

Audience:
Josรฉ Renato, I’m from Brazil, and I have two questions actually. Maybe jumping a little bit upon her question, we started the session talking about the growth, about the possibility of thinking beyond this, let’s say, ideology, narrative, I don’t know how to put it, but of development, of growth. We use some of these terms here, so like what are the opportunities that we have to rethink this? Maybe, is there any other paradigm that we could focus on? And the second question, after, I unfortunately forgot the name of the UNDP representative. Robert, thank you very much. I apologize, I’m terrible with names. You mentioned about the role of countries from the Global South in this whole theme, and how they were sort of not prioritizing, at least as far as I could understand, the issue of like sustainability, climate protection, over the digitalization. But I would like to hear from you, and maybe if there are any other inputs would be also welcome. How is it, like, considering that we have all of this push towards digitalization, this, it is part of the whole imaginary of development, of how a development, developed economy should look like. What would be your take, considering that the most advanced search centers of research, of development, the companies that dictate most of the agenda, they’re outside of these territories. It’s like, how do you work with these countries? How do you, you could potentially work with them to some degree, either create an environment in which they can build upon, in which it’s not like, in which they’ll be, they’ll have the benefits of all of this, even when we consider that many nations who are advancing these technologies are not fulfilling these questions. So yeah, thank you so much.

Moderator – Karlina Octaviany:
Thank you so much for all the questions. So we can move to our panel, starting

Robert Opp:
with Robert. Sure, I can address particularly that that last question. In a phrase, the value of local digital ecosystems here is super important, and this is very relevant for AI. It’s relevant far beyond a sustainability question. The concern that I have, and anybody who’s spoken to me recently has heard this, because I say this over and over again, I am very concerned about the global pattern of rollout of AI systems, particularly generative AI at the moment, because I worry about the representation and diversity in technology, in the underlying data or lack of data, and in the training process as well. And I believe that one of the most important things that we can do is to look at the ways to build capacity for local digital ecosystems, so that local innovators who are, you know, innovators and entrepreneurs are everywhere, but they sometimes lack the ingredients, and you were talking about that before, Noam. They may lack the financing, they may lack the skill set or the access to skills, and they may lack the set of tools to compete globally, or not necessarily compete globally, but to actually build systems that are locally relevant, and that will actually work towards satisfying the needs of people locally, and the needs of those markets locally. And so I really think, and this will also I think benefit the sustainability agenda as well, the stronger the local digital ecosystems are in these countries around the world, the more I think we’re going to see innovative and fresh looks at how we can address the sustainability issue as well. So that would be my response to your question about, you know, the countries, and when I said countries are not necessarily prioritizing environmental issues, that’s not a criticism. That’s because developing countries have a lot on their plates right now, and need to, are desperately short of resources, and in a constrained environment where you’re trying to really think about where you’re going to put your scarce resources, it may not be the first instinct to put it into something like that, and that actually I think needs to, the light needs to be shone toward the countries of the global north, who’ve basically created this pattern, and they didn’t think about environmental concerns either. That’s why we have this issue. And so what we say is that going forward, as we work on digitalization in these countries, we advise our country partners to stay aware of the environmental considerations as part of their governance, to think about the policy and regulatory environment that needs to be there from the beginning, so that ultimately that will pay off down the road. Maybe I’ll let other panelists answer some of the

Noam Kantor:
other questions. You can go, Noam. Still on? Okay, I think probably I have the most to say on the first question, about, you know, question is when should we not implement technologies at all, given their risks and their benefits? I think it’s the golden question, and I guess I just want to talk about the ways that trust, the concepts of trustworthy AI, transparency in AI, and transparency in climate impacts, all kind of work together to create, you know, as ingredients to create, you know, hopefully responsibility here, which is that, you know, I think one of the challenges is that many, many of the products that you reference that might not be very effective relative to their risks, oftentimes people don’t know how to measure the effectiveness of those products, like that if we’re talking about an AI model, people don’t necessarily know how to talk about the robustness, the accuracy of the model, potential for bias, even though there’s been a lot of work on those things that sort of, both investors and the public and regulators are still learning, and will be learning for a long time, how to measure those things, and so I think the more that we can push on the side of trustworthy AI, the more it will be obvious to people what they’re weighing the environmental impacts against, right? If it’s obvious, you know, how trustworthy or how accurate a model is compared to what it’s claiming to do, then it’ll be more obvious, you know, is it worth it compared to the amount of energy we have to pay for, and then, you know, external effects that are impacting our

Moderator – Karlina Octaviany:
climate and economy. Thank you, we go to Martin. To your question, I would fully

Martin Wimmer:
agree, the damage is already done. AI is here, and we are only in repair mode once again, and the reason for that is the industry just doesn’t care about the environmental impact of their money-making, and legislation and regulation are way too late once again. All we can do is learn for the next technology that breaks through. We have to be better and faster, and we need the pressure from the civil society and the NGOs here. And then we go to online.

Moderator – Karlina Octaviany:
Atsuko, if you want to answer the question. Sure, thank you. I have two,

Atsuko Okuda:
maybe, examples where we can, you know, concretize and show concrete examples of how we can take into account the questions on AI benefits and challenges to the environment. One is the, you know, mainstreaming of greening questions. ITU has been working in the communication sector and digital technology for many years. And one of the questions we are increasingly receiving is to evaluate, for example, the resilience and performance of data centers. And we have conducted the assessments in a few countries in Asia and the Pacific. But in the process, we made sure that the environmental aspects and best practices applied in the process of assessment so that the recommendations include how to mitigate the negative impact on the environment. And I hope that there will be more of this integration of greening and environmental considerations in all the aspects of digital transformation and what we do. But I would also like to add the perhaps partnership that we can expand with the industry, especially small and medium-sized enterprises. And I want to give an example of e-waste management that I mentioned earlier, that increasingly there will be data that is generated through an increasing number of devices people are using. And in ITU, we have opened a new area office and innovation center in Delhi recently. And one of the topics that we are addressing with the association of SMEs and businesses in India is to encourage innovation and to make sure that the e-waste management and climate technologies are taken up and mainstreamed in the industry side so that we can make it as a successful and profitable business. And we hope that that will contribute to the circular economy. And I hope, I believe that more of these business models will be required now that AI is being rolled out very quickly. Thank you. Back to you. Thank you. We go to Cem.

Siriwat Chhem:
Yes, just for my final comment. Recently, about last week, I attended a workshop specializing in AI organized by the International Science Council. And so what we did, they invited AI experts from the Asia-Pacific region. And I would just like to share two of the outcomes from this full day discussion. And so the first point is on mindset. Currently, we have this mindset and mentality that AI should be the solution for everything. And this comes at very high costs, not only in terms of sustainability and environmental aspects, but even down to the efficiency of actually trying to solve a problem. And so what is happening is that now we’re starting to use AI to the extent that it creates more problems than it solves. And so the overall consensus from the workshop was that we should be extremely careful in evaluating and assessing how efficient AI tools and platforms and applications are being used, and whether it’s actually solving the problem more efficiently and effectively, and not in turn, creating more problems. And so the second part, which I would like to share, is on long-term partnerships. As I mentioned, we were in a room full of very, let’s say, qualified individuals from that field of expertise. And they shared that one of their challenges or the main problem is that when they convene together for high-level international conferences, or they have workshops or meetings, the time period leading up to the meeting, a lot of preparation and time is involved. All the stakeholders are engaged throughout the event. But the problem is that following up after meeting, not much is done to bring together all the important points that were discussed. So in terms of an extensive report, in terms of building long-term partnerships to build on what was discussed at those events, because addressing global issues in terms of AI and sustainability, it requires a lot of considerations. And these things cannot be solved in one day or in a one-week conference, but it really has to be taken many steps forward into the long term. So I would just like to conclude with that. Thank you. Thank you. We go to questions from

Moderator – Karlina Octaviany:
online audience. It’s Avis from Cameroon, from the Proto-JQVIS organization. One of the thorny problems in Africa remains the return of e-waste to producer. What binding mechanism can we put in place for its effectiveness? Anyone wants to answer from the panel?

Atsuko Okuda:
Ah, yeah. Atsuko, perhaps you can answer. Sure. Thank you. Thank you for this very important question. I have a question regarding the obligation of programs that Mr. Avis asked regarding the return of e-waste to producers. Of course, there are policy as well as regulatory mechanisms that could bring, but perhaps the, as I mentioned earlier in my example, perhaps that could be seen also as opportunities to work with startups as well as SMEs so that they can recycle the devices before, of course, e-waste. And perhaps that could be seen as one part of the circular economy. So I believe that, of course, returning the e-waste is one thing that could be, you know, mandated, but perhaps we can see more collaborative ways because the producers may or may not reside in your countries, right? So returning to the producer in some cases could be a challenge. So perhaps we can see it from a holistical and ecosystem point of view on what’s the best mechanism to make sure that the e-waste are not discarded in the environment and in the ocean. I’m not sure if we have sufficient time to answer this question, but I believe that this mechanism and how to do this is a very important and essential topic, I believe, for all of us.

Moderator – Karlina Octaviany:
Thank you. Back to you. Thank you, Asuko. So as your reminder, it’s already closing time for our open forum. So we’ll have a closing statement from each of the speakers. Perhaps we can go online first from Tim.

Siriwat Chhem:
Yes, thank you. So just back to our topic on AI and sustainability, I believe it is a long-term journey, as mentioned from our opening statement and all the panelists, that in certain cases they have already been established for a long period of time. And it’s difficult from the legislation and policy point of view to kind of backtrack or catch up for that matter. And so with that, I think rather than focusing too much on the technology, which is something that’s being done in the field of AI, we should focus more on the fundamentals, which are utility and also what are some of the implications. Because if we focus too much on the technology, we think that it’s a solution to everything rather than looking at the overall big picture and weighing out the pros and cons. So I would say that we should take a more big picture approach and looking into the long term rather than just focusing on solving immediately what we can do in the current state without

Moderator – Karlina Octaviany:
thinking too far ahead. Thank you. Thank you. Atsuko, you want to share closing remarks? Thank you.

Atsuko Okuda:
I want to also add a dimension on digital inclusion. As you know, according to the latest ITU estimates, 2.6 billion people are still unconnected. And I believe that this process of digital inclusion and digital transformation should continue so that those who need the digital technology and transformation can benefit from the technologies. But at the same time, I believe that we shouldn’t forget about the greening part and environmental considerations in the process. And I hope that this conversation will continue among all of us or in the expanding community globally so that we can make sure that we can mainstream the perspectives and considerations to the environment in the future in our effort to connect the unconnected and making the digital transformation sustainable. Thank you. Thank you, Atsuko. We go to Robert for closing.

Robert Opp:
I didn’t expect a closing statement, and I don’t have a closing statement, but I do have a couple thoughts. And actually, even these last couple thoughts that were offered about the digital divide and not focusing on technology, I think Cem is exactly right. The focus here should not be the technology. The focus should be on what best serves people and the planet. And I think that if we stay focused on what best serves people and the planet, you know, technology, we’re not going to stop the sort of innovation for commercialization process. But I think as we go forward in alignment around what needs to happen, we have to keep people, we have to make sure that technology is serving people, not the other way around. And it’s the same for the planet. We just, we can’t keep that cycle of the planet is here for the taking, for the purpose of technology rollout.

Noam Kantor:
It’s not about that. Thank you. I know it’s 2.31, so I’m between you all and your coffee. But yeah, this was fascinating. And I think what I’ve been able to see is efforts towards sustainable digitalization from code to cooperation on an international scale. How everyone in the policy stack, as it were, can make an impact from where they are. And it’s been great to learn about that. I hope you’ve also come away with the sense that better practices are possible in the tech space. And that, you know, there is a way to make progress on these goals, including when necessary, you know, not shipping certain products when it wouldn’t be responsible to do so. I don’t have a poem to end with, like we started with, which is sad, but probably something from Mary Oliver would be good, so you all can imagine that. Thank you. To Martin?

Martin Wimmer:
Yeah, interconnected networks, the internet, are a variable thing. They are something like Tupperware or color TV or punk rock. Ideas from the middle of the last century. People who were there at the beginning are very old now and have gray hair. The subtext of this conference, as I experience it, is to discuss what the digital transformation means to the internet. It’s old heroes, it’s old myth, old narratives, old governance structure. And while there’s still a community of people who believe in the value of the internet for internet’s sake, there might be a new generation out there who considers the internet to be just the oldest of many digital technological artifacts, AI being the most recent incarnation, which are not good or bad in itself. A matchstick firing global warming in the worst case scenario, or tools…

Atsuko Okuda

Speech speed

129 words per minute

Speech length

1924 words

Speech time

892 secs

Audience

Speech speed

170 words per minute

Speech length

552 words

Speech time

195 secs

Martin Wimmer

Speech speed

136 words per minute

Speech length

803 words

Speech time

354 secs

Moderator – Karlina Octaviany

Speech speed

135 words per minute

Speech length

1162 words

Speech time

517 secs

Noam Kantor

Speech speed

190 words per minute

Speech length

1596 words

Speech time

504 secs

Robert Opp

Speech speed

149 words per minute

Speech length

1857 words

Speech time

748 secs

Siriwat Chhem

Speech speed

184 words per minute

Speech length

1328 words

Speech time

434 secs

Net neutrality & Covid-19: trends in LAC and Asia Pacific | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

During the conversation, Javier begins by expressing gratitude to the person he is speaking to in various languages, such as “gracias” in Spanish and “ciao” in Italian. He then bids farewell using the phrases “nos vemos” and “bye-bye.”

Javier then brings up the topic of drafts, suggesting that he believes the woman he is speaking to has some. However, he appears uncertain about this assumption, repeating the statement “I think she has drafts” to indicate his lack of certainty.

The purpose of discussing this topic seems to be to engage in conversation and explore ideas related to drafts. However, the conversation stalls as the woman repeatedly uses the phrase “Creo que…” to share her thoughts, indicating her own uncertainty or hesitation in expressing a definitive opinion.

These exchanges highlight the tentative nature of their conversation and the absence of concrete information or consensus regarding drafts. It appears that they are both searching for common ground or a clearer understanding of the subject matter.

Overall, this conversation offers insights into the interpersonal dynamics and communication style between Javier and the woman, as well as their shared uncertainty about the existence of drafts and their desire to engage in conversation.

Piero Guasta Leyton

The discussions surrounding net neutrality and non-discrimination in the context of trade are considered to be of utmost importance. Piero, for instance, perceives these discussions as key, particularly when it comes to the trade aspect. However, it is often observed that these subjects tend to be overshadowed by more popular topics.

In terms of protecting internet-related aspects, such as free data flows and net neutrality, it is argued that these aspects should be safeguarded rather than implemented. It is contended that these key principles already exist, and the goal should be to ensure their protection. Trade agreements play a crucial role in achieving this objective, as they should aim to guarantee the preservation of these principles.

Similarly, non-discrimination is highlighted as a principal aim in trade. The notion of providing equal opportunities for all participants and market entrants is key. Notably, during the pandemic, specific measures were not required due to the existence of non-discriminatory regulations. Moreover, the measures taken by countries during the pandemic were generally deemed permissible from a trade perspective.

Moreover, net neutrality policies are attributed with having a positive impact on market competition and consumer choices, particularly in the case of Chile. These policies have facilitated the entry of various technological products into the Chilean market, making it more attractive and competitive.

In summary, the discussions surrounding net neutrality, non-discrimination, and internet-related aspects are seen as critical in the trade domain. Protecting and preserving these principles through trade agreements can help ensure equal opportunities and foster market competitiveness. Furthermore, the positive impact of net neutrality policies on market competition and consumer choices, as evidenced by the Chilean example, highlights the importance of these topics for further discussion and promotion.

Javiera Cรกceres Bustamante

Net neutrality is a critical principle for ensuring equal access to the internet and plays a crucial role in achieving the Sustainable Development Goals (SDGs). It prevents discrimination by internet service providers, ensuring that all users have equal access to different sites and applications. By maintaining an open and level playing field, net neutrality fosters equitable opportunities for individuals and businesses.

Net neutrality holds significant potential in contributing to the SDGs, particularly SDG 4 (Quality Education) and SDG 8 (Decent Work and Economic Growth). It can ensure access to online educational resources and platforms, enabling individuals to acquire knowledge and skills necessary for quality education. Additionally, net neutrality can facilitate job creation in digital environments, supporting the goal of achieving decent work and economic growth.

The COVID-19 pandemic further highlighted the importance of net neutrality. While exceptional measures, such as traffic management for emergencies and prioritisation of access to critical digital services, were implemented during the pandemic, they were seen as compatible with the principle of net neutrality. These measures aimed to ensure that essential services and information were accessible to all, emphasising the significance of net neutrality in times of crisis.

The Pacific Alliance, comprising four countries, has made notable progress in implementing net neutrality. It has set a precedent by incorporating this principle into an international treaty, demonstrating a shared commitment to ensuring equal access to the internet. The experiences of the Pacific Alliance can provide valuable insights for other economies, particularly in the Asia-Pacific region, seeking to regulate net neutrality effectively.

Network slicing, a complex and evolving topic, is closely associated with net neutrality. It involves dividing a network into multiple virtual networks to optimise resources and provide tailored services based on application requirements. While some view network slicing as a means to enhance safety and efficiency in services like autonomous driving, it poses challenges in terms of maintaining net neutrality. Therefore, careful regulation is necessary to prevent backdoor violations of net neutrality in the context of network slicing.

Investigating how companies comply with net neutrality is crucial and provides valuable insights for future research. Understanding the extent to which companies adhere to net neutrality principles can inform policymakers and regulators in designing effective strategies and policies to ensure equal access and prevent discriminatory practices in the digital landscape.

In conclusion, net neutrality plays a vital role in ensuring equitable access to the internet and contributes to the achievement of SDGs such as quality education and decent work. Exceptional measures during the COVID-19 pandemic have underscored the compatibility of net neutrality with prioritising critical services. The Pacific Alliance stands as a successful model for incorporating net neutrality into international treaties, while network slicing necessitates careful regulation to avoid violations. Investigating compliance with net neutrality provides valuable insights for future research and policy development in the digital realm.

Dilmar Villena Fernรกndez Baca

The analysis examines the impact of net neutrality on internet usage in Peru during the COVID-19 pandemic. It underscores that the internet infrastructure was unprepared for the surge in information flow caused by the pandemic, resulting in network overload. To address this issue, the Peruvian government allowed operators to prioritize certain data packages through emergency actions. This was done to alleviate the strain on the network caused by increased device usage for work and entertainment during the pandemic.

One argument presented is that the exceptions made by the Peruvian government to net neutrality regulations were necessary to adapt to changes in internet usage patterns. For example, the government developed the ‘Aprendo en Casa’ platform to provide educational materials to students. Telecommunication companies were given the flexibility to prioritize data packages related to remote work and learning. This decision aimed to ensure uninterrupted access to essential services, such as online education, during the pandemic.

However, the analysis also highlights concerns about selective zero-rating in Peru. Zero-rating refers to exempting certain data packages from usage limits or charges. The stance put forward is that Peruvian regulations permit zero-rating as long as it is not done arbitrarily. However, compliance with this regulation varied among telecom companies. Several operators did not fully adhere to transparency guidelines, which require them to be transparent about zero-rating. Only one major operator in Peru was found to be fully compliant during the pandemic. This non-compliance and lack of transparency generated a negative sentiment regarding the implementation of net neutrality regulations in Peru.

The analysis also reveals the impact of net neutrality on non-traditional forms of work, such as gamers and streamers. It argues that prioritizing conventional work-related traffic penalized these workers, as their data flow requirements were not adequately met. This aspect highlights a potential downside of prioritizing certain types of internet traffic during periods of increased network strain.

Furthermore, the analysis extends beyond Peru and highlights compliance with net neutrality among telecommunication companies in the Pacific Alliance. Despite being the same corporations, compliance varied across different countries in the Pacific Alliance. An example mentioned is Intelcom, a Chilean company operating in Peru, not fully complying with net neutrality transparency guidelines. This observation emphasizes the need for consistent implementation of net neutrality regulations across borders.

In conclusion, the analysis provides valuable insights into the challenges of net neutrality during the COVID-19 pandemic in Peru. It reveals the strain on internet infrastructure, the regulatory exceptions made by the Peruvian government, concerns about selective zero-rating, and compliance issues among telecom companies. The analysis emphasizes the importance of maintaining a balance between prioritizing essential services and ensuring equal access to the internet for all users.

Olga Cavalli

Argentina does not have specific regulations on net neutrality, but its national law on digital services indirectly upholds the principles of network neutrality. The law includes two articles that guarantee network neutrality for the services in place, ensuring equal treatment of all online traffic. However, challenges arise regarding the distribution and concentration of internet traffic, particularly on streaming services, which can result in unequal treatment of other online services. Additionally, the practice of zero-rating in mobile services packages, where certain services like WhatsApp are exempted from data charges while others are not, raises concerns about fair treatment. Moreover, the inclusion of IoT devices in mobile networks, such as 5G, raises questions about the feasibility of network neutrality due to the possibility of treating different types of traffic differently for functionality. Some argue that this approach may violate net neutrality, especially when it comes to time-sensitive traffic from IoT devices like autonomous cars. Good regulation is vital to enable services for consumers and service providers, taking into account the rapid changes in the technological environment. It should provide room for future updates and improvements while ensuring fairness for all parties involved. In conclusion, while Argentina indirectly supports network neutrality through its national law on digital services, challenges persist and require careful consideration and regulation to ensure fair and equal access to the internet and foster innovation in the digital services sector.

Raquel Gatto

Net neutrality has emerged as a significant issue in Brazil, necessitating the implementation of regulations to safeguard the rights of internet users. In 2014, Brazil enacted a law that aimed to protect user rights on the internet without subjecting them to criminal charges. This law allowed for the use of the internet as a means to safeguard and uphold these rights.

The Marcos Review, also known as the Brazilian Bill of Rights, was formulated through public consultations and online discussions. This inclusive approach ensured that the opinions and perspectives of a wide range of stakeholders were taken into account. The Marcos Review enshrined net neutrality as a fundamental principle in Brazil’s internet governance framework.

However, the Marcos Review also acknowledges that there can be exceptions to the principle of net neutrality in certain circumstances. For example, it recognizes the need for traffic control and the prioritisation of emergency-essential services. These exceptions are carefully defined and implemented to ensure that they do not undermine the overall principle of net neutrality and the protection of user rights.

The Brazilian Internet community, known as NIC.br, plays a significant role in defining and implementing these exceptions. This organisation works closely with various stakeholders to ensure that the exceptions are justified, transparent, and in the best interest of internet users. By involving the Brazilian Internet community, the regulatory framework surrounding net neutrality becomes more accountable and responsive to the needs and concerns of the public.

Overall, the sentiment surrounding net neutrality in Brazil is neutral. There is a call for cautious monitoring of any exceptions to the principle of net neutrality. This suggests that there is a recognition of the importance of balancing the needs for regulation and protection of user rights, while also maintaining an open and equitable internet environment.

In conclusion, net neutrality is a significant concern in Brazil, and the country has taken significant steps to address it through regulations and the Marcos Review. The law passed in 2014 allows for the protection of user rights without criminalising them, and the Marcos Review ensures that net neutrality is upheld as a principle while also allowing for carefully defined exceptions. By involving the Brazilian Internet community in the process, Brazil’s regulatory framework aims to provide a balanced and accountable approach to net neutrality.

Felipe Muรฑoz Navia

During the session, the topic of network slicing with 5G and its potential impact on net neutrality laws was discussed in depth. Network slicing, which involves dividing a physical network into multiple virtual networks, allows for the creation of different logical networks on the same hardware. This innovation opens up new possibilities for enhanced mobile broadband, massive machine type communication for Internet of Things (IoT), and ultra-reliable low-latency services.

However, it was acknowledged that network slicing could potentially affect net neutrality at lower levels that may not be covered by existing laws. The speakers emphasised the need for careful analysis and updates to technical drafts that accompany these laws. They proposed that laws should be updated to cover potential violations of net neutrality that may occur beneath the routing, ensuring that the principles of an open and unbiased internet are upheld.

The session concluded that advancements in technology call for laws and technical drafts to be revised and adapted to accommodate these changes. The speakers highlighted the importance of net neutrality in maintaining an internet ecosystem that fosters innovation, competition, and equal access to information. Therefore, thorough evaluation and consideration of network slicing’s implications on net neutrality are necessary to prevent any unintended consequences.

In summary, network slicing with 5G offers various benefits for logical networks, but it also poses challenges to net neutrality at lower levels. Updating laws and technical drafts is vital to uphold net neutrality and safeguard fairness and equality in the digital space. Proper analysis and adjustments are necessary to accommodate technological advancements and maintain the integrity of net neutrality in all aspects of network infrastructure.

Ignacio Sรกnchez Gonzรกlez

Net neutrality is an essential principle that has received substantial attention and regulation in various countries and international treaties. Specifically, the Pacific Alliance, consisting of Chile, Colombia, Peru, and Mexico, has acknowledged the significance of net neutrality by enacting it into their national laws. Furthermore, the Pacific Alliance as an organization has extended the scope of net neutrality by incorporating it into their trade protocol, setting a noteworthy precedent in international public law.

In Brazil, net neutrality is also a major topic of discussion, seen as a crucial element of internet rights and legislation. This highlights the need for comprehensive regulations to ensure proper implementation of net neutrality. Key to this is incorporating exceptions in regulations, as they play a vital role in striking a balance between preserving net neutrality and addressing specific circumstances that may require different treatment.

However, a critical concern is the lack of an active monitoring body that ensures compliance with net neutrality principles. Despite having legislation and dispute settlement mechanisms in place, the absence of continuous monitoring hampers effective implementation. This poses a challenge in ensuring that internet service providers (ISPs) adhere to net neutrality principles and refrain from discriminatory practices.

Additionally, the interaction between net neutrality and network slicing is a relevant aspect to consider. Network slicing is a technique used to accommodate specific technical requirements, such as low latency for critical services like autonomous driving. Whether network slicing complies with net neutrality principles largely depends on the application and intent behind its implementation. This highlights the importance of regulating and monitoring net neutrality and network slicing practices to prevent preferential treatment or discrimination by ISPs.

In summary, net neutrality is a vital principle for maintaining an open and equal internet environment. Its regulation in various countries and international treaties, particularly within the Pacific Alliance, demonstrates the global recognition of its importance. To effectively implement net neutrality, comprehensive regulations should be in place, including exceptions that strike a balance. Moreover, active monitoring mechanisms are necessary to ensure compliance and prevent discriminatory practices. Overall, the regulation and monitoring of net neutrality and network slicing are critical for safeguarding the principles of an open and fair internet.

Session transcript

Ignacio Sรกnchez Gonzรกlez:
Hello. Good afternoon, everybody. Thank you very much for coming. On behalf of the Institute of International Studies of the University of Chile, we are holding now the session called Net Neutrality and COVID-19 Trends in Latin America and the Caribbean and the Asia-Pacific. First of all, I want to thank all the people who have joined the panel, Raquel Gato, Olga Cavalli, Dilmar Villena, and our online speakers also, Javier Cรกceres, Felipe Muรฑoz, and Piero Huasta. So we will talk now about why is net neutrality important for our countries, why are we making a link between Latin America and specifically the Pacific Alliance within Latin America, and why we are linking it with the Asia-Pacific. We will address a global discussion, comparative regional processes regarding net neutrality. And for instance, just to start the conversation, the presentation of the paper that Professor Javier Cรกceres and Felipe Muรฑoz will make, they will talk about, for instance, that an important outcome is that the four members of the Pacific Alliance, which are Chile, Colombia, Peru, and Mexico, the four of them have regulated in law the principle of net neutrality, and that had as a result that the Pacific Alliance, as an organization and in the trade protocol, they set the principle of net neutrality in an international treaty, setting an important precedent in international public law. That’s why our first presenters will present the paper, Net Neutrality Exceptionality, a look into the Pacific Alliance countries during the COVID-19 pandemic and lessons for Asia-Pacific economies. This paper is now in press in the framework of the call for proposals of the UN Economic and Social Commission for the Asia-Pacific, UNCTAD, the United Nations Industrial Development Organization, and ARTNET. And this call for papers is called Unleashing Digital Trade and Investment for Sustainable Development. The presenters of the paper, I will present every speaker when the time to talk arrives. Our first presenters are Professor Javier Cรกceres-Bustamante, who is also one of the authors of the paper. She is an instructor professor at the Institute of International Studies of the University of Chile, and also a PhD fellow at the London School of Economics and Political Science. And Professor Felipe Muรฑoz, an associate professor of the Institute of International Studies of the University of Chile. The two professors and I are the authors of this paper, which will start the conversation of this principle in our region. So Professor Cรกceres and Professor Muรฑoz, thank you very much for joining us online. And please, whenever you want, you can start and present this research topic on net neutrality. Thank you, Ignacio. Good afternoon and good morning to all

Javiera Cรกceres Bustamante:
of those participating either on site or virtually in this panel. I don’t know if you can see my PPT, because I asked Piero if you could project it. I don’t know if, Piero, yeah. Yeah, now we can see it. You can see it? Okay, perfect. Okay, so first, I would like to thank you, Ignacio, for convening such an interesting session. It is my pleasure to share with you the finding of our research project, as you were saying, specifically on net neutrality, exceptionality, and the Pacific alliance, and how, from this experience, some lessons can be taken for other economies, including those of the Asia-Pacific. I’m going to be presenting. Felipe is here too, but I’ll be the one presenting. Next, please. First, we’ll start with an introduction that you can see on screen. While it’s not new that the information and telecommunications revolution has changed the paradigms of production, consumption, and even social interactions, the COVID-19 pandemic allowed us to witness the extent of the internet as an essential tool for individuals and businesses. So, in a moment dominated by restrictions on social distancing, So, in a moment dominated by restrictions on social distancing, digital tools enabled people to connect and collaborate with others across the globe. During the pandemic, as we all know, the importance of the internet as an essential tool for people, businesses, and the digital economy has surged. So, several activities turned into digital environments as a consequence of the pandemic, ranging from, for example, remote working and education to the rise of e-commerce, or the increased use of digital platforms, too, for social communication and leisure activities. So, in this context, our research focuses on studying how the Pacific Alliance economies have regulated and managed the principle of net neutrality during the COVID-19 pandemic in order to draw some ideas that may provide insightful information for policymakers or using net neutrality principle during exceptional circumstances. Next, please. Now, this is a bit of kind of a theoretical framework or literature review. So, I don’t know. Yeah, there. According to Tim Wu, internet service providers, ISP, should be required to treat all internet traffic equally without discriminating or charging different based on user, content, website, platform, application, type of attached equipment, or method of communication. So, this is the basis of the net neutrality principle. If not, as this author argued, internet service providers could become internet gatekeepers, controlling access to information and stifling competition and innovation. So, this principle seeks to prevent internet services providers from blocking or slowing down access to websites or applications or charging consumers extra fees for faster or prioritized access to specific sites or applications. So, net neutrality can be defined as essential to ensure that the internet remains an open and level playing field where all users have equal access of information and to services. So, for this reason and to provide stronger regulatory framework, we can see that some countries have already decided to incorporate the net neutrality principle as part of their negotiating mandates in their free trade agreement negotiations, including, as the case of analysis that we are presenting, the economies of the Pacific Alliance. Next, please. Here we see a little bit more of information regarding net neutrality and COVID. So, as I already mentioned, the use of the internet skyrocketed due to the rise of COVID pandemic. So, the population used the internet for communication and leisure purposes, including video calls and streaming services. Various activities moved into virtual environments. So, the increasing usage of network limited the capabilities of internet services providers to provide the required broadband width. So, this problem was particularly relevant in developing economies and rural areas and for those who did not have the access to broadband land connections or latest generation mobile connections. So, to ensure that citizens get access to the internet, particularly those that are digital and digital enabled services considered critical, governments impose different measures and policies like zero rating or the price discrimination between digital packages in which companies may discriminate regarding the price they will charge for a specific content prioritization. Both type of measures that could be understood as inconsistent with net neutrality principles. Next, please. But also, there is important to see here that net neutrality should not only be seen as a technical issue related to the governance of the internet, but also as a tool for development. So, while there is no specific reference to the net neutrality principle within SDGs, it can be stated that the net neutrality principle might promote more equitable access to the internet, but not allowing discrimination concerning access to various contexts distributed in a digital environment. So, hence, the relationship between net neutrality and the SDGs becomes significant as the internet has proven to be an essential tool for achieving SDGs. Here, we can see on screen some examples of how net neutrality could help the achievement of SDGs are shown here. So, we have first, for example, that we might think about SDG 4, which is quality of education and the need to access educational resources and online learning platforms, for example, or SDG 8, which is decent work and economic growth and the increased job creation in digital environments. So, we can see that both activities are dependent on the access to the internet for which any discrimination by internet services providers could hinder the capabilities of the population to participate in that. Next, please. Thank you. So, the Pacific Alliance becomes an interesting case study as the regional bloc has established as one of its main working objectives the construction of a regional digital market. Various presidential and ministerial declarations and roadmaps have been elaborated towards achieving this objective in which those instruments express the cooperation on net neutrality is necessary to, I’m quoting, create an enabling environment to promote the exchange of digital goods and service. So, moreover, during the amendment of the additional protocol of the Pacific Alliance, its trading instrument within the alliance, the following provisions were adopted. So, I’m going to quote here article 14.6 of the commercial protocol, which is part of the telecommunication chapters, which says that each party shall adopt or maintain measures to ensure compliance with net neutrality. So, while countries may have their own measures to achieve this objective, the common goal of net neutrality is committed at the regional level within the alliance. Next, please. I’m sorry for this slide and the following one. I know it has a lot of information. I think we can afterwards share the presentation. So, I’m just going to summarize this. We can see here that the four countries in the Pacific Alliance have implemented policies looking to ensure net neutrality. So, our research found that Chile was the first country to adopt these measures. To a large extent, the other three economies have replicated the Chilean model with small variations. What is most interesting here and relevant for our discussion today is the final column, because here we can see information on how these countries have addressed exceptionality during COVID-19 pandemic. So, it is concluded that while existing laws on net neutrality are imposed, there is policy space allowing countries to implement exceptionality measures to ensure that access to critical digital services, as education or health related, was possible during the pandemic. So, prioritization was possible under the event of the pandemic, supported by the World Health Organization, as happened also in Colombia. And we also see that traffic management measures for emergencies were also put in place. Also, for example, Mexico defined that this kind of exceptions could be grinded if there was a risk to the integrity and security of the net network or private communication of users, exceptional technology congestion, and also emergency and disaster situations. So, therefore, net neutrality principles was not incompatible with an emergency situation. Next, please. So, this analysis takes us to Asia-Pacific economies. So, while the use of net neutrality has been widely discussed for countries in the Asia-Pacific, most economies in this region have not yet implemented formal regulations regarding net neutrality. So, there are various reasons for this lack of regulation, being one common acknowledgement that the stakes are high for both consumers and industry holders, and states don’t want to lose the possibility of controlling traffic on the internet. Nevertheless, we can find some cases in India, Japan, and Singapore, for example, where these countries have already implemented in different extents net neutrality regulations, as you can see in this slide. So, as I mentioned before, I know I don’t have much time left, so I’m going to afterwards share this presentation. Next slide, please. Here, we can see the comparison between the three main integration processes. We have the Pacific Alliance, APEC, and ASEAN. So, as previously mentioned, a significant development at the level of binding instruments and working documents with a focus on the regional digital market, all of them addressing net neutrality, has been taking place in the Pacific Alliance. In the case of APEC, we can see that net neutrality has been highlighted in recent years in relevant working documents, which may eventually lead to a leader’s declaration, but still no further progress has been achieved. While in the case of ASEAN economies, there is no relevant advances to establish that the net neutrality principle is part of their agenda, pointing out the differences between member economies, too. Next, please. So, to wrap up from our research, it can be stated that members of the Pacific Alliance began with the incorporation of the net neutrality principle in the trade protocol of the Alliance. So, in turn, the adoption of the principle by the four countries, not only closing time, but also their connection points, have been highlighted in key issues to promote the digital economy and in the intra-regional trade as they intersect in subjects such as traffic management measures, transparency, compliance mechanism, and references to international technical standards. Regulation details regarding traffic management measures in a way that would allow the adoption of measures to prioritize traffic and data for essential services in time of emergency, such as the pandemic. So, in this regard, the four regulations stand out for the level of detail and for the instructions to which internal services providers must adhere, being able to manage data traffic in order to ensure the continuity of critical services. Members of the Pacific Alliance made progress in joint discussions that led to the reform of all telecom legislation and subsequently set the first multilateral precedent for the incorporation of net neutrality principles in an international treaty. So, the Alliance practices align with both members’ digital trade policies. Just a couple of more ideas before I finish. The regulation of net neutrality within Asia-Pacific economies has been a matter of divergence, and some countries have built regulations and frameworks to address this issue. In contrast, others have not worked on this topic, so while the topic has been covered in some preferential trade agreements such as CPTPP, it has not been covered in others such as, for example, RCEP. Next, please, just to conclude. Here, I think I’ve already mentioned everything related to COVID-19, so I’m going to focus on the last two points. So, we see that the Asia-Pacific region, particularly APEC and ASEAN, has discussed the concept of net neutrality at a multilateral level. However, the experiences in local regulations are still scarce, and many organizations or forums have focused more on the declarative sphere rather than actually developing and creating regulations. The Pacific Alliance has offered an unusual normative and political experience. It has difficultly developed its binding discussion and worked instruments on net neutrality, so I think that this dissemination of information can help us build best practices in the Asia-Pacific region. Thank you. Next, please. Thank you, Ignacio.

Ignacio Sรกnchez Gonzรกlez:
So, thank you very much, Javier, for that excellent and clear presentation. Now, I give the floor to Olga Cavalli, who is the National Director of Cybersecurity in Argentina. Olga, thank you very much. Thank you. Thank you for inviting me. Very interesting

Olga Cavalli:
investigation and the outcomes of this initiative. I was wondering, Pacific Alliance, because I’m ignorant of how many countries do make part of the Pacific Alliance, if you can tell me. Chile, Colombia, Peru, Mexico. So, the one side that you have mentioned, Chile, Colombia, Peru, Mexico. Okay, very interesting that you gathered together to have a treaty, which I understand that it’s binding to national regulations, which is important because sometimes we get together and we do declarations which are perhaps aspirational, and then it doesn’tโ€ฆ I think it’s very important that we have this kind of discussion and we have to reflect what really has an impact at the national level. Argentina doesn’t have a specific regulation on net neutrality, although the national law on digital services, I would say, national digital, establishes two articles that guarantees the network neutrality for the services in place. I think it’s very important that we have this discussion and we have to reflect what really has an impact at the national level. I think there are a lot of philosophical questions about this issue of net neutrality. Sometimes I think it’s a bit aspirational and perhaps it’s difficult to think about it in a world where most of the traffic is increasing and really concentrating, especially in streaming services. If you look the way that the Internet traffic has been distributed, it’s not the same. It’s not the same as streaming services. It’s not the same as streaming services. It’s not the same as the delivery services. At the same time, you have the content delivery networks that deliver most of this content into directly to Internet exchange points. So at a point, trying to do regulations about network neutrality with a reality that most of the content is being distributed, it’s not the same as streaming services. So I think it’s just a question that I make myself. In any way, I’m putting a doubt in what you’re doing. Sometimes I wonder if it’s something aspirational and really we can achieve. At the same time, thinking about mobile services, the practice of, for example, in a package that you buy, you have a specific law that says that you can’t send messages to a particular country, but you can send messages to the bound of services like WhatsApp or some, not all messaging services like WhatsApp, but not every messaging. I wonder, that’s a common practice. I know that in some countries it was not allowed, like Chile, because you have a specific law on network neutrality. I wonder if it’s something that we should really struggle to achieve or just bear in mind that it exists. At the same time, I have a question for those who made the research. The inclusion of internet of things in the mobile networks, like, for example, 5G, will include a new way of treating the bandwidth of the network, which is called network slicing. Some people think that network slicing is a way of not respecting the network neutrality, because you are treating differently different types of traffic, but it makes sense in relation with the service. I mean, you cannot delay an autonomous car, the streaming of that car, because it may hit someone and do an accident, but you can delay a service. If you delay a service, it’s not a service. If you delay a service, it’s not a service. You can delay other traffic, I don’t know, perhaps streaming or music or broadcast of television or radio. So, then, some type of traffic in the networks that are being developed now, whether in 5G, or I also think about the internet of things, it’s a way of respecting the network neutrality. If you are thinking about objects that are connected, and they are providing critical services to users. So that’s a question for those who have made the research, and I congratulate you for what you have done. Thank you. Thank you very much, Olga, for your inputs. Now, I believe that you mentioned, for example, zero rating, and some technologies that are developing. I don’t know if Piero… Zero rating didn’t come to my mind, because I’m totally jet lagged. I was finding the words in my mind, and it didn’t come. Thank you for reminding me.

Ignacio Sรกnchez Gonzรกlez:
I don’t know, Piero, if I’m speaking on behalf of you, but I think that Piero, who is our next speaker, might refer to zero rating. I’m not sure, Piero, if you are not going to, don’t worry. I’m not sure if you are going to, don’t worry. I’m not sure if you are going to, don’t worry. I’m not sure if you are going to. So, Piero, he works in the undersecretariat of international economic relations, and he will present a commentary and reflection on the Chilean experience, negotiating net neutrality within the Pacific alliance. Piero, thank you very much, because also, there’s a lot of time difference between here and the other side, and it’s a lot of time difference between the two sides. So, Piero, if I may ask you, I hope you can hear me?

Piero Guasta Leyton:
PIERO JARAMILLO-SANTOS Yes. PIERO JARAMILLO-SANTOS Everything is okay, technically? First of all, I would like to thank you, Javiera, and Felipe, for inviting me here. My first comment are in general that I think this effort is very good. For me, it’s very important. It’s key to start discussing this type of issues. And I think it’s very important to discuss this type of issues, because, particularly in the trade work, these type of topics are very high behind more more famous topics, so it’s always good to discuss this. Particularly because we are in a stage on trade that the civil society is asking why we negotiate, why we are working these topics. And I think that we need to start discussing this. And I think that we need to start kind of defending the technical aspect that are born with internet. For example, free data flows, net neutrality, et cetera. It’s not something that we need to implement in the future, but something that we need to protect, if you will, in trade agreements to continue that. And I think that we need to start talking about that, because we already have that, yes, that is true, but the idea behind this is to protect these issues. Regarding this specific topic, I will talk again, I’m sorry for repeating myself, about the trade perspective, and I will connect that with what Ignacio was saying in the sense that the view from trade is kind of, I don’t want to say that it’s a bad view, but it’s kind of a bad view, because it’s not the trade economy, but it’s the competition, the opportunity for companies to export. So in our view, for example, it discusses a lot the new measures that some countries took during the pandemic. In the case of trade, it’s kind of allowed, because specifically in the case of Chile, we follow the idea that we need to have a regulatory. So for example, during the pandemic, you can prioritize health services or any other service, but you cannot discriminate between different providers. Same idea with the streaming, same idea with email providers, same idea with all the areas of the internet economy. Okay? So in that sense, I think the current measure of all trade agreements allow that, because, again, our focus is non-discriminatory. The idea is to all have the opportunity to be able to enter the market. And that is very important, for example, in the case of the pandemic, there were some initiatives or platforms that, for example, Chile developed or Colombia developed, and the idea was to be able to offer that solutions to Pacific Alliance in the same conditions. So in our view, we don’t need specific measures for that. Because it’s already available, and that is the main idea on the main objective on trade perspective. Regarding the topic more in general, for us, it has been very useful. I remember some early studies from specifically from one of our ISPs talk that it was very useful to have this kind of policies, because you make the market more attractive, more competitors can enter the markets, and be able to reduce the prices and allow more data processes and services to be available. In the case of Chile, we are adopters of almost everything technological, and we have almost access to everything that is currently on the Internet. We have some particularities that, for example, Asian products that maybe took a little bit of a long time to enter other markets like Europe or the U.S. are very easily entered to Chile. So I think it has been a good practice from Chile. I know that Internet has been evolving. There is the old discussion about autonomous cars. I’m prioritizing that. The health issue. But I think we need to be more aware of the health issue. But I think we need to see how everything is evolving regarding, for example, streaming. There is a lot of work regarding the broadband they use. There is a lot of competition about algorithms that compress images. Probably it’s not broadband is not much important as the latency of the connection. So I think that is another point of discussion regarding specific health and autonomous driving. But again, I’m closing for not extending too much my discourse. I think it’s a very good paper. It’s good that we continue the discussion. I know that maybe we need to evolve this more because, again, it’s a technical issue that we kind of established on the agreement that was already there and we hopefully we are not going to change in the future. But it’s important to highlight that, to see that this is important that we negotiate on other topics in the future. So thank you to the authors. Please, if you have any other questions, feel free to ask it. Thank you.

Ignacio Sรกnchez Gonzรกlez:
Thank you very much, Piero, for your inputs. Now I give the floor to Raquel Gattu, who is representing NIC.BR, NIC.BR. She will comment and reflect from the perspective of the technical community and speak about the situation in Brazil. Thank you very much, Raquel.

Raquel Gatto:
Thank you very much, Ignacio, Javiera, Felipe, for the study and for the invitation to be here commenting. I should start with a little bit of background. I’m a lawyer, so I’m supposed to be talking about technical issues and technical issues. So I will give a disclaimer. For the Latin American folks here in Japan, this is the worst time of the day. For all of us. So you’re all jet lagged a little bit more by the end of the afternoon because of the time zone. So forgive if some word is missing and if we do look a little sleepy here. But I will bring a little bit to my side in terms of commenting the regulation. And, in fact, in Brazil, net neutrality was and still is a big topic. It was interesting, and that’s what I want to bring forward with the questions Ignacio put in the beginning. Why it is important, right, and what has been done in each of the countries that we are talking today and in Brazil. So, as a Baril, when I say Barexcogenics, it maps Japan, it also maps the Brazil and Argentines. It also has a revision country also and Argentina and others. So for all of you that don’t know, Markus Review of Internet, the Brazilian Bill of Rights or the Internet framework, how we call it, and it was issued in 2014 after the Brazilian government passed a law that allows the use of Internet as a way to protect the rights of the user. So it’s a law, but it doesn’t go into the nitty-gritty details. And one of the perspectives of Markus Review is to bring the user’s rights protection before we go into the criminal penalties that comes with the problems that we know the Internet faces, but let’s protect the rights first. So the first challenge was to make sure that the Internet is protected. And the second challenge was to make sure that the Internet is protected before we go into the criminalization. And one of the big topics was net neutrality. The first challenge was to make the legislators understand what the Internet is and how it works into the technical perspective of open Internet working. And the third challenge was to make sure that the legislators understand how the Internet is and how it works into the technical perspective of open Internet working. And the fourth challenge was to make sure that the legislators can break this core, neutral core of the Internet, and that’s an interesting exercise that happened because Markus Review was discussed through public consultations, online consultations, and it was very, very interesting to see how people were able to have the click, you know, when they understand what are the consequences of their decisions in terms of regulations. And then, well, fast-forwarding, because we don’t have much time, the decision was to keep it as a principle of net neutrality, to the point Olga was making. Is it kind of aspirational? Is it kind of, you know, like, you know, like, you know, we have to keep it as a principle of net neutrality and we have to keep it as a principle of net neutrality. And then we go into the exceptions, and the study brings this forward very nicely in terms of, okay, so, that’s what we want. We want to keep it non-discriminatory. We want to have the packages flowing, whatever the content is, where they are coming from, where they are going to, and so on, and so on, and so on, and so on, and so on, and so on, and so on. And so, it begs for the questions, who is deciding what those exceptions are, and how are we going to control it or monitor it? I’m avoiding the word control, but it doesn’t come anything better in my mind. Anyway, and so, in the case of Brazil, Marcos Sevil has the principle of net neutrality, and it’s a very important principle, and it’s mentioned in the document, but then it mentions the possibility of exceptions in case of the need, for example, for a traffic trolling, I don’t know exactly how to spell that, and then to prioritize emergency-essential services. And then after two years, there was, oh, sorry, and then Marcos Sevil came out with the concept for the law to achieve the exception, but it’s also mentioned on the steps they’re going to define the rules that those exceptions are fit. So one of them is the Brazilian Internet community, where I work, so Nick is the arm of the CIG that is appointed there, and then it’s there, and then there’s the other, the other decree that sets out, lays out the implementation of those exceptions for net neutrality. For example, in case of that you have an overload with spam or with the DOS, the denial of services attacks. So in those cases, of course, you’re going to break, you know, and put rules in the middle, because you need to. So the Internet keeps working, and the network keeps doing what it needs to do. So anyway, those are the cases that there are already in the regulation, and that they’re being put in place. But then, and I’m fast forwarding, because this is the question that I have also for you guys, in terms of your studies, and perhaps you can expand or just tell if this is something you were looking for. in the future, but I think, as I said, after you take the principles and after you take, okay, you set out, those are the rules, then you need to think on the implementation and the monitoring. So how you are going to make sure that those exceptions are only exceptions and not become themselves the rules and how you are going to even prove that, because this is an issue, right? Even if you set out an oversight and ways to, and the regulation has some penalties for that, even there, how do you prove it? And so you go more into making it real than just aspirational. So I’m leaving like that and thank you very much. Thank you.

Ignacio Sรกnchez Gonzรกlez:
Thank you very much, Raquel. Just to provide a quick answer and then give the floor to Dilmar. Yeah, exactly. Effectively, one of the elements that our research, our previous research considered, we identified seven elements of net neutrality regulations. And one of those elements are, of course, the exceptions, because you can have net neutrality regulations, but if you don’t have the exceptions, or what will the exceptions be? Actually, the legislation won’t be operable, no, it can’t be effective. And besides the exceptions, one of the other elements is the dispute settlement bodies when net neutrality is not taking place. But I do think this dispute settlement was an element because there you can go when net neutrality is not being complied on the part of the ISPs. But I do think that it is a challenge, the monitoring. I think all the legislation we saw, they lack of a body that actively monitors that everything and everyone are complying with the principle. I have that quick answer for now. So now I give the floor to Dilmar Villena, who is the Executive Director of the NGO Hiperderecho of Peru. Dilmar, thank you very much for being here.

Dilmar Villena Fernรกndez Baca:
Thank you very much for the invitations and congratulations on the paper. Well, talking about net neutrality, right now it’s not the hot topic like is AI, but it’s always good to talk about it because it’s essential part of internet, net neutrality. And well, net neutrality as an ideal, as a principle, it’s also in our Peruvian regulation. But what happened during COVID-19 is like, yes, we have net neutrality, regulatory bodies implement net neutrality and demand companies to comply it. But pandemic starts and what we have in front is like, all the infrastructure isn’t ready for all the flow of information that it started at that point. Pandemic started and nobody could get out and everybody was using their laptops, their cell phones in order to work, in order to, I don’t know, to consume some entertainment. So that in Peru, that’s what happened. And I think that happened also in a lot of countries. And net were overloaded. So Peruvian government in that case, take, let operators to take some emergency actions in order to prioritize the flow of some packages to some specific types of website or services. Talking about server rating, for example, what happened in Peru. Peruvian government developed a webpage that is called Aprendo en Casa. Dot com, dot PE. And it was the center of information from all the students that couldn’t go to school, but in order to learn or in order to have access to some educational materials, they could get it through this page, Aprendo en Casa dot PE. And what happens in Peru that students that need to access to this information doesn’t have the data packages in their cell phones or doesn’t have the money required to buy these data packages. So in that case, most of the telecom companies, and also because of the Peruvian government ordered that, they put server rating to access to this educational webpage. And also Peruvian government, like it didn’t order, but it suggested the companies to prioritize some types of traffic to, I don’t know, traffic for people that are doing remote working or maybe people that are using Zoom or Microsoft Teams or all of these platforms to do their work while they stay at home. But during the time of the day, but what government didn’t take into account is when they did these regulations, they were talking more on the traditional form of working. But what happened with person that were making money through streaming services, like streamers, like were gamers and streamers, they couldn’t have the necessary package flow to continue doing some streaming and also working because for most of them, the video gamers, streamers, that was their work. So that was a point that also Peruvian government didn’t take into account when we were talking about net neutrality. Other thing that is very important is Peruvian legislations permits server rating when is it done in arbitrary form. So what is not arbitrary is not that clear, but in any case, it demands to telecom companies to be transparent about that. The transparency of companies in implementing this measure is a key concern here. How transparent have been telecom operators in Peru regarding network managing during the pandemic? One of the questions we ask whether these companies are adequately communicating their actions and decisions during this uncertain times, those are certain times of the pandemic. Among the major operators in Peru, just one at that time was complying with transparency demands on net neutrality. The other three ones didn’t comply with it. So we didn’t know to which platforms or to which websites telecom companies were treating or giving preference and treatment to their data packages. And that is what comes to my question maybe and what we can discuss about the paper. Through the Pacific Alliance, there is a lot, maybe there are two, three companies, big companies, telecom companies that are in our countries. Now I’m thinking about Peru, Chile, Mexico, maybe now Claro or Intel, I don’t know. Maybe we can think about how these companies comply with net neutrality requirements in each country and why do they comply more or less in each country. I don’t know. In the case of Peru, Intel, that is from Chile, is like seven or eight years in the Peruvian market. And actually, not much complies in net neutrality disposition, at least in transparency. But we have other telecom companies that do comply with, do comply it. So maybe we can think about it on the future. These are more the same companies, but why do they comply more or less in some countries when we talk about the Pacific Alliance? So that I think is gonna be for me. Thank you very much.

Ignacio Sรกnchez Gonzรกlez:
Thank you, Dilmar. And now that I’m hearing you talking about transparency and you were talking about the user rights and their regulations, I am recalling the other elements of the net neutrality legislation that we started last year and actually transparency and the user protection, specifically privacy, are one of the elements of this net neutrality legislations besides dispute resolution also. Now, before closing, I know that Professor Cรกceres who presented the paper wants to go deeper in one of the questions that were made in the panel.

Javiera Cรกceres Bustamante:
Thank you, Ignacio. Well, thank you all the speakers. It was very interesting and I’m taking a few notes. I don’t know if I have so many question or ideas to answer all of your questions, but I think it’s been super interesting. Regarding to what Olga was saying before regarding network slicing, I think we did not consider that as part of our paper. I think that would be something very interesting to concerning future research or maybe like a revised version of our article. But I think here it’s interesting to see how kind of like both the literature or one like how states are actually approaching this topic. It’s very changing in the sense that when we talk about network slicing, like completely we could say right away, like, no, it’s not compatible with network neutrality. But if we start thinking about it, we see also that this also depends on the application and intent. So we see that network slicing is used more like to support specific technical requirements, for example, low latency for critical services like autonomous driving, for example. So we see that in that case, we can see network slicing as a way to optimizing network resources for safety and efficiency. And in that case, when we think about net neutrality, we can see that, yeah, in this case, actually the problem with net neutrality is when we’re talking about this preferential treatment for a specific ASP. So in that case, I think we go back to what Raquel was saying because actually how we monitor, how we regulate both net neutrality and network slicing will be very important to see how actually net neutrality or network slicing is being used so that it doesn’t actually become a backdoor violation of net neutrality. So I think that’s something very interesting to consider. And also moving forward to what Mr. Dillmar was saying about this idea of how to comply with net neutrality. I think that’s also something that wasn’t as part of our paper in the sense of following, maybe go like a different analysis, even like mythologically speaking of going, talking to companies and actually comparing how they’re actually complying with net neutrality. But I think that’s also very interesting to consider in future research. So thank you very much for all your comments. Thank you, Ignacio.

Ignacio Sรกnchez Gonzรกlez:
Yes, thank you, Javiera. Also recalling who decides the exceptions, for example, I was remembering that when we started the Colombian regulation in the decree, they say, okay, the net neutrality principle is established, but the traffic management, they can manage traffic while the ISPs comply with the ITU recommendation and they specify a specific recommendation. So if they comply with that, traffic management can be done. So that’s one of the ways that they rationalize the exceptions regarding the principle. Something else that I recalled on that previous research. I don’t know if anyone else has a comment or a question. Please.

Felipe Muรฑoz Navia:
Hello. Well, thanks for all the, for the panel, the authors of the paper. Very interesting discussion. I wanted to comment on what Olga said. Most of the network slicing with 5G is something new that’s going to stick eventually. We’ll have three types of slices. The one that most of us will use for sure would be enhanced mobile broadband. But there will also be using the same physical layer. That’s a nice thing of slicing is that you will be able to have different logical networks in the same hardware. So you will have massive machine type communication for IoT for having millions of devices and also for the, I would say very few cases at least at the beginning with ultra reliable low latency. To me, because what I’ve seen, I’m not an expert, I’m not a lawyer, I’m an engineer, and this has to do also with what Raquel pointed out, the exceptions. The law says the principles that should guide us, we don’t want arbitrary discrimination of the scheduler, of the way we traffic packages. But there’s also the same law usually says, okay, but this is going to be defined in a technical draft. That’s where the details go. And it’s interesting what Olga mentioned because I don’t see any technical draft today that speaks about discriminating not in the routing layer, in layer three, where usually we are all watching if there’s neutrality or not, but underneath it. So that’s something for sure we have to analyze and see what’s going on there because that’s where net neutrality, although in the routing is being respected, maybe underneath it is not. So that’s something for sure that it has to be updated in all the technical drafts that go with the laws in our countries. So thanks a lot, very interesting topic, thanks.

Olga Cavalli:
Sorry, that the regulation includes exceptions that enables good services for consumers and for services providers. But you must have all these things that will happen or are happening right now. So because the environment will change very much from what it was before, especially considering the amount of Internet of Things devices that will be connected, which is the number is really enormous and it’s happening now. So the idea is that the regulation is good for everyone and it’s not prohibiting things that are good, but consider them exceptions or part of, or modify or more broad. That’s very challenging with any, no me salen las palabras, pero estoy dormida. So with all the regulations that are related with technology it’s very challenging because you have to be ample, you have to be let updates and the changes in the services. So it’s a process.

Ignacio Sรกnchez Gonzรกlez:
Yes, indeed. That’s why there’s a common phrase, at least in Chile, that says the law always comes late and especially regarding technologies. So of course there is a challenge there. So thank you also for your input and the comments. Now I think we might be closing. We are on time. I want to thank all the speakers online and onsite for bringing the perspective of Latin America to the IDF of this year. So thank you very much. Really all of you online and onsite. Thank you for also the people who attended our session. Thank you for the participation. Thank you.

Javiera Cรกceres Bustamante:
Thank you, Ignacio, for all convening us today.

Audience:
Thank you. Gracias, Javier. Nos vemos. Ciao. Nos vemos. Bye-bye. Yeah, I don’t know. I think she has drafts. I think she has drafts. Yo solamente los traje para traer conversacion. Yo creo que… No, no, no. Creo que… Creo que…

Audience

Speech speed

186 words per minute

Speech length

55 words

Speech time

18 secs

Dilmar Villena Fernรกndez Baca

Speech speed

149 words per minute

Speech length

915 words

Speech time

369 secs

Felipe Muรฑoz Navia

Speech speed

181 words per minute

Speech length

352 words

Speech time

117 secs

Ignacio Sรกnchez Gonzรกlez

Speech speed

148 words per minute

Speech length

1297 words

Speech time

525 secs

Javiera Cรกceres Bustamante

Speech speed

166 words per minute

Speech length

2588 words

Speech time

938 secs

Olga Cavalli

Speech speed

209 words per minute

Speech length

978 words

Speech time

281 secs

Piero Guasta Leyton

Speech speed

242 words per minute

Speech length

937 words

Speech time

232 secs

Raquel Gatto

Speech speed

222 words per minute

Speech length

1170 words

Speech time

316 secs

Main Session on GDC: A multistakeholder perspective | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Amandeep Singh Gill

The Global Digital Compact (GDC) is viewed as a crucial tool for addressing global challenges, and it should be considered within the broader context of global issues. The completion of the consultation phase of the GDC, with over 7,000 entities providing inputs, is seen as a significant milestone. Efforts to enhance multi-stakeholder engagement and inclusivity are necessary, inspired by the Secretary-General’s vision on digital cooperation. Balancing multilateral processes and multi-stakeholder engagement is acknowledged as a challenge, but innovative approaches have been taken, such as involving stakeholders in sensitive discussions. Stakeholders are encouraged to engage with local member states to foster greater involvement. Areas such as the digital economy and development issues require greater emphasis and action. The policy brief on the GDC outlines a strategic vision, addressing the digital divide, human rights, and agile governance. Gender inclusion and youth participation are emphasized as important themes. Accountability and adaptability are vital for the digital future, and the fragmented landscape of digital issues calls for better coordination. Critical gaps exist in addressing issues like misinformation, disinformation, AI governance, and human rights accountability. The success of the summit of the future rests on raising the level of ambition, activity, and coherence in responses.

Paul Wilson

The internet plays a vital role in our society, offering stability, availability, efficiency, and scalability. However, it is often taken for granted and overlooked. Cooperation among all stakeholders is crucial to maintain the internet’s critical qualities and prevent fragmentation or compromise.

Multistakeholder internet governance is essential for the internet’s continued success. The Global Digital Compact (GDC), a proposed framework for global digital cooperation, should recognize and support this cooperation. Paul Wilson, a member of the technical community, emphasizes the need for ongoing global cooperation in internet governance, particularly within the GDC negotiations.

Addressing the current state of internet connectivity is another crucial aspect the GDC needs to focus on. Although significant progress has been made, approximately 33% of the global population remains unconnected, and 66% lack meaningful internet access. Building upon the current state of connectivity is necessary to ensure more people can benefit from the internet.

The internet’s growth is expected to continue, but challenges with capacity, infrastructure, integrity, and security must be addressed. Inclusivity is also important, as the concerns of marginalized communities, youth, and underrepresented groups should be heard in internet governance and the GDC process.

The Internet Governance Forum (IGF), which has been facilitating discussions for 18 years, should be focused on continuous improvement rather than reinvention. The IGF’s multistakeholder community is ready to discuss and enhance internet governance matters.

COVID-19 has highlighted the internet’s significance, as it enables communication, education, and job continuity during lockdowns. Lastly, addressing non-digital issues such as climate action, poverty, and hunger is essential for the internet to contribute to broader societal goals.

In summary, the internet’s stability and success depend on cooperation among stakeholders. The GDC should recognize and support multistakeholder cooperation. It should also address connectivity gaps, ensure internet growth, promote inclusivity, and harness the potential of the IGF. Additionally, the internet’s role in supporting humanity during crises and addressing non-digital challenges should not be overlooked.

Moderator 2

The Global Digital Compact Process has energized the Internet Governance Forum (IGF) community, attracting positive sentiment and drawing attention to the work of IGF and its national and regional initiatives. It has created opportunities for engagement and brought stakeholders together.

However, there is a need for greater clarity and forward-looking perspectives on how the Global Digital Compact can strengthen and expand the field of Internet Governance. To address this, a panel will provide additional insights and clarity on the future of the process, with the aim of enhancing Internet Governance and aligning it with the Sustainable Development Goals (SDGs).

Another important aspect that demands attention is the complexity of the two governance forms: multilateral and multi-stakeholder. It is argued that the complexity of these forms may be underestimated, and efforts are underway to foster their complementary nature. The goal is to ensure that both forms can effectively engage and support one another.

Improving governance, accountability, and cooperation within and between the multistakeholder and multilateral processes is also highlighted as a crucial need. There is a call to enhance these aspects for more effective and inclusive Internet Governance, aligning with SDG 16 (Peace, Justice, and Strong Institutions).

The Global Digital Compact process, along with the Summit of the Future, provides a specific focus on internet development and its intersection with broader governance. This focus closely aligns with SDG 9 (Industry, Innovation, and Infrastructure) to address the specific needs of internet development within the broader governance discussions.

Moreover, the role of governments as enablers of people-centered development, human rights, and inclusion is emphasized. The WSIS outcome documents describe the role of governments as enablers in creating an environment that enables these important aspects. This implies that governments play a vital role in shaping and supporting internet development in a way that encompasses human rights and reduces inequalities, aligning with SDG 10 (Reduced Inequalities) and SDG 16.

In conclusion, the Global Digital Compact Process has successfully energized the IGF community, bringing attention to their work and fostering engagement. However, there is a need for more clarity and forward-looking perspectives to enhance and broaden Internet Governance. The complexity of multilateral and multi-stakeholder governance forms is also highlighted. Additionally, improving governance, accountability, and cooperation within and between these forms is crucial. The Global Digital Compact process and the Summit of the Future focus on internet development and its intersection with broader governance, aligning closely with the SDGs. Finally, the role of governments as enablers of people-centered development, human rights, and inclusion is emphasized as a crucial aspect of internet governance.

Audience

The discussions on the Global Digital Compact (GDC) involve various perspectives from stakeholders. One argument is that the final stage negotiations of the GDC should remain open for contributions from multiple stakeholders. The EuroDIG community unanimously supports this stance and is ready to provide further inputs. The upcoming EuroDIG event encourages participation to gather stakeholder input for the future of the internet.

Another perspective is the role of the Internet Governance Forum (IGF) in implementing the principles and commitments of the Compact. The goal is to achieve a free, open, secure, and sustainable digital future. The IGF is seen as a key platform for inclusive dialogue and stakeholder participation, specifically for SDG 9 on industry, innovation, and infrastructure.

Civil society voices are also important in the GDC process. Some argue for more involvement at the global level, while others advocate for greater participation at the country level. The objective is to ensure inclusivity and address the needs of marginalized communities.

Stakeholder engagement and active involvement are crucial for innovation in internet governance. It is believed that effective governance can only be achieved when all stakeholders are directly involved. Therefore, the UN should shift from consulting to actively involving stakeholders in decision-making processes.

Transparency and public involvement in negotiations are important. There is support for public involvement in governance issues and greater transparency in the GDC process.

Inclusivity and stakeholder mechanisms are discussed in relation to challenges with certain member states. Questions are raised about how to include stakeholders when member states are not inclusive or unwilling to work with critical voices. The aim is to find mechanisms that ensure all perspectives are considered.

Digital inclusion and reducing the digital divide are also important in the GDC process. The focus is on bridging the divide and providing access to quality digital technologies and connectivity for all.

Gender equality and intersectionality should be considered in the GDC process. Some argue for a feminist and intersectional approach to create a gender-just world. This includes addressing environmental impact, promoting women’s leadership in tech, and protecting against gender-based violence online.

Energy consumption of the internet is a concern. There is a need to focus on reducing energy consumption while ensuring reliable internet access.

The role of the IGF and its relation to the GDC are discussed. The relation should be clarified to avoid competition for resources and attention.

Accountability mechanisms in global compacts and partnerships are another area of concern. Stronger mechanisms are needed, and developed countries should support the capacity-building efforts of developing countries.

In conclusion, the discussions on the Global Digital Compact involve various perspectives, including multistakeholder contributions, the role of the IGF, civil society involvement, stakeholder engagement, transparency, digital inclusion, gender equality, energy consumption, the role of the IGF, and accountability in global compacts. The focus is on creating a fair and inclusive digital future by considering the perspectives and needs of different stakeholders.

Raul Echeberri

The high-level panel on digital cooperation, created by the UN Secretary-General, highlights the significant focus on digital cooperation within the UN’s agenda. Raul Echeberri welcomes this and considers digital cooperation a central point in the Secretary-General’s agenda. However, there are concerns about the inclusivity of the Global Digital Compact process. Raul suggests conducting more consultations at the regional level and involving the private sector to a greater extent. The private sector’s diverse interests, sectors, sizes of companies, and regional origins need to be considered in the Global Digital Compact process.

Active participation and involvement in consultations are emphasized, with several governments working hard to organize them. Raul himself participated in some contributions. Preferred sessions and formats for consultations are those that allow for more comfortable community engagement rather than just submitting comments.

There is a need for more opportunities for non-governmental stakeholders to participate in the Global Digital Compact process, with reference to the 2005 summit involvement. The expectation is that innovations will improve the process, but no specific evidence is provided to support this claim.

The similarities between the Internet Governance Forum (IGF)’s key agenda topics and the issues in the shared paper for the Global Digital Compact are noted, validating the IGF as a valuable venue for discussing the compact.

A positive outlook on technology evolution is expressed, with the belief that technology should be embraced positively as it continues to evolve.

The argument is made for the need to speed up innovation in every country to achieve inclusive development. Technology is expected to play a significant role in achieving equitable development.

The Global Digital Compact is expected to inspire and bring hope, with inspiration drawn from the message of the Prime Minister of Japan regarding optimizing technology benefits while reducing risk.

Caution is advised against creating new bureaucracies in the compact process, as this may create additional barriers for the participation of developing and small countries. It is important to ensure equal opportunities for participation and contribution.

Existing venues like the IGF are seen as capable of effectively handling challenges, eliminating the need for increased governmental control. The argument is made for multistakeholder mechanisms in digital governance to allow for the full participation of all stakeholders.

The role of governments in creating enabling environments for inclusive development and accelerating innovation is emphasized. It is crucial to ensure that the positive impact of technology benefits everyone worldwide.

Lastly, there is a call for more stakeholder participation and the strengthening of the IGF. More opportunities for stakeholder engagement are needed in the process towards the future summit, with the recommendation to maintain the IGF as the central venue for dealing with the issues at hand.

In conclusion, the analysis highlights the importance of digital cooperation in the UN’s agenda, with the establishment of the high-level panel. Concerns are raised about the inclusivity of the Global Digital Compact process, and the involvement of the private sector and active participation from all stakeholders is advocated. Technology, equitable development, and government involvement in creating enabling environments are identified as key factors. Stakeholder participation and the strengthening of existing venues like the IGF are seen as crucial for effectively addressing the challenges of digital governance and achieving the goals of the Global Digital Compact.

Valeria Bettancourt

The Global Digital Compact process has received criticism for a lack of clarity and timely information provision, which hampers meaningful engagement and participation of civil society actors. There is a need for the Global Digital Compact to establish clear linkages with existing processes as the scope of Internet-related public policy issues expands and the distinction between digital and non-digital becomes blurred. Inclusion should be prioritized in the process, considering the social and economic impacts of the global pandemic. Efforts must be made to prevent the exclusion of those who are most affected by digitalization, and to challenge perspectives that maintain the status quo.

Addressing digital inequality and injustice is essential to ensure an inclusive digital transition and prevent developing countries from being left behind. Trade rules are used to weaken the digital rights of countries, particularly in the global south. International financial institutions need to make new commitments and big tech companies should be subjected to taxation to address these concerns.

The digital transition should prioritize creating public and social value, as well as expanding human freedoms. The successful implementation of the Global Digital Compact will require financial mechanisms and the strengthening of digital infrastructure skills and regulatory capacities for all countries.

The Human Rights Charter and the International Covenant of Economic, Social, and Cultural Rights should serve as the basis for evaluating commitment to an open, free, and secure digital future. Existing processes such as the Universal Periodic Review and the Sustainable Development Goals can be utilized to further this objective.

The Internet Governance Forum (IGF) should be strengthened to bridge the gap between liberative spaces and decision-making processes. Challenging the belief that big tech cannot be regulated is crucial. Global digital governance should establish conditions for equity and fairness. A feminist, sustainable, and transformative vision is necessary for a digital future that is open, free, and secure, and which promotes gender equality, reduces inequalities, and fosters industry, innovation, and infrastructure.

In conclusion, the Global Digital Compact process needs to address issues of clarity, linkages with existing processes, inclusion, digital inequality, trade rules, public and social value, human rights, financial mechanisms, taxation, the role of the IGF, and the need for a feminist and transformative vision. By considering these factors, the Global Digital Compact can work towards a more equitable and inclusive digital future.

Moderator 1

Upon analysing the statements made by the speakers, several key points emerged:

1. The first speaker argues that the Internet Governance Forum (IGF) plays a crucial role in facilitating discussions on global digital compact issues. They believe that the topics covered in the issues paper closely align with the agenda of the IGF, underscoring the forum’s value and relevance.

2. The second speaker advocates for embracing the positive evolution of technology. They argue that rather than resisting technological advancements, societies should adopt a positive approach towards them. The speaker believes that technology has the potential to significantly contribute to global development, aligning with SDG 9, which emphasises the importance of industry, innovation, and infrastructure. However, no specific evidence or examples were provided to support this argument.

3. The third speaker highlights the need to ensure that technological benefits are accessible to everyone globally. They emphasise the importance of achieving equitable development and reducing inequalities that arise from unequal technology distribution. This argument aligns with SDG 10, which focuses on reducing inequality. Unfortunately, no supporting evidence or specific examples were provided to strengthen this point.

It is worth noting that both the first and third speakers expressed positive sentiments regarding their respective topics. However, the lack of supporting evidence weakens the overall strength of their arguments.

In conclusion, the analysis underscores the significance of the Internet Governance Forum as a platform for discussing global digital compact issues. It also highlights the importance of embracing technology’s positive evolution and ensuring equitable access to its benefits worldwide. While the arguments put forth by the speakers are compelling, the absence of supporting evidence or specific examples diminishes their impact.

Bitange Ndemo

During the discussion, speakers focused on several key topics related to technology and innovation. They emphasised the significant role of the internet during the COVID-19 pandemic, particularly in teaching and empowering micro-enterprises to leverage digital platforms for business. This highlights the internet’s ability to facilitate continuity and growth in challenging times. The sentiment expressed towards the internet was overwhelmingly positive.

Another important aspect discussed was the need for regulation in new technologies. The speakers highlighted the rush of people toward regulating these technologies and suggested that The Global Digital Cooperation (GDC) could provide guidance to governments on how to effectively regulate new technologies. While the sentiment towards regulation was positive, the speakers noted the importance of open discussions on standards and regulations in the field of Artificial Intelligence (AI). This neutral sentiment indicates the need for careful consideration in establishing appropriate standards and regulations.

The positive impact of digitalisation and innovation on young people was also emphasised. The speakers acknowledged that digitalisation has enabled young people to leverage technology for innovation, leading to productivity improvements. This highlights the value of providing opportunities for young people to explore their potential and contribute to economic growth. The sentiment towards this topic was largely positive.

The discussion also touched upon the relationship between innovation and regulation. It was argued that innovation should be allowed to take place openly before implementing regulation. The speakers believed that innovation precedes regulation and should not be stifled by unnecessary restrictions. This viewpoint suggests a positive sentiment towards embracing innovation and allowing it to flourish.

Language barriers were identified as a challenge in achieving internet access and inclusivity. The speakers noted that even with 100% internet coverage, language differences can prevent individuals from fully utilising the internet. To address this issue, the speakers suggested leveraging AI technologies, such as Language Learning Models (LLMs), to overcome language barriers. The sentiment towards this topic was neutral, indicating a recognition of the problem without offering a strong opinion on the solution.

In terms of AI, the speakers presented a positive stance, viewing it as an opportunity rather than a threat. They highlighted how AI can eliminate errors in marking academic essays and reduce reliance on outdated theories and rote memorisation in education. This highlights the potential of AI to enhance the quality of education. The sentiment towards AI in education was positive.

The convergence of thought regarding the future of the internet and individual human rights was also highlighted. The speakers referred to a previous session on the declaration of the future of the internet, which addressed similar issues. This convergence suggests a positive sentiment towards aligning the development of the internet with the protection of individual rights.

In terms of policymaking, the speakers emphasised the importance of inclusive development and involving civil society in discussions. They shared personal experiences of benefitting from engaging with stakeholders and civil society as policymakers. The sentiment towards this was mixed, with a negative view on governments sometimes excluding civil society from discussions. The speakers advocated for more open and inclusive policymaking with stakeholder involvement, recognising the value of diverse perspectives in policymaking processes.

In conclusion, the discussion highlighted the essential role of the internet during the COVID-19 pandemic and the need for regulation in new technologies. There was recognition of the positive impact of digitalisation and innovation on young people, and the importance of allowing innovation to take place openly before regulation. Language barriers were identified as a challenge to internet access and inclusivity, suggesting the use of AI technologies as a potential solution. The speakers viewed AI as an opportunity and emphasised the convergence of thought between the future of the internet and human rights. They advocated for more inclusive policymaking with stakeholder involvement, recognising the value of civil society contributions. This comprehensive analysis provides valuable insights into the various perspectives and considerations related to technology and innovation.

Session transcript

Moderator 1:
Hello, good morning, and good afternoon and good evening for everyone online as well. It’s nice seeing so many people in this session. We are starting very shortly. We have still one speaker who is on his way, but I think, Henriette, we can start slowly. So hello, everybody. My name is Jorge Cancio. I work for the Swiss government, and I have the pleasure of being co-moderator of this session with Henriette Oesterhuizen. So welcome to this session about the Global Digital Compact, a session organized by the multistakeholder advisory group of the IGF, and the title of the session is the GDC and Beyond a Multistakeholder Perspective. For this, we have indeed a multistakeholder panel with us today. We have Paul Wilson from APNIC, from the technical community, who is coming. I see him there. Hello, Paul. Faster, Paul, faster. I’m sorry, this is perhaps Swiss punctuality, or Japanese, of course. We try to be on time here, and we have Valeria Betancourt from the Association for Progressive Communications, civil society, she comes from a Gulag country, Raul Echevarria from the private sector, also Gulag. Constance Deleuze, she will be joining us virtually. On video, she is from the Project Liberty Institute Academia, based in a Wyok country. Then we have also the pleasure of having with us Ambassador Bitange Ndemo, Ambassador to Belgium from the Kenyan government, who was very much involved in the excellent IGF of 2011 in Nairobi. And of course, we have the pleasure of having with us Amandeep Gill, the Under-Secretary General and Envoy for Technology of the UN Secretary General from India. So with this, I think that the session, we will try to have it as interactive as possible. We have broadly structured it in three segments. A segment on the process, a process towards the Global Digital Compact. We are in midst of this process, but still a way to come to the outcomes. A second segment about the content of the Global Digital Compact. What will be there in this very important document? And finally, what will come after once the GDC is adopted? What will be the follow-up and the review? And in each segment, we will have statements, short statements, two minutes each from our panelists. And then we will go to the audience. And this will be repeated in each of these three segments. And we will finalize with one-minute takeaways from our panelists. So, Anne-Marie.

Moderator 2:
Thanks very much. I think that’s a really good point, and I think we need to think about the future of the Internet Governance Forum. I think that’s a really good point, and I think we need to think about the future of the Internet Governance Forum. Thank you very much, Jorge. I don’t have much more to add in terms of introduction, and I think the GDC is the global digital compact is not new to us, and I think it’s just really worth reflecting on the fact that there’s been a lot of debate around it, there’s been a lot of concern about what it might look like, and I think it’s the moment that we have to acknowledge and to think about the future of the Internet Governance Forum. On the positive end, I think what we really need to acknowledge and actually celebrate about this process is that it has galvanized this community, it’s made the IGF think about its place in the world, and where this place is heading. It has opened up engagement. The Internet Governance Community has a tendency to become quite insular, and I think the global community has a tendency to become quite insular, and I think that’s a good thing. I think it’s a good thing that we have this community of other processes in the world that deal with bigger and broader issues that also intersect with the issues that we deal with. And then I think it has also brought us to the attention and the work that has been done within the Internet Governance Forum, within the national and regional IGF initiatives, to the attention of people that were not aware of it. And I think that’s a good thing, and I think it’s a good thing that we have this community of other processes in the world that are opening up, reflecting, and engaging. So I really look forward to this panel taking us on that path, providing more clarity, but also being really forward-looking on how this process can actually strengthen and broaden the work that has been done in this space. So we’ll go, we’ll start. I’m going to start on that end. I’m going to ask Paul to open up the panel. We’ll change the direction, but, Paul, can you please open for us?

Paul Wilson:
Thank you. Thank you, Henriette. Apologies to the moderator. Look, I do want to say that as governments move into the GDC negotiations that it’s just so important not to take the internet for granted. I mean the stability, the availability, the efficiency, the scalability, everything that is intrinsic to the internet layer and I’m speaking as a member of the technical community here so I’m talking about the internet as the layer on which everything else depends and it is almost invisible and it is very easy to take it for granted. But the thing is regardless of the GDC of course and what the process is, whether it’s a multistakeholder or multilateral or something in between, the internet can only continue to thrive, the internet as we see it and the technical community can only continue to thrive on the continuing cooperation of all of the relevant stakeholders and without that there are critical qualities of the internet that are at risk of or will inevitably over time become fragmented or compromised. I’d like to just remember that multistakeholder internet governance was not an invention of the WSIS in 2005, it was a discovery by the Working Group on Internet Governance that the multistakeholder nature of internet governance was a key and is still a key today to the internet’s success. I’d like to say that for the GDC to be also successful it needs to recognise the multistakeholder cooperation that has been with us for so long, including over the last 20 years while it has been under the microscope and also not take that for granted, because the thing is that cooperation of any kind and particularly not global cooperation as we see it here, it never comes for free. work on the part of everyone involved that’s costly and it’s challenging, and that it can also be fragile. I think it absolutely warrants recognition in this process, it warrants encouragement and it warrants support, and I really hope that’s the goal of the GDC, at least in terms of the objectives related to the internet. Thanks.

Moderator 2:
Thank you. Thank you.

Bitange Ndemo:
I think this comes at the right time and I think everybody by now understands that internet is very key to our lives. Going through COVID-19, we were able to teach throughout that year, which put people aside. We worked with micro enterprises to leverage some of the platforms to do business. So this is a very important space and GDC comes at the right time to perhaps give government directions with respect to regulation. We see people rushing to regulate new technologies at the moment. We hope that we can have such discussions through multi-stakeholders to provide the best of regulations, especially in AI. We also need to talk about standards across the world. So many things are happening, innovation, young people leveraging digitalization to innovate. We have seen productivity improvements and we need to create a space for conversations to ensure that all this happens as we move forward. I think I’ll stop there. Thank you.

Moderator 2:
Thanks, Pitangue. Raul.

Raul Echeberri:
Thank you very much, Henriette. First of all, I think that we should recognize and be very happy to see that this has been a central point. the agenda of the Secretary-General of the United Nations, so it’s very good to see that finally the topics that we discuss, the issues that we discuss here, go to the top in the international agenda. And there has been a consistent path since the creation of the high-level panel on digital cooperation, so those are very good news. With regard to the process itself of the Global Digital Compact, I feel really that we could have contributed more and better, and it shows the complexity and the difficulty of organising a really global and inclusive process. The world is very big and the diversity is also very, very big, and I had the feeling that we could have had more consultations, probably through more partners, involving more people, because a Mandeep team cannot do everything. But maybe we could have organised more events, consultations at the regional level, involving more people. I feel that there is a large part of the community that comes from the private sector, so the small companies, small private sector associations, that are not aware of what is happening. In fact, I was in Montevideo, Uruguay, two weeks ago, in a global summit of parliamentarians. Some of them mentioned the Global Digital Compact and the Tech Envoy and other things, but I realised that the majority of the people were not aware of the processes. I don’t know how to fix it at this point, but speaking about the process, I have that feeling that we could have contributed more than that. With regard to my final point, with regard to the private sector, it’s a highly diverse constituency. because there are diversity of interests, diversity of sectors, but also diversity of sizes of companies and the regional origins, so it is difficult to involve everybody and we have to work more on that.

Moderator 2:
Thanks, Raul. Valeria?

Valeria Bettancourt:
Thank you very much, Andriet. I want to use this opportunity to bring up some of the issues that civil society organizations, including the one that I am part of, have identified as critical in regards to the process. The aspirations of the Global Digital Compact as an opportunity to strengthen the multistakeholder approach have faded. This aspiration had to do with building and expanding on the principles adopted by the WSIS in terms of multistakeholder participation, acknowledging that multilateral and multistakeholder global digital governance are not mutually exclusive and that both are really necessary to respond to the different and distributed ways and spaces in which global digital governance is undertaken. So far, the trend has been the lack of timely information provision for a meaningful engagement and participation of civil society actors, including clarity on what the whole process is aiming at, what the format and outcome will be, and how the input provided through the regional and global consultations, the call for contributions, and the deep dives will continue to be used. Humanity and the planet are experiencing the social and economic impacts of a global pandemic resulting in emerging and exacerbated structural inequality and injustice, an overlapping crisis, including the unprecedented climate emergency. The expectation was that the Global Digital Compact would establish clear linkages with other existing and ongoing processes and spaces in the midst of a rapidly changing context in which the scope of Internet-related public policy issues keeps expanding and the separation of digital from non-digital is diffused. So no open, free, and secure digital future for all can be shaped by excluding the voices and realities of the most affected by digitalization of all aspects of life and allowing the predominance of interest oriented to keep the status quo. The GDC could replicate the model of the WSIS Plus 10 review in which the primary participants were governments, of course, in accordance to its intergovernmental character, but which also allowed the possibility of effective and real engagement of other stakeholders in the preparatory and negotiation process. Inclusion should be the norm, not the opposite, not the contrary.

Moderator 2:
Thanks, Valeria. Amandeep, as usual, you’re the one that has to be put on the spot.

Amandeep Singh Gill:
Thank you. Thank you very much, Anjit. And I think Valeria has set it up very nicely for me. I like this reference to the non-digital challenges that we face. a standalone product or a process, it’s part of the highway to the summit of the future where there are these different tracks on those urgent non-digital issues. The debt crisis, the need for reform of the global financial architecture, the need to progress on the SDGs, the need to build new frameworks for peace, the new agenda for peace track. So the GDC should be seen as part of those, that larger picture, and it indeed is coming out of the Our Common Agenda report where this is just one of the 12 important areas that are mentioned for the international community to rally around. Now, the second thing I want to say is that we’ve just come through the first phase of the process, and that was the consultations phase. And within the limitations of time and resources, you know, I have a very small team with a very small budget. I think the team has done a phenomenal job. The co-facilitators have done a phenomenal job of getting more than 7,000 entities to contribute inputs, not only those eight thematic deep dives and other consultations in New York, but also consultations in Geneva, in many other places, regional consultations in Africa, Latin America, and in Asia. And that continues. And later this week, there’ll be a consultation in Korea, in Seoul, for the Asia-Pacific region. So we will keep that up, that inclusive, open process of consultations, listening in, reflecting what is happening inside the room. That will continue. So in many ways, as you’ve seen in the Secretary General’s statement and in the policy brief on the Global Digital Compact, this is an opportunity to also push the multi-stakeholder paradigm into new areas, new venues, and enhance participation. In a sense, when there is some method to this madness, you look back at the high-level panel on digital cooperation, this is programmed inflammation. One of my yoga teachers talks about programmed inflammation. So you need to kind of get, if you want to get the ecosystem to the next level, because tech is not waiting, the challenges are not waiting. They are multiplying exponentially. So we need to take the ecosystem to the next level of agility, dynamism, responsiveness. So the Secretary General’s vision on digital cooperation is inspired by that. So this is the next level of programmed inflammation. So obviously, when you are pushed to grow, there is, you know, from the body and the mind, there is some lethargy, there’s some resistance. And I think this is where some of the kind of, you know, sometimes, oh, what is happening? Where are we going? Et cetera, those kind of questions come. But stay tuned in, participate, as you’ve been doing in a fantastic manner. Adi Sababa helped inform the consultation process. And starting with this IJF, we are going to be informing the negotiation phase. I’m glad to see the Ambassador of Rwanda join us with his team. So the co-facilitators would appreciate your active engagement going forward.

Moderator 2:
Thanks very much, Amamdeep. And thanks, everyone, for mostly keeping to the time limit. Our final speaker, Konstanz Baumler-Deleuze, is not with us. So I think, Jorge, we can go ahead and get input from the audience.

Moderator 1:
That’s great. Yes, as we said at the beginning, we’re trying to have this session as interactive as possible and not waiting for the audience at the end of the session. So we have the privilege of counting now with the intervention from Ms. Agnes Vaciukivei-Ciute. I’m sorry for the pronunciation. Deputy Minister of Transport and Communications from Lithuania. The floor is yours.

Audience:
So good morning, everyone. It was very interesting to listen to all the speeches. I think we all have the same goal for the future of internet. And I would like to intervene on behalf of the EuroDIG stakeholders community, which has been engaged throughout the United Nations Secretary-General’s process on digital. cooperation. It is important now for the final stages of the negotiations of the Global Digital Compact in the United Nations to continue to be open to multistakeholder contributions. I think all the panelists agreed on this approach. Following the Summit of the Future next year, the IGF should have a central role in the implementation of the Compact’s principles and commitments to action for achieving an open, free, inclusive, secure, and sustainable digital future for all. I had fruitful discussions with the EuroDIG’s community in Vilnius and here in Japan, in IGF, so I can confirm they stand ready to provide a European channel for further stakeholder inputs. So I’m very proud to announce and invite all of you to participate in EuroDIG, which will take place next June in Vilnius, Lithuania. And I hope that the next year, and as colleagues mentioned, this is only the first step. So I think from the audience and all the panelists, the drive is to know more about the whole process and steps ahead. And I hope that the discussions and negotiations for the next year will be very fruitful, and we will come up with a future of Internet we all want. So thank you very much.

Moderator 1:
Thanks so much for that input, for those thoughts, and for being on time. We have four mics here, and in the good IGF tradition, you can line up and speak. We have time for three or four, perhaps, speakers. You have two minutes. Please share your thoughts. I see the gentleman. And introduce yourself. Yes, please.

Audience:
This is H.M. Bhojlu Rahman. I come from Bangladesh Internet Governance Programme. We have been involved with the Global Digital Compact process from the beginning, and we have already participated in the deep dive under the leadership of UNTEC. Thank you very much for involving us. summit of the future, there is no civil society space. So I would appreciate if you could allow us, if you could provide some spaces for the civil society voices from the country level. Thank you very much.

Moderator 1:
Thank you so much. This is a direct input and I see a gentleman there, Jordan Carter, the floor is yours.

Audience:
Thank you, Jorge. Good morning, everyone. Jordan Carter from the .au domain administration, speaking personally. I agree with the comments about the need to be innovative in these processes, and I think that the multistakeholder Internet governance community has a lot of benefit it can add, but it shouldn’t just be seen as offering input on a consultation basis. I think the UN system needs to consider innovations that it can deliver to the negotiation process as well, and not just, again, consultation, but active engagement and involvement. I know given the nature of the UN and the multilateral system that that is a big thing to ask, but I think if we have a genuine belief that Internet and digital governance happens best by genuinely involving the stakeholders, not only to hear their points of view, but to help genuinely shape the decisions by being in the room, that is an innovation that could be done. And it isn’t necessarily an innovation because it’s been done before in the WSIS context and in other contexts. So my urging to everyone involved, to all of the representatives, particularly of member state governments who are here, because you are the key players in the UN system, is to take some innovations into this process itself to shift the dial from consulting us to involving us. Thank you.

Moderator 1:
Thank you so much, Jordan. And as a government representative, I take note of that, of course. Don’t be shy. Come forward to the mic. But of course. course, is there any intervention, perhaps, online? Anghiet is multitasking so well.

Moderator 2:
Total support for the comments from Jordan Carter. I mean, I think I’ll just add, and maybe a question for the panelists when they respond to this, is that are we perhaps also underestimating the complexity of two very different forms of governance, both of which are imperfect in their own ways, and both which require a fair amount of evolution and improvement, multilateral and multi-stakeholder, that we need to get them to engage and be more complementary? And maybe we’re still in the phase where we’re kind of head bashing, and we still need to move towards the innovation that Jordan was talking about. And is there an online comment? Nnena Nwakanma has her hand. Can she be unmuted, please? She wants to speak. Nnena, please go ahead. She’s still muted. Nnena, you could type your question if you wanted to. And then we could, I don’t have a host. Great, we can hear you. Please go ahead.

Audience:
Thank you, Anne-Marie. Hello, everyone. Just a quick one. As we go into the negotiation phase, we do understand that this is mainly governmental. And like someone has said, we would love for it to be more than that. However, my submission would be that regular updates on these negotiations need to be made public so that we can follow. The reason I’m saying this is that I am participating in Kyoto online. And while we might be happy with negotiations that will happen in New York, it is very important that GDC recognizes that the greater parts of the GDC community are neither in New York. are not online and may need to follow things in other ways. So my submission will be that while negotiations are going on, the summaries will be regularly updated on the site of the UN Tech Invoice. Thank you very much.

Moderator 1:
Thank you so much, Nnenna. And I think we have to keep moving because we have covered the timing for the first segment. But thank you so much for those interventions. I think-

Moderator 2:
Jorge, I just want to read one short question from Fiona Alexander. What changes can we see in the process going forward?

Moderator 1:
Okay, good question. Perhaps it’s something that panelists may weave in in their next statements. Raul, do you have a short intervention to that? Okay.

Raul Echeberri:
Yes, I think that’s what Jordan says is very important about the kind of participation and involvement. And this is, I don’t doubt that there were thousands of contributions. In fact, I participated in some contributions and there were several governments working hard in organizing the consultations. But clearly, it’s clear that we feel in this community more comfortable with this kind of sessions and formats of consultations than just submitting comments. And I think that’s what Fiona says and also Nnenna is crucial that toward the summit of the future, we have opportunities to participate for non-governmental stakeholders in the process as we did, or even better than we did in 2005 for which we would expect that we could improve the process and innovate in that sense.

Moderator 1:
Thank you so much, Raul. Maybe short reaction?

Amandeep Singh Gill:
Yes, so several good comments and I love the point about building on the innovation that are already there. on multi-stakeholder participation, this kind of how do we square the circle between multilateral processes and multi-stakeholder, not just participation, but deeper engagement. We don’t have the perfect answer anywhere, so I’m a student of international learning in a historical sense. We really don’t have a perfect answer, but we have innovations out there. The cybercrime treaty negotiations, the negotiations involving the chemical industry recently that UNEP facilitated, and the negotiations, even on difficult, sensitive issues like lethal autonomous weapon systems, where with some inventiveness, you found a way to bring in experts into the discussions. So the co-facilitators are here, and they are listening to all these suggestions, and I’m sure working with member states, they would find a way to make sure that this is as open, as inclusive, as engaging as possible. Nena made this point about briefings, so intersessional engagement with different stakeholders has been a part of the approach that’s been adopted during the cybercrime treaty negotiations. So I would like to add, in addition to the suggestion that we’ve heard, I’d like to urge you also to work with the member states that you live in, that you work with, so that you can get into the delegations, get to engage the delegations more, particularly the delegations in New York and in Geneva. So we have to work at this problem from several angles. It is, there’s no magic fix to this.

Moderator 1:
Amandeep. This is a great segue to the next part of our conversation, and as you said before, and as we commented, we are at the midst of this process. We have seen a policy brief, and recently we have seen the issues paper, a very summarized version of what the deep dives and the many consultations have brought on the table from the perspective of the co-facilitators. So perhaps, and of course this is a provisional state of the situation, what would be your point of view, of your panelists, of what is worthwhile having in the GDC, what is still lacking, what could be innovations to bring really added value, a new substance into this global framework on digital cooperation. Maybe if I may, I would start with you, Amandeep, and you can give us.

Amandeep Singh Gill:
Of course, issues in the digital universe are many, many, and you had to organize them, and I think that those eight issues are a nice way of organizing the substance. And there has been in the imports, in the commentaries, et cetera, there have been suggestions, how do you tweak this, perhaps we need a greater emphasis on the digital economy, the kind of data for development, digital for development issues that are emerging rapidly. AI already finds a good place in the current structuring of issues. Again, there is an upsurge of interest, and I’m sure there is some time before the negotiation phase starts, plenty of time for the co-facilitators and their teams to think about how to organize for the next phase. I don’t think there is anything missing, it’s just a question of emphasis. If you look at the Secretary-General’s policy brief, again, this was a challenge for us across the UN system, all the UN entities working to help the Secretary-General prepare that policy brief. How do we bring it down to a solid vision? How do we structure that vision? So there’s threefold framing, bridging the digital divide, accelerating progress on the SDGs. Second, addressing the harms online, protecting and promoting human rights, digital trust and security type of issues. And third, the governance side of it, the agile governance, the responsive governance side of it, with particular reference to AI. So that was one way to bring it all together to a strategic level. And then those different action areas, they followed the co-facilitators’ lead in terms of the structuring of the issues, principles, objectives and example actions under those objectives. Because it will not be enough to have only principles. We have a surfeit of principles in the digital domain. We need to move to action frameworks, to commitments, and a way to follow up on those commitments. That is the potential for value addition from the Global Digital Compact.

Moderator 1:
Thank you so much, Amandeep. So we have more flesh on the bones and more flesh to react on. Valeria.

Valeria Bettancourt:
Thank you, Jorge. Well, global digital cooperation is at a crossroads. The gains of connectivity are uneven and digital exclusion, including the gender digital gap, are preventing many to embrace the benefits of the digital revolution. So social and economic injustice and inequality present an urgent challenge to development and democracy. So if the agenda 2030 is to be realized, and if the Global Digital Compact is meant to contribute to it, bold and committed actions are needed to first take the benefits of digitalization to all countries and people. Second, govern digital resources in a transparent, inclusive, and accountable manner, protecting the public core of the internet. And third, may digital policies and law fit for catalyzing innovation that counts. We need definitely a paradigm shift, one that address digital inequality paradox. As more people are connected, digital inequality is amplified as all technologies converge into the larger phenomenon of digitalization. The threat that the digital revolution bypasses developing countries becomes more real. So this is not just about access to the internet. It is about the complex issues of quality of such access, affordability, and equal participation of countries in the global regime that set the rules of the game. And for people everywhere to have the skills to reap the opportunities of this paradigm. So it is paramount to understand that we have to bridge the gap between those who have technological and financial resources to use the internet and other digital technologies to transact, to prosper, to contribute to wealth of nations and others who don’t. So the powerful countries use free trade agreements to stifle digital rights of peoples and countries in the global south in particular. So trade rules are used to arm twist governments to hyper-liberalize data flows, take away local autonomy of public authorities to govern transnational. the monopoly corporations, and their algorithms, prevent the scrutiny of source code, and legitimise a permanent dependence of the developing countries on the monopoly corporations controlling data and AI power, so this kind of infrastructural dependence is equivalent to a neo-colonial extractivist order. The unfinished business of the WSIS can not be forgotten, and this is the reason why the WSIS is so important, and why the WSIS is so important, and why it is so important that the technologies that have emerged in the last two decades have to be addressed by the global digital compact. It is really necessary to enable political, regulatory, technical, technological, and financial conditions to increase the individual and collective agency and autonomy, and choice of people to connect to digital technology and spaces, as well as to ensure that people have the right to vote, and to participate in the democratic process. The WSIS is one base on human rights, intersectional and feminist frameworks to address the geopolitics of global inequality and injustice. The conclusive test for well-guided digital transition is in the public, collective, and social value it can create, and the human freedoms that it can expand.

Moderator 1:
Thank you very much for those thoughts, and, Raul, what’s your take?

Raul Echeberri:
Thank you, Jorge. I think that the issues paper that was shared very recently is a good collection of the points that have to be in the global digital compact, and it is very interesting to see that the similarities between the list of issues and the topics that are central for the agenda of IGF, so it means, between brackets, it means that IGF is a very valid and valuable venue to discuss those issues. What I would hope from, what I would expect from the GDC, I expect a positive emphasis in relation with technology. That’s the technology evolution will not stop, and we need humankind to embrace the technology evolution in a positive manner, so I would expect to have a message of hope and a call to speed up innovation in every country around the world, and to work hard to really achieve that the technologies help to have a more equitable development across the globe. that the benefits of the technology evolution is reached to everybody in the world. So it could not be just a regulatory approach or over-regulatory approach to the technologies. This is what I would expect. And I think that the message from the Prime Minister of Japan yesterday was very inspiring in that sense. He said something, I don’t want to quote the Prime Minister, but he said something like we cannot ignore the problems that we have, but we can optimize the benefits of technology reducing the risk, something like that. And so I think that this is very inspiring and this is the direction that the CDC should have trying to bring really a hope for humankind that’s a positive. We cannot just, we cannot try to stop the technology evolution, but we have to work to make that the technology evolution is good for everybody in the world. Thank you.

Moderator 1:
Thank you so much, Raul. Very important thoughts. It’s really the task or one of the tasks of our time to really find that balance. Now Bitange, what’s your view on this?

Bitange Ndemo:
Yeah, I would explain this by giving just two examples. In 2007, one of the operators was looking for approval to allow digital money and that was what we call impersonal and we thought about it, there were too many spaces, government was fearing, but eventually took the risk and M-PESA went through a lot of inclusivity now when people talk in retrospect. If we can be able to understand here that innovation precedes regulation because what I’m seeing now with the prospect of AI is that people want to regulate before we are out there to do innovations. Having been a teacher for many years and having seen some of the applications in AI in education, some of us grew through the theories of Plato, the philosophies of Plato where children had to memorize everything and you come to get shocked what you memorize that is a theory that you needed to understand a couple of years ago. There are so many problems in education. One that everybody can relate to, that if you mark 30 essays and you give it to 10 other people, they would all make mistakes. But with the new technologies, such problems would go. If we can make sure that either we agree universally that we allow the innovations to take place, that we can be able to see, in this period of augmentation, especially in education, we could do much more to the world than just coming out and saying the propaganda about AI. It is bad. It’s going to take human beings and stuff. That’s what I can say about this.

Moderator 1:
Thank you so much, Bitange. Of course, education is one of the basic pillars also of this digital world. Paul, what’s your take?

Paul Wilson:
Thank you. I think the GDC needs to truly acknowledge where we are and what we have already and build from here. I mean, one of the objectives of the GDC is an inclusive, open, secure, and shared internet. But we still have 33% of people still to connect. And I’d say out of the 66 connected, a lot who still need what we call meaningful internet connectivity. And it means we’re still in growth. We’re still in growth of connectivity, and accessibility, and content, and capability of the internet. And so the growth pains of the last 20 years that we’ve all felt, that we’ve all responded to, that this whole process is aiming to address, these growth pains are going to continue with building capacity, and infrastructure, and integrity, and security. They’re going to continue and require our cooperation. And people have asked me, why are we still talking about internet governance? And the answer is because the internet is changing, and growing, and new challenges are. are coming along constantly, and we’ve got incredible innovations so far, famously across the internet, but also in this room and in this process. And so I really think that while we’re in growth, we need to continue to use and build on those innovations, not to rearrange the deck chairs wantonly or to simply overlook what we’ve got. I think we need to continue the work of bringing benefits of the internet to more people, and urgently, and so out of respect for all of the work that’s been done and recognition for all of the work that’s been done, but also for the sake of sheer efficiency and the urgency of, let’s say, of overcoming the digital issues and paying attention to the non-digital issues that Valeria has mentioned, let’s recognize this and build on it. Thanks.

Moderator 1:
Thank you so much. I see that Vitanga has an urgent reaction to that.

Bitange Ndemo:
I think he raises a good point. Even if we had 100 per cent coverage of internet globally, still a good percentage of people would not be able to be on internet simply because of language, and AI has come, all these LLMs, we need to enable these people through their local languages to be able to do something online. That is what I can call inclusivity.

Moderator 1:
Thank you so much, Vitanga. Andrea.

Moderator 2:
We have, my apologies if I look like I’ve been on my screen. I’ve been having connectivity issues, believe it or not, so apologies to online participants if I’ve missed your questions, but we have a hand from Omar Farouk. Omar, do you want to speak? I’m trying to unmute you here. I’ve lost my connection again. So can you unmute him, please? I’m Rita. And Omar, just … briefly introduce yourself and keep your intervention short. Is there another question? Amrita, just switch on your mic and please read the questions. And to our tech team over there, I’m afraid I can’t unmute people because my connection keeps dropping. Do you not? Thank you, thank you Amrita. Sorry about that.

Audience:
So this is a question from Jyoti Pandey. She asks, what is the mechanism to include stakeholders in the scenario of member states not being inclusive or not wanting to work with critical voices?

Moderator 2:
Thanks for that. And I see Omar still has his hand up and I cannot unmute him. Thanks a lot, Amrita. Shall we take another question in the meantime? Over there, Nigel, and introduce yourself and then we’ll move over here. And I had a bet with Jorge that there’ll be more people coming to the mics when I’m moderating the open segment. So please help me win that bet.

Audience:
Thank you. I am Nigel Casimir from the Caribbean Telecommunications Union which is an intergovernmental organization of 20 member states and territories in the Caribbean. And we are following the processes of the GDC development on behalf of the Caribbean, our member states essentially. However, we being generally small island developing states, that’s the kind of perspective that we are bringing into the discussion. And we’ve heard many of the panelists talk about people who may be not even aware of the process of GDC, the need for inclusion. We still have a third of persons not yet connected and a lot of those persons and people who are not aware are in like small island developing states and others. So I’m wondering if there is a special, any special effort being made to involve them. I mean, it certainly is our challenge and we are taking up the challenge. Even here at this IGF, we have a forum on it on Thursday. But I’m wondering to what extent in the development of the GDC, what efforts are being made to reach these specific ones, these specific types of countries and to get their inputs and to make sure that they are appropriately. We expect and keep in mind that the

Moderator 2:
context of the research community and the

Audience:
innovation process is a very important area to be taken account of. Thank you. Thanks, Nigel. Stella, we’ll go to you. Just make yourself. I have a question for the pin list to help out the administration of this project as we should move through the

Moderator 2:
process of review after we’ve submitted our initial

Audience:
submissions. Thanks a lot. Erik, we’ll go over you first. Hello, I’m from Rizomatica. Well, my question is regarding I was remembering the process for WESSES Indigenous Peoples were actively involved in the process of I’m not sure if it’s the right word, but it’s the process of incorporation of Indigenous Peoples for Indigenous communities where was visited by the government of Canada, I remember well. So in this process of how Indigenous Peoples are being involved with and incorporated, especially

Moderator 2:
considering that most of the challenges that you have tremendously many need to finally something they need to accept and accept as a principle and as I said, to things that are part of our

Audience:
eternal history in our history, I would like to know what impact this process will have on Indigenous communities in India and alsoWhich countries areะณะตW gone and if we are going to as well? questions aren’t given properly and discussions, so my question is that then I am concerned that GDC Folsk negatives in the policy brief and prepared by the co-facilitators of GDC. So my question is how to bridge the digital divide and ensure that all children and young people have access to quality digital technologies and connectivity, because it’s really important that the future, the future, as the children and youth are the future, so we must ensure their connectivity and access to digital technologies. And then additionally, my another topic is that GDC should introduce substantive developments on quality digital technologies and how to hold the private sectors accountable for its role in the digital world and ensure that it protects the rights and interests of children and young people. So thank you to the young tech envoy for giving me the opportunity to represent children and youth globally.

Moderator 2:
Thanks, Omar. We’re going to close the queue now, so no-one else come forward. to the mics, Emma over here.

Audience:
Hello, I’m Emma Gibson from the Alliance for Universal Digital Rights, or AUDRI for short. And alongside other organizations, we’ve been consulting women and people of diverse genders and sexualities all year about what they think should be in a global digital compact so that it works for them. And essentially, what they’ve been saying to us is that the principles of the GDC that ensure an open, free, and secure digital future need to be infused with a feminist and intersectional approach if we’re going to ensure a gender just world. So we’ve come up with a set of 10 principles, which we launched on Saturday at a conference. You can stop and ask me for a copy. And a feminist GDC would work for everybody. And this includes making sure that the GDC is rooted in existing human rights law, that it’s protecting people from multiple people facing multiple forms of discrimination, that it ensures freedom from gender-based violence online, which is something we were disappointed wasn’t in the write-up of the deep dives, alongside freedom of expression, which we were also concerned was missing from that write-up as well. Another principle is obviously ensuring internet access for all, dealing with harmful surveillance. We want to expand women’s participation in leadership in the tech center and in policymaking. We need to reduce the environmental impact of new technology. The GDC needs to ensure data privacy and adopt equality by design principles for algorithms and digital tech development. And finally, the GDC needs to set safeguards to prevent discriminatory biases. When we launched this set of principles on Saturday to a variety of governments from around the world, and one government suggested that gender equality and feminism should be an additional pillar of the global digital compact. So I would really like to ask the panel for their thoughts on that. Thank you.

Moderator 2:
Thanks a lot. And apologies, I do have to interrupt people. OK, we’ll have this person over here who’s still scribbling her notes. Sorry. Is it Liz? Yes, I was adding some points. Introduce yourself.

Audience:
OK, my name is Liz. So, I’m going to start with the first part of the panel, which is the digital inequality and the intersectionality of the two. So, my name is Arembo from research ICT Africa, and we’ve made two submissions to the GDC process. These two submissions, when we’ve developed them, we’ve consulted African stakeholders, some who are in this room, and some who are not able to join us. The things that were pertinent in those two consultations were to do with multistakeholderism, which I’m not going to talk about in this panel, but to do with multistakeholderism, and to do with the intersectionality paradox that Valeria has talked about, and I’d just like to add that, with the people who are mostly disadvantaged at multiple sectors of inequalities, those are the people that we actually need to take into account with this new GDC process, and what you’re saying is that not just even issues of gender, but also issues of access, and access to technology, and access to the internet, and access to the internet, and people are accessing technologies differently in terms of gender, where they’re placed, economic issues, and all that. So, with that, we also put a solution, things to do with data, data access, or data, measuring data on digital assets, because at the end of the year, it is sรฅdan time now where all the gigahertz companies start are starting. But you have data wires that are connected to lots of environments, than they are considered to be accessible, and so we, thereis a dependency of data when it comes to the position of Africa, how are we accessing our own data? How can we mass create new ways to ensure that we are not promoting concurrency to the rest of the world?

Moderator 2:
Thanks, Liz.

Audience:
Thank you. My name is Elisa Hever, and I’m from the Dutch government, and I’m a MAG member. The policy brief touches upon many important topics, though we are here today in Kyoto, and in exactly this building the Kyoto Protocol was negotiated. It was the first time that we internationally agreed upon acting for a sustainable environment. The energy consumption keeps on rising for the use of the internet, to connect all the people that still need to be connected, but also to have a faster internet and less latency. The policy brief only mentions in one bullet to develop environmental sustainability by design and globally harmonized digital sustainability standards to safeguards to protect the planet. It doesn’t mention the energy consumption of the internet, at least I couldn’t find it. However, in my opinion, we need to put more attention to this topic. For us to have a sustainable planet, we also need to decrease the amount of energy that we use on the internet. Thank you.

Moderator 2:
Thanks, Elisa. And our final contribution.

Audience:
Hello, everyone. I’m Alexandre Costa Barbosa. I’m a coordinator of the Brazilian Homeless Workers’ Movement Technology Sector. It’s the largest housing movement worldwide. It’s standing for 30,000 people. We have been doing in practice most of what the GDC is claiming for. We’ve been teaching in public schools digital literacy, digital technology education. We’ve been installing public Wi-Fi hotspots in the poorest regions in solidary kitchens. We’ve been developing ourselves platforms with democratic and cooperative-based platform governance to generate income for the last mile. So that’s precisely what it’s including everyone on digital technologies, as Raul mentioned. So my question, it’s something that is really somehow neglected in IGF’s agenda, which is the labor topic. But it’s in the policy brief of the GDC. It’s clear there. How can we share labor rights, right? So I’d like to hear how, in the process, until the summit of the future, can we really ensure fostering SDG number eight, it’s not only economic growth, it’s also decent work. How can we really ensure the participation of unions and other labor organizations in the development of it? Thank you very much.

Moderator 2:
Thanks. And the last contribution from over there. Sorry, Jeanette, the mic has been closed, but we’ll open. We’ll try to open once more.

Audience:
Hi, everyone. I’m Nermin Selim, the Secretary General of Creators Union of Arab, ICASA Consultative Estates. Thank you very much for all members of honorable stage. It’s just a comment, not a point of view, not a question. Allow me to add a point of view. I believe that one of the goals of the Global Digital Compact is to provide a safe digital environment for everyone. But I believe that it must include children from an early age in particular to protect them from electronic blackmail and violation of privacy. Therefore, we as a civil society organization contributed to this matter by adopting the initiative of one of our academic members who prepared a curriculum of digital safety and cyber security to provide a safe digital environment. So to know about this curriculum of digital safety and cyber security, we have a presentation on 12th of… Do you have a question for the panel? No, it’s just point of view to provide our curriculum of cyber security and digital safety for children to be as a part of the goals of digital global compact. So we invite you to take as idea about this curriculum to be generalized and all institutions, a large number of institutions that can take this curriculum.

Moderator 2:
Okay, thanks. Thanks and people can grab you. And I want to urge people when they take the microphone to ask a question of the panel. So apologies that we couldn’t take more, but we’ll try and open once more. I think we don’t have enough time for you to respond and then to go into the final segment of our session today, which is looking at the going forward, what comes after the GDC process. So I’m going to ask you to respond to the questions we’ve had on content. And I’m gonna add just one question to that, which is, we’ve looked a lot at the content of the GDC. Have we looked enough at the proposed content of the summit of the future? And is there perhaps a little gap here in how we as a community working with digital looks at our input at not just being focused on the GDC, but also other aspects of the summit of the future, such as the global agenda for peace. But so the panel, let’s start again. Shall we start with Amandeep? I think it’s your turn to start now. Looking forward, post GDC process, review mechanisms. What do you think can we do? How can we be innovative? But also if you can make some responses to the questions from the floor.

Amandeep Singh Gill:
Thank you. There were many, many questions. So I think we take a long time to answer all of them. Let me just try and group them in. to three categories. One is some of the specific interest groups, children, whether it is the small island developing states, and that point is well taken. In fact, many of the engagements have been around those kind of themes. Youth, for instance, working with Secretary General’s youth envoy, we’ve put together some consultations and Amur Omar, who spoke earlier, he’s been, he’s iconic in terms of youth participation in the GDC deep dives. The issues of sustainability and gender. If you look carefully at the issues paper, at the end of those thematic issues, the co-facilitators have very carefully articulated why those are cross-cutting strategic issues. So, Emma, you know, I was happy to join you on Saturday and you heard me speak about how the mainstreaming of gender on digital issues is an important goal for the Secretary General. And you should not only look at the GDC process, but what’s happening around it. This year’s commission on the status of women, CSW67, was an exciting opportunity because the theme was around digital and technology. We were able to make a lot of progress and that is going to have its own impact on the GDC process. Now, your point coming to this aspect of moving forward and I think that also featured in some of the questions, the process-related questions, and I love the title of this panel, GDC and Beyond, because we need to think about how do we take the GDC forward. As I mentioned earlier, you know, if the ecosystem is going to, hopefully, we have to have an ambitious outcome. So, if it goes… goes to the next level, then how do we make sure that it stays at that level and that we are organized in a multi-stakeholder fashion to follow up? So the Secretary General has presented some thoughts on that in his policy brief. They are meant to stimulate debate and discussion when the process resumes. I think the essential point, the fundamental point which he made in his remarks yesterday is that we need to pull things together in a better way. We need to make sure that we don’t again retreat into silos. And we need to make sure that there is accountability. That term came up in one of the questions, accountability of the governments or the private sector in terms of the kind of digital future that we want. So that debate is going to be interesting and exciting. It’s also going to be a little challenging. It’s part of that programmed inflammation paradigm. And I think, Paul, you also started to kind of talk a little bit to that. Because we can’t really rest on our oars. The Internet is growing. The user base is growing. It’s shifting. The data flows, if you look at where, what’s the quantum of data flows around the world. So you have new players. The majority of data flows are happening in a non-West European, non-North American context. Starting very recently. So how does the system adjust to these challenges? The advent of AI and in the future, perhaps ambient computing. So these are kind of interesting questions. And we need to make sure that we have agile frameworks, we have updated frameworks. And in that sense, again, WSIS plus 20 would be another opportunity to make sure that the ecosystem keeps up with the challenges. And we are able to handle this enhanced participation from across the globe in our existing forums and make sure that it’s meaningful participation. Governments and the private sector give it importance, land up, engage with other stakeholders, the tech communities. society, academia, and researchers, and help us to address the challenges in real time.

Moderator 2:
Thanks, and I know Amandeep went over time, but there were lots of questions to respond to, so, but I do want to ask people to keep to time. Valeria, do you have?

Valeria Bettancourt:
Is it on? Okay. Very briefly, in terms of the follow-up and review mechanisms, I think that the Human Rights Charter and the International Covenant of Economic, Social, and Cultural Rights should be the basis for assessing a stakeholder’s commitment with an open, free, and secure digital future, so any review mechanism should be related to existing processes, such as the Universal Periodic Review, the Sustainable Developing Goals, the reporting system around those, the review of the implementation of the WSIS action lines, among others. It should also take into account existing instruments and frameworks, such as the UNESCO Internet Universality Indicators, that also applies to other tracks of the Summit of the Future, because all of them have digital-related components. So, in order to be implemented, the Global Digital Compact has to put in place financial mechanisms and reinforce the commitment for the development of digital infrastructure skills, but also regulatory capacities for all countries to navigate the terrain. We need new commitments for the international financial institutions in the form of reparation for all the data that has been appropriated from people and their interactions, from nature and also from common heritage, including indigenous knowledge, as someone from the audience referred earlier. In addition, taxing big tech for global and national financing is a must. We want countries to be able to bring into practice the Global Digital Compact. And last but not least, the IGF should continue and has to be strengthened, and its mandate should be extended to facilitate the operationalization of global digital cooperation, but also bridge the gap between the liberative spaces and decision-making processes, and serve as a central space for multi-stakeholder engagement. Thanks, Valeria.

Moderator 2:
Raรบl.

Raul Echeberri:
Thank you. First of all, I think that the Global Digital Compact and the Summit of the Future, we have to be very careful, governments have to be very careful in creating new bureaucracies, and that makes it much more difficult to participate for developing countries, small countries, as Nigel pointed out, the complexity for participating in the global landscape for small Caribbean countries, among others, but also for other stakeholders that don’t have the power and the resources to participate in multiple processes. In that sense, I already said that the agenda of IGF is very aligned with the issues that will be part of the GDC, so we have to work in strengthening the IGF. Of course, the IGF has to continue evolving to accompany the evolution of the challenges, but this is a good venue, it’s a venue that has been very useful for everybody. And the UN has an important role in promoting the participation of more governments in the multi-stakeholder mechanisms, and actually, the UN is the organization that is best positioned for doing that. And I think that we have to take governments out of their comfort zone. At the end of the day, this will be conditioned by the intergovernmental decisions. The decisions will be by the governments, and so we have to make them, to help them to resist the temptation to increase governmental control or oversight in digital governance, but we don’t need more. more governmental control, we need more multistakeholderism. The issues are so complex that the only way to deal with the challenges that we have is with full participation of all stakeholders, and this is why we have to be there. This is why we have to participate in this process, and we need more participation of all the stakeholders to be disruptive and to, as I said before, to take governments out of their comfort zone.

Moderator 2:
Next role, Bitange.

Bitange Ndemo:
I think there is convergence in thought. We had a session on declaration of the future of Internet, which almost are similar issues enabling the freedom and taking care of every individual with respect to human rights, and I think they’re here. If we are able to look at such convergences a little more widely, we can be able to encompass or respond to all the questions that have been asked. Thank you.

Moderator 2:
Thanks very much, Bitange. Paul.

Paul Wilson:
Thanks, Henriette. I heard quite a lot of questions, and quite a lot of questions about inclusivity. Most of the questions were about inclusion, I think, about marginalised individuals and communities and small islands and youth and gender and homeless and children and others, and I also heard about inclusion in the Internet, in Internet governance and in the GDC process, so that’s a lot of inclusion that’s being asked about there. I think the fact that the questions that were asked can be asked here in this room and can be asked by the people who are directly concerned with those issues is a hint at the power of this model, of the IGF model. So this is the IGF. It’s not the GDC or the Summit for the Future, but I do think the answer is yes. answers to those questions of inclusion across the board, potentially in this room, because what’s happening here and what’s happened here for 18 years, if not perfect, and no one has said it is, can still absolutely provide the venue and the framework for what GDC apparently needs for follow-up of actions and objectives and reviews and so on, whether that’s by expanding the remit of the IGF or by replicating somehow, certainly by evolving it, but I really think the answers are here and do not need to be replicated. I mean, we’ve got this multi-stakeholder community here that’s ready, that wants to talk about strengthening and has done for quite a period of time. Eurodig, we heard, has called for it. Henriette said it. We’re ready for this. We’re looking for an opportunity for the IGF to provide its worth, to do its work better and further, and for us to exploit the potential that’s here in this room, the potential of the process and the people and the communities that are involved. So I think that’s where I would like to see the future, as I say, not as a reinvention, not as a rearrangement of deck chairs, but really as a way to just simply move forward and make things continually better. Thanks.

Moderator 1:
Thank you so much, Paul. I think we have time for three short interventions from the audience, be it online or be it here. I see Jeanette coming forward again. Now we have time.

Moderator 2:
Can we check with Amrita, because my connection has dropped. Amrita, if there’s an online … Maybe just read it. Good.

Audience:
Jeanette Hofmann, Professor for Internet Politics in Berlin, Germany. A lot of the issues that were addressed so far seem to be covered more or less by the IGF. So in a way, I think I echo Paul Wilson’s point about how the IGF and the global compact will actually be related to each other. We talked a lot about Internet fragmentation. We also need to worry about fragmentation and Internet governance.

Moderator 1:
Thank you so much, Jeanette. And please, we have to close the lines. We have the gentleman there to my left. Please introduce yourself.

Audience:
Thank you. I would like to get the opinion of the panel, when the Gambia, when we did our GDC consultation process, we involved all stakeholders, including the government, in doing the submissions, so I would like to know how you feel about that process, because we felt it was necessary to get our government input. Thank you.

Moderator 1:
Thank you so much. And we go to my right, please.

Audience:
Hello. Thank you for the panel. My name is Laura Pereira. I’m a youth delegate from Brazil. My question is, I believe that the critics that were resonated in the panel are a reflection of how the label of multistakeholderism has been applied to multilateral process or to process in general as a synonym of public consultation. As we know, as a community in internet governance, that the multistakeholder model must be more than that, however hard it is to put a fiction into practice, to quote Hoffman here. Is it possible for us to use the GDC model and choose the IGF opportunity to set an updated standard to allow the use of the multistakeholder as a label to the process? Can we develop updated standards to classify a process as multistakeholder or not multistakeholder? Isn’t that the agenda for all of us here at the IGF? Thank you all.

Moderator 1:
Thank you so much. We had…

Audience:
Hello. My name is Chat Garcia Ramelo from APC. I have a question for Amandeep. Two things. What would you see as a scenario of a failure for GDC? And on the other hand, what would you see as success like two years from now?

Moderator 1:
Thank you. Very concrete questions. And we have Amrita, please, from the online audience.

Audience:
Online comment from William Drake. He says he agrees with Raul. We don’t need a new bureaucracies on a new digital cooperation forum that competes with the IGF for resources and attention. We need to renew the IGF’s mandate and strengthen the process. This point has been made repeatedly throughout the GDC process.

Moderator 1:
Thank you, Amrita. And very shortly, the gentleman there to my left. Very shortly, please. Go. Very shortly. Go ahead.

Audience:
Thank you very much. We discussed a lot about the content of the global compact, digital compact, as well as the partnership. But what about the accountability mechanism after having the law or constitution? compact because we agree on many things, but the implementation part is always poor, especially the developed countries are not responsible to the developing countries to build their capacity for the smooth implementation of those compacts so that how the UN system and other agencies would be responsible and accountable in this matter. Thank you very much.

Moderator 1:
Thank you so much for keeping it short, very important point, and in the interest of time we have to go already to your final takeaways, if you can react there very shortly in one minute each of you to what has been said now in this last round. I would begin with Paul, please.

Paul Wilson:
I’ve said before that the internet deserves a Nobel Prize for how it served humanity during COVID and I’m really inspired by the plea by Valeria actually which was to recognise real issues facing humanity and I think COVID was a fantastic example of a real issue addressed actually not just by the internet but by the digital capacities of the world, medical science for instance in a major way, and I really think there are other non-digital issues which are pending right now, they’re existential for communities and for humanity, and those non-digital issues need to be addressed, if they’re not, if digital issues only are going to occupy us then let’s be sure as I said, I think third time, not just rearranging deck chairs but building on what we have on the innovations here in this venue and around the world to produce real non-digital outcomes because that’s what the planet actually needs right now. Thank you.

Moderator 1:
Thank you so much. Bitange?

Bitange Ndemo:
I think we have a real chance of coming up with a guiding framework for policymakers, governments and other stakeholders. This is the time to do it because we have seen the importance of the internet, we need to create a future that is more inclusive, we need to create a future that enables innovative programmes to come upon, but we must get a chance to deliberate those issues like we are doing right now. And as a formerly a policy maker, I benefited from discussing with the stakeholders, the civil society, it worked. Most governments sometimes push aside civil society into their discussions. But as you can see, there is so much we can learn from each other. Thank you.

Moderator 1:
Thank you so much, Raul.

Raul Echeberri:
Thank you, Jorge. Every stakeholder has a huge responsibility in this era, on these topics. Governments have a huge responsibility in accelerating innovation, in creating enabling environments for building new, more inclusive and equitable development models, and really creating avenues for making the technology impact in a positive manner in the life of everybody in the world. So this is a good opportunity for this discussion to reinforce that. With regard to the process, it’s clear that we need more opportunities of participation for stakeholders in the process toward the summit of the future and the adoption of the global digital compact. And of course, I echo everybody’s comments with regard to the need to strengthen IGF and keep IGF as the central venue for dealing with those issues after the summit of the future. Thank you.

Moderator 1:
Thanks so much, Raul. Valeria.

Valeria Bettancourt:
Thank you. I think we are all aware of the injustices of the current order, and we know the problem diagnosis already. We also recognize the power held by the few that control policy spaces. The silent consensus that we cannot regulate big tech has to be challenged. We need a political commitment, and we need member states to measure up. Global digital governance, including a global regime for that governance, should set the conditions for equity and for fairness, and in that way benefit. Everyone should benefit from the digitalization, and ensure that those benefits are distributed to ensure a dignified life for everyone. And any institutional arrangement decided in the framework of the Global Digital Compact must not walk the path of reinforcing the current in unjust order. What we seek and what we need is a feminist, sustainable and transformative vision for a digital future that is really and truly open, free and secure.

Moderator 1:
Thank you, Valeria. Amandeep, you have the last takeaway.

Amandeep Singh Gill:
Thank you. I like the last point someone made about accountability. I think there’s no doubt that the challenges are such that we need more action by more people. I think on that we can all agree. So the current level of action, the current level of response is not adequate. So we need to go to the next level. And it’s also important that we have accountability and we have justice in terms of the governance that the entry barriers to participation in the governance discussions are lower. And the point made by Raul about smaller delegations, there are 160 plus countries who shouldn’t be running from forum to forum and then figuring out what a whole of government perspective on digital looks like. So we need to make that task easier and make sure that people have agency over the digital transformation. Only a few countries, only a few corporations have the resources to engage on digital issues in multiple forums. So there is a fragmented landscape already. What we need to do is plug the gaps, just as in the Secretary General’s policy brief you see with that infographic, critical gaps on misinformation, disinformation, the accountability for human rights, the issue of AI governance. And there are ongoing initiatives like the IGF Leadership Panel to strengthen the IGF and to that gap. So that’s what we need today. And if you allow me a few seconds on the success of failure, just in one sentence, the failure is if we don’t use the opportunity of the summit of the future to raise the level of ambition, raise the level of activity, raise the level of coherence across our responses. Success is exactly the opposite. So we have to rise to that challenge. Thank you, Amandeep.

Moderator 1:
And I think really this community is up to that challenge. And thank you so much for being able, profiting from picking your brains, picking the brains of the audience, both here physically and online. And Henriette, please. Thanks. There isn’t

Moderator 2:
really time for closing remarks, but very briefly. I think when it comes to process, we have to abandon complacency. There’s a need for improving governance, for more accountability, as has just been said. We need that within this multi-stakeholder process. We need it within the multilateral. We also need more cooperation within each of these and between them. So let’s do this evolution and improvement together. On content, I think what is really challenging, but the GDC has put that into focus, is navigating the specificity of Internet development and growth and governance, but also how it intersects with broader governance issues. We need to do both, and I think the GDC and the Summit of the Future and the link with the SDGs is putting that into focus. It’s not easy, and we can do it, and the IGF is a very important part of that. I think in terms of follow-up, I just want to bring to us a phrase from the WSIS outcome documents, enabling environment. If you read the WSIS outcome documents, that’s how it describes the role of governments to create an enabling environment for people-centered development, human rights, and inclusion. So I think let’s keep that in mind, that it’s not just about the topics that we are discussing specifically in the GDC. It’s creating an enabling environment for not just dealing with current challenges, but also emerging challenges. So thanks to everyone for very good input, excellent panel, and apologies to online participants if we did not give you enough space. And to the MAG who organized this, thanks a lot.

Amandeep Singh Gill

Speech speed

156 words per minute

Speech length

2317 words

Speech time

891 secs

Audience

Speech speed

164 words per minute

Speech length

3268 words

Speech time

1197 secs

Bitange Ndemo

Speech speed

120 words per minute

Speech length

754 words

Speech time

376 secs

Moderator 1

Speech speed

119 words per minute

Speech length

1270 words

Speech time

640 secs

Moderator 2

Speech speed

185 words per minute

Speech length

1709 words

Speech time

555 secs

Paul Wilson

Speech speed

155 words per minute

Speech length

1274 words

Speech time

492 secs

Raul Echeberri

Speech speed

145 words per minute

Speech length

1345 words

Speech time

558 secs

Valeria Bettancourt

Speech speed

150 words per minute

Speech length

1436 words

Speech time

576 secs

Manga Culture & Internet Governance-The Fight Against Piracy | IGF 2023 WS #69

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moto HAGIO

Moto Hagio, a renowned manga artist, shares her insights on the qualities and perception of manga in society. She emphasises that the most important features of manga are interesting stories and appealing characters, which greatly contribute to its enjoyment and popularity. However, she acknowledges that manga was once seen as vulgar and looked down upon in society.

During Hagio’s childhood, manga was disapproved of in schools, families, and society at large. Parents often encouraged their children to focus on their studies rather than reading manga. Despite this disapproval, Hagio firmly believes that manga has great educational value. She asserts that manga provides valuable lessons about human emotions and relationships, which are often not taught in traditional educational settings. Hagio specifically mentions that she learned a lot about these aspects through reading manga, particularly the works of Tezuka Osamu, whose manga taught her lessons that were not generally found in society.

In terms of piracy, Hagio strongly opposes it and supports the reading and purchasing of officially published works. She emphasises the importance of creators receiving appropriate remuneration for their work, describing it as saddening and unjust when creators do not receive compensation. Hagio mentions that the revenue she receives from readers of her old works on the internet allows her to earn a living and invest in future works. She appreciates readers who choose to support official versions of her work and actively encourages anti-piracy measures.

Furthermore, Hagio proposes additional incentives for readers who opt for formal channels of manga consumption. She suggests privileges such as providing points or featuring the voices of artists as a token of appreciation. Hagio believes that these incentives can promote and encourage the choice to purchase manga from legitimate sources. This aligns with her stance that creators should be appropriately rewarded for their work.

In conclusion, Moto Hagio’s perspective on manga revolves around its qualities, societal perception, educational value, and the issue of piracy. She believes that manga’s interesting stories and appealing characters are its defining attributes, while acknowledging its historical disapproval in society. Hagio firmly advocates for the educational importance of manga, asserting that it imparts valuable life lessons on emotions and relationships. Additionally, she opposes piracy, supports reading and purchasing officially published works, and proposes incentives to encourage readers to choose legitimate sources. Hagio ultimately encourages readers to make ethical choices and considers the impact of piracy on both readers and artists.

Jun Murai

Manga piracy has become a significant issue in the digital age, largely due to the accessibility and replication capacity of the internet. The ease of generating and distributing exact copies of digital information endangers copyrighted material, causing concerns for industries such as music, movies, and publishing. These industries have faced struggles as their digital content is easily replicated and shared without permission.

Various protection mechanisms and subscription technologies have been developed to address this problem. These technologies aim to safeguard intellectual property content by providing encrypted materials and web standard subscriptions. Implementing such measures can help protect industries against piracy and ensure fair compensation for their creative works.

Jun Murai, an expert in the fight against piracy, acknowledges the complexity and challenges involved in dealing with piracy operators and malicious domains. Identifying the identities of piracy operators and dealing with malicious domains are major obstacles in the battle against piracy. Moreover, the involvement of intermediary providers, such as content delivery networks (CDNs), adds another layer of complexity to this issue.

Despite the challenges, Murai appreciates the collaboration among different stakeholders in Japan, including the government, internet community, and industry, in addressing piracy. The Japanese government has raised the issue of piracy to the Government Advisory Board (GAC) under the Internet Corporation for Assigned Names and Numbers (ICANN). Regular meetings among CEOs of internet service providers and publishing companies in Japan are also held to discuss piracy issues, indicating a proactive approach in combating piracy.

Murai believes that a comprehensive solution to piracy requires cooperation among different domains, such as legal expertise, international law, climate, and internet service providers. Taking a holistic approach to address piracy from multiple angles can lead to more effective solutions. Regular dialogues engaging different stakeholders are necessary to develop strategies and policies that can effectively combat piracy.

Drawing from the success of the music industry in combating piracy, where sharing music online is followed by encouraging live music attendance, the same model could be applied to manga. By sharing digital manga content and then fostering a supportive environment for attending live manga events, the industry can adapt to the digital age while maintaining its value and revenue streams.

Publishing companies are advised to preserve the value of printed manga in digital format and continue collaborating with both established and upcoming artists. By embracing new formats while recognizing the importance of the original art form, manga can thrive in the digital era without losing its essence.

Moreover, streaming services and publishers are releasing more content to cater to the growing demand for manga. Shonen Jump, for example, offers recent chapters for free and provides a subscription service that grants access to all back chapters. This approach not only satisfies consumer demands for more content but also contributes to combating piracy by offering legal alternatives.

While addressing piracy, it is essential to consider potential issues of over-policing that could lead to censorship. Multi-stakeholder discussions regarding internet censorship policies are taking place to ensure a balance between protecting intellectual property rights and preserving freedom of expression. The involvement of the ICANN Government Advisory Board highlights the importance of addressing this issue and finding appropriate solutions.

Manga has gained global recognition and popularity in recent years, with an increasing number of fans outside of Japan. The early 2000s saw a surge in manga’s global popularity, and European fans have accepted and appreciated the cultural aspects of manga. This growing accessibility has contributed to the wider reach and influence of manga worldwide.

Piracy has extended due to expensive and unavailable translation services. The high cost of translation has led some individuals to consume pirated content instead. Efforts are being made to address these issues and make translation services more widely available, aiming to reduce the dependence on pirated copies and ensure that creators receive fair compensation.

Youth engagement plays a crucial role in the fight against piracy. Young individuals actively stand up against piracy and engage in campaigns to discourage the use of illegally copied software. Publishing companies recognize the power of youth in these campaigns and attract young individuals to join their efforts against piracy.

In conclusion, manga piracy poses significant challenges to various industries due to the accessibility and replication capacity of the internet. Protection mechanisms and subscription technologies have been developed to safeguard intellectual property content, and collaboration among stakeholders is crucial in addressing piracy effectively. The success of the music industry’s model suggests ways in which manga can adapt to the digital age. Preservation of the value of printed manga, cooperation among domains, and the involvement of youth are essential components of a comprehensive solution to piracy.

Andy Nakatani

The global manga market has experienced rapid growth, particularly during the pandemic, as people sought entertainment while staying at home. Manga consumption saw a significant spike in 2019 and 2020, leading to increased popularity and sales. The rise in manga’s popularity can be attributed to the increased availability of anime on broadcast cable TV and the presence of big box bookstores like Borders, which contributed to its mainstream appeal.

To combat piracy, various SimulPub platforms have been introduced. These platforms offer official manga content in both English and Japanese simultaneously, aiming to provide an accessible and alternative option for readers. Publishers such as Viz Manga, Shonen Jump, Manga Plus, K-Manga, BookWalker, and MangaUp have adopted this strategy, allowing them to release content alongside its Japanese counterpart and reducing the prevalence of pirated content.

However, the presence of illegal or pirated content remains a major issue in the manga industry, particularly through scanlation sites. It is estimated that there are approximately 1,100 known piracy sites, resulting in substantial financial damages to the industry. The top 10 piracy sites in Japan alone account for approximately 507 billion Japanese yen in damages. This piracy not only affects the revenue of artists and publishers but also devalues the perception of the art and the work of the artists themselves.

Piracy creates a sense of entitlement among readers who come to expect free access to manga even before its official release. The popularity of piracy sites is staggering, with visits to the top 10 piracy sites in original Japanese totaling more than 150 million per month. The English manga piracy sites have an even larger audience, with around 200 million visits per month. This trend highlights the need to address the issue of piracy and educate readers on the value of supporting official releases.

Efforts are being made to increase the accessibility and affordability of manga through streaming services and lower subscription prices. Streaming services focus on attracting readers through a large funnel, increasing the exposure of manga, and then guiding them towards making a purchase. Additionally, Shonen Jump, a popular manga publisher, releases chapters on its service the same day they come out in Japan, allowing fans to stay up to date with the latest content. The push for easier access to more content at affordable prices includes offering a low subscription price of $2.99 per month for access to all the back chapters.

Andy Nakatani, an influential figure in the manga industry, looks forward to an upcoming exhibit in San Francisco. His positive perception is fueled by the visible efforts and cooperation taking place across multiple industries. However, Nakatani expresses a lack of enthusiasm for public speaking, which suggests that he may prefer to focus on other aspects of his work.

The strength of the print industry, particularly in the United States, is valued and acknowledged. The United States is known for its strong print industry, which adds to the overall growth and success of the manga market.

In conclusion, the global manga market has experienced significant growth, driven by increased consumption during the pandemic. SimulPub platforms have proven effective in combating piracy by offering official content in English and Japanese simultaneously. However, piracy remains a significant concern, devaluing the perception of manga and the work of artists. Efforts are being made to increase accessibility and affordability through streaming services and attractive subscription prices. The upcoming exhibit in San Francisco and the visible work and cooperation within the industry are promising signs for future development.

Nicole Rousmaniere

A recent manga exhibition held at the British Museum in London was a tremendous success. The event received widespread acclaim and attracted large crowds, with the exhibition selling out completely. Notably, it drew the youngest audience the museum has ever seen, highlighting the broad appeal of manga beyond traditional demographics.

The exhibition was praised for its ability to forge connections and transcend boundaries. Visitors emotionally connected with the content, finding resonance and identification within the storylines and characters depicted in manga. Additionally, the exhibition had a diverse audience in terms of ethnicity, further demonstrating manga’s power to bring people together and promote cultural diversity.

Despite the celebration of manga’s cultural impact, concerns were raised about the threat of piracy to the industry. Piracy not only jeopardizes the livelihoods of manga artists, editors, and publishers, but also poses a risk to the industry as a whole. Efforts are being made to protect the rights of manga creators and safeguard their work from piracy, emphasizing the need to combat this issue.

Manga is considered a valuable cultural treasure of Japan, akin to traditional art forms like Ukiyo-e and culinary delights like sushi. Preserving and cherishing this art form for future generations is deemed crucial. Discussions surround the preservation of physical copies of manga and the concerns that relying solely on digital access could potentially hinder its accessibility and readability in the future.

An important aspect highlighted is the significance of maintaining paper copies of manga. Prominent figures within the industry, such as Murai-sensei, have emphasized the importance of continuing the production of paper copies. This aligns with the goals of responsible consumption and production, contributing to sustainable practices in the industry.

In conclusion, the manga exhibition at the British Museum was a resounding success, showcasing both the popularity and cultural significance of manga. However, it also brought attention to issues such as piracy and the importance of protecting artists’ rights. Efforts are being made to combat piracy and preserve physical copies of manga for future generations. The overall sentiment towards manga and its cultural impact remains positive, and discussions on supporting and safeguarding the industry continue.

Moderator

The panel discussion focused on the significant rise in online piracy and its negative impact on the manga industry. Over 1,100 piracy sites dedicated to manga have led to an estimated $3.6 billion USD in yearly damages. The top Japanese sites alone have 150 million monthly hits, while English-language piracy sites have around 200 million visits. Efforts to combat piracy have had some success, but face challenges such as domain hopping. International cooperation is crucial to address the issue. Manga artists’ livelihoods are being affected, and manga’s global popularity is increasing. Manga provides valuable lessons not taught in schools, but it also faces negative perception. Online piracy sites provide easy access to high-quality content, devaluing the work of artists. Strategies to combat piracy include controlling internet providers and educating consumers. Technology support and collaboration with multiple industries are important. Publishers should differentiate digitized print manga from digital manga. Protecting freedom of speech is a balancing act, and promoting accessible manga globally fosters cultural exchange. A reward system for legal readers, proposed by Moto Hagio, would discourage piracy and support artists. The panel discussion provides valuable insights and recommendations for combating manga piracy and ensuring the industry’s sustainability and growth.

Audience

The discussion centred around the issue of manga piracy and its impact on the industry. One of the main concerns raised was the lack of access to manga in the West and the significant delay in its release, which leads fans to resort to piracy. Victoria Bertola highlighted this problem and suggested that technology should be used to expedite the distribution and availability of native Japanese manga globally and at competitive prices.

Criticism was directed towards the industry’s approach to piracy, with an appeal to recognise fans as potential future buyers rather than pirates. It was argued that teenagers who can’t afford to buy manga may turn to pirated content, but they could become paying customers in the future. Instead of being too harsh on fans, the industry should make their content more accessible and affordable.

The discussion also delved into the underlying reasons why people resort to pirated websites. It was suggested that the use of these websites indicates a high demand for content that is not adequately met. An audience member hinted at the need to learn from other experiences and improve in this area to meet the audience’s demands.

Transparency in earnings distribution within the Japanese industry and the impact of piracy on artists’ earnings were also raised as concerns. There is a perception that the majority of profits go to publishers rather than the artists, which raises questions about the impact of piracy on the artists’ livelihoods.

The discussion also touched on the potential impact of piracy on freedom of information and the risk of censorship. It was argued that pursuing and punishing end users who share content goes against freedom of information. Some fears were expressed over how anti-piracy actions could lead to censorship, including political and economic censorship. A representative from the Pirate Party International and Russian Pirate Party emphasised this point, highlighting that it could even involve pursuing individuals wearing potentially counterfeited items.

Affordability and accessibility were identified as key issues. Affordability was cited as a driving factor for piracy, with a thousand-dollar camera costing five months’ minimum wage in some countries. Limited translation of manga into various languages also leads people to rely on volunteers who translate and publish manga online for free. It was argued that piracy is a symptom of inequality rather than solely a problem of greed, and the root causes of affordability and accessibility need to be addressed.

The role of exhibitions in raising awareness about manga piracy was highlighted. The suggestion was made to include awareness actions about manga piracy in exhibitions as a tool for educating attendees about copyright infringement. This could help combat piracy by providing information and raising awareness.

The importance of maintaining copyright laws and fair use was emphasised during the discussion. It was asserted that knowledge of intellectual property and copyright is crucial for protecting creators, and fair use ensures that copyright owners receive their royalties.

Legal online distribution was advocated as a solution to piracy. It was suggested that such platforms would not only curb piracy but also support upcoming artists. One representative shared her personal experience of how manga influenced her art and explained the need for legal online distribution in Latin America due to rampant piracy and the lack of legal platforms.

Lastly, involving youths in the fight against piracy was seen as crucial. It was observed that most pirates are from the younger generation, and they have a good understanding of the importance of manga and the threats posed by piracy. The use of new technologies among youths makes them well-equipped to fight piracy effectively.

In conclusion, the discussion explored various arguments and perspectives on manga piracy. The lack of access, delayed releases, affordability issues, and inequality were identified as major driving factors behind piracy. The industry was urged to address these issues by utilising technology for improved distribution, making content more affordable and accessible, and involving youths in the fight against piracy. Maintaining copyright laws and fair use was seen as crucial for protecting creators. Overall, it was emphasised that addressing the complex issue of piracy requires a comprehensive approach that recognises the underlying causes and works towards resolving them.

Session transcript

Moderator:
Shall we start on time? Is it okay for us to get started on time? Okay, so thank you for joining us on this sunny day for the session, Manga Culture and Internet Governance, The Fight Against Piracy. This is Kensaku Fukui, and this session was planned by the Japan Publishers Manga Anti-Piracy Conference, or JPMAC, which was established by five major manga publishing companies and lawyers, and which has manga piracy. Japanese manga is popular all around the world. It continues to expand with many anime adaptations, game adaptations, character goods, and fun events. Sales have also increased significantly. Many saimaru publication apps are making new releases available to the public around the world in English and many other languages. The problem we are facing is the online piracy business, which has rapidly become a huge problem for creative industries. So far, there are 1,100 non-active piracy sites for manga only. The top 10 Japanese sites attract 150 million hits per month. This is the number of visits. The Three major English language piracy sites attract even larger visits of some 200 million. It is estimated that free reading via the internet was 3.6 billion US dollars per year. They are offered by anonymous operators through a combination of various services on the internet. Often, the countries where the hosting servers are located differ from the countries where the operators exist. And both tend to be concentrated in countries where the, for political and other reasons. And they are, in addition, they select and use registrars, CDNs, advertising companies, or other services that can be easily used anonymously and basically do when notices are given. For five years, we have been working hard to combat manga piracy and have driven several huge sites to close through legal proceedings overseas and cooperation with Japanese internet and advertising industries. As a result, the number of visits to the Japanese language sites dropped from four million per month in the first period. However, New problems have arisen when trying to take countermeasures, such as repeated domain hopping, where the target site to change domains in a short period of time. And the number of visits does not decrease any further, but unfortunately, rather, the number of sites tend to increase and diversify. But the problem of piracy sites that impacts manga artists is still not fully recognized internationally. It is impossible to curb the piracy without cooperation and support from the rest of the world. Today’s diverse speakers include a legendary manga artist, and a researcher known as the father of the Japanese internet, a curator who opened a major manga exhibition at the British Museum and broke its record for youth attendance, and an editor who has worked in the manga business for more than 20 years in the United States. For the healthy and sustainable development of creative activities and the internet, we would like to gather the wisdom of everyone in the audience and discuss it together. So, could you show the… Yes. By the way, this is today’s speaker, Hagio-san’s representative. repetitive work, anyway. Okay. So the, let me. Let’s start. First, we’d like to talk about manga’s expansion throughout the world in various forms. Among the popular manga, One Piece have had over 510 million copies in circulation so far, and it’s on the Guinness World Record. But manga is not accepted only by publication, but through various ways, such as anime, games, and fan events. Nicole-san, could you share your experience about the creation and acceptance of manga overseas, including recent major exhibition at British Museum?

Nicole Rousmaniere:
I’d be delighted to. And I’m very excited to be here today to talk to you about this, because it’s an incredibly important subject. But first, I will tell you about the British Museum exhibition. Komenasai. As you can see, this is the British Museum, and we have what they call Toblerones, the Ashiripa from Golden Kamuy, introducing manga at the British Museum from the very beginning, when you enter the museum. This was in 2019. I’ve been reading manga since I was young, and I’m passionate about manga. I was a curator at the British Museum for over 15 years. My specialty is actually kouge, and three-dimensional objects, but I love manga. And in 2017, I got my first salary, and I put out a manga corner. So we always had. display, but the British Museum has been collecting manga, since the 1920s, with Kitazawa Rakuten, and those, and those types of objects, but they didn’t display them, they were considered ephemera. What I did at the British Museum was occasionally display manga, for example in the Asahi Shimbun display area, and in 2015, I displayed St. Oni-san, Shiba Tetsuya’s work, and then Hoshino Yukinobu and, and Nakamura Hikaru. It sounds like an odd combination, but it was fun, it was interesting and showing different types of manga production. What happened though, was 100,000 people came and looked at it. And this took the British Museum by surprise. They assumed that people weren’t really interested in manga, but they had to pay attention then. So in about a year or two later, they asked me, well, would you think about making a large exhibition? And I said, of course. And so I wrote up a proposal, and they put it to marketing research. I think that they believed it wouldn’t work. I think that they really were setting me up to fail, as they say in English, but they put five exhibitions out, Samuel Beckett’s Roman sculpture, a number of different exhibitions, and manga came in as number one. They were actually thinking they weren’t going to do it, but then they realized they had to do it. So I got the manga exhibition, I was delighted. And it turns out that they gave me the most beautiful space. And this space is right here, it’s called the Sainsbury Exhibition Galleries. On the ground floor, it’s huge. And it’s right next to where the Rosetta Stone is displayed. This caused a lot of issues within the museum. People felt, should Japan be displayed there? Should manga be displayed there? And there was a lot of debate. But happily, it went forward huge resounding success. I want to explain just a little bit about some of the results for that. But we’re giving you a sneak preview of the inside of what it looked like the top two slides are just without people in it. And then the bottom side you can see Captain Tsubasa, and with people in it, it was incredibly crowded. The exhibition was sold out. It turns out, afterwards, when they did the analysis, that it was the best selling exhibition. And most importantly, it had the youngest audience that the British Museum has ever had. In addition to that, what was impressive is that it has so in in Britain, they say BAME, B-A-M-E, but it means, you know, audiences that are not white came. And beyond that, it was also interesting, there were a number of certain types of groups, for example, people with autism, were certain types of groups that self identified that really loved the manga exhibition. So for the British Museum, this was a huge surprising result. So basically, this summarizes the results of the manga exhibition. One thing I really want to point out is that half the visitors had never paid for an exhibition at the British Museum before. So this broke new ground all around. What the survey at the end found out, though, was that most people identified with emotional outcomes, not intellectual outcomes, but emotional outcomes. This means they identified with the material. The average dwell time was one hour and 33 minutes, which is very long for the British Museum paid exhibition. This is the exhibition layout and, and our different zones. But what I just really want to focus on here is that we created a counterclockwise exhibition, because in manga, you really have to read it from right to left. And this is fundamentally different what in Britain, how you read from left to right, and even walking counterclockwise was really problematic for a lot of the designers and for the people in the British Museum, but we did it and I feel we, it shows that you can shift people’s minds and hearts and it was a huge success. It won the good design prize for 2020. These are just a few of the things that we put in it and I don’t have very much time but I want to just explain a couple more things. We of course had the father of modern manga Tezuka Osamu sensei’s work, but a lot of interest was Princess Knight. The idea of kind of gender fluidity or different types of genders and different types of representations. This was a big surprise for many of our audience. We also had really important artists like Chiba Tetsuya sensei come and, and he didn’t actually physically come but he drew a rugby for the World Cup. Rugby World Cup he drew a rugby picture for us and represented us and this really made a big difference although interesting enough his work isn’t translated, but still it seemed to reach people, but I have to say that the person who made the most difference to me was Hagiomoto sensei. She’s right with us, and she was there for us throughout the exhibition but she came with her editor who is extraordinary, Furukawa-san, and they gave many talks, they showed us how editors and manga artists work together. And this is a big deal and something I want to just mention is what I learned from this exhibition is that manga isn’t just a manga artist drawing and then it’s published. With it’s the, it’s I’d say maybe 50% I’m not quite sure but it’s once the manga artist draws the conversations with the editors, the name of the storyboards, the work that the publishing house does, finally the end product so it’s this combination, and with Hagiomoto sensei there, I felt that we could do it, and we did. I want to give one example of Shizuka Shinichi. Blue Giant Supreme. Maybe some of you know this, but this is with his editor Katsuki Dai. And during one of their conversations, they showed how they work with name, the storyboards and what’s in and what’s out. But I want to draw your attention to the drawing itself. So this is, we have Dai, he’s blowing on his saxophone and you feel this music shower coming in. It’s this immersive quality. It’s an emotive that is so incredibly important. And we’re coming to the end. I just wanted to say a couple more points. In the middle of the exhibition, we decided to put a manga library and the manga bookstore in a way. And the reason we did this was that one manga artist told me, manga is not what you put on the walls. It’s what you put in your hands. And that really struck me. And so we put this library and at first, the British Museum said, we’re not a library, please don’t put a bookshelf in the middle of the exhibition. But it was the most popular part of the exhibition. To have the books out, people could hold them. They said they would steal them. Not one volume was stolen. People would sit and read, even if it’s in Japanese and they knew. Holding manga in your hands makes a huge difference. The paper quality was brilliant, but this was wonderful and gave us free downloads. So we had free downloads available. We had 50 artists, 70 titles. It was a very large exhibition. But in the end, just to summarize, manga material is incredible. You can have Buddha and Jesus living together in a gap year in Tachikawa drawing manga as a subject. You can have you can have incredible subjects right here. One was really popular from paper manga, One Punch Man. This was incredibly popular at the museum. But I’d really like to mention the power of manga and how it, it’s not just in the paper, how it comes out into your life. So For example, we had this fabulous thanks to Kodansha attack on Titan huge blow up head, it became a major selfie moment for us, but people really identified and it was almost like they had found their tribe, they had found their manga. Manga’s power to me is because it can cross boundaries, it can cause. And these are the many lessons I learned from it, but what I also learned is that for the future of manga we need to protect it, we need to protect the artists, we need to protect their ability to work with the publishers and piracy is something that endangers the thriving of the industry. And so this panel is very important.

Moderator:
Thank you very much for your impressive insights. And, by the way, you can keep that yes. By the way, this is a cosplayers from all over the world so it’s interactive. And, Andy-san, could you share your perspective on the rapid growth of the global manga market from the business side.

Andy Nakatani:
Of course, of course, to do that. I’m Andy Nakatani, I’m the Senior Director of Online Manga at Viz Media. Prior to that I was editor in chief of the English language version of Shonen Jump, which in 2012 we released in a digital format and came out simultaneously. As Japan released chapters we would release them on the same day. So if we can see the chart. Is the slide visible with the chart. I will assume it is. So this chart represents, is it there? Yeah, okay, so this chart represents manga sales, graphic novel sales in the U.S. in units and represents manga and the blue is graphic novels that are not manga. So the total of the two is a total graphic novel sales. We only have data from 2007 on this chart. So I just wanted to say a couple of things about before. Manga was a very niche market in the U.S. until about 2006, when it reached quite a peak. And the main reason that it became popular was because of the popularity of anime on broadcast cable TV. And because of the prevalence of big box bookstores like Borders. Following that around 2011, there was a little bit of a decline, various market factors, also Borders started shutting down stores and eventually declared bankruptcy in, I think it was 2011. But moving on from there, there was steady growth mainly because in the United States. And then you see this huge spike that’s happening around 2019, 2020. And that clearly that was because the pandemic happened and people needed entertainment and distraction and they were staying at home. So they consumed a lot of manga. Now, I wanna emphasize that this is for print sales. And let’s see, slide, and before I talk about the slide with the various SimulPub platforms that we have, I do want to talk a little bit about piracy. So illegal content of manga, pirated content, scanlation sites, and that’s kind of like a portmanteau of scan and translation. And so a common term is scanlation to refer to the pirated sites. They release a vast amount of content. It’s free, and it comes out really fast. And to sort of come up with a strategy to combat that, there are various official online SimulPub manga platforms that are now available in English. Official publishers such as Biz Manga, Shonen Jump, Manga Plus, K-Manga, BookWalker, MangaUp, most of these are from Japanese publishers who have released content in English. There are various business models for these, various combinations of free content, subscription models, microtransactions, points. But the main thing here is that the content is released simultaneous with chapters that come out in Japanese. And so the translated content comes out on the same day. And for example, my company puts out Biz Manga and Shonen Jump, and we we put out the first three chapters for free and the latest three chapters for free. And to access the chapters in between, you subscribe for a low fee. And so those are just kind of the various models that are out there for simultaneous content.

Moderator:
Okay, Andy, thank you very much for your insights. And yes, so Hagio-san, you are a living legend of Japanese manga. And could you share your personal view on the experience as for manga creation and acceptance?

Moto HAGIO:
Hi. Yes, so this is Hagio speaking. Thank you very much for coming this afternoon. So, well, I have been reading manga since I was in primary school. And once I really got into this, as due to Emizo Rico, so they really had very good and interesting works. And I think the best thing about manga is to have interesting stories and very appealing characters. I think those are the two major points. When I was in primary school, manga was still something that was regarded as something rather vulgar in the schools and in the family and in the overall society. That is how people looked at manga. So when we were reading manga, we were scolded. But after I read Tezuka Osamu’s works, I thought that I can learn things that we cannot learn from the general society, but it was full of a lot of lessons. So first parents would tell us to study more and to do things more properly. So that is how people looked at manga. days. But in the manga world, there were a lot of emotions, a lot of stories, and a lot of how human beings can trust each other, and a lot of things that we can learn. So through manga, I was able to learn a lot of things that were people aside from what we learn at school. And through that, I really got into the manga world, and I kept on diving inside. And at the end of the day, I wanted to become part of that world, and that is how I became a manga artist. So that is something that I would like to continue pursuing in the manga world. Thank you very much.

Moderator:
Thank you very much. So, anybody has any comments on so far?

Nicole Rousmaniere:
What Hagio-san said was so incredibly important. It’s how manga brings you into another way of feeling. It’s not just reading. It’s the immersive quality of manga that’s part of its power. Can you explain that?

Moderator:
Thank you very much. I totally agree. Okay, so let’s move on to the next part, the impact of piracy. And from now on, you can keep the PowerPoint on every time. And it’s a bit busy, okay? There are approximately 1,100 known piracy sites, and among them, approximately 240 sites are piracy sites in original Japanese. And 400 sites are piracy sites with English translation, and approximately 460 sites are piracy sites translated into various non-English. other languages. And on this slide, you can see the typical Japanese manga piracy site. As you see on the left hand, you can find almost all popular mangas. And you can just click on any images, then the list of chapters appear on the right hand. And by clicking on a chapter, you can immediately scroll to read in quite high quality. And a visit to top 10 piracy sites in original Japanese for August 2023 is, as I said, more than 150 million visits per month. And seven of them are believed to have operators residing in Vietnam. This draws information or other, yeah, informations. And damages caused by the top 10 piracy sites in Japanese is estimated to be approximately 507 billion Japanese yen. It’s an estimate by number of site visits multiplied by regular retail price. So there should be some argument about this calculation, but anyway, it’s huge. This is a typical English manga piracy site, but as you see, it’s pretty much similar to Japanese ones, except for translated in English. And visits to the top three English manga piracy sites are even bigger than Japanese ones. Here are some 200 million visits per month. Andy-san, could you share your view on such piracy impact on manga artists and the industry?

Andy Nakatani:
Sure. So first of all, obviously there’s the loss of potential revenue of the manga artists. Maybe even more than that, I feel that piracy devalues the perception of what manga is and devalues all that the manga artists put into their work. It kind of fosters a sense of entitlement for people who read the pirated content, where they come to expect that they’re gonna read the content for free and they expect to be able to access it as soon as possible. And at times that’s even before the official release of the Japanese content.

Moderator:
Thank you very much. Hagio-san. All right. About these piracy activity, piracy sites, if you have any vision or.

Moto HAGIO:
Well, so as a creator of manga, if it is a pirated site, the revenue does not come to the manga creator. So in the 2000s. Many publishers digitalized manga and started to make this available on the internet. And so a lot of my works have been exposed onto the internet. And what I thought at that time was, this is a very easy and simple thing, and it’s on the internet. So I thought, I was feared that many people will be reading manga for free. But well, a publisher is a business entity, so they are able to count the number of people reading, so they were able to get revenues. And many readers of our old works were also paying, and we were able to receive the revenues, and we were able to make a living and to invest for future works. So for us creators, so if we do not get remuneration for what we did, it is something that is very sad, and it should not happen. So we have to do everything possible to first let the users know that they should not be reading pirated works, but they should be reading the official or paid works. So I use Kindle and I buy things through e-commerce online, but when you can either get one for free or get one for paid, I always tend to choose that for the paid selection, because if I’m extremely poor, I might be making a different decision, but that is what I tend to do. So I really appreciate that people should be reading the official versions going on.

Moderator:
Thank you. You are another living legend of the Internet world, and is this the only problem for manga piracy, or do you think it has broader relation to Internet at all? large, or any other insights or views?

Jun Murai:
Yeah, okay, yeah, thank you very much. This is Jim Wright, by the way. I’m known as the biggest fan of a manga in this country. And so, like Hagio-sensei, I’ve been kind of grown up with reading a lot of, lot of, lot of mangas, and probably you don’t believe that how I like, how I love the manga for my life. Hagio-sensei’s question, the pirates thing is very much kind of a digitally, internet has been providing the open space for exchanging the idea and many of the things. And so, the, well, the, I remember, this is IGF forum, Internet Governance Forum, by the way. And then, so I kind of, when the very beginning of the internet space, then the first things we encountered are the internet community. That time in the, some of the people on the internet on the world intellectual property came to the internet space, IETF people. And then I started to discuss how the global intellectual property is gonna be addressed on the cyberspace, right, on the internet space. That was a very much first experience of mine. I was a representative from IETF, and then I know there are several representative from a world intellectual property organization. And we started to talk about what the intellectual property to be addressed on the internet. So, because of digital information can be copied and it’s exactly the same copy, we can generate, and then the multiple copies for everyone. So, it’s very easy. And then instantly. So, that the impact of the, you know, kind of any copyrighted material to be in danger in a sense, right, for the thing. So, not only the music industry woke up on that scene, and then the movie industry as well. And so, various industries started to struggle with that kind of thing and the free copy of the digitized, well, the intellectual properties, copyrighted material or intellectual, the material with the copyright. So, we have a long history to work on that. And then, especially with sources, I mean, people who got, you know, kind of a rights to the manga artists, of course, right, and the music artists, and then in a movie, intellectual property, copyrighted movie and etc. So, at the same time, then the industries of those areas, including the publishing area, they started to think about extend their business over the internet in various ways. And so, the encrypted materials and the other things, I mean, other technology to support the, you know, the subscription technology on the web standard, you know, those technologies has been provided to protect the sources of our intellectual property contents. So basically the important thing is the working together type of a thing started that way. So kind of a crimes related thing, like a pilot is a crime, right? And then also the technology support to protect that against the crime for the kind of owner of the intellectual property. So that has been worked sometimes good and sometimes damaging the existing industries in a sense. But then from a broader view of a history, then I know it’s a very much a working and it’s very important. That’s also the spirit of IGF, Multi-Stakeholder as well.

Moderator:
Thank you very much for your insight. And in order to work together we should, we need to know how piracy sites work. And so let’s move on to the next part, that part. And let’s see how such, I’m sorry. Let’s see how such piracy site work. This is only a rough picture of how they work. Piracy site operators usually contract with hosting. server at center. In low cost, data logs are relatively tolerant, and operators contract with relay servers too, which is called contents delivery networks, or CDN, so that they can amplify their ability to accept users at low cost and at larger scale. Their income typically come from advertisement, which is on the right hand. The piracy operators select and combine registrars, hosting servers, CDNs, advertising companies, or other. And as you will see on the next slide, some essential services these illegal sites use, network registrars, or CDN services, as you see, concentrated on one or a few companies that can be easily used anonymously and basically do not end when notices are given. And there is another problem called domain hopping, which is a repeated move with redirected domains during a short period of time. For example, this is a real example, and a piracy site called A has changed its redirection ten times to five different domains within three and a half months only. And when the domain is rapidly changed, the effort for countermeasures must start over again. So, let’s see each such countermeasure and walls confronted a bit more closely. Let’s see. Okay. And the first step is sending direct removal notices to the sites. One publisher, for example, hires an anti-piracy company to make approximately 250,000 removal notices request monthly. Then next, passing legal procedure. Many of such removal notices are ignored or even if deleted, posted again. So, next we will pursue legal procedures. Since May 2020, we pursue legal procedures against approximately 50 piracy sites in the U.S. only. And identify the names and other information related to more than 10 personnel of interest. But it’s difficult and time-consuming to uncover the identities and servers often relocated before uncovered. Once we discover the identities, then the request for cooperation to foreign governments. But some countries often respond too late and too little. For example, since October 2020, we offer the identity information and ask to make an investigation to certain country, actually foreign government, through diplomatic channels and even regular meetings with the police department in charge. But so far, only one administrative penalty is charged in that region. Or requesting certain registrars, registries, and even ICANN to deal with malicious domain or domain problems. In this regard, many communiquรฉs and public comments have been made at ICANN. Also, we sent direct requests to registrars, but no meaningful action has been taken by subject registrars so far. Or asking certain CDN services to delete illegal files and stop providing their services to obvious piracy sites. It’s simply rejected. Our lawsuit is ongoing. Or another major things we are doing is cooperating with internet and telecommunication industries, making efforts to raise awareness. Actually, the situation is improving in Japan, but international awareness has not significantly improved. in this regard, or removing advertisement to cut their source of income. In Japan, advertisers and the agency organizations and rights holders cooperated to establish a framework for not placing adverts on piracy sites. So the situation improved to some extent. But again, outside advertisers, for example, non-members of any industry, often do not cooperate. So there are still many, mainly adult-oriented adults on these piracy sites. Finally, removing the search results of actual names and the domains of the piracy sites and reducing their spread through SNS. Since 2021, we have removed 28 malicious massive piracy sites and the courts. So this is our current situation. Murai-san, do you have any insight or comment on these efforts or even ideas?

Jun Murai:
Yeah, actually, yeah, the process has been done in a very, I mean, of course, I’m involved, and you are involved, and we’ve been working on those approaches. One of the approaches, oh, by the way, maybe I should explain that in Japan, we’ve been working together for this issue for a long time. And with the industry and the internet community, in that internet industry, actually, and also with the, if needed, then, you know, we are working and asking the government to move. So when the, if, you know, effective, sometimes, and sometimes not effective, I mean, the quickly since they use the word rejected, but then, you know, some of the industry in between the passing the data and the caching the data, like a CDN, it’s very difficult to identify that by their decision that this is good or bad, but then, you know, from the crime side end to end, and then the pirates is a crime, and so crime hunting type of a mechanism could work. And so I think, you know, the Japan having, you know, utilizing all those possible ways in the past to manga, right, and the sometimes effective in terms of, you know, the international, like a, you know, the police to police relationship with certain countries, and talking about the ICANN, when we started the ICANN process, then the government agency is going to be one of the stakeholders called the Government Advisory Board. And then in Japan, a government raised this issue to the GAC, Government Advisory Board, and then being listened to by the many government representatives in that group. We also… issue remain that if, I mean question, can ICANN impact the use of a domain name for the unwanted purposes? And it’s a very difficult for ICANN to, I mean, because domain name is a huge hierarchy things and the ICANN just dealing with a top level domain and therefore the entire internet infrastructure then I can, there is a limited things that ICANN itself can do. So that is the issue number two thing, right? ICANN worked very effectively in terms of sharing the issues with other government on this issue. And they properly discussed and I’d like to thank Japanese government to work, has been working that way. And then also the ICANN discussed on this, but then there is a limited thing that the ICANN do. ICANN, ICANN can do, I’m sorry, not ICANN, ICANN. And one more thing, again, the piracy impact need to be working with a very much different part, I mean, working together. So we need a legal expertise and we need the international legal expertise. We need the climate thing and then also the internet service provider. So that’s a reason why in Japan, in order to deal with this issue, then we have various places to put this thing. From the, one of the example is that we are having every three months then. And specifically for this subject, the CEO of an internet service provider and the CEO of a publishing company having a breakfast together to discuss about that periodically to be reported by people like what is the current status of this issue. So I think we are on the right process to work.

Moderator:
Thank you very much. So Nicole-san and Andy-san and Hagio-san, I’d like to ask your thoughts on our feelings for such efforts against piracy sites and the future of manga at large. So first, Andy-san, could you start? Now, I’m sorry that Nicole-san.

Nicole Rousmaniere:
Thank you very much. I feel that manga is one of the most important or one of the really precious treasures of Japan. It’s becoming worldwide, but it’s really something very, very special, and that needs to be protected. I mean, in a way, you could say it’s like any art form, Ukiyo-e, that wasn’t protected or even sushi that wasn’t protected, and then you can see what’s happened to it in the West now. But if you have manga now can still be protected, and I think it should become registered in some sort of way. But manga artists, editors, and publishers, they’re creating content for us to enjoy. You can see this with Hagiomoto-sensei and our editors, it’s really, really time-consuming. And if we’re not going to pay for that content, in a way, it’s like stealing their work. It’s pretty, it’s just something that we need. We need to stop doing. And I think that we need to stop doing it, there’s ways of stopping all these internet providers, but it’s in a way like drug use, you have to stop with the users have to stop using pirate sites and need to work towards that. By 2025. I’m going to be curating a new manga exhibition in San Francisco, the Sanford at the museum, and I’m hoping you’ll all support me in that. And what I’m hoping by 2025 is we’ll see a shift in in in manga piracy, and we’ll start to see that artists are getting being paid for properly for their work. I’m going to also invite you all to come and see and hopefully there’ll be a good solution.

Moderator:
I know. Thank you very much for your enthusiastic opinions. Yeah, we need to be to 2024 25 in San Francisco. Okay. So, yeah, I believe everybody is. And Anderson. Could you share your views.

Andy Nakatani:
Yes, of course. Looking forward to that exhibit in San Francisco, because, you know, being based in San Francisco. So, has really kind of plagued us for, for, for so long. And I’m never enthusiastic about public speaking, but being part of this panel was was actually a great experience because just interacting with with all the panelists is so encouraging to me that there’s a lot of work being done. such great efforts happening. And I think as others have said, it’s not just the people here, but also it’d be great if we could continue to get cooperation from the multiple industries involved in this. So it’s very encouraging to me.

Moderator:
Thank you very much. As a long time from the Andes, long experience in the publishing industry, the last word is quite impressive for me too. And Hagio-san, could you share your thoughts?

Moto HAGIO:
Well, it’s a small proposal I would like to make. In the case there is a piracy, and if there is the formal one, and we are expecting that we will be able to, maybe some kind of a privilege or the special treatment should be provided. For example, the points should be provided. Or the voices of the artists should be added to express, to appreciate and thank you. Some kind of a special gift or the special price should be given to the readers, the formal readers. So if the readers read the books, be it the piracy or the formal one, you might think it’s better to receive the cheaper one. It’s the same, but if you pay. If you pay as a user, there may be the pain on the part of the piracy readers, but there will be the pain if the royalty is not provided to the artist. So please think about it. Please use the formal routes and channel to make the selection. So it’s a matter of your way of life. Make the decision in your life. Reading the manga, there is always justice from the world of manga. There is justice. So if you are impressed by reading the manga, I hope you will be able to pay for that impression and impact.

Moderator:
Thank you very much for your valuable opinion. And the incentive to the readers of the official version would be impressive, and we should think about that. Do you have any comments on the points raised? Murai-sensei, how about you?

Jun Murai:
Well, yeah. Well, Hagio-sensei’s message is very strong, and so I’m very much moved. And being a big fan of the manga, not the internet expert or anything, so I’ve been thinking about this one. I was thinking about the music industry was also damaged by the internet sharing of music. So, by the way, this is a personal. a story, I’m sorry, but my mother is a musicologist, and so she always told me when I was a kid that the music is a live music, and that the recorded music is a better outreach to the other people. So the important thing is the live music, so we’re going to the live music. So if this story is applying to the manga, then I know, so all the sharing of the music, and then if you like music, and then you go to the live music concert, or whatever. So that model is becoming pretty much successful for the music industry these days, right? And then pretty much suffered by the internet, and then coming back to the very active concert and the theater type of thing. So I’ve been wondering how the manga is going to be. And the manga is, if you like manga, if you understand manga, manga is a combination of the author, artist, and then editor, and they’re working together for the paper printing art. There is a lot of new things coming in, with the digital format and the new format thing. But the printed manga is the origin of the art we are talking about this afternoon. Okay, if that is the case, then publishing companies should make a lot of efforts at inviting the people. I mean, digital, digitized, printed manga is different of a digital, digital manga, right? I mean. new format. So manga is basically the art of a printed manga and then the publishing company should, I believe, I’m asking the publishing company to work on the continued work with the legendary manga artist and also the young newcoming artist for the format. This is a quite a format. I think the Japanese has been working and the outside the country also. The kind of value of manga and that they love manga and the lovers of manga around the world. And so that’s what they are working on. So I really respect the publishing companies’ efforts to extend the value of the printed manga to be digitized outreach to the greater community of big fans of manga.

Moderator:
Thank you very much. Preserving the value of what is not easily substituted by digital copies would be something worth considering. So maybe there is a big hint within the Murai-sensei’s points.

Nicole Rousmaniere:
What I think Murai-sensei was saying is really very, very important. Paper copy, we can read it in digital copies, but having the paper copy of manga is incredibly important and I think it was a plea for this to continue. Just looking at it from a museum perspective. perspective. It’s about the archival qualities, this material, digital access changes, how you access digital content 10 years ago is different from how we access it today. So, it what will happen to manga 30 years from now that we are having today. If we don’t have it in paper, it may not survive, just don’t know is incredibly important. So, I think this was a plea for paper, and I would like to add my voice to that.

Moderator:
Thank you very much. Jason, do you have any thoughts or comments?

Andy Nakatani:
Yes, that’s actually pretty much exactly our strategy with our streaming services to kind of create a large funnel and attract readers, increase exposure of the manga, and then kind of guide them to buy to purchase the graphic novels whether that be digital graphic novel or print, and particularly in the United States, the print industry is very strong. So, yeah, that’s it’s exactly what our strategy is. Thank you very much.

Moderator:
So, if there is no further comments, then let’s take next 20 minutes or something for Q&A sessions. Anybody with comments, please stay in front of Michael. Oh, they are already doing that. Thank you very much. Yeah, please try to keep your comments or questions short within say one minute. So that many people can speak. So, that first.

Audience:
We’ll try to be brief it’s not easy I’m Victoria Bertola I’m known as one of the founders of the AGF by the Italian intern speaking as a lifelong manga and anime. Because when I was eight in television, I saw Mirai Shonen Konan by Miyazaki sensei and that changed my life forever. And so I saw what the music industry was, I saw the music industry doing the same meetings 20 years ago, and they failed. So I am afraid that you can have doing the same mistake. The mistake is you have to be careful not to do is not to confound the just straight privacy, like the people that really take like the last issue of One Piece and Kimetsu no Yaiba and put it online for money, with what your fans are doing piracy, because it’s the only way they can get access to manga, at least in the West, and maybe not in Japan. But in Italy, many manga are not available or only available many months later. I’m reading like, Aswan Senki comes up eight months after the Japanese version, and I wait for eight months, but many people don’t want to wait for eight months, why can’t you make it available in eight weeks or days, I think today with the technology could be possible. Or there are like 15 year old kids that have, I mean, one manga costs 1000 yen, 15 year old, maybe you can buy one month, but they want 10 and so they go to Manga Foxconn. But they may be when they will be 20, they will have money and they will buy more manga. So it will be your customers of the future. So don’t be too hard on your fans to go go against the people that really steal the money. But don’t think of your fans and think of ways you can make be nice to them, and give cheaper, available. Thank you. Thank you very much. So, yeah, I’ll take another question and then the answer to that one. So, please. Thanks. So first, just some clarifying question or comment here. We did the presentation about the countermeasure of piracy, but what I didn’t see is you really try to understand why people try to access to pirate website, because at the end, it showed that there is a demand for that content. And I think for Japan, maybe need to learn about other experience because if we talk maybe more than about the cool Japan, but all the artistic content, you have your neighbor in the West that is succeeding. So maybe it is also for here to think how to improve that area. Finally, you talked about artists and their world, reality and so on, but I want some clarification because unfortunately, in Japan, the industry doesn’t have a good reputation in terms of working condition for the artist and all who are involved. So it’s important to clarify how much they get at the end because you create causality between the piracy and loss of revenue, but it seems it’s more for the publisher. So if you can clarify more how artists are earning and how you can improve their condition because at the end, those who are making the mankind that content that people are looking for. Thanks.

Moderator:
Thank you very much. And yeah, sir, please.

Audience:
Yeah, okay. Alexander Savnin, Pirate Party International and Russian Pirate Party. I’m really sad hearing that in this audience governance forums discussed while chasing and punishing end users. You’re talking about users distributing to each other any content that’s against freedom of information, it’s against the possibility to people to access it. After you start going after users who are sharing their own contents, maybe previously, but you will start going after ones who are creating fan fics or in other stories or even parodies for something like this. And then censorship system. which might be used as political censorship also. I’m sad that I have to remind you that there is 21st article of Japanese constitutions which guarantees Japanese citizens freedom of distribution of information. Please try to distribute your information freely without blaming end users because again, so-called anti-piracy actions could really start making censorship, economic censorship, start chasing girls who are wearing boots which might be counterfeiting, something like this. Thank you very much. Thank you very much. So, okay, so please. Good afternoon, everyone. My name is Julia. I am a youth program delegate from Brazil and I am here to make a point about accessibility and also a reconsideration of some points exposed in the panel about regarding piracy. As other software fellows have introduced, piracy is a symptom of inequality rather than a problem of greed. So, the strategies presented were interesting, but they are considering the total accesses of the piracy consumer and as a matter of choosing free or paid. Many are not in the position to choose. I would like to reference Mrs. Hagio example, a very good example and very touching presentation about why we should support the authors. For example, a thousand dollar camera in my country is. worth five months of the minimum wage in our country. So they are definitely not buying $1,000 cameras, although many can buy $1,000 cameras. And then I’ll make another point, that there is much learned talent from Manga, and I’m very fortunate to meet Shigeru’s musical lessons in Nononba. Though, I only had access to it because I know English, and I had a relative living in Europe at the time, where they bought it in Europe, brought to me in English. And my peers don’t know English, so they will never meet this. There is no possibility. We translate roughly 500 types of brands of Manga, as in like Demon Slayer, and so on, One Piece. Roughly only 500 types of the brand, and they issue every year new issues of the same Manga. So we are not expanding the translator market. And then Piracy comes in, facilitating people doing the work for free. We have a very verdant community, Japanese community in Brazil, and they translate it for free, they publish online for free, and we have access to it. So I have a collection of Manga, which I only gathered through my life, and my new opportunities and jobs, that I know that many of us can do. So there is passion in translating the words for us to access. There is relevance and importance in trying to access such great content, and such profound knowledge. But sometimes we have no options, and we resort to it. So my question is, is there a room to consider that piracy might not be accessible? exterminated, but solve in its whole complexity. Which actions are in place considering this problematic? Thank you. So, yes, please. Hello, everyone. My name is Jose Artu. I’m also part of the Brazilian Youth Delegation, and I’d love to learn about this exhibition as well commented by Nicole. And I would like to know if there are future plans to include awareness actions about manga piracy in these exhibitions.

Moderator:
Okay, so we have already five questions, and there are three more. So let’s take this time to respondedit the questions and the comments so far. And so could you please up a bit? So first, there is a difficulty to obtain new episodes. I think this is for Anderson to answer first.

Andy Nakatani:
Yes, if I may. So I think we are making efforts to put out more content. And with those streaming services that was on the second slide that I had, different publishers are releasing more and more content. And I probably should have explained it a little bit more in detail, but for example, Shonen Jump, the Shonen Jump service releases chapters the same day they come out. And we’re doing as many series as we can right now. Currently we’re doing. We’re doing every single series that comes out in the Shonen Jump magazine in Japan. And we release these chapters for free. The three most recent chapters are free, and then you pay a low price, it’s $2.99 a month in American dollars for access to all the back chapters. So, we are trying to address putting out more content, making it easy access at a low price for more people to access. And then the other publishers are also doing the same thing with their streaming services. But we are making efforts.

Jun Murai:
Thank you very much. I care about that gentleman standing there. So, you want to hear from him or it’s too much? Yeah. All right, another round. Then, yeah, I just want to take a kind of a two, two kind of technical, and I… May I? Okay. So, so the first one is that, yeah, thank you very much for the, you know, all the, the questions I think, I think it’s very much, you know, the reasonable question and also the some of the important questions I need to address. The, you know, the over-pilot action is going to be, you know, kind of against the freedom of speech, which is the, you know, leading to a censorship. and the over-censorship maybe. And this is a very important and a serious issue always in a general, in the history of internet to address. So we are one of the area, I mean, the multi-stakeholder type of a discussion. When we started the ICANN, then the ICANN put the kind of government advisory board, but then all the kind of starting the discussion equally. So, which is very much the format to address those issues in a different stakeholders had a different voice and then listening to them. So IGF is one of the place to understand that one. So in that, well, anyway, probably the balance of that kind of business advantages on a certain area of our industry, and then also the over-censor ring situation should be avoided for the kind of open internet environment. So that’s probably a very important discussion and then applying to this area of our discussion, but probably it’s a taken for the older process in Japan for the manga piracy. Another very interesting question from the lady sitting in there was accessibility question. And then it’s a really important that the manga used to be the rate, I mean, comparing with the music industry, movie industry, other industry, then value. was kind of late to start addressing to all over the world. So I remember in the middle of 2000, 2013, then I visited the French manga shop and then all the books in Europe is kind of opened up this way. And then suddenly the manga is turned out to be a Japanese way, which is different from open from the right. I was very much surprised that, kind of those European fans of a manga accepting the basic culture things to address the manga. Story from the very, very, very famous soccer player from Spain that he was talking about. I started, he’s a big, big legend, a player. He said, I started the soccer from the reading of manga, but why the Japanese soccer player is all lefty? So that was a printed in a reversal. But anyway, so that power was recognized quite recently. Therefore, a lot of things, technical, the accessibility of the manga, I believe. The manga is now the language layer and then the other drawing layer is separated in many of the manga artists are utilizing the digital art tools. And then so that the placement and then the multi-language approach would be easier for the thing. And then the one of the reason why the pilot’s gonna be, kind of very much extending for the, kind of outreach was that the translation has been a very expensive and. could not be provided by from the origin of the manga publishers. So that is one of the example that you listen from, you know, the two people from the outside the country that, you know, they are making a lot of efforts and then, you know, although the publishing company of Japan is working with them to address those issues about accessibility, it’s gonna be a very important comment and the question. Thank you very much. And then, you know, probably continuous approaches to the Japanese manga space gonna be a very much a beneficial.

Moderator:
Oh, Murai-san, thank you very much. And one of such efforts includes Manga Plus by Shueisha where the English, Spanish, Thai, Indonesian, or Georgian or Russian, such a language in 190 countries available for one, two, three, episode one, two, three, one, two, three, three, three, three, three, three, available, the latest three episodes are available for free and it’s just information from the one publisher. And there was also a question about loyalty rates for the manga artists and I can say that typically it’s a 10% of retail price. So I think that that’s the kind of question that was standard here. And is there any, so could you please, you first and then you, please, yeah.

Audience:
Thank you very much. My name is Charles Chaban from the International Trademark Association. Again, I’m here as a fan, too, and of course, I was very happy to see this session from the beginning, in fact, because intellectual property is something important to know properly, and to be honest, I was a little bit puzzled now from some of the comments, but as our father of the internet here, June, whom I met 20 years ago in ICANN, for all the different views, so in fact, I think I blame ourselves as some of the people in the intellectual property field, that copyright maybe is not known, but at the same time, copyright is something important, and it has a version that say that copyright is mainly, there is something called fair use, so when you say fair use, it’s so when someone see it for his own benefit, for learning, so it’s not against the knowledge, it’s part of me to protect the owner of the copyrighted material, to be sure he gets his own royalty, as was mentioned, and make the knowledge even more important, and so it’s just a comment, so I have no special question, just to thank you for what you done here, and to tell you that even in the 80s, I used to attend many of these programs you showed, by the way, even the older one, I’m older here, even in the Arabic language in Jordan, so it’s available before the internet even, thank you. Thank you very much, so please. Hello, I’ll be quick, I promise, I’m Felp, I’m part of the Brazilian youth delegation here, and I’m also, that panel is really important to me, my art since my childhood has always been very influenced by manga, and even on the other side of the world, the artistic power of manga influences many artists, other artists like me, so however, in my childhood, the possibility of. purchasing manga was very restricted where I lived, so I read and was inspired by the few titles that I could physically access that time. So thinking about this power of the manga to feed new artists that will come from all over the world with the internet, which ways of distributing manga online legally in Latin America, do you envisage that at this time, when we have so much piracy in countries that often don’t have platforms to access them in a legal way, that develop values already established artists and enhances artists that are yet to come?

Moderator:
Thank you very much.

Audience:
Hi, well my question is this, how much do you involve the youths in this fight against piracy? Because you know most of the pirates arise from our generation or maybe the following ones and I would think that if us as in my generation and the following ones understand how much manga is important and might disappear as a cause, as a result of the piracy, they will stand up as the first, you know, the army line against their fellow age group members and probably share the same technologies as in there will be new technologies and everything. So how much are the youths and the coming generations involved in this fight?

Jun Murai:
As for the youth, Bob? Well, yeah, thank you very much. As many of the youth, you know, participants on the IJF raised a voice and which is good and then they make very short things and then I remember the software pirates was around and then they know all the software users and the PC software users and game users. Bob, they… there was a, you know, very much a youth people, youth, young people standing up and then they started to work on a, you know, kind of a phrase that, you know, we don’t use illegally copied software type of a thing. And so, if you visit the booth of a publishing company about the pilots, pilot’s booth, then they have a very much attractive campaign for the other people, including the young people. So I think that they would be, you know, kind of a very powerful supporter about this movement. So I think that they would be a very powerful supporter about this movement.

Moderator:
Thank you very much, Moraisan. And it is what exactly I. So, yeah, again, thank you very much for valuable questions and insights. And so it’s already 16.14. So we are now, it seems that we are running out of time. There is also, yeah, a good question online, but yes, this is for another day. Again, thank you for joining us for this session. And as Moraisan said, please visit our booth at number two, IGF. The Arigato video is now being shown. And written by 16 artists handed out, if you think. And yeah, finally, give speakers and staffs a warm hand. Thank you very much. Thank you. Thank you. Thank you. Thank you. Arigato gozaimasu. Arigato gozaimasu. Arigato gozaimasu. Thank you very much.

Nicole Rousmaniere

Speech speed

174 words per minute

Speech length

2164 words

Speech time

746 secs

Andy Nakatani

Speech speed

128 words per minute

Speech length

1101 words

Speech time

517 secs

Audience

Speech speed

171 words per minute

Speech length

1941 words

Speech time

680 secs

Jun Murai

Speech speed

131 words per minute

Speech length

2602 words

Speech time

1190 secs

Moderator

Speech speed

110 words per minute

Speech length

2559 words

Speech time

1391 secs

Moto HAGIO

Speech speed

158 words per minute

Speech length

868 words

Speech time

329 secs

Meet&Greet for those funding Internet development | IGF 2023 Networking Session #111

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Carlos Rey Moreno

Carlos Rey Moreno and Eric Huerta are coordinators of the LockNet initiative, which prioritises community-centred connectivity. The initiative supports various community-centred connectivity initiatives through regranting opportunities. Carlos Rey Moreno advocates for strengthening organisations involved in community-centred connectivity and creating enabling environments for their growth.

The LockNet initiative engages in national-level policy and regulatory analysis to support the development of effective policies and regulations that promote and sustain community-centred connectivity projects. They also work on technology development to ensure that the initiatives they support have access to the latest advancements and tools in internet technology.

The LockNet initiative places a strong focus on gender equality. They create safe spaces where women can enhance their knowledge and skills in internet technology and regulation. By empowering women in these fields, the initiative aims to promote inclusivity and diversity within community-centred connectivity projects.

The evidence supporting these activities and objectives can be seen through the initiative’s commitment to regranting. Through their financial support, they ensure the sustainability and impact of various community-centred connectivity initiatives. This demonstrates their dedication to promoting and contributing to the success of these projects.

In conclusion, Carlos Rey Moreno and Eric Huerta coordinate the LockNet initiative, which focuses on community-centred connectivity. The initiative engages in regranting, policy and regulatory analysis, and technology development to support these initiatives. They also prioritise promoting gender equality within the community-centred connectivity sector. The evidence supporting their work is substantiated by their commitment to regranting and their dedication to empowering and strengthening these projects.

Charles Noir

The Canadian Internet Registration Authority (CIRA) is committed to supporting various sectors of society. They focus on providing grants to non-profit organizations, registered charities, academics, universities, colleges, and indigenous communities. CIRA’s granting program aims to empower these groups and address their specific needs and challenges.

CIRA places particular emphasis on northern remote and indigenous communities, recognizing their unique circumstances and vulnerabilities. They strive to provide funding and support to bridge the digital divide and improve internet accessibility. This targeted approach demonstrates their commitment to reducing inequalities and promoting inclusivity, aligned with SDG 10 – Reduced Inequalities.

Charles Noir, Vice President of Community Investment Policy and Advocacy at CIRA, plays a pivotal role in shaping and advocating for their community investment policies. His support for grants to these select groups further highlights the importance of CIRA’s work in these areas.

In addition to their granting program, CIRA also invests in cybersecurity services for Canadians, offering free services to help individuals protect themselves online. This initiative addresses the growing concern of cyber threats and contributes to a safer online environment. It aligns with SDG 9 – Industry, Innovation, and Infrastructure.

CIRA also develops services for testing internet performance in a neutral manner, ensuring accurate assessments of connectivity. This impartial approach facilitates improvements in infrastructure and connectivity.

In summary, CIRA’s focus on grants for non-profits, registered charities, academics, universities, colleges, and indigenous communities, along with their dedication to providing free cybersecurity services and neutral internet performance testing, underscores their commitment to promoting inclusivity and security in the digital landscape. Their efforts contribute to the achievement of multiple Sustainable Development Goals (SDGs), including SDG 4 – Quality Education, SDG 9 – Industry, Innovation, and Infrastructure, and SDG 10 – Reduced Inequalities.

Laura Conde Tresca

The Brazilian Internet Steering Committee, with Laura as a board member, plays a crucial role in supporting and funding AI centres in Brazil. These AI centres are essential hubs for research, development, and innovation in the field of artificial intelligence. By providing financial resources and support, the committee enables these centres to drive progress, encourage collaboration, and contribute to the advancement of AI technology in Brazil.

In addition to their support for AI centres, the committee also demonstrates a commitment to promoting gender diversity in the tech industry. They offer small fellowships specifically designed for women, encouraging them to write papers and contribute to the academic discourse surrounding technology. These fellowships provide financial support and recognition, helping to address the gender gap in the field and empower women to excel in tech-related disciplines.

Furthermore, the committee extends its positive impact by providing support for small events focused on Internet governance. By sponsoring and assisting in organising these events, they contribute to the dialogue and exchange of ideas concerning the responsible and inclusive management of the internet. This support fosters awareness, knowledge-sharing, and collaboration among various stakeholders regarding the governance of online platforms and services.

In conclusion, the Brazilian Internet Steering Committee, under Laura’s guidance, is a driving force behind the progress and development of AI in Brazil. Their support and funding for AI centres, provision of fellowships for women in tech, and promotion of small events on Internet governance underscore their commitment to industry, innovation, infrastructure, and gender equality. Their initiatives serve as models for other organisations aspiring to create a more inclusive and technologically advanced society.

One noteworthy observation is the multifaceted approach of the committee’s initiatives. By combining support for AI centres, gender diversity, and Internet governance, they address key areas where progress is needed in the tech industry. This holistic approach recognises the interconnected nature of these issues and ensures that efforts are made across different domains to drive positive change.

Overall, the Brazilian Internet Steering Committee, with Laura’s involvement, serves as a pioneer and catalyst for advancements in AI, promoting gender equality, and fostering responsible Internet governance in Brazil.

Audience

Janne Hedronen, representing the Finnish Ministry of Foreign Affairs, expresses concern over the financing of the Internet Governance Forum (IGF), an organization that aims to promote sustainable industrialization and innovation through resilient infrastructure. The ministry has been a consistent donor to the IGF since 2006, providing approximately $2 million USD in funding. However, Janne urges participants to step up their efforts for financing the IGF, highlighting the importance and urgency of adequately funding the organization to fulfill its mandate.

The IO Foundation presents their work on data-centric digital rights and their support for the technical community. They view the technical community as the next generation of rights defenders, emphasizing their role in safeguarding digital rights in an increasingly data-driven world.

Carla Braga and Raimundo from the Amazon region focus their efforts on combating misinformation and disinformation, which are closely linked with the issue of deforestation. Their work highlights the connection between the spread of false information and the detrimental impact it has on efforts to address deforestation.

Rebecca Papillo, representing the .au domain administration, runs a community grants program aimed at promoting digital inclusion and innovation for marginalized communities. The program specifically targets regional and remote Australians, Australians with disabilities, and Australia’s First Nations people. By providing grants, Papillo aims to bridge the digital divide and empower these communities to access opportunities in the digital age.

Christian Leon, from ARSUR and the Internet Bolivia Foundation, is dedicated to protecting data, fighting against digital rights violations, and promoting digital inclusion. Leon’s work focuses on addressing issues such as digital violence and ensuring that everyone has equal access to and benefits from the internet.

Access Now has developed a grant program to support grassroots organizations. Over the past five years, they have disbursed approximately $8 million to 120 organizations. This program aims to empower and enable local organizations to champion digital rights and work towards reducing inequalities.

Catherine Townsend of Measurement Lab raises concerns about monitoring the internet. While Measurement Lab is actively involved in measuring the speed and quality of the internet worldwide, Townsend highlights the potential negative implications of excessive monitoring, emphasizing the need to strike a balance between privacy concerns and the necessity of monitoring to ensure internet accessibility and fairness.

Pranav from the Internet Society Foundation is dedicated to empowering youth ambassadors and early and mid-career professionals through training programs. These programs provide free courses that cover both technical aspects of the internet and policy-related issues. By equipping young individuals with the necessary skills and knowledge, Pranav aims to create a new generation of internet leaders.

The challenges faced in financing community development and training programs are acknowledged, with a volunteer community struggling to maintain and operate due to financial constraints. Efforts are being made to conduct webinars and seminars, but in-person meetings require a sizable budget. The need for financial support to train and develop skills in the new generation is underscored, along with exploring the potential for collaboration between industry and academia in regional settings.

Furthermore, the summary highlights the challenges of achieving digital inclusion in authoritarian regimes. Foreign donor restrictions are seen as a significant barrier to securing funding for humanitarian work in such regimes, while effectively presenting the impact of these initiatives poses an additional difficulty.

The importance of measuring impact for continued support is emphasized, although funding for impact measurement itself remains a challenge. Donors increasingly seek evidence of impact, particularly for technology tools, prompting the need to develop effective measurement tools. However, writing impact reports can be burdensome for smaller organizations.

Lastly, there is a notable demand for small grants in community networks, particularly at the local level. Larger grants from big organizations often do not align with the specific needs of communities, leading to an inadequate supply of funding. This highlights the necessity for increased financial support to meet the demand for small grants.

In conclusion, various stakeholders and organizations are actively engaged in addressing key issues related to the internet, digital rights, and digital inclusion. While funding challenges persist, there is a shared commitment to promote sustainable industrialization, combat misinformation, bridge the digital divide, protect data, and empower marginalized communities. Efforts are also being made to strike a balance between monitoring the internet for accessibility while preserving privacy concerns. The need to measure impact and provide small grants for community networks further underlines the significance of continued support in achieving these goals.

Jenn Beard

Jenn Beard is an employee at the ISOC Foundation, where she collaborates with Brian Horlick-Cruz. The ISOC Foundation’s main focus is on the development of a stronger Internet, its growth, and the defence of its integrity. In pursuit of this, they have implemented a comprehensive portfolio of activities.

The foundation offers approximately 15 grant programmes, covering a wide range of areas such as connectivity, digital skills, and digital learning. These grant programmes play a crucial role in supporting projects that aim to improve access to the Internet, enhance digital literacy, and promote innovative approaches to online education. This demonstrates the foundation’s commitment to SDG 9 (Industry, Innovation, and Infrastructure) and SDG 4 (Quality Education).

One noteworthy aspect is the collaborative effort between Jenn Beard and Brian Horlick-Cruz. Although no specific details are provided, their partnership suggests a dynamic and efficient working environment within the foundation.

The analysis indicates a generally positive sentiment towards both Jenn Beard and her contributions to the ISOC Foundation. As an employee involved in the foundation’s grant programmes, Jenn Beard plays a significant role in advancing its initiatives. Her work directly contributes to building a stronger Internet, fostering its growth, and defending it from potential threats. These efforts align with the foundation’s mission and its positive impact on society.

In conclusion, Jenn Beard’s work at the ISOC Foundation, in collaboration with Brian Horlick-Cruz, encompasses various grant programmes that aim to improve Internet accessibility, digital skills, and digital learning. The positive sentiment surrounding her and her contributions further emphasise the foundation’s commitment to creating a better digital future.

Alessia Zucchetti

LACNIC, the organisation dedicated to promoting digital innovation in Latin America and the Caribbean, offers several programs to support industry, innovation, and infrastructure in the region. One of its primary initiatives is FRIDA, the Fund for Digital Innovation in Latin America and the Caribbean. FRIDA has been in operation for almost two decades, demonstrating LACNIC’s commitment to fostering and nurturing digital innovation in the region.

In addition to FRIDA, LACNIC also prioritises applied research in various areas such as network architecture, internet stability, and security. By focusing on research in these fields, LACNIC aims to contribute to the development and improvement of the digital infrastructure, ensuring stability and security for online activities in Latin America and the Caribbean.

LACNIC’s dedication extends beyond innovation and research. The organisation recognises the importance of capacity building and aims to promote the participation of women in the technical community and the wider internet ecosystem. Through its programs, LACNIC provides opportunities for individuals to enhance their skills and knowledge and contributes to a more diverse and inclusive digital landscape.

LACNIC’s initiatives are aligned with various United Nations Sustainable Development Goals (SDGs) including SDG 9 (Industry, Innovation and Infrastructure), SDG 16 (Peace, Justice, and Strong Institutions), SDG 5 (Gender Equality), SDG 4 (Quality Education), and SDG 10 (Reduced Inequalities). This positive sentiment is reflected in LACNIC’s ongoing commitment to supporting digital innovation, applied research, capacity building, and gender equality.

In conclusion, LACNIC plays a vital role in promoting digital innovation and enhancing the digital landscape in Latin America and the Caribbean. Through the FRIDA grant program, focus on applied research, capacity building, and women’s participation, LACNIC supports industry, innovation, and infrastructure, contributing to the achievement of several SDGs. This ensures sustainable and inclusive progress in the region’s digital era.

Percival Henriques

Percival Henriquez, a distinguished board member at the Internet Committee and NIC.BR, is known for his expertise in internet governance in Brazil. NIC.BR, short for Nรบcleo de Informaรงรฃo e Coordenaรงรฃo do Ponto BR, is responsible for administering and managing internet domain names ending with “.br” in Brazil.

As a board member, Henriquez plays a crucial role in formulating policies and making strategic decisions to ensure the smooth functioning and development of the internet in Brazil. This includes overseeing domain name registrations, managing technical infrastructure, and addressing any issues or challenges that arise.

The Internet Committee and NIC.BR’s role is significant, as the internet has become an essential tool for communication, commerce, and innovation. The committee’s efforts to manage and regulate domain names contribute to maintaining a secure and reliable online environment for individuals and businesses.

Henriquez’s position highlights his expertise in internet governance and his commitment to advancing the internet ecosystem in Brazil. He is likely involved in discussions and decision-making processes related to internet policies, technical standards, and cybersecurity.

Having a dedicated and knowledgeable individual like Henriquez on the board ensures that NIC.BR remains at the forefront of technological advancements and effectively addresses emerging challenges in the dynamic digital landscape.

Overall, Percival Henriquez’s role as a board member at the Internet Committee and NIC.BR underscores Brazil’s commitment to promoting internet accessibility, security, and innovation. His contributions in shaping internet policies and strategies will have a significant impact on the development of the internet in Brazil.

Yoshiki Uchida

Yoshiki Uchida, a student at Keio University, actively participates in the White Project, an initiative that complements his studies on the Internet. The White Project, established by Professor Jim Ryan 37 years ago, is vital to Uchida’s academic journey. His involvement demonstrates a commitment to exploring and advancing knowledge in the field of Internet studies.

Uchida also expresses a keen interest in supporting the APNIC Foundation in the near future. The APNIC Foundation focuses on promoting partnerships to achieve global goals. Uchida’s intention reflects his dedication to contributing to these goals.

The evidence confirms Uchida’s involvement and interest. Uchida’s affiliation with Keio University and engagement with the White Project exemplify his commitment to quality education, a key aspect addressed in the Sustainable Development Goals (SDGs). Additionally, Uchida’s positive sentiment towards the APNIC Foundation indicates his willingness to engage in partnership goals and contribute to global progress.

In conclusion, Yoshiki Uchida’s academic pursuits at Keio University are enriched through his involvement with the White Project, aligning with his studies on the Internet. Furthermore, his expressed interest in supporting the APNIC Foundation demonstrates his commitment to partnerships for achieving global goals. Uchida’s dedication and positive sentiment towards both initiatives highlight his intention to make a meaningful impact in the field of Internet studies and contribute to broader sustainable development initiatives.

Moderator – Silvia Cadena

The APNIC Foundation, known for its work in supporting the development priorities of APNIC, is organising an event specifically for organisations that are investing in development. The purpose of this event is to foster collaboration among these organisations by providing them with the opportunity to find common ground and explore possible collaborations.

Prior to the COVID-19 pandemic, the APNIC Foundation used to host similar events, showcasing its commitment to bringing organisations together. This upcoming event aims to continue this tradition virtually, ensuring that despite geographical limits, opportunities to collaborate and cross borders are still present. It highlights the importance of open discussions and encourages organisations to engage in conversations surrounding their projects and potential collaborations.

One notable approach taken by the APNIC Foundation is to allow the fund-allocating organisations to speak first. This sets the stage for an informal conversation where participating organisations can share details about their projects. By giving each organisation an opportunity to present their initiatives, the event aims to create an environment conducive to collaboration and knowledge sharing.

Collaboration and co-funding are key elements of the APNIC Foundation’s strategy to increase the footprint of their work. The foundation actively seeks opportunities to collaborate with other organisations and invest in joint initiatives. By pooling resources and expertise, they aim to have a greater impact on various development priorities.

Silvia Cadena, a strong advocate for collaboration, emphasizes the importance of engaging with organizations that are investing in technical infrastructure and the technical community. Recognising the challenges faced by network engineers and cybersecurity professionals in gaining support from traditional donors, Cadena highlights the role of the APNIC Foundation in providing grants, fellowships, awards, and research support for such initiatives. This demonstrates the foundation’s commitment to supporting technical projects and fostering collaboration within the technical community.

In addition to supporting technical initiatives, the APNIC Foundation also focuses on programmes that address inclusion, infrastructure, and knowledge. Their efforts are aimed at keeping the Internet open, stable, and accessible. The foundation allocates IP addresses and ASN numbers across 56 economies in the Asia Pacific, solidifying its impact on the industry of innovation and infrastructure. Since its establishment in 2016, the APNIC Foundation has been actively implementing projects in various areas, including education, gender and diversity, and community building and strengthening.

In conclusion, the APNIC Foundation is hosting an event to bring together organisations investing in development. With a focus on collaboration, the event aims to facilitate open discussions, promote knowledge sharing, and explore potential collaborations. The foundation’s emphasis on co-funding, cross-border collaboration, and engagement with the technical community showcases its commitment to expanding its work and supporting the development priorities of APNIC. By supporting initiatives across inclusion, infrastructure, and knowledge, the APNIC Foundation plays a vital role in keeping the Internet accessible and stable.

Valerie Frissen

Valerie Frissen is the director of SIDN Fund, a separate foundation funded by the Dutch National Registry. The fund focuses on supporting initiatives that promote responsible internet use and raise awareness. Its main aim is to empower end users, enabling them to make the most of the internet while also being aware of the potential risks and challenges associated with it.

SIDN Fund plays a crucial role in supporting projects that contribute to the achievement of Sustainable Development Goal 4: Quality Education and Sustainable Development Goal 9: Industry, Innovation, and Infrastructure. By investing in initiatives that encourage responsible internet use, the fund helps to create a safer and more inclusive online space that benefits individuals and society as a whole.

Valerie Frissen strongly advocates for cooperation with other funding organizations to increase the impact of SIDN Fund’s initiatives. Recognizing the importance of collaboration, the fund actively engages with other funders in the Netherlands and international organizations. This collaboration allows them to pool resources and expertise, enabling them to implement larger-scale projects and reach a wider audience.

One notable example of this collaboration is SIDN Fund’s participation in a large conference in Brussels. This conference serves as a platform for bringing together digital rights funders from both Europe and beyond. By participating in such events, the fund not only learns from the experiences and insights of others but also shares its own knowledge and expertise. This ultimately contributes to a more coordinated and effective approach to digital rights funding.

In conclusion, as the director of SIDN Fund, Valerie Frissen emphasizes the importance of responsible internet use and raising awareness among end users. The fund’s support for projects in these areas contributes to the achievement of global development objectives. Through collaboration with other funders and active participation in conferences, the fund ensures a more comprehensive and impactful approach to advancing digital rights.

Garcia Ramilo

Garcia Ramilo is employed by the Association for Progressive Communications (APC) in a significant role overseeing a membership network that spans 40 countries. The APC is dedicated to promoting the achievement of the United Nations Sustainable Development Goals (SDGs), particularly SDG17: Partnership for the Goals. This underscores their commitment to fostering collaborations and partnerships to address global challenges.

In terms of resource allocation within the network, an interesting aspect is the sharing of resources through various means. A notable approach is through regranting, whereby resources are distributed based on the network’s priorities. This enables the APC to effectively support its members and partners in their initiatives. Moreover, the network also engages in capacity building and research, empowering members to enhance their skills and knowledge to drive positive change in their respective communities. Collaboration is another key aspect of resource sharing, as the APC actively works with members and partners to ensure resources are maximised and beneficial for all involved.

An intriguing aspect of the APC’s resource allocation strategy is that approximately half of the resources are directed towards its members, while the remaining 50% is allocated to different partners. This balanced distribution ensures that both the needs of members and external partners are met, reinforcing the network’s commitment to reducing inequalities (SDG10) and fostering partnerships to achieve global goals (SDG17).

In conclusion, Garcia Ramilo plays a crucial role within the Association for Progressive Communications, managing a membership network across 40 countries. The network places importance on resource sharing through regranting, capacity building, research, and collaborations. Roughly half of the resources are directed towards members, with the remaining 50% allocated to partners. Through these efforts, the APC aims to address global challenges, reduce inequalities, and foster partnerships to achieve the Sustainable Development Goals.

Changho Kim

In his presentation, Changho Kim, representing the Open Society Foundation’s East Asia Program, provided a comprehensive insight into their work supporting civil society organizations, with a specific focus on Northeast Asia. Northeast Asia includes China, Hong Kong, Taiwan, South Korea, and Japan.

Kim emphasized that the Open Society Foundation’s East Asia Program aims to provide support and resources to civil society organizations operating in these regions. These organizations play a crucial role in fostering transparency, accountability, and the protection of human rights.

The Open Society Foundation recognizes the value of civil society organizations in promoting democratic governance, advocating for social justice, and challenging systemic inequalities. Through financial grants, capacity-building initiatives, and strategic partnerships, the program enables these organizations to undertake projects, research, and advocacy efforts that address pressing issues in their respective societies.

Within Northeast Asia, the program seeks to address diverse challenges that vary across the countries in the region. In China, civil society organizations face numerous restrictions and obstacles due to the government’s tight control over civil liberties. However, the program seeks to support these organizations in their fight for social justice, human rights, and the rule of law.

In Hong Kong, recent political developments have highlighted the importance of safeguarding civil society space. The Open Society Foundation’s East Asia Program plays a vital role in providing resources and support to organizations working to protect freedom of expression, assembly, and association in the face of increasing restrictions.

Taiwan, on the other hand, offers a relatively more open environment for civil society organizations. The program aims to enhance the capacity of these organizations to advocate for progressive reforms and social change, particularly in areas such as gender equality, LGBTQ+ rights, and environmental sustainability.

South Korea, a vibrant democracy, faces its unique challenges, including labor rights, democratic participation, and social inclusion. The program supports civil society organizations in their efforts to address these issues and promote good governance, social cohesion, and inclusive policies.

Finally, in Japan, civil society organizations face challenges related to democratic participation, minority rights, and refugee protection. The program works to empower these organizations, enabling them to advance human rights, social justice, and democratic values.

In conclusion, Changho Kim’s presentation highlighted the Open Society Foundation’s East Asia Program’s critical role in supporting civil society organizations across Northeast Asia. Through its financial, capacity-building, and collaborative initiatives, the program aims to empower these organizations in their pursuit of social change, human rights protection, and democratic governance. By addressing country-specific challenges and fostering cross-border collaboration, the program seeks to contribute to a more inclusive, just, and democratic Northeast Asia.

Michel Lambert

Michel Lambert is a member of Equality, a Canadian organisation dedicated to advancing freedom online. Equality focuses on creating open-source tools and services that aid in the support of this cause. Their work aims to counteract the concept of “splinternets” and promote internet freedom for all.

In addition to their tool development, Equality also extends support to smaller organisations, small businesses, and individual developers. Over the past two years, Equality has actively provided assistance to these entities. Their support encompasses a range of areas, such as helping smaller organisations gain the resources and guidance needed to develop new technology against the splinternets. Notably, Equality extends its support to both small businesses and individual developers, recognising their role in technological advancements.

The new technologies fostered by Equality cover various aspects, with a particular focus on virtual private networks (VPNs) and satellite technology. These innovative solutions allow users to navigate online platforms securely and overcome the obstacles posed by splinternets. By facilitating access to such technologies, Equality empowers individuals and businesses to protect their digital freedoms and fully participate in the modern interconnected world.

The sentiment expressed towards Equality is overwhelmingly positive. The speakers involved in discussing this topic emphasised the importance of Equality’s work in promoting internet freedom and supporting technological innovation. Equality’s commitment to open-source development and its focus on supporting smaller organisations and developers highlight its dedication to fostering an inclusive and free digital space.

In conclusion, Michel Lambert collaborates with Equality, a Canadian organisation at the forefront of championing internet freedom. Their creation of open-source tools and services, along with their support for smaller organisations and developers, demonstrates their commitment to combatting the concept of splinternets. Equalityโ€™s efforts play a crucial role in ensuring digital rights and fostering technological innovation worldwide.

Paul Byron Wilson

Paul Byron Wilson, the head of APNIC (Asia-Pacific Network Information Centre) and a trustee in the Internet Development Trust, is a notable figure in internet connectivity and development. He plays a significant role in advancing the internet infrastructure in the Asia-Pacific region. Wilson’s work includes supporting high-bandwidth connections for research and education networks in the Pacific, contributing to SDG 9 (Industry, Innovation, and Infrastructure) and SDG 4 (Quality Education). The Internet Development Trust funds projects of the APNIC Foundation, including the ISF-Asia grants, promoting innovation and infrastructure development. Wilson’s involvement with ArenaPAC further demonstrates his dedication to creating high-bandwidth connections in the Pacific for enhanced education and research collaboration. Overall, Wilson’s leadership and involvement highlight his commitment to driving progress in internet connectivity and the promotion of quality education in the Asia-Pacific region.

Hirochika Asai

The White Project, founded 37 years ago by Professor Jim Ryan, is a renowned research consortium that focuses on conducting research and promoting educational activities. The project places a strong emphasis on collaboration between academia and industry. Hirochika Asai, a representative of the White Project, highlights the importance of this collaboration and its benefits for both sectors. This partnership enables the exchange of knowledge, expertise, and resources, leading to innovative breakthroughs and advancements across various fields.

One notable achievement of the White Project is the operation of ArenaPAC, a high-capacity submarine cable infrastructure dedicated to facilitating research and education. ArenaPAC serves as a crucial communication channel, connecting scientists, researchers, and educators, and enabling them to share data. Its significance is highlighted by Paul, who cites it as a remarkable achievement.

Additionally, the White Project recognizes research and education as vital for driving the future global acceleration of human activities, particularly in scientific research. They firmly believe that investment in these areas is essential to achieve long-term sustainable development goals. By fostering strong partnerships between academia, industry, and educational institutions, the White Project aims to create an environment that encourages innovation, knowledge exchange, and technological advancements.

In conclusion, the White Project, led by Professor Jim Ryan, is a reputable research consortium that conducts research and promotes educational activities. Through its collaboration between academia and industry, it has successfully facilitated the exchange of knowledge and resources. The operation of ArenaPAC, a high-capacity submarine cable infrastructure, supports research and education. The White Project recognizes the importance of research and education in the future acceleration of global human activities, particularly in scientific research. By continuing to foster collaborations and invest in these areas, it strives to achieve sustainable development goals and advancements in various fields.

Ellisha Heppner

Ellisha Heppner is the grants management lead for the APNIC Foundation. Her role involves overseeing the administration and distribution of grants, ensuring they are aligned with the foundation’s goals. One of the key programs she manages is the ISF-Asia grants, which follows a competitive process and accepts proposals on an annual basis. These grants focus on promoting projects related to infrastructure, inclusion, and knowledge.

The ISF-Asia grants are instrumental in advancing the Sustainable Development Goals (SDGs) set by the United Nations. They contribute to SDG 9, which focuses on industry, innovation, and infrastructure, SDG 4, which aims to ensure quality education, and SDG 10, which aims to reduce inequalities. Through these grants, the APNIC Foundation actively supports sustainable development and social progress.

In addition to the broad themes of infrastructure, inclusion, and knowledge, the APNIC Foundation provides specific funding for projects related to IPv6 and environmental sustainability. This is made possible through the Ian Peter grant, which aligns with SDG 9 and SDG 13, focused on climate action. By offering dedicated funding for these areas, the foundation promotes the adoption of IPv6 and the development of environmental solutions.

Ellisha Heppner’s role as grants management lead is vital in ensuring the effective distribution of grants in line with the foundation’s objectives. Her expertise and oversight critically contribute to the selection of promising proposals and the meaningful impact of awarded grants in the Asia-Pacific region.

In summary, Ellisha Heppner plays a key role at the APNIC Foundation as the grants management lead. Under her supervision, the ISF-Asia grants focus on infrastructure, inclusion, and knowledge, while also supporting specific areas such as IPv6 and environmental projects. Through these grants, the APNIC Foundation contributes to the achievement of SDGs and promotes sustainable development in the region.

Brian Horlick-Cruz

Brian Horlick-Cruz manages grant programs at the Internet Society Foundation, with a focus on community-oriented funding initiatives. These programs contribute to the achievement of SDG 9: Industry, Innovation and Infrastructure, as well as SDG 17: Partnership for the Goals.

In his role, Brian supports a broad range of technical communities, including network operator groups and national research and education networks. This involvement highlights his commitment to fostering collaboration and innovation within the industry. By providing resources and support, Brian ensures the growth and contributions of these technical communities to industry advancement and infrastructure development.

Brian’s impact extends beyond technical communities, as he also coordinates programs for Internet Society chapters and the National and Regional Internet governance forums. These platforms serve as arenas for discussions, knowledge sharing, and policy formulation that shape the future of the internet. Under Brian’s guidance, these programs facilitate the exchange of ideas and the development of strong governance frameworks.

Overall, Brian Horlick-Cruz’s work as a grant program manager at the Internet Society Foundation is highly regarded. The sentiment towards him and his contributions is generally positive and neutral, reflecting the significant impact he has made in the field of grant management and community support.

In summary, Brian Horlick-Cruz manages community-oriented grant programs at the Internet Society Foundation. He supports various technical communities, internet governance forums, and Internet Society chapters. His efforts contribute to advancing industry, innovation, infrastructure, and achieving sustainable development goals. Brian’s positive and neutral sentiment reflects the value and esteem he has earned in his role.

Janne Hirvonen

Finland has been actively funding the Internet Governance Forum (IGF) since 2006, contributing a total of approximately 2 million USD. However, there are concerns regarding the current financing of the IGF, suggesting that the existing financial arrangements may not be sufficient. The specific points of concern have not been specified, but there is a negative sentiment associated with the current situation.

On the other hand, there is support for upscaling efforts to ensure the long-term sustainability of the IGF. The call for upscaling is motivated by the recognition of the crucial role that the IGF plays in fulfilling its UN mandate. The IGF is seen as a platform that promotes dialogue and cooperation among various stakeholders to effectively address the complexities of internet governance.

To secure the sustainability of the IGF, it is suggested to explore unconventional means of financing beyond the traditional methods. This would involve fostering an environment that is open to suggestions and innovations in terms of financial support. By encouraging new approaches to funding, it is believed that the IGF can address the existing concerns and ensure its continued operation.

In conclusion, Finland has been a significant contributor to the IGF’s funding for over a decade. However, concerns have been raised regarding the current financing situation, prompting the need for upscaling efforts to ensure the long-term sustainability of the IGF. Exploring unconventional means of financing and recognizing the crucial role of the IGF in fulfilling its UN mandate are highlighted as important strategies to address these concerns and secure the future of the IGF.

Keywords: Internet Governance Forum, IGF funding, Finland, financing, sustainability, stakeholders, dialogue, cooperation, UN mandate.

Session transcript

Moderator – Silvia Cadena:
We have the okay? Thank you. I was just going to say we are having some technical issues, but apparently they have been resolved. Good afternoon, everyone. My name is Silvia. I am the acting CEO of the APNIC Foundation. APNIC is really proud and the Foundation is really proud to host this event again after COVID and a few ones that we missed. But before COVID, we used to try and do this with organizations that were investing in development to try and find some common ground and try to get to know each other, what our priorities are, and how can we find ways to collaborate or at least don’t feel that alone in the field when we are supporting our communities. So we are very happy to host you today. It is going to be a very informal conversation where we hope that organizations that are across the room that are some that we know and some that we don’t know will take the microphone and share about the projects that they are investing in. For those of you that are receiving funding from us, I would appreciate if you let the people that are investing to talk first and we will see how we can talk about the ones that are receiving funds at a different time, I would say. I am also very happy to have with me Valerie Friesen here who has been also spearheading a similar approach to those that have been investing in, you know, from the domain name industry, also foundations and organizations that are allocating funds to support internet development. So there are a number of different organizations and initiatives. So for us at the foundation, we are always looking at opportunities to co-fund and collaborate and make the footprint of what we are doing bigger. And although in some areas some of you have geographical limits about where your funds can go, same for us. We only cover Asia Pacific 56 economies. There are opportunities where we can cross, you know, borders and collaborate. So I will pass the mic to Valerie to introduce herself a little bit and then we will do around the table introduction, a very short, just focusing on who are you, your priorities and, you know, the organization that you represent. And then we will continue with the session. Thank you, Valerie.

Valerie Frissen:
Okay. Well, very happy to be here. Thank you, Sylvia, for organizing this. I’m working in the Netherlands as a director of SIDN Fund. And SIDN is the Dutch National Registry which founded a separate foundation to fund all kinds of initiatives and organizations that try to work on developing a responsible internet, as we call it. So we are funding particularly projects that are sort of empowering end users in terms of educating them to use and to know everything that is necessary about responsible use of the internet and awareness building kind of projects. We also cooperate a lot with other funders in the Netherlands and actually also with international organizations. Actually, at the moment, there’s a large conference in Brussels that is bringing together all the digital right funders from Europe and from outside of Europe where my colleague is participating now. And I think there are many of the registries, the domain name registries that also have public interest programs or community building programs or separate funds that are working in a similar way as we are. And this would be really interesting to cooperate.

Moderator – Silvia Cadena:
Thank you, Valerie. Valerie is also hosting a similar event at the ICANN meeting in Hamburg in a few weeks. Very informal. Who’s there? Just meet me after the meeting and we can see whether we can meet at the ICANN. Yeah. So we are trying to figure out a more regular calendar so that we can have a chat and see what our priorities are. On that note, I’m going to just mention very briefly what the APNIC Foundation also invest on. So we are the fundraising arm of APNIC. APNIC allocates IP addresses and ASN numbers across 56 economies in the Asia Pacific. And we have established a foundation in 2016 to support increased investment in the development priorities of APNIC to keep the Internet open, stable, reliable and secure in the region, but also to make it affordable and accessible. So the foundation is a fundraising foundation. And we are very lucky to have the support from the Asia Pacific Internet Development Trust at the moment. And we have one of the trustees here with us that will address us in a minute. And I think the kind of work that we are doing is to support programs across inclusion, infrastructure and knowledge. We have grants, fellowships, awards, research support, and we implement also projects directly across education, gender and diversity, and community building and community strengthening. So it’s kind of like a large portfolio that we are doing. And we’re trying to collaborate more, especially with the foundations and organizations that are investing in technical infrastructure and technical community as that part of the investment is very ignored, let’s say, by the normal donors as they tend to focus more on end users and digital literacy, safety, things like that. So getting support for network engineers and cybersecurity professionals is quite hard. So with that, I will pass on to my colleague, Alicia, here, who is the grants management lead of our main mechanism for funding to the community. And then we can get the rolling mic to start probably from Paul, this way, and we’ll see how it goes.

Ellisha Heppner:
Yes. Hello, everyone. I’m Alicia Hittner. I’m the grants management lead. My portfolio that I look after under the APNIC Foundation is the ISF-Asia grants, which is a competitive process called for proposals once a year that we look to fund in infrastructure, inclusion and knowledge. And we also have some subsets, which is IPv6, which is important to us, and also the environment with our Ian Peter grant.

Paul Byron Wilson:
Thanks, Sylvia. Hi, everyone. I’m Paul Wilson. I’m the head of APNIC, which I think Sylvia has already described very well. We also are trustees in an Internet Development Trust, which is funding projects of the APNIC Foundation and also an academic networking backbone in the Pacific that’s called ArenaPAC, and it’s establishing high-bandwidth connections to create a research and education network backbone around the Pacific at the moment. Thanks.

Brian Horlick-Cruz:
Hey, everyone. My name is Brian Horlick-Cruz. I’m a grants manager at the Internet Society Foundation. We run a whole big set of different grant programs funding many different kinds of initiatives. I work on a portfolio that mainly consists of sort of a community-oriented grant funding programs, so running a program for chapters of the Internet Society or running a program for the NRIs, so national and regional Internet governance forums, as well as a whole variety of different technical communities, including things like network operator groups, national research and education networks, things of that sort, and I’ll pass it on to our program officer, Jen Beard.

Jenn Beard:
Thanks. My name is Jen Beard. I’m also at the ISOC Foundation working on the things that Brian described, but also our full portfolio deals with building a stronger Internet, growing the Internet, and defending the Internet, so all of those different things we run about 15 grant programs that range from connectivity all the way to digital skills, digital learning, and, yeah, it’s great to be here with you all.

Garcia Ramilo:
Hello. I’m Chad Garcia-Ramelow. I’m from the Association for Progressive Communications. We’re a membership network, and we are in about 40 countries with our members. The grants, we’re not a grantee. We’re not a donor. We’re not a funding organization, but we do share resources, and the way we do this is through in many different ways, so we do regrant, and the regranting really goes to the priorities of the network. So one of them, it ranges from human rights-related to connectivity, community-centered connectivity, and Carlos can speak more about that and also Eric. One of the things, we do different things. It can be about capacity building. It can be about research, and much of it, about half of it, I’d say maybe would go to our members, but the 50% then goes to different partners. And sometimes it’s through open calls. Sometimes it’s through really working with a group of members or partners to collaborate on sharing the resources.

Alessia Zucchetti:
Hello, everyone. My name is Alessia Zucchetti. I am the coordinator of research and cooperation at LACNIC, which is the Latin American and the Caribbean registry. And, well, in the case of LACNIC, we also have the main grant is called FRIDA. It’s the Fund for Digital Innovation in Latin America and the Caribbean, which has existed since, well, for the past almost 20 years, so as long as LACNIC almost. And apart from that, we also are focused in applied research, mainly in technical topics related with network architecture, Internet stability, security, among other fields. And we also have, well, different programs that are focused mainly in capacity building, in favoring the participation of women in the technical community and the Internet ecosystem at large as well in the region. And, well, I’m very glad to be here with all of you. Thank you.

Changho Kim:
My name is Changho Kim. I work for Open Society Foundation’s East Asia Program. Yeah, we provide support for the mainly for the civil society organization. I’m only covering the Northeast Asia side, meaning China, Hong Kong, Taiwan, South Korea, and Japan. Thank you.

Percival Henriques:
Hello, everybody. I am Percival Henriquez from Brazil. I’m a board member at the Internet Committee and NIC.BR. Thank you.

Laura Conde Tresca:
Hello. My name is Laura Tresca. I’m also a board member of the Brazilian Internet Steering Committee. And the Brazilian Internet Steering Committee funds some AI centers in Brazil. And we have some small fellowships for women to write papers. And also we support small events on Internet governance.

Michel Lambert:
Hi. My name is Michel Lambert. I’m working with an organization called Equality based in Canada. We are creating, actually, tools and services, open source tools and services to support freedom online. We have started in the last two years to support smaller organizations, smaller business, even individual developers, creating new technology to counter the splinter nets. So all kinds of new, could be VPNs or satellite technology, whatever, just to make sure that people can access to content when there are some issues at that level.

Moderator – Silvia Cadena:
Can we continue with the gentleman over there? Thank you so much for doing the rolling microphone yourselves.

Charles Noir:
Hello, everybody. My name is Charles Noir. And I come from, I’m the Vice President of Community Investment Policy and Advocacy at CIRA, which is the Canadian Internet Registration Authority, .ca. We operate the .ca ccTLD. Part of what we do is we have a granting program. Amongst other ways, we give back. We’re focused on nonprofits, registered charities, academics, universities, colleges, and indigenous communities with a particular focus on northern remote and indigenous as of late. And so that’s part of what we do. We also invest in free or producing free services, for example, cybersecurity services that Canadians can use in order to help protect themselves online. We also provide and build some services so they can test their online Internet performance in a way that is third-party and neutral so they can hold telcos to account for the speeds they’re getting. But that’s about that.

Janne Hirvonen:
Hello. My name is Janne Hedronen. I’m representing the Finnish Ministry of Foreign Affairs. I took the seat on the table, as all the others also. Certainly our focus here is on IGF and funding the IGF. Ever since 2006, we’ve been funding IGF. Altogether, we’re at some 2 million euros, U.S. dollars, actually, as they counted. So our focus has been in the mandate of the IGF, what has been given by the UN resolution to it in the very beginning. Certainly we are open for suggestions also beyond IGF. What I’ve been hearing, what sort of discussions we’ve been having during this time here in Kyoto, I’m a bit concerned, I must admit, regarding the financing of the IGF. I hope that all the participants in these discussions take this concern seriously. I think it’s about time for us all to step up our efforts in this regard and ensure that the IGF has enough resources to fulfill its mandate. Thanks.

Audience:
Hi. I’m from the IO Foundation. I sat on the table thinking that it was about participants, and all of a sudden I’m just surrounded by people who give money. I cannot give money, I’m sorry. So our organization works on data-centric digital rights, and part of the project we’ve been involved in has been formulating this data-centric digital rights framework, as well as other projects to support the technical community in what we regard as their role as next generation of rights defenders. Thank you.

Carlos Rey Moreno:
Thank you very much. Here with Eric Huerta from Raisomatica. My name is Carlos Rey Moreno from APC, and together we coordinate the LockNet initiative. We do a lot of regranting, as Chad was mentioning before, in particular in relation to community-centered connectivity initiatives and creating an enabling environment that goes from strengthening organizations that are involved in the topic, building their capacity through national schools of community networks, and supporting policy and regulatory analysis or research on the topic to organizations at the national level, technology development, as well as creating skills or building skills and creating safe spaces for women to upskill their knowledge in technology as well as in regulation that has to do with the Internet. Thank you.

Hirochika Asai:
Good afternoon, everybody. I’m Hirochika Asai from White Project. Everyone calls me Panda because it’s easier to pronounce. I’m from White Project. White Project was founded 37 years ago by Professor Jim Ryan. The White Project is a kind of research consortium among the industry and academia. We are more focusing on the research and educational activities. We are now operating the ArenaPAC that is mentioned by Paul. That is their high-capacity submarine cable infrastructure for the research and education. I think the research and education is becoming more important for the future global acceleration for the human activities, including the scientific research. I want to contribute to that activity. Thank you very much. Nice to meet you.

Yoshiki Uchida:
Hello, everyone. I’m Yoshiki Uchida. People call me Uchiyoshi. Please call me Uchiyoshi. Now I’m studying the Internet at Keio University and the White Project. I’m interested in the APNIC Foundation. I want to support the APNIC Foundation in the near future. Thank you.

Audience:
Hello. We’re in the back row. I wasn’t expecting to say anything. My name is Gonaleia Sprink. I’m the chair of the Internet Society Accessibility Standing Group. I’m a partner with the Dynamic Coalition on Accessibility and Disability. and DCAD, as it’s called, has received funding through Vint Cerf to have travel support for persons with disability to be able to participate at the IGF, and I’m playing a mentoring role in doing that, because we feel that there needs to be more of a disability voice at the IGF. Thank you. Hello, everybody. My name is Amjad. I work for an NGO in North Africa, we are focusing mainly on digital rights and policies. I’m based in Tripoli, Libya, but we are working in the whole region, advocating on digital rights issues and trying to support activists on the ground, and also working on the policy level. Thank you. Hi, good afternoon, everyone. I am Glyndel Montarde. I am from CivisNet Foundation. CivisNet is an NGO. It focuses on serving the government through providing digital transformation through software and system development using open source systems, or open source softwares. However, we want to serve our country through our social corporate responsibility, and we’re looking for funds so that we could also empower the underserved and unconnected islands in the Philippines, knowing that we have more than 7,000 islands. And presently, we actually have a project with APNIC Foundation. Thank you. Hi, everyone. I’m Yuka Shori-Kataoka. Currently, I’m here today as a representative of APNG, Asia Pacific Next Generation. And also to train and develop a young leader in AP region. So I’ve been working as a language instructor and also educational practitioner. So currently, I graduated Keio University from Junmurai Laboratory. And also, I had a collaboration with WIDO Project and the Soy Asia Internet Project. So I’m really interested in this discussion during this APNIC meeting. So yeah, nice to talk to you. So yeah, nice meeting you. Thank you. Good afternoon, everyone. My name is Elsa Odron from CBSNet Foundation, Philippines, together with my colleague, Glenda. I think she already mentioned about the foundation. Are we introducing ourselves? Yes. Okay. Especially if you’re donors, if you are recipients. No, you don’t. You don’t. But say your name. I’m Heidi Rogers, and I work with the Tool Project. Thank you. Hello, everyone. My name is Zoe Thong-Bakhelemi. I work at Internews. We are both a funder and a recipient of funder in this space. I am on the platform accountabilities team. And this year, we actually launched a very small grant pool for APNIC Foundation. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. And I’m very happy to be here. So, welcome, everyone. My name is Zoe Thong-Bakhelemi. I’m a donor. I’m a recipient of funder in this space. I am on the platform accountabilities team. And this year, we actually launched a very small grant pool that I’m overseeing called a platform impacts fund, where we’re funding micro-grants to independent researchers based in the majority world researching and collecting mostly evidence-based collections on platform impacts within their communities. I’m especially interested in funding independent researchers or journalists or human rights defenders who are not affiliated with classic institutions, who have difficulty accessing funding for this specific type of work. So please, if you’re working in this area, I will be really happy to connect with you. Okay, I’m Nenad Orlic. I’m from Serbian registry. And basically, registry supports different projects, mainly in the scope of research and market development in Serbia. Thank you very much. Do you have, sorry, you can start over there. And if you, I know we have a few more people on the back, and that will be the end of the initial introductions. Okay. If you could. Hi, everyone. My name is Carla, and I am helping my friend Raimundo to tell about what he works in the Amazon region. I come from the Amazon region as well. Let me see, friend. But Raimundo is a Quilombola person who comes from the Maranhรฃo, that is a state located in Brazil region. And he’s come from the radio and the TV Quilombo organization about communication that comes from popular demand in the Brazil, who fights against, who fights with the digital rights and against the violations in the communities using the internet as a tool to face this challenge. And me, I am Carla Braga. I am executive director from the Amazonia Youth Corporation for Sustainable Development, that is the COJOVEN. And we work with education, research, and advocacy in the Amazon region. And we developed this agenda, who is agenda about public policies, projects, and programs to face the challenge about climate change in Amazon region. And we are working with misinformation and disinformation in the Amazon region, because the deforestation and forest degradation are totally connected with this problem. And right now, we are empowering the youth to face this challenge. And we are trying to build a wave of voice that comes from the Amazon territory to talk about our own realities and face the challenge. And we are here, looking for funding and connections that can support us in this challenge. And thank you so much. Thank you. I’m Rebecca Papillo. I’m from OUDA, the .au domain administration, so we run the .au in Australia. Through the OUDA Foundation, we run a community grants program. So, the objective of the program is to deliver grants to community programs that promote digital inclusion and digital innovation and drive benefits through the internet for particular groups of people, including regional and remote Australians, Australians living with disabilities, and Australia’s First Nations people. We’re also working on a broader grants program that we’re hoping to launch next year with longer-term partnerships. So, yeah, interested to hear what everyone else is doing. Thank you. You’re very new to your job, so welcome to this community. I’m a communications manager at OUDA, so, yeah, just here in a capacity to learn and take it back. Thank you. Good afternoon, everybody. My name is Christian Leon. I’m the secretariat of ARSUR, that is a consortium of 11 organizations working in digital rights in Latin America, and I’m also the executive director of Internet Bolivia Foundation that works in data protection, digital violences, and digital inclusion. Hi, my name is Al Smith. I’m with the TOR Project. We build privacy and anti-censorship tech. Hi, I’m Brett Solomon. I’m the executive director of Access Now. Hi, everyone. My name is Carolyn Teckett. I’m the director of campaigns and rapid response at Access Now. So Access Now is an organization focusing on defending and extending digital rights for people and communities at risk around the world. As part of that body of work, we have the Access Now grants program, which over the last five years has been able to deliver about $8 million to 120 organizations working at the very grassroots level. I think a really important aspect of that is supporting organizations that are working in digital rights, but also at the intersection of other human rights issues, whether that be gender, LGBTQ, indigenous, environmental issues, and that is a sub-granting program, so our primary funder for that grants program is SIDA, the Swedish development organization, but we’re also open to conversations about other donors that are interested in talking about sub-granting and have the opportunity to channel those resources to grassroots organizations. So yeah, nice to meet you all, and we’re happy to talk more. Yeah, good afternoon. My name is Chuck Brackett. I work for a USAID program, so I don’t give money, and I can’t take your money, but if you’re interested in working with USAID and you’re not sure how to get started, my colleague and I can talk with you about that, so if there’s anybody who doesn’t know how to approach AID. Good afternoon, everyone. I’m George Washington, also with Vistant. We’re a private US-based firm that has this contract with USAID called Digital Apex, so looking forward to speaking with you all, and we can sort of explain how the mechanism works. Thank you. Catherine Townsend, I’m wearing two hats. I work with the World Wide Web Foundation on human rights, preventing and countering online gender-based violence. Joining this session today, representing Measurement Lab, which measures the speed and quality of the internet around the world and provides the largest open data set about that. I think we do provide grants for both those organizations. My question also for this group and for those who are supporting the development of the internet is the role that Measurement Lab provides is an independent assessment of how the internet is performing and makes that public. How do we sustain that, and how do we expand that more widely? Is that a service that a nonprofit chasing funds and chasing private donations from tech platforms should be pursuing, or is that a role that a coalition of governments should be setting up who actually gets to monitor the internet? Thank you. Hello, everyone. I’m Pranav. I work with the Internet Society Foundation, and I oversee the empowerment work, and here we train youth ambassadors, early and mid-career professionals, and many of them are also at this IGF, so you can also engage with them and hear more about our work and how we do it. We also have our training and e-learning courses, where multiple courses on aspects of technical aspects of the internet and policy are available for free, and we also train them in person as well. So, happy to engage with you, understand how we can collaborate and work together. Hi, everybody. My name is Jeroen. I look after the Asia Pacific region for this organization called ICANN. We are a platform that discusses domain name policy, and we do have a grant program, which we’re going to go into three, and Dave and Alicia Marcos will facilitate the groups and

Moderator – Silvia Cadena:
take notes, and we have another group for the remote participation, but I believe it’s only one person. Two people from remote participation, so Kathleen will take care of the remote participation, and after this, the idea is that we will share our notes with you, organize a database of what we found out, also with consent of those that have agreed to share e-mails with the rest, so don’t worry, no blast spam going anywhere, and we will try to collect some additional information about your organizations and provide some additional details moving forward. So we will break. It’s just for a 15-minute conversation, and we will reconvene before the end of the hour, and I hope just you don’t run away and go to the next session just yet, just to stay for these short conversations, and we’ll see how it goes. Thank you very much. So one group in this side and one group on this side, which is easier to move.

Audience:
So this is a volunteer community to develop and train the young leaders in BP regions, so I am currently facing the financial problem to maintain and operate this group. It’s a voluntary base, but we are providing seminars and webinars throughout the year to provide educational opportunities to young generation people, and of course, senior people are also allowed to join this group and the webinar. However, we would like to meet in person, so every year, so we would like to provide some more interactive sessions such as ideas on and topics on for young generation people, so we need a certain amount of budget to have the in-person meeting. So the webinar seminar is easy to conduct, so without a budget, so only voluntary, so people’s effort, so but conducting the in-person meeting, so it needs real, yeah, real care among the budget preparation, currently, I am facing the financial challenges. So I am talking to the foundation, because like a previous meeting, foundation from this region, so but after some years, so recently I joined, and so however, the first generation of participants, so we are motivated to regenerate this group, so they got the opportunity to get a fellowship programme, they got the opportunity to study abroad and work abroad, so they would like to, the first generation of participants have the motivation to contribute, to train and develop, providing skills to the next generation of people. So I also need financial support, financial motivation, so that’s why we are trying to provide activities and activities, so however, we are talking about, I am supporting the first generation, so and also this is a good opportunity for companies and industrial areas people, because industrials have new members, so they also have, so they can also have an opportunity to discuss about the multiregional and diverse people in the region, so I think not only North Africa and Asia and maybe in the country, industrial and academia collaboration can be also facilitated. Yes, I am facing this difficulty, so we need some like a collaboration, yeah I’m talking to companies, so that’s why I am here to hear from what country, I’m from Japan, yeah but this, the AAPNG members are diverse people from other, yes other countries, majority is Asia-Pacific people, so but yeah. including the west side India, Nepal, and China, Thailand, Myanmar, and everyone is actively participating in this. Yeah, currently online meeting, but we would like to shift back to the in-person activity. So, yes, now we are trying to persuade some, yeah, to come and take a second to have a discussion, so we’re going to have a meeting. So is Chuck mentioning me, or is it actually Chuck? We’re sharing that. He said there are many other teams who would like to share the challenges, maybe related to what you’re covering. Let me just share. I’m Linda, I’m here from CPSN from Philadelphia. So since we are working as an on-the-job organization, the main problem we have since we are trying to traffic in employees who are supporting our monetary, our core operational needs. So right now, since we actually have, we presently have a grant from the Internet Foundation, which is, we run it for 12 months, one year, and we cannot just wait for that one year to finish without having our senior teams and clients involved, so I do it first. So, right now, our companies are into using digital connections and are sort of eye-opening. So, mostly, those IADs don’t even have their positions. So what we provide for our present project is we provide internet connections, and of course they should be supported. And so we want to communicate initiatives to other IADs, but we are still looking for additional funds so that that can be realized. And we are also into providing, we plan to provide education and training to out-of-the-school people who are not going to even go to school, and we want to give them the opportunity to learn system administration work so that they can get a better job. And to operate their project, we also need the internet. So there are many initiatives that we’re thinking of, and same as her program. Yeah, I think that’s most of the problems for non-staff non-profit organizations. Yeah, that’s right. Adele. Thanks, Adele, for sharing that. Is there any other who would like to share something additional to what they said already? I’m going to ask Greg, I said it before. We are, when I say we, the Internet Society Accessibility and Understanding Group, focusing on disability-based training for advocates in internet governance and digital rights. And so we are planning in our mind, of course, in relation to these elements, we want to provide some sort of face-to-face workshops. We started last year in Bangladesh, and we want to continue this in the coming months. As a part of that, we have a partnership with the Asia Pacific School of Internet Governance, and that will be held in Nong at the end of November, and we’re keen to involve those with disability education. But after that, we’re very interested in any contact with disability-free organizations. It’s basically building that voice in disability, because we know that people with disabilities have an opportunity, and we are under the guidelines of the National Forum on Disability. And so we want more training in that area. And so we have a company in Bangladesh, and we also are trying to be able to develop our programs. I come from the Amazon region, and the Amazon North Cooperation for Sustainable Development is an NGO founded by the Amazon North Cooperation. What happened in my territory is that we have a lot of problems. For example, my state, that is the state of Parรก, is the state who will receive the capital in 2025. And we have a lot of problems, and one of these problems is that the state of Parรก is two times larger than France, for example. And we don’t have public policies to face the challenges of the region in our territory. And in the same moment, we are facing a lot of problems that come from the impacts of the climate change. So one of these problems that is starting to correlate to the climate change is the problem about misinformation and disinformation in our territories. And for example, the problem about misinformation and disinformation are causing a big mess in our democracy. And we are using capacities of the youth to try to build public policies that can be sensitive to our reality and when I develop this agenda, as I told you before. And one of the impacts of this agenda, for example, is that right now in the state of Parรก, we have a public policy that is about the digital inclusion. And this comes from a hard process of advocacy. This is the agenda, it’s been important as an organization for the Amazon region. And our main problem is also about funding, and it’s about the people who know about our existence. We have a historical problem about the Amazon, that when we saw people talking about the Amazon, usually it’s not talking about the Amazon region, but it comes from our voices. And this is because there is no crystallized situation about what we are living in our territory, about our challenge. And right now we are empowering these voices to build the challenge and to try to build confidence to the people about where we are living and that we need to work on the policies and projects about education to bring us here, like I’m here right now, telling everybody about the problems that we are facing. And… I’m sorry for the bad English. It’s very well explained. So, is there anyone from the industry, from the funders here, I’d like to hear a little bit about what kind of approach and what do they need to apply or at least know a little bit more about those challenges? I think it depends on what kind of foundation or organization or some organization that are more open for the application process. Through the intermediary goals, I don’t believe that they are just receiving applications let’s say like once a year, and once a year and just look through and decide the pace of their planning. But I think in many other foundations, like us, it’s also having our own strategic priorities and in many cases, we kind of already have a number of standards and we just approach them and or we know that they are the organization having a very interesting idea and we can see that. So, in many occasions, we just approach them. I mean, of course, in some cases, you know, like confidence like this and we still have a very interesting conversation with many of the companies or the organization and that’s really kind of an opportunity. It’s quite rare for the organization which we do not know. You know, you just have soldiers and materializing funding. This is just kind of confidence. My challenge is I spend a lot of time with people especially because Asia is big. This is a confidence in a global scale. So, yeah, and I mean, I’m from a variety of countries, mainly working on like authoritarian regimes. So, I mean, let’s say we have to make this particular decision. I would just like to talk about the challenge that it is solving for the organization. I think that it’s like a very classical which organization like us are facing. And also, the other thing is, I mean, I’m not criticizing my organization but many other organizations also are now increasingly focusing on the impact of the peaceful outcome. So, that makes it quite difficult to work on authoritarian regimes. And actually, many, you know, this side is the theme for authoritarian regimes. So, let’s say, take myself, for example, like China because, you know, they are restricting all the funding for foreign donors and so on. And now, this kind of situation is happening all over the world, you know, in India now, and also in Thailand as well. So, I mean, it’s a really difficult situation because it doesn’t have circumstances that donors should be in a, and should be committed to continuing their support because they are now in a more difficult situation. But at the same time, donors also want to show, I mean, it’s more like the ego of the donor organization that shows the impact or what kind of outcome you can make through their fund. And actually, many syndicates show that, you know, our government is authoritarian regimes. So, yeah. So, that may be one of the very difficult and I think it’s very important. That’s why I think we promote society positive. You know, yeah. It’s kind of a big topic. Yeah, I also want to hear, you know, if any of you are working in a authoritarian regime, and how you, you know, how you kind of manage and present your impact. Yeah, let’s say, like, I think that’s a very good idea. It’s interesting that you bring up impact because one of our biggest challenges is getting funding to be able to measure impact, right? Donors want to see the impact, but for a technology tool, we need to build other tools to measure those things, and we need to build tools that will display that data easily. We need to build tools that will allow us to digest that information and make it accessible. So, that’s one of, actually, one of our longest-running challenges is measuring the impact of the Tor network, like, in a meaningful way that is updated and the pipelines are, you know, sustainable from a programming perspective. Because our donors really want that information, and we want to provide it to them, but we haven’t been able to find the donor that wants to help us build the pipeline that gets in between. And I really don’t know what the answer to that question is. And I’ve been with Tor for five years doing fundraising, and I’ve still never been able to really figure out how to answer that. And I also know that measuring impact and writing reports is a burden on a lot of organizations, especially small ones. I’m lucky that I get to partner with a grants manager and a project manager who write our reports, but not every organization has that. Yes, exactly. I just know that that’s a burden. But I understand the reasoning for wanting such information. Yeah, well, I think in our case, we mainly want to support the community networks. So community networks are self-sustainable. And so the first part of that is quite good. So the thing is that we have to bring them together, like supporting, actually bringing new technologies and developing new sorts of technologies. So that’s one of the challenges. But also, we have realized that most of the communities need some small grants. Because sometimes they just want the things that are at the local level, and it’s not expensive for them, so they can’t do much. But most of the grants for the big organizations are… Well, I’ve started bringing that to some organizations that have big communities. So that sometimes is a challenge. So we do some sub-granting, small granting. But we have a lot of demand for them, but we have not much money. So that’s definitely been very successful because it’s directly given to the people who work in the ground, in the projects that they want, and they have very nice projects. And it’s a variety of different things, not exactly in favor of the community, but sometimes for content development, sometimes for research. And I think that’s something that we should probably work on and think about how to get these grants and do sub-granting and small sub-granting. Sorry. Please feel free to keep the conversation. Hope you will still be able to communicate after. Yeah? You can have a directory, right? So you can still be able to exchange ideas. Did you write your name under there? It was not given to us yet. Just ask. Yes. That’s it. Thank you. Thank you. Thank you.

Moderator – Silvia Cadena:
Thank you, everyone. We ran out of time, and they are going to kick us out of the room. So we promised to take the notes and bring them to you, but thank you very much for attending. Thank you. Thank you.

Alessia Zucchetti

Speech speed

127 words per minute

Speech length

154 words

Speech time

73 secs

Audience

Speech speed

147 words per minute

Speech length

4711 words

Speech time

1922 secs

Brian Horlick-Cruz

Speech speed

155 words per minute

Speech length

114 words

Speech time

44 secs

Carlos Rey Moreno

Speech speed

160 words per minute

Speech length

133 words

Speech time

50 secs

Changho Kim

Speech speed

196 words per minute

Speech length

49 words

Speech time

15 secs

Charles Noir

Speech speed

138 words per minute

Speech length

163 words

Speech time

71 secs

Ellisha Heppner

Speech speed

139 words per minute

Speech length

74 words

Speech time

32 secs

Garcia Ramilo

Speech speed

159 words per minute

Speech length

175 words

Speech time

66 secs

Hirochika Asai

Speech speed

120 words per minute

Speech length

130 words

Speech time

65 secs

Janne Hirvonen

Speech speed

114 words per minute

Speech length

178 words

Speech time

94 secs

Jenn Beard

Speech speed

159 words per minute

Speech length

74 words

Speech time

28 secs

Laura Conde Tresca

Speech speed

94 words per minute

Speech length

55 words

Speech time

35 secs

Michel Lambert

Speech speed

154 words per minute

Speech length

95 words

Speech time

37 secs

Moderator – Silvia Cadena

Speech speed

151 words per minute

Speech length

1038 words

Speech time

413 secs

Paul Byron Wilson

Speech speed

127 words per minute

Speech length

76 words

Speech time

36 secs

Percival Henriques

Speech speed

90 words per minute

Speech length

24 words

Speech time

16 secs

Valerie Frissen

Speech speed

127 words per minute

Speech length

199 words

Speech time

94 secs

Yoshiki Uchida

Speech speed

139 words per minute

Speech length

51 words

Speech time

22 secs

Main Session on Artificial Intelligence | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator 2 – Christian Guillen

During the discussion, the speakers focused on various aspects of AI regulation and governance. One important point that was emphasized is the need for AI regulation to be inclusive and child-centred. This means that any regulations and governance frameworks should take into account the needs and rights of children. It is crucial to ensure that children are protected and their best interests are considered when it comes to AI technologies.

Furthermore, the audience was encouraged to actively engage in the discussion by asking questions about AI and governance. This shows the importance of public participation and the involvement of various stakeholders in shaping AI policies and regulations. By encouraging questions and dialogue, it allows for a more inclusive and democratic approach to AI governance.

The potential application of generative AI in the educational system of developing countries, such as Afghanistan, was also explored. Generative AI has the potential to revolutionise education by providing innovative and tailored learning experiences for students. This could be particularly beneficial for developing countries where access to quality education is often a challenge.

Challenges regarding accountability in AI were brought to attention as well. It was highlighted that AI is still not fully understood, and this lack of understanding poses challenges in ensuring accountability for AI systems and their outcomes. The ethical implications of AI making decisions based on non-human generated data were also discussed, raising concerns about the biases and fairness of such decision-making processes.

Another significant concern expressed during the discussion was the need for a plan to prevent AI from getting out of control. As AI technologies advance rapidly, there is a risk of AI systems surpassing human control and potentially causing unintended consequences. It is important to establish robust mechanisms to ensure that AI remains within ethical boundaries and aligns with human values.

The importance of a multi-stakeholder approach in AI development and regulation was stressed. This means involving various stakeholders, including industry experts, policymakers and the public, in the decision-making process. By considering different perspectives and involving all stakeholders, it is more likely to achieve inclusive and effective AI regulations.

Lastly, the idea of incorporating AI technology in the development of government regulatory systems was proposed. This suggests using AI to enhance and streamline the processes of government regulation. By leveraging AI technology, regulatory systems can become more efficient, transparent and capable of addressing emerging challenges in a rapidly changing technological landscape.

Overall, the discussion highlighted the importance of inclusive and child-centred AI regulation and the need for active public participation. It explored the potential of generative AI in education, while also addressing various challenges and concerns related to accountability, ethics and control of AI. The multi-stakeholder approach and the incorporation of AI technology in government regulations were also emphasised as key considerations for effective and responsible AI governance.

Clara Neppel

During the discussion on responsible AI governance, the importance of technical standards in supporting effective and responsible AI governance was emphasised. It was noted that IEEE initiated the Ethical Aligned Design initiative, which aimed to develop socio-technical standards, value-based design, and an ethical certification system. Collaboration between IEEE and regulatory bodies such as the Council of Europe and OECD was also mentioned to ensure the alignment of technical standards with responsible AI governance.

The implementation of responsible AI governance was seen as a combination of top-down (regulatory frameworks) and bottom-up (individual level) approaches. Engagement with organizations like the Council of Europe, EU, and OECD for regulation was considered crucial. Efforts to map regulatory requirements to technical standards were also highlighted to bridge the gap between regulatory frameworks and responsible AI governance.

Capacity building in technical expertise and understanding of social legal matters was recognised as a key aspect of responsible AI implementation. The necessity of competency frameworks defining the necessary skills for AI implementation was emphasised. Collaboration with certification bodies for developing an ecosystem to support capacity building was also mentioned.

Efforts to protect vulnerable communities online were a key focus. Examples were given, such as the LEGO Group implementing measures to protect children in their online and virtual environments. Regulatory frameworks like the UK Children’s Act were also highlighted as measures taken to protect vulnerable communities online.

The discussion acknowledged that voluntary standards for AI can be effective and adopted by a wide range of actors. Examples were provided, such as UNICEF using IEEE’s value-based design approach for a talent-searching system in Africa. The City of Vienna was mentioned as a pilot project for IEEE’s AI certification, illustrating the potential for voluntary standards to drive responsible AI governance.

In terms of incentives for adopting voluntary standards, they were seen to vary. Some incentives mentioned include trust in services, regulatory compliance, risk minimisation, and the potential for a better value proposition. However, the discussion acknowledged that self-regulatory measures have limitations, and there is a need for democratically-decided boundaries in responsible AI governance.

Cooperation, feedback mechanisms, standardized reporting, and benchmarking/testing facilities were identified as key factors in achieving global governance of AI. These mechanisms were viewed as necessary for ensuring transparency, accountability, and consistency in the implementation of responsible AI governance.

The importance of global regulation or governance of AI was strongly emphasised. It was compared to the widespread usage of electricity, suggesting that AI usage is similarly pervasive and requires global standards and regulations for responsible implementation.

The need for transparency in understanding AI usage was highlighted. The discussion stressed the importance of clarity regarding how AI is used, incidents it may cause, the data sets involved, and the usage of synthetic data.

While private efforts in AI were recognised, it was emphasised that they should be made more trustworthy and open. Current private efforts were described as voluntary and often closed, underscoring the need for greater transparency and accountability in the private sector’s contribution to responsible AI governance.

The discussion also touched upon the importance of agility when it comes to generative AI. It was suggested that generative AI at organizational and global levels should be agile to adapt to the evolving landscape of responsible AI governance.

Feedback mechanisms were highlighted as essential for the successful development of foundational models. The discussion emphasised that feedback at all levels is necessary to continuously improve foundational models and align them with responsible AI governance.

High-risk AI applications were identified as needing conformity assessments by independent organizations. This was seen as a way to ensure that these applications meet the necessary ethical and responsible standards.

The comparison of AI with the International Atomic Agency was mentioned but deemed difficult due to the various uses and applications of AI. The discussion acknowledged that AI has vast potential in different domains, making it challenging to compare directly with an established institution like the International Atomic Agency.

Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies that act as infrastructure. This proposal was supported by one of the speakers, Clara, and was seen as a way to enhance responsible governance and decision-making regarding crucial technological developments.

In conclusion, the discussion on responsible AI governance highlighted the significance of technical standards, the need for a combination of top-down and bottom-up approaches, capacity building, protection of vulnerable communities, the effectiveness of voluntary standards, incentives for adoption, the limitations of self-regulatory measures, the role of cooperation and feedback mechanisms in achieving global governance, the importance of transparency and global regulation, the agility of generative AI, and the importance of conformity assessments for high-risk AI applications. Additionally, the proposal for an independent multi-stakeholder panel for crucial technologies was seen as a way to enhance responsible governance.

James Hairston

OpenAI is committed to promoting the safety of AI through collaboration with various stakeholders. They acknowledge the significance of the public sector, civil society, and academia in ensuring the safety of AI and support their work in this regard. OpenAI also recognizes the need to understand the capabilities of new AI technologies and address any unforeseen harms that may arise from their use. They strive to improve their AI tools through an iterative approach, constantly learning and making necessary improvements.

In addition to the public sector and civil society, OpenAI emphasizes the role of the private sector in capacity building for research teams. They work towards building the research capacity of civil society and human rights organizations, realizing the importance of diverse perspectives in addressing AI-related issues.

OpenAI highlights the importance of standardized language and concrete definitions in AI conversations. By promoting a common understanding of AI tools, they aim to facilitate effective and meaningful discussions around their development and use.

The safety of technology use by vulnerable groups is a priority for OpenAI. They stress the need for research-based safety measures, leveraging the expertise of child safety experts and similar institutions. OpenAI recognizes that understanding usage patterns and how different groups interact with technology is crucial in formulating effective safety measures.

The protection of labor involved in the production of AI is a significant concern for OpenAI. They emphasize the need for proper compensation and prompt action against any abuses or harms. OpenAI calls for vigilance to ensure fairness and justice in AI, highlighting the role of companies and monitoring groups in preventing abusive work conditions.

Jurisdictional challenges pose a unique obstacle in AI governance discussions. OpenAI acknowledges the complexity arising from different regulatory frameworks in different jurisdictions. They stress the importance of considering the local context and values in AI system regulation and response.

OpenAI believes in the importance of safety and security testing in different regions to ensure optimal AI performance. They have launched the Red Teaming Network, inviting submissions from various countries, regions, and sectors. By encouraging diverse perspectives and inputs, OpenAI aims to enhance the safety and security of AI systems.

International institutions like the Internet Governance Forum (IGF) play a crucial role in harmonizing discussions about AI regulation and governance. OpenAI recognizes the contributions of such institutions in defining benchmarks and monitoring progress in AI regulations.

While formulating new standards for AI, OpenAI advocates for building on existing conventions, treaties, and areas of law. They believe that these established frameworks should serve as the foundation for developing comprehensive standards for AI usage and safety.

OpenAI is committed to contributing to discussions and future regulations of AI. They are actively involved in various initiatives and encourage collaboration to address challenges and shape the future of AI in a responsible and safe manner.

In terms of emergency response, OpenAI has an emergency shutdown procedure in place for specific dangerous scenarios. This demonstrates their commitment to safety protocols and risk management. They also leverage geographical cutoffs to deal with imminent threats.

OpenAI emphasizes the importance of human involvement in the development and testing of AI systems. They recognize the value of human-in-the-loop approaches, including the role of humans in red teaming processes and ensuring audibility in AI systems.

To address the issue of AI bias, OpenAI suggests the use of synthetic data sets. These data sets can help balance the under-representation of certain regions or genders and fill gaps in language or available information. OpenAI sees the potential in synthetic data sets to tackle some of the challenges associated with AI bias.

Standards bodies, research institutions, and government security testers have a crucial role in developing and monitoring AI. OpenAI acknowledges their importance in ensuring the security and accountability of AI systems.

Public-private collaboration is instrumental in ensuring the safety of digital tools. OpenAI recognizes the significance of working on design, reporting, and research aspects to address potential harms and misuse. They emphasize understanding different communities’ interactions with these tools to develop effective safety measures.

OpenAI recognizes the need to address the harmful effects of new technologies while acknowledging their potential benefits. They emphasize the urgency to build momentum in addressing the negative impacts of emerging technologies and actively contribute to the international regulatory conversation.

In conclusion, OpenAI’s commitment to AI safety is evident through their support for the work of the public sector, civil society, and academia. They emphasize the need to understand new AI capabilities and address unanticipated harms. The private sector has a role to play in capacity building, while standardized language and definitions are crucial in AI conversations. OpenAI stresses the importance of research-based safety measures for technology use by vulnerable groups and protection of labor involved in AI production. They acknowledge the challenges posed by jurisdictional borders in AI governance discussions. OpenAI promotes safety and security testing, encourages public-private collaboration, and advocates for the involvement of humans in AI development and testing. They also highlight the potential of synthetic data sets to address AI bias. International institutions, existing conventions, and standards bodies play a significant role in shaping AI regulations, and OpenAI is actively engaged in contributing to these discussions. Overall, OpenAI’s approach emphasizes the importance of responsible and safe AI development and usage for the benefit of society.

Seth Center

AI technology is often compared to electricity in terms of its transformative power. However, unlike electricity, there is a growing consensus that governance frameworks for AI should be established promptly rather than waiting for several decades. Governments, such as the US, are embracing a multi-stakeholder approach to developing AI principles and governance. The US government has made voluntary commitments in key areas like transparency, security, and trust.

Accountability is a key focus in AI governance, with both hard law and voluntary frameworks being discussed. However, there are concerns and skepticism surrounding the effectiveness of voluntary governance frameworks in ensuring accountability. There is also doubt about the ability of principles alone to achieve accountability.

Despite these challenges, there is broad agreement on the concept of AI governance. Discussions and conversations are viewed as essential and valuable in shaping effective governance frameworks. The aim is for powerful AI developers, whether they are companies or governments, to devote attention to governing AI responsibly. The multi-stakeholder community can play a crucial role in guiding these developers towards addressing society’s greatest challenges.

Implementing safeguards in AI is seen as vital for ensuring safety and security. This includes concepts such as red teaming, strict cybersecurity, third-party audits, and public reporting, all aimed at creating accountability and trust. Developers are encouraged to focus on addressing issues like bias and discrimination in AI, aligning with the goal of using AI to tackle society’s most pressing problems.

The idea of instituting AI global governance requires patience. Drawing a comparison to the establishment of the International Atomic Energy Agency (IAEA), it is recognized that the process can take time. However, there is a need to develop scientific networks for shared risk assessments and agree on shared standards for evaluation and capabilities.

In terms of decision-making, there is a call for careful yet swift action in AI governance. Governments rely on inputs from various stakeholders, including the technical community and standard-setting bodies, to navigate the complex landscape of AI. Decision-making should not be careless, but the momentum towards establishing effective AI governance should not be slowed down.

In conclusion, while AI technology has the potential to be a transformative force, it is crucial to establish governance frameworks promptly. A multi-stakeholder approach, accountability, and the implementation of safeguards are seen as key components of effective AI governance. Discussions and conversations among stakeholders are believed to be vital in shaping AI governance frameworks. Patience is needed in institutionalizing AI global governance, but decision-making should strike a balance between caution and timely action.

Thobekile Matimbe

The analysis of the event session highlights several key points made by the speakers. First, it is noted that the Global South is actively working towards establishing regulatory frameworks for managing artificial intelligence. This demonstrates an effort to ensure that AI technologies are used responsibly and with consideration for ethical and legal implications. However, it is also pointed out that there is a lack of inclusivity in the design and application of AI on a global scale. The speakers highlight the fact that centres of power control the knowledge and design of technology, leading to inadequate representation from the Global South in discussions about AI. This lack of inclusivity raises concerns about the potential for bias and discrimination in AI systems.

The analysis also draws attention to the issues of discriminatory practices and surveillance in the Global South related to the use of AI. It is noted that surveillance targeting human rights defenders is a major concern, and there is evidence to suggest that discriminatory practices are indeed a lived reality. These concerns emphasize the need for proper oversight and safeguards to protect individuals from human rights violations arising from the use of AI.

In terms of internet governance, it is highlighted that inclusive processes and accessible platforms are essential for individuals from the Global South to be actively involved in Internet Governance Forums (IGFs). The importance of ensuring the participation of everyone, including marginalized and vulnerable groups, is emphasized as a means of achieving more equitable and inclusive internet governance.

The analysis also emphasizes the need for continued engagement with critical stakeholders and a victim-centered approach in conversations about AI and technology. This approach is necessary to address the adverse impacts of technology and ensure the promotion and protection of fundamental rights and freedoms. Furthermore, the analysis also underlines the importance of understanding global asymmetries and contexts when discussing AI and technology. Recognizing these differences can lead to more informed and effective decision-making.

Another noteworthy observation is the emphasis on the agency of individuals over their fundamental rights and freedoms. The argument is made that human beings should not cede or forfeit their rights to technology, highlighting the need for responsible and human-centered design and application of AI.

Additionally, the analysis highlights the importance of promoting children’s and women’s rights in the use of AI, as well as centring conversations around environmental rights. These aspects demonstrate the need to consider the broader societal impact of AI beyond just the technical aspects.

In conclusion, the analysis of the event session highlights the ongoing efforts of the Global South in developing regulatory frameworks for AI, but also raises concerns about the lack of inclusivity and potential for discrimination in the design and application of AI globally. The analysis emphasizes the importance of inclusive and participatory internet governance, continued engagement with stakeholders, and a victim-centered approach in conversations about AI. It also underlines the need to understand global asymmetries and contexts and calls for the promotion and protection of fundamental rights and freedoms in the use of AI.

Moderator 1 – Maria Paz Canales Lobel

In her writings, Maria Paz Canales Lobel stresses the crucial importance of shaping the digital transformation to ensure that artificial intelligence (AI) technologies serve the best interests of humanity. She argues that AI governance should be firmly rooted in the international human rights framework, advocating for the application of human rights principles to guide the regulation and oversight of AI systems.

Canales Lobel proposes a risk-based approach to AI design and development, suggesting that potential risks and harms associated with AI technologies should be carefully identified and addressed from the outset. She emphasises the need for transparency in the development and deployment of AI systems to ensure that they are accountable for any adverse impacts or unintended consequences.

Furthermore, Canales Lobel emphasises the importance of open and inclusive design, development, and use of AI technologies. She argues that AI governance should be shaped through a multi-stakeholder conversation, involving diverse perspectives and expertise, in order to foster a holistic approach to decision-making and policy development. By including a wide range of stakeholders, she believes that the needs and concerns of vulnerable communities, such as children, can be adequately addressed in AI governance.

Canales Lobel also highlights the significance of effective global processes in AI governance, advocating for seamless coordination and cooperation between international and local levels. She suggests that the governance of AI should encompass not only technical standards and regulations but also voluntary guidelines and ethical considerations. She emphasizes the necessity of extending discussions beyond the confines of closed rooms and engaging people from various backgrounds and geopolitical contexts to ensure a comprehensive and inclusive approach.

In conclusion, Canales Lobel underscores the importance of responsible and ethical AI governance that places human rights and the well-being of all individuals at its core. Through her arguments for the integration of human rights principles, the adoption of a risk-based approach, and the promotion of open and inclusive design, development, and use of AI technologies, she presents a nuanced and holistic perspective on effective AI governance. Her emphasis on multi-stakeholder conversations, global collaboration, and the needs of vulnerable communities further contributes to the ongoing discourse on AI ethics and regulation.

Audience

The creation of AI involves different types of labor across the globe, each with its own set of standards and regulations. It is important to recognize that AI systems may be technological in nature, but they require significant human input during development. However, the labor involved in creating AI differs between the global south and the Western world. This suggests that there may be disparities in terms of the resources, expertise, and opportunities available for AI development in different regions.

When it comes to AI-generated disinformation, developing countries face particular challenges in countering this issue. With the rise of generative AI, which has become increasingly popular, there has been an increase in the spread of misinformation. This poses a significant challenge for developing countries, as they may not have the resources or infrastructure to effectively counter and mitigate the negative consequences of AI-generated disinformation.

On the other hand, developed economies have a responsibility to help create an inclusive digital ecosystem. While countries like Nepal are striving to enter the digital era, they face obstacles in the form of new technologies like AI. This highlights the importance of developed economies providing support and collaboration to ensure that developing countries can also benefit from and participate in the digital revolution.

In terms of regulation, there is no global consensus on how to govern AI and big data. The International Governance Forum (IGF) has been grappling with the issue of big data regulation for over a decade, without reaching a global agreement. Furthermore, there are differences in the approaches taken by different regions, such as the US and Europe, to deal with the data practices of their respective companies. This lack of consensus presents challenges in establishing consistent and effective regulation for AI and big data across the globe.

When it comes to policy-making, it is crucial to consider the protection of future generations, especially children, in discussions related to AI. Advocacy for children’s rights and the need to safeguard the interests of future generations have been highlighted in discussions around AI and policy-making. It is important not to overlook or underestimate the impact that AI will have on the lives of children and future generations.

It is worth noting that technical discussions should not neglect simple yet significant considerations, such as addressing the concerns of children in policy-making. These considerations can help achieve inclusive designs that take into account the diverse needs and perspectives of different groups. By incorporating the voices and interests of children, policymakers can create policies that are more equitable and beneficial for all.

In conclusion, the creation and regulation of AI present various challenges and considerations. The differing types of labor involved in AI creation, the struggle to counter AI-generated disinformation in developing countries, the need for developed economies to foster an inclusive digital ecosystem, the absence of a global consensus on regulating AI and big data, and the importance of considering the interests of children in policy-making are all crucial aspects that need to be addressed. It is essential to promote collaboration, dialogue, and comprehensive approaches to ensure that AI is developed and regulated in a manner that benefits society as a whole.

Arisa Ema

The global discussions on AI governance need to consider different models and structures used across borders. Arisa Ema suggests that transparency and interoperability are crucial elements in these discussions. This is supported by the fact that framework interoperability has been highlighted in the G7 communique, and different countries have their own policies for AI evaluation.

When it comes to risk-based assessments, it is important to consider various aspects and application areas. For example, the level of risk involved in different usage scenarios, such as the use of facial recognition systems at airports or building entrances. Arisa Ema highlights the need to consider who is using AI, who is benefiting from it, and who is at risk.

Inclusivity is another important aspect of AI governance discussions. Arisa Ema urges the inclusion of physically challenged individuals in forums such as the Internet Governance Forum (IGF). She mentions an example of organizing a session where a person in a wheelchair participated remotely using avatar robots. This highlights the potential of technology to include those who may not be able to physically attend sessions.

Arisa Ema also emphasizes the importance of a human-centric approach in AI discussions. She believes that humans are adaptable and resilient, and they play a key role in AI systems. A human-centric approach ensures that AI benefits humanity and aligns with our values and needs.

Furthermore, Arisa Ema sees AI governance as a shared topic of discussion among technologists, policymakers, and the public. She uses democratic principles to stress her stance, emphasizing the importance of involving all stakeholders in shaping AI governance policies and frameworks.

The discussion on AI governance is an ongoing process, according to Arisa Ema. She believes that it is not the end but rather a starting point for exchanges and discussions. It is important to have a shared philosophy or concept in AI governance to foster collaboration and a common understanding among stakeholders.

Overall, the extended summary highlights the need for transparency, interoperability, risk-based assessments, inclusivity, a human-centric approach, and a shared governance framework in AI discussions. Arisa Ema’s insights and arguments provide valuable perspectives on these important aspects of AI governance.

Session transcript

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, Maria, for the opportunity to be here with you today, and I’m very pleased to have you here with us today. Good afternoon, everyone. My name is Maria Pascanales. I’m the head of legal policy and research at Global Partner Digitals, a civil society organisation that works in the issues related with technology governance. I’m very pleased to be here and have the pleasure of being the moderator of this session, which is about the digital transformation, and the digital transformation that we want. So I have the honour to have a distinguished panel of speakers here for enlightening this conversation. I will start to introduce them, and then I will bring some logistics for the unfolding of the session, and we will enter in the substantive discussion. So, first of all, I would like to introduce myself, my name is Maria Pascanales, and I’m a associate professor at the University of Tokyo, and I’m visiting research at the RIKEN Centre for Advancing Intelligence Project in Japan. We have with us Dr Clara Neppel. Dr Neppel is the senior director of the IEEE Europe headquartered in Vienna, and head of the IEEE Technology Centre for Climate. We have with us Dr Yusuf Zahid, who is the director of the IEEE International Policy and Partnership at OpenAI. Thank you very much. We have Dr Seth Senter, who is the deputy envoy for critical and emerging technology, and previous government service include as a member of the State Department policy, the planning staff, where he helped develop the department’s cyberspace and the national security commission, and he is also the director of the National Security Commission on Artificial Intelligence, where he led the writing of the commission’s final report. Finally, but very importantly, from civil society representation, we have Miss Tobikele Matimbe, who is a human rights lawyer, researcher and social justice activist from Zimbabwe, serving at Paradigm Initiative, a senior manager of partnerships and engagement. And finally, we have the panelists, who will be presenting in the next session, and in the next session, we will be organising, it’s like we will post some policy questions that have been at the centre of the designing of the session to the distinguished speakers, and we will have two rounds of questions for the first section, each panelist will intervene for five minutes, and then we will be followed by ten minutes the floor here inside, so for that, I ask you to put in line in front of the microphone if you will want to present some questions to the speakers, and also for our remote participant, please let us, our remote moderator, sorry, Christian Guillen, also part of the panel, to know if you have any questions I would like he provide to the speakers during the session. So with that being said, I will move to setting a little bit the scene of this conversation today. In that sense, I would like to highlight a couple of things that for me are really relevant in the conversation today. The first thing that I want to kind of be provocative with you in terms of the setting of the scene of this session is like to think about how during the last year we have been listening so much about artificial intelligence in our daily life, so people that it was not connected at all with thematic of the artificial intelligence, maybe not even familiar with the name of the technology, now it was interesting to know more about how this technology function and how we will take care of ensuring that the technology will be at the service of the benefit of the exercise of right and the daily life of anyone around the world. So this is the challenge that has been posted by the current reality and by the demands that come from the people and the pressure that is being put in governments and companies to get to find the way in which artificial intelligence will be governed for ensuring particularly that it’s developed, deployed, and used in a way that is beneficial for the human good. What we have been seeing is like for maximizing this artificial intelligence positive aspect in society, there is a fundamental need of agreeing in responsible and ethical principles for the development, and what I bring today as a proposition is that part of the discussion also should be So, in that sense, I have been working with my organization in proposing a number of principles that are linked with how we can be more mindful and be grounded in what is coming from the international framework of human rights as an essential element for guiding this task of thinking about technical standards, regulation, legislation at large, and other kind of voluntary guidance that can be developed for governance of artificial intelligence. So, in that sense, I have been working with my organization in proposing a number of principles that are linked with how we can make the international human rights framework applicable to the conversation of artificial intelligence, and we have come with five principles, the first one, like, to think that any kind of governance discussion about artificial intelligence should be grounded in approaches that have been developed in terms of the promotion and implementation of new and emerging technologies that have preceded artificial intelligence. The second one is that develop a risk-based approach to the design development and deployment of artificial intelligence, and I am pretty sure that part of the conversation with the panelists today will be unpacking what we mean by risk and what we mean for those assessments. The third one, it’s promote an open and inclusive design development and use of the artificial intelligence technology. And then, we also invite you to think about how we need to ensure transparency in the design development and deployment of AI, and hold the designers and deployers of artificial intelligence accountable for risk and harms. So, without more from me, with this proposition, we want to hear from every one of the panelists, and we will start the first round of comments, and to talk about particularly the two policy questions that have been proposed by the organisers of the session in the MAG that invite us to think about in the first round of our conversation on the matter of how the global processes connect that have been proposed by the organisers of the session in the MAG. So, I’m going to start with the first question, which is around governance of international, sorry, the governance of artificial intelligence at the international level, but also at the local level, with a side of regulating or guiding the governance for greater good. My first invitation to intervene is to ask the question, what is the policy of the international governance of artificial intelligence? And what are the principles and technical guidance to operational artificial intelligence governance that is effective in policy across jurisdictions?

Arisa Ema:
Thank you, Maria, for a very nice and kind introduction, and I’m really honoured to be at this panel, and for the question of governance, I think it’s really important to think about the different models that are used, not only like the design, but also like the developed, deployed, and used across the borders. And, for example, here in Japan, it’s the normal cases is that maybe we use this core AI model, for example, from the United States, for example. So, in that sense, it is really important to have the transparency about when we actually see this AI life cycle, and not only the transparency, but also it needs to be interoperable, and it needs to be, you know, it needs to be interoperable. It needs to be interoperable. So, not only the transparency, but also it needs to be interoperable. And this word, the framework interoperability is actually mentioned in the G7 communique in the 2020 at the Takamatsu, but somehow this is kind of like a very tricky word. What does it mean by framework interoperability? It means that we need to know that each country or each organization or maybe even one company has its own policy and their own way of assessing their AI systems, and also evaluating the risk and making the impact assessment. However, the legal system is different from country to country, and so each country’s discipline should be different from country to country, and so each country’s discipline should be respected. And otherwise, you know, this global So, I think it’s very important for us to have a clear understanding of what is happening in the world, because if we don’t, the discussion won’t work, and also, each country has its own context. For example, in Japan, we actually have the guidelines towards this AI, the utilization or the AI development, and not so much on the binding one, so we have to look into the public reputation, and that kind of soft law discipline actually really works, but that might be the Japanese case, and the other country or the other organization might have different aspect. So, it’s really important to know what actually, what companies or what country has its own risk management system, and also, what kind of risk management framework or the risk assessment framework, and with that, transparency, and also, that the exchanging the actual cases is really important, and, as I really appreciate that Maria raised the discussion on the risk-based assessment, so what does it mean by risk? So, we can discuss at the high-level risk or the low-level risk, but, for example, when taking consideration about, like, the facial recognition system, you see the airport, or maybe using the entrance of the building, the usage is totally different, but maybe using the same facial recognition system, so we need to look into the context, and we need to take into the, who is actually using, who is benefiting from it, and who has the risk on it, so exchanging the cases is really important, and, in that way, I think we can put all these kind of abstract principles into more kind of living kind of discussion as the making it to practices. So, maybe I will stop here.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, Arisa, and I will want to continue that line of conversation, inviting Clara to also jump in, in her take on how technical standards relate to education, and also, in her take on how technical standards relate to ethical principles and can support effective, responsible AI governance at the global level, and how it’s your experience about how technical standards can account for these challenges, these ethical challenges, and the principles that are posted, but also human rights international standards, according to my provocation at the beginning.

Clara Neppel:
Thank you, and thank you for having me here. So, IEEE is a very old organisation. We were founded more by 140 years ago, co-founded by Edison. So, why would an inventor like Edison, who invented electricity, engage with others? He could have done it alone, and I think it was this realisation that in order to be accepted by society, you have to manage risks, and one risk at that time was clearly safety, and we started by dealing with safety, and since then, we are dealing with safety and security, but now, with AI, we see that we actually need to redefine risk. We have to move away from this, let’s say, more traditional dimensions of risk, like safety and security, and incorporate human rights that you just mentioned. And the question is how to do that. We started very early on, and it’s a bottom-up approach. So, we are also the largest technical organisation in the world, with more than 400,000 members worldwide, and these issues started to come up at the individual level very early on. These issues around what you just mentioned, about bias and so on, the question was how to deal with them. So, we started an initiative called Ethical Aligned Design, which identified the issues, tried to manage them with standards, for instance, but also with engaging with regulators. Now, when it comes to standards, we moved to so-called socio-technical standards. What are they? So, these are from value-based design to common terminology. Value-based design, what does it mean? It means taking values of the stakeholders in that context that you just mentioned into account, and that will be different values. Of course, human rights are always important. but you have different ways of how you are dealing with them. And how you prioritize these values and actually translate them into system requirements and giving that step-by-step methodology for developers proved to be a very efficient standard. Common terminology. What do we mean if we say transparency? It can mean it’s a completely different thing to a developer than for a user. So that’s also one of the standards which deals with finding different levels of transparency. Bias, the same thing. We all want to have eliminating bias from systems, but we actually need bias, for instance, in health care. We need to take into account the differences in symptoms for men and women because they react differently, for instance, when they have a heart attack. So context is very important. So we also complemented these standards with a certification, an ethical certification system. And we tried it out with public and private actors. And what is very important, after all, I think that it was mentioned before, is to start building up capacity in terms of training because we need this combination between technical expertise and expertise in social legal matters and so on. So as part of this certification process, we have a competency framework which defines what are the skills necessary for a trainer, for assessors, for certifiers. And we started working also for certification bodies so to build up this ecosystem which needs to be there in order to make this happen. So this bottom-up approach, of course, needs to be complemented by a top-down approach, the regulatory frameworks. And we engaged with the Council of Europe, from the European Union, and OECD, and so on. From very early on, from the principles, but also how to operationalize this regulation. One example is now with the AI Act, which basically mandates certain standards, where we also engage with the European Commission to see how we can map, let’s say, the regulatory requirements to standards. There is a report from the Joint Research Center that you can download. Thank you, I think.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, Clara. We’re going to move now to hear a little bit on the take from James that represents some perspective on the private sector experience, and particularly following the flow of this conversation, Clara mentioned the values, the definitions of the values, but also the definitions of the terms of the frameworks that we will be using. So in that sense, my provocation and question to you is like, aside from the government efforts, the multilateral efforts, the technical standard efforts that we have been hearing, what are the current efforts that the private sector are conducting for reflecting some of these challenges of finding ways to address responsible AI governance, and how those link with the conversation that we are having here around ethical principles, but also human rights protection. So let us know what is your take on that.

James Hairston:
Yeah, thank you. You know, I think one of the places we really began is, you know, to listen and sort of to understand as the tools that we build are used in novel ways, and as we explore sort of the new capabilities, learning from expert communities and academics and standards bodies, experts around the world who are evaluating and testing, what are the new harms that we haven’t anticipated? I mean, we know that we won’t know them all ahead of time, and we try to take a really iterative approach and really explain what we’re building and how we’re building through tools like our system cards, and inviting sort of open red teaming and evaluation of our tools, but really understanding, you know, what is it that we don’t know? Where are the places in which languages are our tools not performing well? Where are the places where definitions, as have been discussed, need sort of stronger concrete backing so that, you know, we know that as we’re building these international conversations that we’re speaking the same language and sort of able to come together. cut through, whether it’s marketing by the private sector or areas that have yet to be fully defined, that we’re building from a common understanding. I think another important role for the private sector and that we really take seriously at OpenAI is just capacity building, and building the capacity for research teams of all types across civil society and human rights organizations and governments to be involved in this testing, to tell us what’s working, what’s not, capabilities that they’d like to see or ones that are not working. And so this is something that’s going to be iterative. We are clear as when we do our disclosures at the release of new tools about all the areas that we’re trying to solve for. There are important research questions about the future of things like hallucinations and understanding where watermarking, how to solve for watermarking questions across text or different types of video or different types of outputs across LLM tools. So our contributions, I think, begin with admitting what we don’t know, many of the places where there’s a lot of work to do, trying to help with the capacity building to work on safety and evaluation of these systems and really supporting work around the world by the public sector, by the private sector, by civil society, by academia, to get the future of these tools right and ensure that the conversations that we’re having around the world really level into concrete action that ensures the long-term safety of artificial intelligence. Thank you.

Moderator 1 – Maria Paz Canales Lobel:
Thank you so much, James. I’m gonna turn now to Dr. Senter that represents the US government in this conversation. And I will be curious, particularly with the frame that I presented at the beginning of the conversation of the pressures that are coming from the broader public now to governments to turn to action in relationship with harnessing the power of artificial intelligence for good. What is right now the perspective and the take of the US government about the more pressing challenges in the global governance of AI and how those also relate with the actions? and the collaborative work that you in the domestic level are taking also with the private sector, with other governments, to effectively address those challenges that you identify as the most pressing ones. Thank you.

Seth Center:
Great. And thanks so much. I think pressure is an interesting word to characterize the situation that we’re all in, not just governments. I think part of the reason why all of us are here and excited about AI and somewhat scared as well is because there’s a sense that we’re in a transformative era. And given that the IEEE was founded by Thomas Edison, I’ll start with a Thomas Edison quote. I wasn’t planning on it because it’s my favorite Thomas Edison quote. He was asked at the turn of the 19th century, about 20 years after the light bulb was developed, what the effect of electricity was going to be on the world. And he said, electricity holds the secrets that are going to reorganize the entire life of the world. You could apply that to artificial intelligence. The problem with that analogy is, at least in the United States, it took several decades to get to a regulatory framework for electricity. And I think no one here thinks we can wait several decades to get to a governance framework that includes regulation for AI because of the pressure. So with that being said, first of all, I commend an organization like IGF for bringing together a diverse group of multi-stakeholders like this to have a conversation about how to accelerate the pace of governance. I thank Japan in particular for hosting us and leading the G7 Hiroshima process, and we saw the effort and the pressure and the way in which speed can create results through that process. And then from that basis, let me just make four points, and I have about 30 seconds to make each of the points. Point one is perspective for all of us on AI governance. I think we have a solid foundation based in a multi-stakeholder approach. to developing the principles for AI, the OECD principles from 2019, the G20 principles as well. Within the United States, in the past couple years, we’ve developed two frameworks that are extremely important, and they touch on the human rights and values component of this as well, both of which were developed with extensive consultation across the multi-stakeholder community. One is the AI Bill of Rights, and the other is the National Institutes of Standards and Technology’s Risk Management Framework that had over 240 consultations over 18 months with the multi-stakeholder community to develop a framework for how to apply a safety and security framework to developing AI. So that’s the kind of perspective that we have to take to the challenges we have. Why then, if we have such a rock-solid foundation, are we having this conversation today? The obvious answer is GPT has created a new socio-cultural, political phenomenon, a new moment. In part, it is the Sputnik that all of us were waiting for when we were talking about AI to grip all of us into action several years ago. But in part, it’s because it’s raised all kinds of profound questions about safety, security, risk. And so we have to take it on in a new and substantial way. And that moves us into two problems or challenges. One is it intensifies and accelerates all of our fears that emerged from the digital era, and the other is it intensifies and accelerates all of our hopes and opportunities that come from a technological revolution. And so we need to get that balance right. I think all of us accept that, and that requires moving quickly. For the United States, speed then meant we have to balance between moving towards a regulatory framework eventually with getting governance action now. Our choice in the interim was to move towards what were called voluntary commitments that touch on a framework of safety, security, and trust, which hold companies accountable for a whole series of efforts to become more transparent, to protect security, to promote transparency, to ensure that their systems work as intended. And that’s basically our overarching architecture for what we’re approaching this era of, where we need clarity, we need speed, and we have to act in this era of pressure.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, Dr. Sancher. And I will move in this flow of the conversation with Toby Keeley, and particularly with a reaction from your side in terms of where is the global south perspective in all this conversation. We heard of like alternatives. So, what are the fundamental challenges and opportunities to build effective artificial intelligence governance? What are the alternative paths of dealing with artificial intelligence governance that usually are commonly led by global north governments or global north organizations from the private sector, from the academia, from the industry? So, what are the fundamental challenges and opportunities to build effective artificial intelligence governance that usually are commonly led by global south? And how do you experience the influence of these different trends coming from abroad, from this different sector, the regulatory ones, but also the ones that are related with different frameworks to address these issues of governance? Thank you.

Thobekile Matimbe:
Thank you so much, and it’s a pleasure to be here and part of this panel as well. I will just highlight that from a perspective of, you know, the global south, or rather I’ll just maybe narrow it down to the global south, we have a lot of data protection laws that have been implemented in the past, and I think where we are in terms of regulatory frameworks, we are at a place where we still try to catch up with regards to coming up with national artificial intelligence strategies, and what we have are data protection laws that have sort of, like, you know, just a drop in the ocean when looking at clauses that are not in the artificial intelligence framework, and we have a lot of data protection laws that are still being implemented, and at looking at that kind of context, we are facing a situation where we are trying to catch up with regards to how we can ensure the protection of human rights when we’re looking at artificial intelligence, the design processes, as well as the use. Because of that, you’d find that it’s a very, very difficult situation, and, you know, we have to appreciate that there is a lot of data protection laws that are still being implemented, and, as we look at the use of artificial intelligence, we have to appreciate that there are definitely centres of power. What I mean by centres of power when we’re looking at, you know, who has the knowledge of technology, who has, you know, the technical, you know, design, sort of, like, you know, ownership, you’d see that within the digital world, you’d see that there are a lot of stakeholders, a lot of stakeholders, a lot of voices from the global south, in whichever processes that are there, even at a global stage, or a global scene, there’s a need for inclusivity of not just civil society, but, as well, inclusivity when we’re looking at representation, and member states, as well, and their participation, and I think the inclusion of the digital world, as well, is a very important part of the discussion, and, you know, there’s a lot of discussion around AI, and any other, you know, global framework that can come out of, you know, the global scene, and that is something that can be leveraged. Looking at the regional level, maybe just taking it a level, you know, sort of, like, down from the global scene, you’d find that, from a regional perspective, we have the African Commission on Human and Social Rights, and, you know, we have the African Commission on Human and Social Rights, and, you know, we have the African Commission on Human and Social Rights, and, you know, we’re working with states within the African continent to develop, you know, strategies, and mechanisms, and legislative provisions that ensure that rights are protected when we’re looking at the use of AI, as well, in the context of, you know, of human rights, and, to date, since 2021, I would say, like I highlighted in my earlier remarks, that, you know, we do have, you know, another rights are safeguarded, but the real-lived realities remain within the global South where we find that there is, you know, sort of like lack of trust for use of AI because of inadequate policies. We also see that surveillance, you know, targeting human rights defenders remains a major concern. We do see that, you know, discriminatory practices that come with the use of AI are still a lived reality on the continent, so I think it’s something that needs to be addressed from a global perspective, and I think understanding that context, I will emphasize again that it’s something that is really important. Thank you so

Moderator 1 – Maria Paz Canales Lobel:
much to Ikele. And now we have finished our first round of comments and answers from the panelists in this session, so I open the floor for the questions that can come from the audience inside, but also I look to my colleague for knowing if there’s any question posted online. Yes, Maria, the

Moderator 2 – Christian Guillen:
chat exploding, but only through my comments. People are very shy still, so beautiful crowd out there. Just use this opportunity to actually ask all those questions you don’t dare to ask usually on AI and governance. These are just right people answering your questions. There’s one question, though, and that is very interesting, because it’s posed by a target group you very often forget. It’s a 70-year-old boy, Omar Farooq, from Bangladesh, and basically he’s asking how can we ensure that AI regulation and governance at the multilateral level is inclusive and child-centered, so that children and young people can benefit from AI while being protected from its potential harms? Thank you. Some of

Moderator 1 – Maria Paz Canales Lobel:
the panelists are particularly motivated to take on that question. I think that that question was put in the center the issue of like a specific vulnerable community, so when we design for for policy, for governance around artificial intelligence. This is an example that the children can be considered, but there are other also specific communities. So how we design for being inclusive in the governance, for high accommodation also for particular needs for vulnerable groups in a considered way that effectively provide governance that works for all these different cases. Clara? Yeah, go ahead.

Clara Neppel:
Well, I think that these are certain things which can be addressed both on a voluntary level as well as at a regulatory level. We see examples, for instance, Lego implementing quite a lot of measures to make sure that their online presence and then in an upcoming virtual environments, children are protected. But of course, here especially, I think that there is important to complement these voluntary efforts with regulatory requirements. And one example is actually the UK Children’s Act because we all agree human rights, children needs to need to be protected. But it is another question, how is that implemented in online? And UK code is one example of a regulatory framework setting up the, let’s say the requirements, but when it comes to operationalize it, it was one of the IEEE standards, age-appropriate design, which gives very clear guidance to implementers on what it means to implement this act. So there are already both regulations as well. And this is just one example. This is also discussed in other countries as well. So one example of how, let’s say, standards and regulation can interact to protect children online and other human rights as a matter of fact.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. I don’t know if all of the panelists have reaction. If not, we move to the next one. Do you want to react, James?

James Hairston:
I guess the only thing I’d add is just to base a lot of the work on top of the sort of research that’s being done by child safety experts around the world. There are just so many great institutions. And you mentioned the Lego example. Academics and organizations that are looking at the usage patterns and understanding how children and any number of vulnerable groups interact with these technologies, the harms or their expectations and how they diverge. Prior to working at OpenAI, I worked in virtual and augmented reality. augmented reality. And again, in safe settings, whether it’s doctors or research teams, really go deeper. And we don’t base the work on our understanding as adults. And this is, again, whether we’re talking about children using these tools or elderly populations, vulnerable communities who may have less access, that it’s research-based, that it’s evidence-based, and that I think in these settings it’s possible for organizations to really work with the community we’re trying to build the sort of safety tools and systems around. So I don’t think there’s anything revolutionary about that idea, but these organizations really do such important work. And I think supporting them and advancing their work and putting their research front and center in the development of policies is essential.

Moderator 1 – Maria Paz Canales Lobel:
Definitely. Thank you very much for that answer. Christian, we have another question? Or we have here in the inside? Maybe we could do it like quid pro quo. Yeah. We can take one from here. Yeah.

Audience:
Hi, I’m Viet Vu from Toronto Metropolitan University in Canada. While AI systems are technological in nature, as many of us know, it still involves a lot of human input of various different kinds. And we’ve seen media reports of the kind of labor that is involved in creating AI in the global south to be quite a bit different from the kind of labor that is involved in creating AI tools in the Western world. And so in governing creations of AI, how do we think about sort of international labor work standards and regulations?

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. So anyone from the panelists want to react to that? James.

James Hairston:
Yeah, I’m happy to begin. I mean, I think, again, the importance of protecting labor that’s involved in the production of these tools is essential. And sort of the work that’s been done over the years in advancing the rights of workers in other sectors, I mean, has to be applied in artificial intelligence and making sure people are compensated properly, that when there are abuses or harms that they are addressed. And so, you know, again, this is just an area where everyone is going to have to continue to be vigilant, whether companies inside the private sector, you know, monitoring groups, and just making sure that we’re listening and understanding the production, understanding where voices aren’t being heard, or where, you know, actors at any level of the sort of labor and employment chain and the production, the development of tools are sort of acting improperly. And so, you know, I think if there are places where existing law and policy can’t address those harms, and, you know, we certainly should be vigilant for places where there are gaps, we have to talk about them openly and constructively and sort of move quickly to make sure there aren’t communities and types of work that’s going on that is abusive or harmful.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. Maybe we have time for one last question online, if there is another one?

Moderator 2 – Christian Guillen:
Yes. Now they are popping in, actually. I shouldn’t have said anything. It’s three questions, but I’ll sum them up, okay? There’s one colleague from, one professor from the Afghanistan Kabul University, asking could we apply generative AI in developing countries like Afghanistan in the education system? I’d say why not, but maybe you have a brighter answer. And there are two other questions. One is asking for the accountability aspect, given that AI is not fully understood, and to balance AI values and risks, do you think how should it be dealt with the accountability about AI? And one last question that is rather referring to the ethics. For the moment, AI is providing output based on human-based input data, but with the time it may be processing its own created data. So is it ethically acceptable to have machines decide on humans’ matters based on no human data? This gets complicated now. What’s the plan to make sure once in time we’ll not be left in hand on something out of total control. So, very concrete question on the education system and a very wide question in the ethics. Yeah, for two minutes and a half it will be a little

Moderator 1 – Maria Paz Canales Lobel:
bit of a challenge, but maybe I can invite Dr. Seth to react in the question related to accountability mechanism, how we can build effective accountability

Seth Center:
mechanisms. Sure, I think every single governance question comes down to ultimately accountability. I think skepticism around governance frameworks that are voluntary come back to the question of accountability. I think even a hard law framework comes down to accountability if the challenge is figuring out what to measure in order to apply a hard law. From our approach, as we think about accountability in the context of a voluntary framework, at least as a bridge to something harder, I think it comes back to what you were talking about in part, which is there is a reputational cost that comes along with signing up to voluntary commitments. And James, I think you’ll probably have some views from OpenAI’s point of view as well on what accountability means for a so-called voluntary commitment. I think insofar as volunteerism and accountability are linked to technical action, you can talk about accountability in meaningful ways because it can eventually be measured. And I think that measurement question is extremely important to dive down below the abstract level of principles, where I think there is an increasing amount of skepticism that principles can achieve accountability. Thank you. We have one

Moderator 1 – Maria Paz Canales Lobel:
last question, but now I’m gonna close the queue because we need to move to the next segment, so please. Hello everyone. My name is Ananda, for the record. I’m

Audience:
the chair of USIGF Nepal and I represent our developing economy. And while IGF 2020 is being bombarded with all the topics from AI, we are still struggling to connect the people. 40% of the population in Nepal and APEC region is still unconnected. And if we see the 40%, those are connected are the newly adopters of the Internet. My question is, while developed nations are adopting AI and these technologies, the nations like Nepal are striving to actually counter the disinformation, misinformation that are being held by the generative AI that became so popular in 2022 with the use of it. You can name it Charts of IT or Google So, in this scenario, how does developing economy help these kind of nations in co-entering the digital era? And another thing is we discuss this kind of issues in multi-stakeholder platforms, but these platforms are not capable enough to actually set the policies, because when it comes to policies, multilateral system actually influence the policies across the world. So, how does developed economy co-create the digital ecosystem that is inclusive for all? Thank you.

Moderator 1 – Maria Paz Canales Lobel:
I think it’s a very complex question to answer in just a few minutes, and probably we’ll need to answer it from different panelists here, but I don’t know, for example, James, if you have a take in terms of like the jurisdictional channel, sorry, jurisdictional challenges that has the idea of like implementing this governance mechanism for companies that offer services to different context.

James Hairston:
I’ll maybe start with two projects that I think begin to get it sort of solving for this, but again, are just the beginning. We recently launched a project, a grant program for democratic inputs to AI to sort of give, you know, communities, nations, different domains, the possibility of trying to surface, you know, what are the unique values and the types of outputs that are responsive to sort of local contexts from AI systems that a community sort of expects, and those, you know, acknowledging that those may diverge, and sort of beginning to figure out what is that process that is sort of very locally, regionally, community-driven, and how can we sort of build on that. So I think that’s going to be one important stepping stone. Another is we actually also just announced what’s called our Red Teaming Network, and just the security and safety testing that is very specific, you know, to Nepal, to nations and communities around the world, and sort of, you know, again, encouraging safety and security testing, submitting evaluations, you know, you mentioned mis- and disinformation. If there are types of, whether it’s linguistic failures or ways that, you know, large language models or tools like our are attacked or vulnerable to sort of certain types of outputs, we want to know, you know, we want to really hear where we’re falling short or where, you know, perhaps a gap in understanding or a particular type of action is producing results that are especially harmful. And so I think that practice, building that community of practice, submitting those types of evaluations and growing the community that is doing that in different countries, in different regions, across sectors, is going to be important.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, James. With that intervention, we will move to the next segment of this conversation that is particularly linked and related to the role of IGF. We are all here sitting in this room and participating of this event on internet governance, and there is a particular value on the conversation that happened in this space and have been happening for 18 years, shaping digital technology, shaping the form and the use of the internet. So on that note, what we want to question during this part of the conversation with the intervention of the speakers, the role of the IGF as a convener and facilitator of artificial governance action. And for that conversation, I will turn first to Clara, and I will interrogate about the experience of IEEE working and developing voluntary guidance. What is just your perspective about the opportunities and limitation of self-regulatory effort to ensure responsible AI governance, and what could be the contribution of the IEEE experience in the role of IGF facilitating this international governance of AI discussions? Thank you.

Clara Neppel:
Thank you. So we see our standards being adopted. Actually, you know, once a standard is out, we as a standard-setting organisation, we don’t really need to know who adopted it. We just had a meet-up last week, and I was surprised to see how many people actually say that they know the standards they implemented in different projects, both private as well as from public actors. So I think that this, I would like to bring here, well, one example is, speaking of children, a UNICEF project which really used the value-based design approach to change, let’s say, the initial design of a system to find talent in Africa from, let’s say, a closed system which was intransparent to something which the young people actually have agency now on. So that is actually a proof of concept that you, by having certain methodologies and taking these values and expectations of the community into account, you actually end up with a different system. And I want to discuss here really the incentives of the voluntary engagements and what are the incentives of adopting a standard. Well, one is, and we have also the city of Vienna who is one of our pilot projects for the certification. Of course, if you are discussing with public authorities, one of their incentives is trust. They want the citizen to trust their services. And you probably also have a lot of private actors who have the same incentive. But if we’re talking about sea level people, of course there’s also the discussion, so what is it in for me? And we know from business schools that one way of, well, making money, well, two ways. One is to minimize cost and the other is to differentiate or focus. And we saw actually in the meetup investors who were interested in this standard because one way of doing value-based design or one outcome is that you end up with better value proposition. And I think that this is an important way of moving away only from the risk-based approach to actually thinking what kind of measures of success do we want to have in the future. Do we want to still have performance, which is of course important for us for a technical community, or profit, which is of course important for the private sector. How do we incorporate these other two dimensions, the people and the planet dimension? And I think that this is something that we have to discuss collectively. And of course the other incentive is, of course, to satisfy regulatory requirements. We see that now with the AI Act a lot of people are interested in these standards because they anticipate that these will be required. But here is also something where I want to see say, to very much stress that there is a limit on voluntary measures. So we as a technical organization, I think also as a private actors, our business of private actors is not to maintain human rights and democracy and rule of law. Of course they should, we all should be part of it and we should comply with it, but I think that it is, there are certain red lines which have to be decided in a democratic process. And the only way to do, let’s say, a common approach to this is this kind of feedback mechanism. If we want to have something like a global governance, we need to establish these lines of communications, to have standardized way of reporting incidents, to have benchmarking, testing facilities, and have, being here in Kyoto, having something like the International Panel of Climate Change, which actually has an advisory role to governments, to say where is it where we actually need to do something and see if new regulation is needed or if regulation needs to be adapted. As a matter of fact, we are just doing this with the Council of Europe, with one of the applications of artificial intelligence, immersive realities. We are working with them to see what are the possible impacts of human rights of these new technologies.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. I think that you bring a super relevant point about the role of incentive. I will be very happy to hear the take of the other speaker when they intervene about that, because I think it’s a challenge for everyone to identify and align with those incentives in order to bring the process to the right direction. But for now I will turn to Arisa and ask you and your experience as a social science researcher and your activities in that role include facilitating dialogue with various stakeholders. So I think it’s very important to understand what are some of the challenges in conducting that facilitation of the multi-stakeholder engagement with AI governance that you can share with us and how this can be effectively, for example, this learning integrated in the role that IGF needs to play as a facilitator of these discussions.

Arisa Ema:
Thank you very much. So I think the role of the IGF is really important. So I think the role of the IGF is really important. So I think the role of the IGF is really important. The previous session I organized the session and I invited my friends who are actually in a wheelchair however he or she can’t come. So I actually brought the robots, the avatar robots that they can operate from remotely from their home. So I think the role of the IGF is really important. So I think the role of the IGF is really important. So I think the role of the IGF is really important. And this kind of thing kind of connects to the first 17-year-old boy’s question. So it is really important to be connected to all those other stakeholders or the other people with some challenges. So I think it is very important too explanation what happened to the neighbors but to the people who are physically, you know, they can virtually come to these places and make the presentations, interact with others, but in the air the side, in our session, what we discussed is that although we have that kind of system, on the other side, there are many people who are not actually available, so I think it’s really important to, when we are discussing about AI governance, we need to put humans also into this kind of systems, and humans is kind of the most flexible, you know, or maybe resilient to be kind of adopted. So, what I wanted to say is that we need to be more adaptive to all of those kind of the crisis situation or maybe how to say, to be more creative and to be more active. So, what I wanted to expect, what I expect to this IGF forum is that we can talk about the AI governance, but we need to include human, and human-centered is the very key word. So, we need to be more adaptive to all of those kind of issues, but we need to be more creative and to be more active to all of those kind of issues, and I think it’s really important to repeatedly come out to all of this kind of discussion, and like the democracy, the rule of law, or the human rights, so with this kind of topic shared with people, and then it will be kind of connected or the discussion, and I think it’s really important, and I think it’s really important, and I think it’s really important. So, I think it’s not all the interesting and important things being discussed in this panel session, however, maybe the next-step action is discussed outside this room, so with this over the lunch, or maybe having in-person discussion, or maybe just having the tea, so that kind of forum is really important, and because IGF forum is open to everybody, we can talk with the person just by taking a moment, or offering a small room to have the person having a good time with the trovies. I would want to speak at, or like to the IGF to be inclusive and also this kind of in-person and informal communication is really important, and I really appreciate that many people came to Kyoto and also enjoyed the Kyoto.

Moderator 1 – Maria Paz Canales Lobel:
So, I would like to invite you to react to the IGF, and I would like to invite you to react to the IGF, because it has so many dimensions, it has the dimension of the different stakeholders, the dimension of the particular situation of vulnerable groups or groups in vulnerable conditions, which is more appropriate, but also has a geopolitical and geographic dimension, and on that, I will invite Tobi-Kiri to react in terms and founder of the IGF and how the IGF can contribute to, for example the interesting dimension of AI governance and how IGF can continue contributing for addressing these kinds of challenges at a big international scale, and that is what we were talking about because this technical

Thobekile Matimbe:
that challenge. Thank you so much. I think I’ll start from a premise of just highlighting that I know that a number of colleagues were not able to be here because of visa issues and I think when we’re talking about inclusion, I think it’s something that we need to proactively think about in terms of how we can make sure that we have inclusive processes but also accessible platforms for those from the Global South specifically. And just going beyond that, I’ll highlight that I think within, you know, the Internet Governance Forum there’s need for, you know, continued engagements and engagements with critical stakeholders and a victim-centered approach to the kind of conversations that happen here in the sense of having everybody, vulnerable groups, well represented in terms of the conversations that happen, especially when we’re looking at AI. I will also highlight that I think an understanding of the global asymmetries, I think it’s something that is important to continue to highlight because we do realize that when we’re looking at, you know, global north versus global south, the different contexts and I think it’s something that I highlighted earlier, the importance of context and I think my colleague here as well highlighted the aspect of understanding the different contexts that are represented within the Internet Governance Forum and I think it’s something that will continue to shape processes even better and also to be able to ensure that we come up with, you know, AI focused, you know, solutions or resolutions that ensure that, you know, no one is left behind when we’re looking at fundamental rights and freedoms particularly and I think just to emphasize that I think definitely this is, you know, a forum that we continue to leverage with regards to advancing So, I think, you know, we need to continue to engage in terms of what we are doing, you know, the promotion, protection, as well, of fundamental rights and freedoms, but also we need to continue to engage in terms of remediation for victims who are likely to suffer the adverse impacts, you know, of design, of technology, and that is something that cannot be overstated, and I will, I think, just, you know, round off by just highlighting that I think we need to continue to highlight that there’s a need to break down the walls. Earlier, I highlighted about the center of power, the centers of power, when we’re looking at AI, and I think the IEGF is that, you know, good opportunity to be able to break down the walls that stand in between the centers of power in a multi, a real multi-stakeholder engagement, where, you know, all voices are heard and no one is left behind.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much, and in this same line, Dr. Santer, I invite you to react to this very same issue of, like, how to deal with this diversity of realities and the diversity of processes that are ongoing for dealing with this diversity of realities at the national level, at the regional level, in some cases, as the European Union, that Clara had brought before, but also at the global governance system, some propositions coming from the UN, and creating new bodies for overseeing the governance of Artificial Intelligence, how that also can be approaches from the perspective of a government that is conducting its own efforts at the domestic level for, you know, for, you know, for, you know, for, you know, ways to protect privacy, protections, and democracy, So, you know, within this shared work, what are the efforts at the domestic level for finding the most appropriate way to address the governance of Artificial Intelligence and being inclusive in this, and how those efforts and this experience that is inquiring the government in the process of doing that can be used for the IGF for continue making this global Artificial Intelligence Governance discussions connected and interoperable. Thank you.

Seth Center:
Is the answer yes to that? But how? The tricky question is the how. Let me rewind just a minute to the question of accessible platforms and walk into how I think the IGF can play a role. I think if you get to the end of the governance story and you get it all right, you’re still left with the question of why we care about AI. And I think the answer that I think we believe in the United States, and I think most people in this audience believe, is that you should employ the most powerful technologies to address the most important issues. important problems in the world. And how do you get powerful AI developers, whether they’re companies or governments, although it’s usually companies, to devote time and attention to govern AI responsibly and then to direct it towards addressing society’s greatest challenges? And the answer is the multi-stakeholder community, directing them through conversation, delicate pressure, into thinking about those problems in meaningful ways. A few weeks ago at the UN General Assembly’s High-Level Week, there were a series of events that brought together different parts of the multi-stakeholder community and the multilateral community and countries to talk about these issues. The Secretary of State of the United States co-convened one with a whole series of diverse countries and companies, including OpenAI, and we just simply asked these companies what they were doing to address society’s greatest challenges, defined however they wanted to within the context of the SDGs. And if you open up those conversations and you have them at the UN, you have them in the General Assembly, and you have them at the IGF, if you ask questions about the impact on labor, if you ask questions about what we’re doing to protect children’s safety in the AI era, if we ask about inclusive access, it naturally changes the entire conversation. And so the young gentleman who asked a question about whether or not the multi-stakeholder community could make a policy or not, and I think there was a sense of skepticism, I actually am far more optimistic. Policy is made, at least in democracies, including ours in the United States, by listening to the inputs of everyone. Our entire architecture in the United States for our AI governance framework was built on listening to the multi-stakeholder community in a domestic context. context. The entire architecture for thinking about the voluntary commitments, our most recent one, included extensive multi-stakeholder conversations. And these are the way in which governments in democracies actually formulate policy. No government has the hubris to believe, at least the ones that I’ve talked to, that they understand foundation models and generative AI. They need the technical community, the standard-setting bodies to help them. They need companies and the experts in companies to help them. They need civil society, human rights organizations to help them. And out of that input comes an output, and that output is policy. And then you need governments to actually enforce the policies. And that, I think, is actually where we probably have a bigger challenge. But if you take the step back and you say to yourself, how do we ensure accessibility? How do we ensure collaboration? We should encourage the energy in all of the forums, whether it’s the UK Safety Summit, whether it’s the G7 Hiroshima process, whether it’s the UN’s H-Lab, because we are at the early stages of the next era of AI, and we need all of those conversations at this point in time.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. And I turn kind of a similar question now to the private sector, represented here by James, in terms of, like, you don’t have jurisdictional borders in the offering of your services. I mean, you are binded by different regulatory frameworks in different jurisdictions, but you need to deal with this question about artificial intelligence governance in a way in which you can operate as a company and offer your products and services beyond the borders. So what are the challenges in that perspective, in terms of, like, how you are dealing with the discussions of artificial intelligence governance at this different local and domestic level, regional in some cases, global also, and how bringing some of those challenges to the discussions here in the IGF will be useful in terms of address them for the perspective of the industry?

James Hairston:
Yeah, I mean, you know, start with the first challenge that comes to mind, which is sort of size. You know, we are trying to make sure we are in as many of the conversations that, you know, as we can be in, and, you know, in all regions of the world in every country, you know, cities, states, geographies, they’re important discussions. It’s impossible to be in every room, but I think coming off sort of the recent listening tour that we did around the world, have a great respect for sort of the, just the variance in sort of the needs for these tools, the different restraints that are going to be placed on areas where, you know, hard and soft law will differ, and so just making sure that we are, you know, in the right places, that we’re listening fully, you know, that we’re providing the right sorts of research and technical assistance, I think that’s probably one of the sort of threshold challenges, you know, of just sort of making sure we’re participating in the right ways, hearing and learning in the right venues. Then from there, you know, I think, you know, I think sometimes there’s a discussion about the sort of spectrum of, all right, you know, you have these really important short, medium-term risks as well as sort of some of the longer-term, you know, ensuring safety for humanity, sort of on the road to artificial general intelligence, you know, which is sort of seen as a spectrum and sometimes talked about as if, you know, you have to make a binary choice of either sort of addressing short-to-medium-term harms versus, you know, looking sort of further out into the future and being focused on building the international and domestic systems to solve for those. And you know, we don’t think that’s a choice, like we have to work on both, right? And we as the private sector, as a, you know, as a research lab, have to be contributing to those discussions as countries formulate their laws, but also on the other side of the regulatory conversation as, you know, countries, societies decide how they want to use these tools for good. And so, you know, being in enough rooms, contributing the sort of core research and technical understanding, making sure that, you know, the transparency, the work that we’re doing around our tools is aiding those conversations in as many geographies and for as many communities as possible, it’s a challenge, but it’s a responsibility. And so, you know, again, we just welcome and, you know, sort of being in as many of those rooms and as many of those conversations as we can be.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. And now I open the floor again for reactions and comments from the audience here inside, but also online. Do we have any online?

Moderator 2 – Christian Guillen:
Yeah, sure. Please go ahead. There’s a lot going on, Maria. Let me start with one question from Mokaberi from Iran. Could shaping the UN Convention on Artificial Intelligence help to manage its risks? Do geopolitical conflicts and strategic competition between AI powers allow this at all? And what is or could be the role of the IGF in this regard? And if I may, I would like to seize the opportunity to enlarge the question a little bit also to you, James, because I have the great opportunity sitting next to you. And you being a newbie here at the IGF, not just you as an individual, but representing open AI, what do you think could be the added value of the IGF when it comes to the discussions right now on regulations of AI and governance and do you have the impression open AI could kind of contribute in future times as well? So two questions in one.

James Hairston:
Yeah, no, I mean, absolutely. I think one of the comments earlier just on both benchmarking, like defining what does good look like, I think that’s going to be important just as much from a technical perspective as it is in sort of policy development around the world. And so I think there’s a really important role for IGF and international institutions to really harmonize those discussions and say, these are the benchmarks, these are how we’re going to be grading our progress. And that’s probably where I’d start. Similarly, to address sort of the first part of the question of sort of where we can build on existing work that’s gone. I mean, I think for a lot of these technologies and sort of where we’re heading next, it’s important to build on just the important conventions and treaties, areas of law that we already have in place. And that’s not to say that there won’t be new approaches, new gaps as we’ve been talking about today, but we also don’t necessarily need to reinvent the wheel everywhere. And so take the hard work that’s been done in areas like human rights and draw on that as we sort of figure out the places where we want to set new standards going forward.

Moderator 1 – Maria Paz Canales Lobel:
Thank you, James. I don’t know, there is any inside question? No, I don’t see anyone. Ah, there, sorry. I’m going to turn it over to Hossein Mirzapour, who is the co-chair of the IGF, and he’s going to talk a little bit about the IGF. Go ahead.

Audience:
Okay. Hello, everybody. This is Hossein Mirzapour from data for governance lab for the record. Thank you for bringing up this crucial issue of how IGF, whether or how IGF can help to deal with AI, specifically the governance and regulation of big data. I’m very proud to be part of the IGF, and you know better than me, we have discussed many times for more than one decade about the governance and regulation of big data, digital privacy, I don’t know, like data governance, and even finally, we didn’t, we were not able to reach a global consensus, a global framework to deal with big data, and we are not able to reach the same regulatory framework and laws, and you can compare now the DSA and DMA in Europe with the way U.S. is dealing with their companies, so my big question to a bit add a spice to your interesting topic is, as far as we have not been able to reach a global consensus and global framework to deal with big data, how can we be optimistic to reach a global consensus and global framework to deal with AI, and you know, well, AI is rooted in big data as well. And just last but not least, I have very quick yes-no question for James, Mr. James, who is representing the private sector today. Right now, is there any emergency shutdown procedure in your company, like, if you, by the case, you find that there is a, like, very emergency danger coming out from your company and corporation for the fines, for example, a pandemic, or, you know, financial crisis, is there any procedure in place right now for an emergency shutdown or not? Thank you.

James Hairston:
I can take that last one, and, you know, when there are we have harm reporting, and we take, you know, security report, and so, if there are admit this dates, with all the we can turn our tools off by geography in that way. I think there are probably many layers to that question beyond just on-off access, but happy to follow up and understand the types of shutoffs that you have in mind.

Moderator 1 – Maria Paz Canales Lobel:
You want to react to that?

Seth Center:
Maybe because I’ve never come to the IGF before, I’m not as down as you. I think there’s a tremendous amount of consensus on AI governance. I think obviously the challenge of enforcement and what the regimes look like may be a bridge too far at a global level, but I don’t think that’s an existential threat to the value of these conversations or pursuing an AI governance conversation. For instance, if we were to ask ourselves moving into a future in which foundation models and generative AI will likely subsume narrow AI, what are the kinds of safeguards you would want in place as a governance structure? I think everybody would basically agree. You want some kind of internal and external red teaming. I think you’d generally agree that you want information sharing among those who are developing these models. I think you’d generally agree that for finished models, which are potentially profoundly powerful, you would want some sort of cybersecurity to protect model weights. I think you’d generally agree that you can’t solely trust those developing them to be accountable, so you’d want third party discovery and auditability in some way, shape, or form. I think you’d basically want developers to agree on public reporting and capabilities. I think you’d basically agree that they should prioritize research and safety risk, including on issues like bias and discrimination. In my senses, if you get to the end of this, you’d also basically agree that they should employ these to address society’s greatest challenges. At that level, I’m fairly optimistic that we’re at least going in the right direction.

Moderator 1 – Maria Paz Canales Lobel:
Thank you. I see two more speakers lined up here. I don’t know if we have some online tool. Can you read the two and so we can have time for the other speakers?

Moderator 2 – Christian Guillen:
Okay, I have. And we post all of them to the panelists so you react. I’ll be very quick. Two questions actually. One from my side to Seth because it’s really pressing and I’m hacking the system right now. But Seth, you said before we have to get all stakeholders involved. I’d be interested in your opinion on the fact that was uttered somewhere here I think by the steering group UN on kind of the analogy of the we need something like the International Atomic Energy Agency for AI. This idea which sounds kind of rude. Do you think that is an adequate idea or not? And maybe I can pose the other online question already. Yeah. That is a colleague, actually a member of the parliament from South Africa who’s asking Willem Faber. Considering that AI technology was developed by humans, could we not explore the possibility of leveraging AI to establish government regulatory systems instead of relying solely on human efforts to find solutions? More of a technical thing.

Moderator 1 – Maria Paz Canales Lobel:
So AI regulating AI, basically that’s the proposition. So maybe we can turn for that one to James on what are the take and with Seth for the other one. Yeah.

James Hairston:
Well, one area of I think long-term research, this actually gets back to a question that was raised earlier. I think it’s important to have humans in the loop and the development of systems and then their testing. We’ve talked a lot about red team and audibility. And yet there are a lot of research possibilities around the use of say synthetic data in the future. We’ve been talking about bias and sort of what the future avenues for addressing them might be. And there’s one area of sort of work around the world that I think needs a lot more exploration in sort of how we might create high quality data sets that are derivative of research work by domains to sort of generate the ability to perform all sorts of new tasks. And in that way, you would have information not based on sort of a current corpus of the internet or people’s information that of course involves a lot of human training to get to, but that is derivative and is used to sort of build new capabilities. I think that’s going to show up in the some form, you know, in a lot of domains, and there are pieces of that that are going to require a lot of, you know, monitoring and evaluation, but there are other ways in which sort of synthetic data sets help solve some of the problems that we’ve been talking about. It’s not a panacea, of course, and that what you could use in sort of deconstructing and reconstructing information that tries to resolve gaps in, say, language of the available information we have today, or over- or under-representation of certain regions, or genders, or otherwise, that that synthetic data could then be used and applied to, you know, create personal tutors, or to improve genomics research, or advance our understanding of climate. So the synthetic, again, that’s one area of research, there’s a lot to do there, but I do think as we talk about sort of machine-created data, again, with a lot of humans and a lot of important standards bodies, and research institutions, and government security testers in the loop, there are actually some really, I think, interesting possibilities there, but that doesn’t mean we can just sort of step away and let that happen, so I’ll just leave them there.

Moderator 1 – Maria Paz Canales Lobel:
Do you want to react to that, Clara, maybe?

Clara Neppel:
Yeah, well, actually we have a working group on defining the quality of synthetic data, because again, we are coming back to define what is good, what is then ethical synthetic data, and yes, I agree with you that actually it is one of the way of providing, let’s say, scientific data to be used for research, and so if you’re using it in that way, I think it’s okay. But coming back to why it is important to think about the global regulation, or global governance, sorry, of AI, coming back to the analogy of electricity, I think that now we have this moment where it is out in the open, and it’s being used in so different ways and different geographies, so we need, now we are coming to Japan, we use a different plug and socket, so we need to have at least a transparency, what is being used there, where is it where we need to adapt, we need to have a kind of, as I mentioned before, transparency in the sense of what… you know, basic information about how these AI models have been used and what is important for that context. And I think that it is laudable that, of course, we have these private efforts to make AI as trustworthy as possible, but it is still something which is closed. So, I mean, some of the things are made open, but it is, again, voluntary. So, we need to have, like, a certain common ground to understand, you know, what we are talking about, what are the incidents, what are the data sets, where is synthetic data being used, what kind of quality of synthetic data is being used. And I think that once it becomes so everywhere, I think that there is a pressure as well to kind of have this, you know, standardized way of understanding the impact of

Moderator 1 – Maria Paz Canales Lobel:
AI. Thank you very much. So, I turn to Dr. Santer for the other question quickly. And after that, I will take one more question from the audience, and I will ask you, all the speakers, to do a round of final remarks so we can start to close. Thank you. Go ahead. So, I certainly think the

Seth Center:
IAEA is an imperfect analogy for the current technology and the situation we faced for multiple reasons. One being the predominance of private sector developers of AI versus state-based questions about nuclear control. The second being questions of the ease and facilitation of verification and what you’re trying to verify and track, I think, is quite different, at least in the era in which the IAEA was developed versus what we’re talking about in the AI era. I think there is one instructive lesson that comes out of the IAEA, however, and that is between 1945 and 1957, when the IAEA was established, was 12 years. And so, as we pound the table and demand action to institutionalize global governance around AI, we should be a little more patient with how this evolves. And I think I’ll leave it there. Actually, I won’t. I will say, look, we do need scientific networks that span countries that are convened to take on these problems, if for no other reason to build shared assessments of risk, to agree on shared standards for evaluation and capabilities, which I think we will need shared international approaches to. And so, I think we should continue to look for the right kinds of models for international cooperation, even if that’s not the right one.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. Please, your question.

Audience:
Yeah. Thank you very much. I’m Christine Mujumba from Uganda, but I’ll be speaking as a mother in this regard, and really advocating for the seven-year-old boy โ€“ I think it was from Bangladesh โ€“ who asked a question on children. And there have been follow-up discussions on whether such forums have a place in influencing policy. And being from a technical background and many other backgrounds, sometimes I find that we get lost in the high-tech definitions and all that, and we lose the low-hanging fruit of common denominators, such as what we shall all agree, even in our diversity, that we have all been children before. And even in that session before, when we were talking about cybercrime, it came out clearly that we need to protect the future generations. So I think for me, my ask to experts and partners like you, as you have your elevation pitches wherever you are, is not to sort of have the low-hanging fruits come out. If you all agree that you have been children, and we can find a child in us, let’s at least get there in addressing the AI that you want, and maybe these other things we will learn from there to have the inclusive designs you are talking about, whether to buy us things or not. So for me, it was really that plea of let’s find spaces, even in harmonization, in addressing common denominators such as preserving future generations. Thank you.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. I think that was a question, but a comment also. So I invite you to react in a final round. considering this last question, but your remarks in one and a half minute or less. I invite James to start and I will move this direction. Yeah.

James Hairston:
So, just final remarks here?

Moderator 1 – Maria Paz Canales Lobel:
If you want to address some of the last question, and if not, your final remarks. Yes.

James Hairston:
Yeah, no, I mean, again, I think the sort of public and private collaboration on sort of the safety of these tools, ensuring that both on the design side, on sort of the reporting, and in sort of the research we do about how children, other communities are using these tools, how to protect them, how to make sure, you know, even where, you know, tools like ours are not for use for, you know, anyone under 13. You know, understanding how young people and communities that are vulnerable come to these tools, how they interact with them is just going to be an important part of the work ahead, and being responsive to the new research that comes out of sort of the academic community and civil society, and, you know, being able to action reports of crime or of misuse is going to be key. I think in terms of sort of closing remarks, I mean, I think we’re at this important moment, and it’s just going to be essential that we really build on the momentum that, you know, has been put together, whether the work on the voluntary commitments that we very much see as our responsibility to continue to act on and to contribute to the international regulatory conversation and the promotion of long-term safety, that we just, again, sort of continue to get more and more concrete about, you know, where we’re heading, about the sort of international tools that we want to apply to these new technologies, and that we build the capacity both for identifying harms, reporting those harms, understanding what new capabilities are sort of working or putting communities and people at risk, but also what the really, you know, the unique opportunities are here for these types of tools, and those will be different. They will be adopted at different rates. The analogies to, you know, electricity I think is instructive because, you know, there will be different decisions made in education sectors or health sectors and finance and other areas. But, you know, really getting concrete about how we can take some of these tools and apply them to problems for people while also, you know, trying to solve for the long-term harms and risks, I think is going to be important. So I’m really glad to be here and to participate in this discussion.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much. Happy to have you. Thank you. Clara.

Clara Neppel:
Thank you. Well, I think that especially when it comes now to generative AI, what will be important is to be as agile as possible, and this will be important to be from the organizational level to the national level to the global level. And I think that for all these levels, we need feedback mechanisms that work. at the organizational level, we have to make sure that these feedbacks are also taken into account for the further development of this foundational model. I agree with you that, of course, it has to take risk into account and it has to be differentiated, but I think that for certain high-risk applications, we have to have conformity assessments, and this has to be done through independent organizations, because there is, again, a different incentive to self-certification than being compliant. I think, as well, that maybe the International Atomic Agency is really difficult as an analogy because we have so many uses of artificial intelligence. I would like to bring back, again, the idea of more of an independent panel, independent multi-stakeholder panel, which, as a matter of fact, should be implemented also for this important technologies, which are acting, basically, as an infrastructure right now. So if it’s a public infrastructure, we also need to have a multi-stakeholder, let’s say, governance for that. Thank you.

Moderator 1 – Maria Paz Canales Lobel:
Thank you. Maybe more similar to CERN, that Atomic Agency, just another idea. So I will move to Wikileaks for your final remarks.

Thobekile Matimbe:
Thank you so much. I think what is clear from this conversation is that, as human beings, we cannot cede or forfeit our rights to technology, and we need to continue to, I think, emphasize the importance of us remaining with that agency over our fundamental rights and freedoms, and in that way, we will ensure that children’s rights are promoted in the use of AI, women’s rights are promoted in the use of AI. I think we could also center conversations around environmental rights, et cetera, and I think it’s a critical conversation that we need to continue to engage in, and looking at basic concepts such as participatory democracy, I think bringing it into the realm of Internet governance, I think it’s something that we need to also emphasize, that there’s need for participation of everyone, marginalized groups, vulnerable groups. but also ensuring that the processes that we have are actually very inclusive and we have a truthful and meaningful multi-stakeholder approach.

Arisa Ema:
So, thank you. So, I think that the AI governance discussion is really important and also very challenging because the AI itself actually kind of changes and evolves and also the situation changes, the environment changes, and in that sense, the people who we need to involve will kind of expand and never, you know, shrink. So, the more people should be involved in this kind of discussion. And in that sense, in my first remarks, I kind of mentioned that we need some kind of concrete cases and to discuss about what will be the risk, what do you mean by transparency, and what do you mean by how to take the accountability at all. However, as many people as we are going to include, we need some kind of philosophy or, you know, shared concept that we can be united and we can at least collaborate with the same context or the same kind of common understanding or the common concept that we share. So, in that sense, I think these couple of days’ discussion really have kind of come up with various important concepts and the principles, goals, and so I really kind of enjoyed this discussion. And so, the last thing I would like to mention is this is not the end, but this is just like a starting point. And this wouldn’t never end, but I think we can enjoy the process of this kind of exchanges and discussion and we need to be kind of aware of that to involve as many people as we can.

Moderator 1 – Maria Paz Canales Lobel:
Thank you. And the final words from the speaker?

Seth Center:
You did a great job moderating us and keeping us on time. Thank you. I will sum up my take and theme using a quote from a famous basketball coach about AI governance. Be quick. quick, but don’t hurry.

Moderator 1 – Maria Paz Canales Lobel:
Thank you very much for that. So we’re running out of time. I suppose to summarize a little bit of this rich discussion, but I only will provide the highlight of the takeaways rather than the full takeaway. I think that we have heard the main takeaway that we have heard here from different perspective of the value of this multistakeholder conversation and the value of like making, continuing making it as much inclusive as possible and enjoying of the participation of the people that is already in this room, but also looking for the people that are still outside of the room and thinking about this as a necessary step in what Dr. Shunter was inviting us to be quick, but not hurry. So take the time for listening, different perspective, and take the time to evaluate the different options to address the different challenges. So we talk purposely about artificial intelligence governance because we think there is a broader concept of just regulation or just voluntary guidance or just ethics. It’s a broader aspect, and this is the value of the Internet Governance Forum that we can reach different aspect of the discussion and bring different levels of expertise and also be mindful of all this level of inclusivity and diversity, the one that refer to vulnerable groups, the one that refer to different fields of expertise, and the one that also refer to different geopolitical realities. So as Arisa was mentioning, this is not the end, it’s the starting. Thank you very much for keeping connected with the process, and thank you all my speakers.

Arisa Ema

Speech speed

191 words per minute

Speech length

1515 words

Speech time

476 secs

Audience

Speech speed

167 words per minute

Speech length

901 words

Speech time

324 secs

Clara Neppel

Speech speed

163 words per minute

Speech length

2258 words

Speech time

830 secs

James Hairston

Speech speed

171 words per minute

Speech length

2940 words

Speech time

1030 secs

Moderator 1 – Maria Paz Canales Lobel

Speech speed

169 words per minute

Speech length

3813 words

Speech time

1354 secs

Moderator 2 – Christian Guillen

Speech speed

166 words per minute

Speech length

711 words

Speech time

256 secs

Seth Center

Speech speed

157 words per minute

Speech length

2174 words

Speech time

829 secs

Thobekile Matimbe

Speech speed

196 words per minute

Speech length

1382 words

Speech time

423 secs

Letโ€™s design the next Global Dialogue on Ai & Metaverses | IGF 2023 Town Hall #25

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Raashi Saxena

After analysing the data, several key points emerge. Firstly, there are concerns surrounding misinformation in Gen-AI tools. Outdated or faulty information has the potential to harm reputations, and the emergence of doctored videos is a significant issue that can lead to gender-based violence harms. This highlights the need for careful consideration and regulation of Gen-AI tools to mitigate negative consequences.

The importance of addressing misinformation in dialogues is emphasised, as it is essential to navigate the threats and advances in AI technology. Including concerns about misinformation in dialogues fosters understanding and collaboration in finding solutions to tackle this issue.

The Indian government, in collaboration with Intel, has taken proactive steps to educate school students about AI through the initiative ‘AI for All’. This curriculum, implemented in central schools, aims to equip students with knowledge and understanding of AI concepts. Additionally, the government has partnered with startup incubators to promote conversations and podcasts about simpler AI concepts, broadening the accessibility of AI education.

Raashi Saxena, a notable figure in the field, is willing to share their AI curriculum and engage in offline discussions, demonstrating a commitment to collaborative exploration of AI.

Diverse participation in an Artificial Intelligence dialogue in India is celebrated, as it includes individuals from various age groups, including a Buddhist monk and housewives. The selection of participants from places with social turmoil and socio-political issues adds depth and perspective to the discussions, enriching the insights gained.

Information provision is highlighted as a fundamental aspect of empowerment. Concrete and accurate data enables people to make informed decisions. Facilitating access to reliable information fosters active participation and engagement.

AI discussions are seen as educational opportunities, expanding participants’ knowledge and understanding. The diverse contributors notably gain valuable insights.

The potential of AI in content moderation is acknowledged for its precision and ability to sift through large volumes of data. AI is considered a valuable tool in addressing harmful content, particularly following the increase in online presence due to the COVID pandemic and concerns about the treatment of human content moderators.

Developers, as key stakeholders in technology, should be actively included in conversations about its role in society. Their perspectives and expertise are crucial in finding solutions and addressing challenges.

Contextualising information according to local needs and languages fosters engagement and response. In India, in-person dialogues in small village settings, coupled with translation into local languages, facilitate more inclusive and fruitful dialogues.

The analysis also highlights that hate speech, misinformation, and propaganda are long-standing issues that technology has made more economical and efficient to spread. Ongoing efforts are needed to address these issues and regulate technology to mitigate their negative impact.

The inclusion of vulnerable groups, such as children and people with disabilities, is emphasised in discussions. It is important to adopt inclusive approaches that consider the needs and perspectives of all individuals, promoting a more equitable dialogue.

The significance of considering different languages in discussions is recognised, as it makes the dialogue more accessible to diverse communities and enables a broader range of voices to be heard.

Finally, the importance of adhering to dedicated time limits for discussions is emphasised to respect participants’ time and ensure efficient conversations.

In conclusion, the analysis of the data provides insights into AI, misinformation, education, and inclusivity. A balanced approach is needed to address challenges posed by technology, information provision and education are crucial, and inclusive dialogues should consider diverse perspectives. AI’s role in content moderation and the engagement of developers in conversations about technology’s impact are highlighted. Contextualisation of dialogue according to local needs and languages is essential, as are efforts to address long-standing issues. The inclusion of vulnerable groups and consideration of different languages promote a more inclusive dialogue. Adhering to time limits is also important.

Roberto Zambrana

In the analysis, it is highlighted that Roberto Zambrana has a neutral stance towards AI and expresses curiosity about what AI itself thinks. This suggests a willingness to engage in a dialogue with AI and consider its perspectives.

Furthermore, Zambrana advocates for a hybrid approach in reaching agreements on general terms and adapting certain topics, regardless of the country. This approach emphasises the importance of flexibility and adaptability in addressing various issues related to AI. This aligns with SDG 17: Partnerships for the Goals, which promotes collaboration and cooperation among different stakeholders to achieve sustainable development.

Additionally, the analysis emphasises the significance of education in understanding the concerns and frontline problems associated with AI. Zambrana recognises that involving both citizens and developers in the process can lead to better outcomes. This highlights the need for awareness, knowledge, and dialogue to ensure the responsible and beneficial use of AI.

Moreover, the analysis highlights Zambrana’s support for global dialogues as a means of overcoming barriers and achieving a balanced understanding of AI and metaverses. Such dialogues can foster collaboration and support between countries, helping them overcome challenges and realise the potential benefits of AI and metaverses. This is in line with SDG 9: Industry, Innovation and Infrastructure, which seeks to promote technological advancements, and SDG 17: Partnerships for the Goals, which emphasizes cooperation and collaboration between different stakeholders.

Overall, Zambrana’s neutral stance towards AI, advocacy for a hybrid approach, emphasis on education, and involvement of the technical community, and support for global dialogues underscore his commitment to fostering responsible and inclusive AI development. These insights serve as a reminder of the importance of considering diverse perspectives and engaging in collaborative efforts to harness the potential of AI for sustainable development.

Audience

In the analysis of the statements made by various speakers, several key points emerge. The first point of concern is the inequalities in access to and understanding of AI and ICT, which should be addressed. Certain populations face issues of inaccessibility and lack of comprehension of AI and ICT, and this disparity needs to be rectified. It is argued that there should be an effort to close the gap and ensure that everyone can benefit from these technologies on an equal level.

On a positive note, it is acknowledged that AI can be used to bring everyone along in global advancement. The potential of AI to drive economic growth and innovation is recognized, and the speakers highlight the importance of using AI to include everyone in the world’s growth. They question what can be done using AI to ensure that the benefits of global advancement are accessible to all.

Additionally, there is a need to balance the advancement of AI and Metaverse design with the prevention of potential problems. It is emphasized that while progress in AI and Metaverse is important, it should not be done at the cost of overlooking potential issues and risks. The speakers argue for a balance between moving forward and preventing problems, highlighting that hesitation to progress can hinder overall development.

The development of metaverse with helpful AI for teaching social-emotional skill lessons to international students is considered important. The speakers underline the need to design and implement a curriculum that incorporates the latest technologies, such as metaverse and AI, to provide effective education to international students. The current teaching systems are often based on standard meeting systems, and the integration of metaverse and AI can greatly enhance the learning experience and improve outcomes.

Global dialogue on AI with different stakeholders is seen as crucial. The speakers mention the importance of sharing knowledge and experiences about the internet, digital technology, and AI from various perspectives. This global dialogue can foster collaboration, learning, and the development of best practices in the field.

The positive impact of digital technology is emphasized. It is highlighted that digital technology helps people in different ways and has the potential to drive industry, innovation, and infrastructure. The speakers acknowledge the role of digital technology in advancing various Sustainable Development Goals (SDGs).

Inclusion of different experiences and perspectives in AI policy and best practices is advocated for. The speakers believe that incorporating various viewpoints and voices in the development of AI policies can lead to more inclusive and effective outcomes. It is argued that a diverse range of experiences contributes to the formulation of AI inclusion policies and the establishment of best practices.

The work of organizations like Mission Public, which engage with vulnerable sections of the population, is appreciated. The speakers commend their efforts to reach out to individuals who are not usually involved in processes such as the Internet Governance Forum (IGF). This outreach to unions and workers is seen as a positive step towards reducing inequalities and ensuring that all voices are heard.

A notable observation from the analysis is the importance of a qualitative approach to understanding the thinking behind deliberations. One speaker suggests that understanding the motivations and thoughts behind each deliberation can lead to a more comprehensive understanding of the issues at hand. It is argued that a more granular understanding of deliberations can be achieved by studying the thought process behind them, thereby fostering more effective decision-making.

Child protection and online safety emerge as critical topics in the context of AI. The speakers emphasize that AI can be leveraged to protect children and ensure online safety. However, they caution that AI can also cause harm, such as the creation of child sexual abuse material digitally. It is stressed that when discussing AI on a global and local level, child protection and online safety should be at the forefront of discussions.

A differentiated understanding of AI applications is deemed crucial. The speakers mention various applications of AI, including content moderation, combating fake news, and detecting copyright infringements. It is argued that having a nuanced understanding of these applications is essential for the effective and responsible use of AI.

The issue of AI bias potentially affecting the validity of information is raised. Concerns about bias in image recognition technologies are highlighted, illustrating how AI can perpetuate biases, particularly in gender representation. It is suggested that biases in AI models need to be acknowledged and addressed to ensure fairness and equality.

The need to strike a balance between regulation and the usage of technology is emphasized. One speaker calls for critical analysis and understanding of technology consumption, rather than relying solely on regulation or fear of using technology. The goal is to ensure responsible use of AI and technology while acknowledging the potential risks and benefits they bring.

Public participation in the implementation of artificial intelligence is seen as necessary. It is argued that involving the public and giving them a voice is crucial for the responsible development and deployment of AI technologies. A speaker highlights the importance of hearing from specialists and considers it a common responsibility to include experts from various fields in decision-making processes.

Proper governance of artificial intelligence is highlighted as essential. The speakers advocate for ensuring that AI is ethically and responsibly governed to prevent issues such as misinformation and fake news. It is emphasized that AI governance is crucial for maintaining peace, justice, and the functioning of strong institutions.

The importance of a uniformed approach in terms of digitalization and AI is highlighted. This includes the need for consistent standards and practices across different regions and countries. It is argued that a uniformed approach to digitalization and AI can help reduce inequalities and promote fair access to technology.

Overall, the speakers highlight the need to address inequalities, strike a balance between advancement and prevention, engage in global dialogue, ensure inclusivity, protect children, promote critical thinking, involve stakeholders, and govern AI and ICT properly. These points emphasize the importance of responsible and ethical development and use of AI technologies to achieve sustainable development goals and create a more equitable and inclusive society.

Antoine Vergne

The analysis explores various perspectives on artificial intelligence (AI) and its implications for society. One notable initiative is the AI for All programme, a collaboration between the Indian government and Intel, which aims to educate school students about AI. This programme is seen as a positive step towards ensuring that young people are equipped with the necessary knowledge to engage with AI technologies.

Opinions on the opportunities and threats posed by AI are divided. Around 50% of the groups believe that AI presents both opportunities and threats, while approximately 30% of the groups see it primarily as an opportunity. This reflects the complexity and multifaceted nature of AI and how it can impact different aspects of society.

There is a consensus, however, on the importance of aligning AI with human rights. Most groups agreed that prioritising human rights in the development and deployment of AI systems is essential. This reveals a shared understanding that while AI can bring immense benefits, it must be guided by ethical considerations and respect for fundamental human rights.

Another area where AI is seen as having significant potential is in research and development. The dialogues highlighted the belief that AI can generate numerous opportunities in this field. This aligns with the broader goal of SDG 9, which focuses on industry, innovation and infrastructure.

The notion of global governance of AI also emerges as a prominent theme. A significant number of participants expressed support for the idea that the governance of AI should occur at a global level. This recognition reflects the global impact of AI technologies and the need for coordinated efforts to address the challenges and benefits they bring.

Sharing experiences and knowledge about the internet, digital technology and AI from different stakeholders, countries and backgrounds is highlighted as being vital. This emphasizes the importance of diverse perspectives in shaping the development and utilisation of these technologies.

The European citizens’ panels, launched by the European Parliament, European Commission and the Council, were viewed as a crucial part of the Conference on the Future of Europe. These panels provided an opportunity for randomly selected citizens to discuss their views and wishes for the future of Europe. This inclusive approach highlights the value of citizen engagement and participation in shaping policy decisions.

Antoine Vergne stresses the need for ordinary citizens’ input in global discussions about internet governance. He highlights the importance of a more open and bottom-up approach to policymaking, allowing citizens to have an impact on policy decisions. This call for citizen involvement in governance reflects the desire for inclusivity and democratic decision-making processes.

The potential for future dialogues on AI and metaverses is also explored. The need to determine appropriate levels of governance and the importance of topic framing are discussed. Antoine Vergne supports the idea of both local and global topic framing for dialogues, recognising the value of context-specific discussions in addition to common global topics.

The analysis also highlights the significance of involving AI developers in global dialogues. By including developers in conversations about AI, not just in their professional capacity but also as citizens, a more comprehensive understanding can be achieved. This emphasizes the need to view developers and AI technology creators as both part of the solution and the challenge.

Global dialogues are seen as an opportunity to promote learning and mutual assistance among countries with different AI capacities. By sharing knowledge and experiences, countries can collectively address the challenges and maximise the benefits of AI technologies.

Inviting ambassadors from each participating country to engage in global-level reflections is considered an ideal approach. This facilitates the sharing of insights and lessons learned from national efforts and encourages international cooperation in addressing common AI-related issues.

Analysing qualitative data from citizen dialogues can present both challenges and benefits. While the process of aggregating and analysing the data may be complex, it offers valuable insights for policymakers and researchers. Artificial intelligence can play a role in making sense of the large amounts of data generated through citizen dialogues, enabling more informed decision-making.

Overall, the analysis reveals various perspectives on AI and its impact on society. It underscores the importance of education, alignment with human rights, ethical considerations, and global governance in harnessing the potential of AI. It also highlights the need for inclusivity, diverse perspectives and citizen engagement in shaping the future of AI technologies.

Session transcript

Antoine Vergne:
Okay. Thank you. So hello, everyone. My name is Antoine Verne. I am working at Mission Publique and we are working on citizen participation. It’s my pleasure today to be with Rashid and Roberto in that town hall to talk about citizens’ engagement and artificial intelligence and the future of it. So maybe Rashid, you want to say a word on yourself and Roberto too, and then we can give you some input and then we will have a discussion and then we can try to understand what could be the next step of such an initiative.

Raashi Saxena:
Thank you, Antoine. Hi, everyone. My name is Rashid Saxena. I’m from Bangalore, India. I was deeply involved in organizing the Global Dialogue back in 2020 on behalf of my country in India. I’m also a member of the scientific committee for the We the Internet project, which we will be discussing further. And I’m really happy to be here and I’m going to pass it on to my colleague, Roberto.

Roberto Zambrana:
Thank you very much, Rashid. I want to welcome as well to all the attendance to this session. It’s going to be very insightful. My name is Roberto Zambrana. I come from Bolivia. I was also involved in the dialogue in Bolivia a couple of years ago and happy also to be part of the scientific commission committee of We the Internet initiative. And well, with that, I think we can go.

Raashi Saxena:
Yes, moving on to you, Antoine.

Antoine Vergne:
Yes. So maybe, Roberto, you can share your screen and so we can have the kind of short presentation on looking back at what we’ve done together and then looking ahead at what we could do together. So it’s about the Global Dialogue on AI and Metaverses. Maybe you can put the next slide. So that was for the program today. And we can start with a short icebreaker. That’s always a nice way to get together. And maybe we can think about next question. That question is if you were able to ask an artificial general intelligence or something very, very advanced, one question, what would that question be? Maybe we can take 10, 15 seconds. And some of you in the room want to share with us or online what that question would be. So think about it. You are in front of a general artificial intelligence and you can ask one question, the kind of, for those who know, the Herschel’s Guide to the Galaxies, that question to the deep mind. You’re in front of deep mind and you can ask one question. What would that question be? Anyone would like to contribute with that? You want to? Sure, sure. We have one here in front.

Audience:
Okay, good morning, everybody. I’m Jane Mananiso, a member of parliament from South Africa. I’m part of the ICT group in parliament. And as well, I’m the WIP of Department of Higher Education and Training, Science, Innovation and Technology. One of my worries with regards to anything that has to do with ICT and AI, and AI, it’s the issue of a forever perpetual inequalities in terms of those who are at the peripherals and as well those who are illiterate. So I want to ask AI, what is it that we can do to make sure that we bring everybody along in terms of the advancement and transformation of a, for AI, be it AR, be it cyber security, cyber crime, and everything that has to do with human rights. What is it that we can do globally to ensure that we bring everybody along? And as we grow our countries and the world, we grow with everybody. Thank you.

Antoine Vergne:
Thank you very much. Thank you, Philippe. Very interesting question. Thanks a lot, yeah. Is there anyone else that would like to go? We have one question from Philippe, which is, can I trust you? So that would be the question that Philippe would ask, would be the trust question. Can I trust you? Other questions in the room? We have a mic there as well, if you want to go, or we can just pass the microphone. Ashley, what did you ask? Thank you.

Audience:
Thank you very much. Good afternoon. I’m Emi Tsudaka from Japan. I am working at a company called OSINT Tech. We use OSINT, we collect a lot of governmental press release in different languages into one language using AI. And we are trying to offer customers to more official and reliable information in English. Yes. And my question and my, what do you say, like question is that when you say design, let’s design the next global dialogue on AI and metaverses. And designing is very, very tricky if we try to avoid a lot of problems. But if we hesitate to move forward, we cannot move forward. So I would like to know the way you think about the balance of preventing problems, but going, move forward to have better world. So that’s my question, that’s. The balance between threats and how we need to advance. Exactly, yes. Thank you very much.

Raashi Saxena:
Thank you. That is a very valid question, given all of the Gen-AI tools that we have and the concerns around the misinformation aspects of it, whether it comes to information being outdated or having faulty information that could have reputational harms to gender-based violence harms with the advent of, I would say, revolutionizing and democratizing doctored videos. So yes, that’s a very relevant question to include in our dialogues.

Roberto Zambrana:
Yes. Any other question, maybe? Someone else, like, yes, please.

Audience:
Thank you. Hello, everybody. I am a high school teacher at a high school called Jiyugaoka Gakuen in Tokyo. And I’ve been trying to develop metaverse with helpful AI and want to teach international students take the social-emotional skill lesson. And we are now trying to develop such kind of a curriculum, but now I’m only using the standard meeting system. And I wonder how we can ask support for the AI developers or the metaverse researchers as a teacher, we need to use those latest technologies so that international students can collaborate together, but we don’t know how to ask for help to people in those research areas. So if you have a suggestion, I would like to collaborate with you. Thank you.

Antoine Vergne:
Thank you very much. I’m not sure if we have a solution, but we could ask the general AI to give us one. That would be one question to ask. Rashi, Roberto, what would you ask that?

Roberto Zambrana:
Well, I think I will ask the AI what the AI think of itself.

Antoine Vergne:
Yes, I think I would ask the same. I would ask the same.

Raashi Saxena:
From a curriculum perspective, the Indian government in their AI strategy has initiated something with, I believe, with Intel. It’s called the AI for All. So basically it’s building curriculum for school students in the central school board to give them education around that. And the Indian government also came in partnership with startup incubators to start podcasts and conversations around simpler concepts of AI. I’m happy to connect offline and maybe that’s of interest to you. But yes, there’s also, Antoine, we could also share the metaverse curriculum or rather the AI curriculum that we had with the dialogue. We’re happy to share that as well with you to keep the conversation going. Thank you very much.

Antoine Vergne:
Very good. So then maybe thanks, Rashi, for the transition. Maybe it’s time to look back at what we’ve done together in 2020. And maybe, Roberto, you can show us the next slide. Sure. So here, the idea what we would like to do is have a look back at a project we did together with many other partners. So on the left side, we have the strategic partners. On the right side, we have the strategic partners too, but in the countries. And on the left side, the strategic partners at a global level. And we formed a coalition and we did the design. So one of the questions was, how do you do the design? And we can talk about that later. We did the design and the implementation of what we called a Global Citizens Dialogue. So what it is, maybe, Roberto, the next slide. And the principle is pretty easy, is we take as many countries as possible over the world or all around the world and in each of those countries, we select a group of citizens, ordinary citizens, citizens that are non-engaged, non-expert, but are selected through random selection or through a system of snowballing to have a group which is representative of the diversity of its country. And so very important is non-expert, non-engaged. These are what we call day-to-day citizens, everyday citizens, ordinary citizens, lay citizens, whatever you want to call them. These are people that live in the country and have an experience of the internet or not, because in some of the countries, we, of course, also have people without internet connection, very important. And we gather them for one day of dialogue. And that day is normally one day all over the world. And they go through different topics. And for each of the topic, they get information. So we were talking about a curriculum. So it’s a very short curriculum in that case, but it’s about the main information on a topic, the main controversies on a topic. And then they discuss on that topic through one or two questions. And those questions guide the discussion. And then at the end, they give a collective answer to that question. So here you see the gender balance of our participants in 2020. So we had around almost 6,000 participants all over the world. And as you can see, we had a very good distribution of ages, good distribution of gender. And maybe on the next slide, you also see the distribution in terms of occupation, which more or less reflects the global population. So that was for us a check to see, okay, what we had in those rooms, in those almost 80 rooms, almost half of them virtual, because it was 2020. So it was in the height of the pandemic. But half of those dialogues were onsite in, yeah, how to say, in face-to-face meetings. So these are for the demographics. One of the session we had, one of the topic was governing artificial intelligence. And we asked the citizens to take a couple of positions and to discuss and collectively to give their opinion on the governance of artificial intelligence. One remark on that, it’s always what you will see after, it’s always what we call the collective judgment. It’s not only the individual opinion, but some of the resonance are also the resonance of the discussion of the groups. And that’s very important for us because we don’t want an opinion poll. We want to understand what people think, when they think. It’s a kind of advanced way of asking the people on complex topics. So, and one of the question we asked, and I’m sorry for the numbers below, because normally it should be 0, 10, 20, 30, but there was a glitch in the numbers. So it’s percentage. And one of the thing we asked the group was to reflect on if they thought that artificial intelligence was more a threat or an opportunity or equally. And as you say, at the end of the day, when they had discussed, the people said, okay, it’s equally an opportunity and a threat, almost half of the groups. And on top of that, around 30% of the groups also said that it was more an opportunity than a threat. So the first results we had was that generally people didn’t see AI as something very, very bad in itself. And it was rather a neutral or positive view on artificial intelligence. So next slide. Then we asked them, and that’s the big advantage of such dialogues is that you can have qualitative work. So we asked them to work on the priorities, which should be the priorities in developing AI systems and AI governance. And as you can see, it was the most, the highest one was it should be aligned with human rights. And then we asked them some questions which were closed questions. So that’s more individual questions in that sense. And here you can see also about the question of ethic and AI. And they had the feeling that, of course, there should be an ethicist involved in all that work in all of the different organizations. Maybe the next one. And that is hard to read, but just I wanted to give you the general impression of that. On that question, we asked the people to tell us, as a group, if there’s more an opportunity or threat in different fields of AI. The last one is where the people had the impression that it would bring the most opportunity and that research and development, science and research. They saw that it was one of the field where it would bring a lot of opportunities and not a lot of threat. Also to explain, all those sentences on the left that I’m sorry are not really readable, but I can give you, for example, the last one, where dilemma, and that’s why we work when we do, and because we are going to discuss about what could be a cycle, a new dialogue on that topic. It’s very important in those dialogues to phrase controversies or dilemmas, because we all know when it’s about to create public policy, to take decisions together, collective decisions, very often what we have is to solve trade-offs, to solve dilemmas, to solve trade-offs. And that’s when those deliberative processes, those citizens’ dialogues work very well. So if I take the last one, for example, or maybe, Roberto, you can start the one before. Can you, yeah, no, no, yes. So the last sentence, and AI brings advances in science and research that are not worth the huge investment needed. We should invest the money elsewhere. So that’s on the left. And on the right, AI brings a lot of breakthrough in science and research, which benefits humanity. So when you see on each of those lines, you had a controversy or a dilemma, and they had to choose. And for example, the first one, where the people thought the most harm would come, is data use is directed by those who want to get profit and exercise power, that is the left part, or data is used and organized for the common good and serves humanity. Here you see that the people all over the world had a more negative view on the use of data and how it would be done. So maybe next slide. And then we also asked, of course, the citizens to who should take decisions, governance, what should governance be done? And we are at the Internet Governance Forum. So it’s interesting to see that for AI, there was a very big part of the citizens that wanted to have a global level discussion and a global level governance for AI more than on other topic. And maybe you can show the next one. Okay, so maybe we can stop here. We can go one back. And maybe Rashi, Roberto, and I see that we also have Desiree online, want to add something on that experience in 2020.

Raashi Saxena:
Maybe we can give a chance to our colleagues online.

Roberto Zambrana:
Yes, we invite anyone online. We have Juliana, then I’m gonna answer. Yes, Juliana, yes. Okay.

Audience:
Hello, everybody. My name is Juliana from Indonesia. Happy to meet all the participants to interesting in AI and global dialogue organized by Mission Public. I think what Antoine presentation is quite clear about what the global dialogue on internet is happen. From my experience from 2020, it is a nice to hear sharing the knowledge, sharing the experience about what the internet, digital technology, and especially about AI in this conversation from different stakeholder, from different country and different background, because I know is that the detailed technology is help people in different ways. And everybody in different stakeholder, different country, different economic situation, it has different experience about the digital technology. And especially, I think what the different experience will bring to the inclusion policy and inclusion, what is it, the best practice for what we should do to have a better world. for the better application of AI in our life. I think it’s enough for me, Antoine.

Antoine Vergne:
Thanks, Juliana. So I’d say Roberto, Rashid, if you want to add something, but then after I can show a second example of a citizen’s dialogue on a related topic, and then we can open the discussion. I wanted to check if Noah is online. I’m not sure if Noah is online. Or anyone else that would like to share with us? Okay, so maybe, Roberto, you can share again the presentation. I can do the second part of the input. Sure. So in here, I will share another experience. This time, it was more done by Mission Publique and less by the coalition, but in a way, it’s a direct, it’s a child of the dialogue and of developments in Europe around citizen participation. And these were the European citizens’ panels. Maybe you can show the next slide. So the context in 21 and 22, we had in Europe a huge process, which was called the Conference on the Future of Europe. And this conference was launched by the European Parliament, the European Commission and the Council. And it was about asking citizens of Europe about their views and wishes and recommendations for the future of Europe. And this process was both at national and European level. It was both online and on-site. And one of the key pieces of that process were so-called European citizens’ panels. And those panels worked on the same principle as the dialogue with the internet, meaning we had a group of randomly selected Europeans coming from all EU countries and representing the diversity of Europe, each talking their own language. But that process was different because it was not one day, it was three weekends. So a much deeper process of discussion, but with a smaller group of people because it was 200 in each citizens’ panels. And in 22, 23, we had a new cycle of those panels with three topics, which were policies being prepared by the European Commission. The first topic was food waste because the Commission was preparing a directive on food waste. The third topic was learning mobilities. So the fact that you go abroad to learn and go back to your country because the European Commission was preparing a text on it, a program. And the second, you see it, it was about virtual worlds because the Commission was preparing a non-legislative text, an initiative on virtual worlds. Maybe you can show the next slide. So the next, so yes, basic facts, we had 150 randomly selected citizens with stratification from all over the countries in Europe, three weekends. And we had those citizens discuss with another, so it’s the photo you see on the links. And on the left side, maybe you can show the next slide, and Roberto.

Raashi Saxena:
Actually, in between that, Antoine, Desiree wanted to make a few comments. Yes. If Desiree is still online.

Audience:
I’m here. Hi. Yes, this is Desiree Milosevic-Evans. I wanted to make a few comments earlier on on the findings that you have presented in 2022. I believe that it’s very important, first of all, the work that Mission Public is doing, and that’s why we like to get engaged, to get out to the people who are otherwise not really close to the process of either national IGF or the global IGF. And this is not one of their first priorities to think about. So, from the point of concept, I really always liked how Mission Public tries to reach out to vulnerability section of population, but also of unions, of workers that are going to be really somehow affected by all the policies that we are discussing here. When you pointed out that some of them wanted regulation on a global level versus like regional level, I think it would have been also good to tease out like the motif as to how, why these deliberations happen that way. And if you could possibly somehow also quantify, you know, to understand a little bit the thinking behind it, I would personally find that useful. Of course, moderators that do it at the time of speaking to the groups really know, but I wonder in future it’s how we could present it a little bit, you know, more grain. And that was my only comment with regards to the first set of slides, but let’s continue with this.

Roberto Zambrana:
Thank you.

Antoine Vergne:
Yes, thanks, Desiree, and thanks. I thought you were online because you are also online in the room, so that’s why I wanted to give you the floor online, but you have both. You manage Ubiquiti, so congratulations, Desiree, for that. No, yes, maybe let me finish that from, and then I’ll go back to the question you had, and I think it’s a very important one for the future because indeed there is a lot of feedback on that that we had. So on the European level, the question we had and the commission asked to the citizens what was, what visions, principles, and actions should guide the development of desirable and fair virtual works? And we had them work a couple of weekends and give some recommendation to the commission. And yes, and then at the end, the output of that process was a communication from the commission about what they called Web4. So Desiree, you wanted to talk about Web3. We are with the commission, we are already at Web4 about virtual works. And this is very long, so you don’t need to read it, but it’s part of the official communication of the communication. And what is interesting is that they specifically mentioned the citizens panel as being the inspiration for their legislation. So here in terms of impact of such a citizen dialogue and the continuation of it, we can see what it can become. If you look at the last paragraph, the European commission says the citizens panel specified a set of guiding principles for desirable virtual works. And then they just list the values that the citizens had developed during the citizens panel. What I want to say with that is in 2020, we had a more bottom-up open approach trying to have impact on policymaking. The example I just showed here in 2022, 2023 was more top-down approach from a policy making body asking citizens, ordinary citizens. So you have really to imagine that the people that came to Brussels to take part to that panel were not expert, not stakeholder of internet. They had no clue about what metaverse is. They didn’t know the world. They didn’t know about virtual worlds, but they took the time, they were guided and were able to give recommendations and give the guiding principles they saw as important for the development of such a work. And with that, I would like to close the presentation saying, okay, what now? Yes, maybe the next one. So now, when you have heard that, but we first can have a discussion or we can have an open discussion in the time we have, but our motivation also to have that internal meeting with all the partners is to imagine what could be the future of such a dialogue. What could be a new version of it? Because we are still convinced and I look at Rashid, Roberto and other partners, I think we are still convinced that we need that input from ordinary citizens for the global discussion on those topics. So what could be the topic? And here, I also would like to connect back to Desiree what you were commenting is because what you said, indeed, we asked in 2020, we asked them, what should be the level of governance? So that was a framing to understand if they saw more global governance or local governance. And now, if we want to be more granular, as you said, we also should be able to understand should the topic be the same at global level or adapted to the context? I think it’s exactly what you were starting to explore, Desiree, is if we were to talk about a global dialogue on AI and metaverses, how should the topic be done and should it be a common topic for everyone or more local topic? And that’s the discussion we wanted to have with you. But I give the floor to Roberto, to Rashid to first comment on the presentation and introduce the discussion.

Roberto Zambrana:
Excellent, Antoine. We will manage to receive participants’ participation now. If you agree, now we are asking you if you can have an answer for these questions about first the topic that you think will be relevant to discuss regarding AI, artificial intelligence, and then if it needs to have a context in the country that we develop the dialogue. So those are the two questions, please.

Raashi Saxena:
I can reflect on a bit on the dialogue that we had in India. We did have a very interesting discussion and although a lot of people were not particularly subject matter expertise, I liked how we were looking at a very diverse age group. So we had participation from a Buddhist monk to housewives among the 50s. We also picked up participants from places where there is a lot of turmoil and angst. And I mean, I also come from a country which historically has the largest number of internet shutdowns. So internet connectivity is sparse or there has been internet shutdown for political reasons or otherwise. So what came out from those conversations is that people do have a lot to say and if you provide them with information that is concrete, that has the right data, we need to give them agency to be able to make their own decisions. But in short, it was a very good educational exercise for people to understand and people are, no matter what age, people are always going to be keen to participate and say what they have to say. So from that point of view, I thoroughly enjoyed the discussion as it was more of a literacy exercise which is something we all need.

Roberto Zambrana:
Great, we do have some participation now from the audience, please.

Audience:
Thank you, good afternoon everyone. My name is Katarzyna Stetiva. I represent the Polish National Research Institute and in terms of global dialogue on AI and what should be the main topics, I would definitely recommend child protection, child online safety because it’s a global, it’s a global both phenomenon and problem that there are many harms to children caused in the online environment. And digital artifacts are also stored online of what has happened to children. So AI can serve both as a tool to help in terms of, for example, finding child abusive materials in the big tail of either photos or videos and it can do a lot of harm. If you imagine digitally created material such as child sexual abuse material including visual appearance of an existing child. So we have different sides of the problem but I believe that this is the topic that should be discussed both at the global and at the local level, thank you.

Roberto Zambrana:
Thank you very much.

Raashi Saxena:
That makes a lot of sense when it comes to talking about AI being used for, you said content related to harms because it’s easier given that content moderators and the way they’ve been treated when it comes to their living wages or when it comes to looking at heinous content. Kind of, I would say contracting that to AI that could be more precise could also help in sifting through large volumes of data given that so many people have come in online after COVID. So yes, definitely AI would be a good application until there, next question.

Audience:
Yes, please. Morten, UNU-IGAV. Just to follow on that note, I agree with that but it comes back to also some of the survey results earlier to classical theme. It depends on the type of AI we’re talking about. Is it AI like chatGBT that students are using or we’re using for research or is used by students to cheat the teacher and skip learning experiences? Is it fake imagery? Is it fake news? So it comes back to this classical skill of not just having access but also the critical skills and thinking about what you consume online particularly when we’re looking at deep fakes and the like. ChatGBT, there was an interesting study done I think it was Oxford or MIT on law students. And they actually found that poor performing law students, this is bachelor level law, using tools like chatGBT actually increased their performance. Whereas top performance students dropped in performance because they were leaving things too late. So they started not being as creatively thinking about things as before. And these are university graduates. So there are these differentiations I think we need to make. And it comes back to walking into AI but also the metaverse and virtual realities with open eyes and with critical minds because there are pros and cons to these technologies. ChatGBT scrapes the internet and makes a proposal based on what are the loudest voices there. And if those loud voices are fake news or false information, well, that’s the output. And if you don’t double check as a consumer of these things then we are in a dangerous situation. But again, on the backend used cases to identify fake news, racist and discriminatory use, et cetera, et cetera. Even copyright infringements, there’s a lot of abilities there but there’s still all the classical dialogues and who are in charge of the algorithms. We’ve seen there’s been a lot of bias in image recognition and so forth. So how is that gonna come into this debate? And again, I think that starting with an educated learning and ensuring that we all critically assess what we consume and check alternative sources is part of that solution, not just regulation or fear of using the technology.

Roberto Zambrana:
Correct, yes. Please, if anyone would like to share also in online participants, please, you can raise your hands so we can allow you the mic, please. We have several participants online so you’re invited to. But we have here another comment, please.

Audience:
Okay, thank you. I think what I need to say here is to appreciate the responses after the question that I’ve asked in terms of what is it that I would want AI to respond on. And it was dealt with in depth from the issue of demographics, from the issue of public participation because at times people, they don’t think when we speak AI, you need to bring everybody along. People would speak about specifics, specialists. And I’m happy that now it is clear that when you speak about anything that speaks about transformation, you need to have a public participation. But I’m happy as well that in some of our participant, it comes out loud to say there’s a need of a continuous civic education in terms of AI. And I think what we need to do is not to shy away from the fact that AI governance, it is important. So that you deal with the issue of command and control, that nobody can just spread anything that has to do with any news, so that you deal with misinformation and fake news. So it is important that AI is governed properly well. And as well, I think that the issue of uniformity, irrespective of where you are nationally, locally, it’s something else. But having a uniformed approach in terms of digitalization and AI, it is important, not forgetting the fact that we all have different languages, but the content must not change. Because one of the things that makes us not to be in par in terms of developmental issues or studies, it is on the basis that when you come from America and I come from South Africa, our standards are not the same. And the contents change based on the issue of the country. But if we can agree that when we speak on the issue of artificial intelligence, we must remain in terms of the same content, I think we’ll deal with many issues that might affect progress in terms of 4IR revolution. Thank you.

Raashi Saxena:
Thanks, I think Dusty also had a few comments.

Roberto Zambrana:
So that will be a hybrid approach, meaning that we need to agree on general terms, independently of the country, but to adapt somehow a particular topics inside. Okay, great. Anyone else in the room, please?

Raashi Saxena:
I think Dusty had a few comments.

Roberto Zambrana:
Or online?

Antoine Vergne:
I have no question, no feedback online. And so for the moment, no.

Roberto Zambrana:
Maybe we have another comment, please. Okay.

Audience:
So I wanted to follow up on what the previous participant commented on the number of solutions, perhaps it led to that as well. With one of the questions one should be asking is what kind of AI implementation? should it be? And as it’s been mentioned earlier on, at the moment there’s like a plethora of many models of AI being developed, not just the chatGBT, but also the training data sets are being developed. And there is, you know, different kind of open source AI models, as we have heard in the main session, the father of Internet, supporting the open source, open sourcing some of these models, AI models that are being developed. But in that light, I also wanted to say that it’s important when we present these choices to people who are not expert in the fields, to also always give them some kind of understanding in trade-offs. So for example, maybe one example would be, you know, there could be an offset if only too few companies end up being the owners of the best data sets and having the more powerful algorithms. And on the other hand, open source could make many more models, but it comes down to many other things, like the size of the data set, whether it’s biased, if you do a search for who are the CEOs of hospitals in the world, it’s always a man. Is it true? No, it’s the wrong data set. And it’s not easy to fix that code in a line to say there are CEOs of hospitals in the world that are not only men, just a plastic example. But with the thought of whether it should be proprietary or open source AI models or training data sets, which are now more and more available, I think it’s also important to think about the guardrails that are built into some of these big proprietary models. That means that should be not allowed hate speech or, you know, they have these kind of constraints that are built in that are good in this. So you can control, maybe easier, regulate fewer of these instances of AI, whether they check GPT or something else. And on the other hand, open source, we think we would not be bound to just use a couple of these models. So there are these certain trade offs that in open source, it’s not open source unless you can modify. So you can, for example, modify that you allow hate speech. And then we ask ourselves, is this really artificial intelligence that is simulating human intelligence? If it’s intelligent, it should not be really suggesting propagation of hate speech and so on. And then there is a set of copyright issues as well. So there are all these, you know, questions that we could work on, because there is a rapidly growing set of developers that are making sustainable AI models and different kind of GPTs.

Roberto Zambrana:
Thank you. Thank you very much. One last round, if anyone, please. Yes, there is one comment. Thank you for a very inspiring presentation and comments. I’m

Audience:
Emi from Japan. I’m working for a private company right now, but involving educational arena as well, to provide reliable information from global governments, various governments. I really understood the participatory process is crucial in this topic. And now I have one new question that the developers should be involved in such participatory process. In order to, I feel like educating citizens as well as educating developers, maybe educating, the wording is not really what I mean, but to understand the concerns and also to understand the frontline problem is beneficial for both of us. So I feel, you know, understanding and educating and learning from both of the field is very important. And if I, so far, I don’t know such cases. So, yeah, I have my desire to know more about that. Thank you very much.

Roberto Zambrana:
Thank you very much. And indeed, a very important part of the dialogues will include the, I mean, not only the developers, but all the technical community that will be related to AI. It’s very important that. So thank you for that comment as well. I also wanted to make, in terms of people talk about… Yes. Sure. It’s from, yeah, it’s from Philippa Smith, right? Yeah. I’m just wondering whether a worldwide question might tackle digital divides and how this might impact on the understanding and use of AI from a global perspective. How can a global dialogue assist countries to support each other to overcome barriers so that there might be a balance in understanding and use of AI and metaverses? That’s a question that Philippa asks. I mean, AI is a tool to help with the dialogue, which has low connectivity. Is that the question? How can this dialogue can support to overcome barriers so that there might be a balance in understanding and use of AI and metaverses? So how the dialogue contributes to this? Do you want to take that one, Antoine?

Antoine Vergne:
Yes, thanks. But before that, I wanted to make a comment on Amy’s question about developers. And I think it’s really, really important indeed. And that’s something we need to extend the scope of that kind of dialogue, because they are part of the solution and the challenge. And maybe one example of something we managed to do in 2015, we had such a dialogue at global level on the climate agreement in Paris. So the same principle all over the world, groups of citizens and the question about the Paris agreement, information materials on that and discussed and gave their opinion. And in parallel to that, we did one process with employees from Engie. And you may know Engie is one of the big, big energy company at global level. And we had a process for their employees. So we had thousands and thousands of employees from Engie taking part in the same exact same dialogue as the citizens. And so the interesting part was they were as citizens, but also employees and stakeholders of an energy company. So it was very interesting to see that they had that double hat. I work for an energy company, but I am a citizen. And it was a very interesting thing. So thank you for reminding that indeed, having developers, having people that do also the technology is very important to address them not only as in their job, but also as citizens in such a process. So I think it’s yeah, thank you for that comment. And then we have a clarification from Philippa, I see about the. OK, and I don’t know if you see it, Roberto, but she says how countries that are more capable can assist others that might have issues through a dialogue. And so how can the dialogue assist the learning and mutual learning of different countries with different capacity?

Raashi Saxena:
We did that in India. In some places, we did two dialogues in places that are more remote where I wouldn’t say that the Internet penetration is low, but generally talking about digital literacy or these or these topics are usually not approached. And we those were the dialogues happened during the peak pandemic period. So we did have two in-person dialogues in small village settings where we trained a lot of journalists to be able to lies with and get outputs. And we realized that the format that we had might not have been the best. We might need a better way of contextualizing that information. We did localize it in terms of translating it into the local languages. India has a lot of languages, but we feel like maybe a more storytelling format with a few UI and UX experts testing out different ways to be able to evoke responses because that’s not something that they’re used to. People are not used to talking that long and pondering about those topics. So maybe there needs more time. But yes, there was something that was tried. And I’m sure there are other examples across the world that would have also worked. But also coming back to the question of developers, yes, developers need to be actually central to conversations like this to have, I would say, a more conscious and moralistic bent to this. At the end of the day, they’re humans. And also to also for all of us to reflect and see that all of these issues that we talk about, hate speech or misinformation, they’re not new phenomena that have been there because of the advent of technology. They’ve always been there. They’ve always had different modes. Sometimes you’ve had more expensive infrastructure to be able to enable these phenomena. But now you use technology which has made it more economical and cheaper and easier to, you know, spread propaganda. But yes, developers should be at the center of the conversation. Thank you for

Roberto Zambrana:
highlighting that. Yes, maybe a follow-up about a question from Felipe after she clarified about this. Yeah, we agree. She wasn’t taken as a tool. And this, we need to remember that this process is initially or mainly locally. So it’s between the citizens of each of our countries. But I will say, I don’t know what you think, Antoine, but maybe after we, as part of the results of the dialogues, we gather all the results, all the conclusions in each of the dialogues, we can have a sort of a round between coordinators of each of the countries to comment and maybe to identify which are the common topics as priorities that could be presented in the different instances that we need to share the reports, something like that. In that way, I think we can accomplish what it was suggesting and to actually have as a return this support coming from the countries that have more experience maybe in particular topics, assisting some others that don’t. So maybe that could be a good idea. What do you think, Antoine?

Antoine Vergne:
Yeah, that would be fantastic. And if I dream a bit, the next, of course, next piece would be to invite ambassadors from each country, from the participating citizens together at a global level to also reflect on their own results. And I think that needs a stronger infrastructure for the dialogue, but I think that would be fantastic to have that step and to be able to aggregate at different levels those results, because I think that’s one of the key that it’s very qualitative data. So it has the advantage of being that you can search a lot into it and understand why people say what they say. But at the same time, it’s the challenge, because indeed you have to analyze it. And maybe it’s where artificial intelligence can help indeed make sense of the data the citizens produce through such a dialogue, because until now, the analysis was human-made. And so maybe there is a full circle here to have AI help us understand what people say about AI, and that would be a nice way to have a circle and a connection between AI and citizens’ dialogues. It’s too far to the hour, so I think we can conclude if I get it right.

Raashi Saxena:
We have one question from the audience here.

Antoine Vergne:
Yes, okay. But I don’t know the timing, so I let you in the room get the last round of questions.

Roberto Zambrana:
The good thing is that we have the lunch after this, so we can have a license to extend a little

Audience:
bit. Please, Mark. Thank you, Roberto. Thank you, Antoine. Mark Carvell, Internet Governance Consultant. I was previously with the UK government. It’s not a question, really, but it’s just a point of information. And it may have cropped up earlier, because I arrived late for this session from the main plenary session on the Global Digital Compact. But I know from my association with Project Liberty, McCourt Institute, that they are participating in a focus group on metaverses at the ITU, and there are a number of working groups at the ITU on metaverses. And I think that is potentially a channel for contributing to the citizens’ aspects of this evolution of the convergence of immersive technologies with Internet technologies that’s going to be so transformative into those discussions. I think from what I understand, that they are valuable, quite wide-ranging, and Project Liberty’s particular interest is on decentralizing these technology platforms, on ensuring that they are properly respectful of ethics and rights and so on. So, I offer that as a piece of information. Yes. Conclusion here. I hope that’s helpful. Thank you.

Roberto Zambrana:
It really helps. Actually, we were talking about AI during the whole session, but of course, it wasn’t just that, but also some other emerging technologies like the metaverse. So, thank you very much, Mark, for that. I think we’re getting to the final moment of the session. And if we don’t have any other comment, maybe we can wrap up.

Raashi Saxena:
Yes, we can. Thank you. But I do believe we have one person. Maybe I’ll just take a room around and see if there’s anyone who has any last comments. Anyone at all? Anyone else wants to go? Can you give her the mic, please? Thank you.

Audience:
Thank you for giving me this opportunity to provide you with a final comment. A lot has been said on inclusion of developers and society and so on, but I do believe that it’s a strongly interdisciplinary issue. So, there must be a place for every single specialist who has something to say. And by this, I’m relating to the metaverse and instances because I originate from child protection environment. So, we have to benefit from what we know from the past and the research, and we have to check what is going on now. So, we need to make a bridge between the past, the present, and we need to listen to experts and specialists, such as, again, developers, sexologists, practitioners, policy makers. So, it’s a common responsibility, I would say. And by not including an expert of a particular field, we may simply overlook an important contribution. So, there is really, I think this room is a good example that there are many people from different environments, different angles, and we learn from each other. And this is the

Roberto Zambrana:
only way to proceed. Thank you. Correct. Correct. Yes. And of course, the learners, the teachers, and also not necessarily experts in a field, but also users of the technology.

Raashi Saxena:
Okay. There was one last comment that I also wanted to mention that we talk about children. There are other vulnerable groups, like people with disabilities, who should also be taken into account. And also, of course, different languages. And then, yes, we can go on and on. We should come to a halt. We don’t want to take away anyone’s lunchtime, but thank you so much for joining us. And yes, Roberto and I are going to be around at the IGF. I’m happy to take more questions, happy to have more discussions. And yeah, with this, we come

Roberto Zambrana:
to a close. Thank you. Yes. If you want, I don’t know, maybe Antoine would like to say goodbye as

Antoine Vergne:
well. Maybe just one. So, really, thank you for being there. But one thing is our intention is to not stop involving citizens into those discussions. So, if you’re interested in joining us in that effort of thinking about it and making it happen, we are open. We would love to discuss with you on how to do that together. I think. Excellent. Thank you. Thank you very much. Thank you. Thank you. Thank you. Thank you.

Antoine Vergne

Speech speed

171 words per minute

Speech length

4029 words

Speech time

1411 secs

Audience

Speech speed

145 words per minute

Speech length

3043 words

Speech time

1257 secs

Raashi Saxena

Speech speed

169 words per minute

Speech length

1236 words

Speech time

438 secs

Roberto Zambrana

Speech speed

156 words per minute

Speech length

897 words

Speech time

345 secs