Scramble for Internet: you snooze, you lose | IGF 2023 WS #496

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator 2

The discussions held at the Internet Governance Forum shed light on the ongoing struggle of Global South countries to secure internet access and have it treated as a basic human right. They reveal a divide in approaches between the Global North and the Global South: while Global North countries approach internet access from a different starting point, Global South representatives focus on the fundamentals of keeping the internet functioning and on access as a fundamental human right. This difference in perspective highlights the disparities and challenges countries face in ensuring equal access to the internet.

Furthermore, following its withdrawal from the G8 in 2014, Russia has shifted towards aligning more with the Global South. Although specific reasons for this shift are not mentioned, this change in alignment could potentially impact Russia’s stance on global issues and its interactions with other countries in the future.

The discussions at the Internet Governance Forum offer a vital platform to address the crucial issues related to internet access and governance. By acknowledging and understanding the differing perspectives and challenges faced by countries in the Global South, there is an opportunity to bridge the digital divide and promote equal and inclusive access to the internet for individuals worldwide.

Moderator 1

The perspective of the global South is essential in discussions about fragmentation, particularly regarding technology and infrastructure issues. These countries often face challenges due to vulnerable infrastructure and poor internet governance, which can lead to frequent internet shutdowns. Such disruptions can have significant impacts on the economies, education systems, and overall development of these nations.

International cooperation is emphasised as a key approach to address these challenges. By promoting partnerships and collaborations, it becomes possible to ensure that all countries and regions have equal access to technological equipment and innovation. This is particularly important in bridging the existing digital divide between the global North and South.

Representatives from the global South tend to highlight the fundamental significance of the internet in discussions about fragmentation. They argue that access to the internet should be considered a basic human right, as it facilitates communication, access to information, and opportunities for socioeconomic development. Their perspective is influenced by the ongoing struggle to guarantee internet access for their populations, which is often hindered by various factors such as limited infrastructure, socioeconomic disparities, and inadequate internet governance frameworks.

It is interesting to note the stance of the Russian Federation in these discussions. Despite being geographically considered part of the global North, Russia has shown alignment with the perspectives of the global South. This shift in alignment became more noticeable after the country’s withdrawal from the G8 in 2014. It indicates that Russia is placing greater importance on addressing the challenges faced by the global South, particularly concerning fragmentation and internet governance issues.

In conclusion, the global South perspective holds significant weight in discussions about fragmentation, as these countries grapple with issues of infrastructure vulnerability and internet governance. International cooperation is crucial to ensure equitable access to technology and bridge the digital divide. The global South emphasises the essential nature of the internet as a basic human right, while the Russian Federation’s alignment with the global South highlights their shared concerns regarding fragmentation and the need for inclusive internet governance.

Roberto Zambrana

The internet was initially designed to connect the scientific and academic community, but it quickly expanded as people recognized the benefits and wanted to join for services like email and access to information. This early growth and widespread adoption of the internet marked a positive development.

However, as the internet continued to expand, issues started to emerge. One major concern was the security of the internet. With more users and an increase in the exchange of information online, there was a greater risk of cyber attacks and breaches. Governments also took actions that could be seen as leading to the fragmentation of the internet, potentially dividing it into smaller, controlled networks. These negative aspects raised concerns about the future of the internet.

Furthermore, the technical dimensions of the internet itself presented challenges. New protocols that altered the original architecture had the potential to lead to fragmentation. The introduction of the Hypertext Transfer Protocol (HTTP) was a significant advancement that facilitated the growth of the internet. However, changes like these could also contribute to fragmentation if not carefully managed.

Another factor that could contribute to fragmentation is the lack of actions to provide internet services to everyone. In many parts of the world, particularly in the Global South, over half of the population remains unconnected to the internet. This lack of accessibility and the failure of stakeholders to take action to address it hinder the expansion and unification of the internet.

Despite these challenges, there is recognition that maintaining respect for internet sovereignty is crucial. The internet should be treated as an entity deserving of respect, and there should be active exchange and adherence to the principles on which it was originally designed. This positive stance suggests that upholding internet sovereignty is necessary to preserve the integrity and functionality of the internet.

In conclusion, the internet’s original purpose was to connect the scientific and academic community, but it quickly evolved as people sought to benefit from its services. However, challenges such as security issues, potential fragmentation caused by technical changes and government actions, a lack of actions to provide internet services to all, and the need to maintain respect for internet sovereignty have emerged. These issues represent significant hurdles that need to be navigated to ensure the continued growth, accessibility, and integrity of the internet.

Dr Milos Jovanovic

Internet fragmentation is a complex issue that takes on three forms: Technical, Governmental, and Commercial. Technical fragmentation concerns issues with the underlying infrastructure of the internet, such as inconsistent network protocols and incompatible standards. Governmental fragmentation involves internet access and information flow being restricted by governments through censorship and content filtering. Commercial fragmentation involves business practices that prevent certain users from creating and spreading information, such as targeted advertising algorithms.

To maintain a sovereign internet, it is important to focus on critical infrastructure, ensuring the stability, security, and resiliency of the internet’s underlying infrastructure. This includes protecting information channels through encryption techniques.

However, geopolitical issues and interests hinder the development of a minimum common framework to manage internet fragmentation. Different regions hold different perspectives and approaches to internet governance, leading to fragmented development and lack of consensus.

Emerging technologies like Artificial Intelligence (AI), Blockchain, automation, and 5G/6G networks also impact internet fragmentation. AI presents challenges in defining its boundaries and ethical use. The implementation of these technologies can either exacerbate or alleviate fragmentation, depending on how they are developed and deployed.

Internet fragmentation is expected to continue and deepen due to a multipolar world and shifting power dynamics. Challenges exist in parts of the world, such as Africa, that are less connected. Bridging the digital divide and ensuring equitable access can help mitigate the negative effects of fragmentation and reduce inequalities.

In conclusion, addressing technical, governmental, and commercial aspects of internet fragmentation, ensuring critical infrastructure, considering the impact of emerging technologies, and promoting global cooperation are necessary to manage and reduce the negative impacts of fragmentation.

Olga Makarova

The analysis delves into two main topics: technological revolutions and internet fragmentation. It asserts that these revolutions follow a cyclical pattern that can be predicted. The cycle begins with an eruption and frenzy, characterized by rapid growth and excitement surrounding a new technological advancement. This is followed by a crash, where the initial enthusiasm subsides, leading to a decline in the market. Regulatory intervention then comes into play, as authorities step in to establish rules and guidelines to govern the technology. Finally, the revolution reaches its ultimate maturity, where the technology becomes an integral part of society. Currently, the analysis posits that we are in the midst of the fifth technological revolution, referred to as the information and telecommunication age.

Moving on to internet fragmentation, the analysis suggests that this phenomenon can occur due to a combination of technological, political, and economic factors. The internet is described as a collection of interconnected but autonomous systems. Fragmentation, as the analysis points out, lacks a clear-cut definition, making it a concept that is difficult to pin down. It argues that fragmentation may manifest in various forms, leading to potential consequences for connectivity and access.

Furthermore, the analysis proposes the idea of employing mathematical models to gain an understanding of and predict internet fragmentation. It highlights an older model from 1997 that quantifies fragmentation in terms of distribution, intentionality, impact, and nature. The analysis expresses optimism about the potential usefulness of mathematical models in comprehending the complexities of internet fragmentation.
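To make the four dimensions concrete, the following Python sketch scores fragmentation "developments" along distribution, intentionality, impact, and nature. It is purely illustrative: the weights, the 0-to-1 scales, and the example cases are assumptions for this sketch, not part of the 1997 model or of anything presented in the session.

```python
# Hypothetical scoring of fragmentation developments along the four dimensions
# named in the session: distribution, intentionality, impact, nature.
# Weights, scales, and example cases are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Development:
    name: str
    distribution: float    # 0 = isolated, 1 = internet-wide
    intentionality: float  # 0 = accidental, 1 = deliberate
    impact: float          # 0 = negligible, 1 = severe
    nature: float          # 0 = technical, 1 = political

    def score(self, weights=(0.3, 0.2, 0.4, 0.1)) -> float:
        """Weighted sum over the four dimensions; higher = more fragmenting."""
        dims = (self.distribution, self.intentionality, self.impact, self.nature)
        return sum(w * d for w, d in zip(weights, dims))


cases = [
    Development("new transport protocol (QUIC-like)", 0.8, 0.9, 0.1, 0.0),
    Development("national content blocking",          0.2, 1.0, 0.6, 1.0),
    Development("IP address confiscation",            1.0, 1.0, 1.0, 1.0),
]

# Rank the cases from most to least fragmenting under these assumed weights.
for c in sorted(cases, key=lambda c: c.score(), reverse=True):
    print(f"{c.score():.2f}  {c.name}")
```

Under these assumed weights, internet-wide deliberate measures such as address confiscation score highest, which matches the transcript's point that such a step would fragment the entire internet rather than a single jurisdiction.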

In conclusion, the analysis provides valuable insights into the predictable cycle of technological revolutions, specifically focusing on the current information and telecommunication age. It also explores the potential for internet fragmentation, noting its potential consequences on connectivity and access. Additionally, the proposal to employ mathematical models as a tool for understanding and predicting internet fragmentation adds another layer of interest to the analysis. Overall, it offers a comprehensive overview of these topics, shedding light on past trends and potential future developments.

Otieno Barrack

The analysis explores the topic of internet governance, with a particular focus on its relevance in the Global South. It highlights the fact that many nations in the Global South are utilising systems and solutions that were largely designed in the Global North. This reliance on infrastructure not specifically tailored to their needs has resulted in a number of issues, such as internet shutdowns due to weak infrastructure.

The rise of internet shutdowns in the Global South is a growing concern, as they have a significant impact on local internet economies. This emphasises the need for internet governance to be applicable at a local level, despite its global public good nature. Design principles specific to internet infrastructure in the Global South need to be considered to ensure effectiveness and reliability.

Investment in the correct technological competence is also crucial. The private sector must invest in the appropriate technological capabilities to prevent infrastructure compromise. Poorly executed investments in technological competence can result in significant problems and hinder the development and stability of internet systems.

Additionally, the government plays a key role in creating a level playing field for all actors in internet governance. Their involvement ensures that the interests and needs of various stakeholders are taken into account. By fostering a fair and inclusive environment, the government can help promote the stability and growth of internet systems.

The analysis also highlights the negative effects of internet shutdowns on both local and global internet economies. Studies have shown that these shutdowns incur significant costs that extend beyond the immediate disruption of internet access. This further underscores the importance of addressing internet governance issues and safeguarding the stability and accessibility of internet systems.

In conclusion, the analysis emphasises the importance of relevant and applicable internet governance at a local level in the Global South. It stresses the need to consider region-specific design principles, as well as the significance of private sector investment in the appropriate technological competence. The role of the government in creating a fair and inclusive environment for all actors in internet governance is also highlighted. Lastly, the detrimental impact of internet shutdowns on local and global internet economies serves as a compelling argument for addressing these issues and ensuring the stability and accessibility of internet systems.

Session transcript

Olga Makarova:
development. She studies the relationship between technological development and financial bubbles. In 2020, Forbes named Carlota Perez one of five women economists worthy of our attention. She came to the conclusion that every technological revolution follows the same cycle. It all starts with an eruption, followed by frenzy, lots of ideas, lots of money. Then a crash and a turning point. At this stage, governments step in to regulate. And then come synergy and maturity. According to Carlota Perez, we are still living in the era of the fifth technological revolution, the age of information and telecommunications. And we have not reached the turning point yet. So the questions now are, what could be the turning point? What will happen after? What should be the institutional recomposition? What might synergy and maturity mean for the Internet? Can we accelerate this process, and how? Each technological revolution causes many changes in society. This one gave rise to Web 2.0 and digital empires. However, the nation-state system has not passed away since the advent of the Internet. While our virtual lives are in full swing in the digital empires’ vastness, our real life still takes place within sovereign state borders. So, the questions are, could the growing confrontation between sovereign states and digital empires be responsible for the start of the turning point? Do we need a mature Internet? And what should it look like? We have not got proper answers to all these questions, but we are confident that we don’t want many fragmented split Internets to overrun the mature Internet. Internet fragmentation has a myriad of verbal definitions, sometimes emotional, sometimes sophisticated, but never precise. Some forms of fragmentation can be useful for the entire Internet. Google’s QUIC is a case in point. But no definition can answer one important question. The question is, where is the red line that marks the boundary between a fragmented and an unfragmented Internet?
The problem is complicated by the fact that fragmentation concepts treat the Internet as an unfragmented, pre-existing whole. But that’s not true. The Internet is fundamentally a fragmented set of autonomous systems, and what we have just discussed illustrates the uneven fragmentation of Internet activities. The following question arises. Everyone perceives fragmentation issues in their own way, so we constantly bump into various forms of Miles’s law, which states that where you stand depends on where you sit. An unambiguous mathematical model could help, but it hasn’t been created yet. So the question is, what needs to be done to reach consensus? It seems that in trying to find consensus, we need to find the foundation of the model. The foundation of the model could be the Internet invariants defined by ISOC in 2012. It is a great foundation. Any breach of any invariant could be considered a form of fragmentation. But we also have bad news. Without a universally understood model, the invariants can be breached in various ways, affecting the entire internet. For example, any attempt to confiscate all IP addresses of one or more states would have dramatic consequences for the internet. We would encounter an example of deep structural fragmentation. We would get a chance to see real split internets without trust, unique identifiers, globality, and much more. A similar case almost happened in March 2022, when some officials sent a demand to deprive Russia of all allocated IP addresses. But the technical community made the only correct decision not to do so. And this saved not only Russian users, but also the entire internet.
When someone tries to punish someone by stripping them of the internet’s core values, they are punishing the entire internet by stripping it of its core values. But how many people think it’s obvious? So while this case shows that only the existing internet governance ecosystem can protect Russian internet users, I’m afraid we will have to prove it. And probably the only way to do it is to create a mathematical model of risk assessment. The entire internet is similarly impacted by sanctions that limit the ability of market participants to make payments for the facilities and services necessary to provide global internet access. The question is how to prove it. Filtering and blocking undesired content and platforms is a political development. All states, without exception, do it. Each sovereign state has its own undesired content blocking policy. The concept of undesired content is interpreted by each sovereign state in its own way. Some sovereign states may apply similar blocking policies for a certain period of time. If you want to see for yourself, check out Blocking Websites as Proxies for Policy Alignment by Nick Merrill and Steve Weber of the Center for Long-Term Cybersecurity at the University of California, Berkeley. In March 2022, access to some global platforms was blocked in Russia; some Internet traffic disappeared, and we were sure we would never see it again. Fortunately, we were mistaken. Customers were looking for an alternative. Finally, Russian customers changed their preferences and started using other platforms. The graphic shows this relocation of Russian customers, with their content, from one platform to another. What do Metcalfe’s Law and Dunbar’s Number have to do with this case? Metcalfe’s Law states that a network’s value is proportional to the square of the number of connected users.
Metcalfe’s law is constrained by practical limitations, such as infrastructure, access to technology, and bounded rationality, which can be defined by Dunbar’s number. Dunbar’s number is a suggested cognitive limit on the number of people with whom a person can maintain stable social relationships. This example allows us to suggest that before March 2022, there were several Russian clusters on these platforms, connected to each other and to other clusters according to Metcalfe’s law and Dunbar’s number. Links to the other clusters forced Russian users to look for global alternatives. However, the limited number of such links prevented fragmentation of the user experience. The blocking did not have a significant impact on the content and user experience. The blocking had a significant impact on some platforms in some regions. The good news is that some bans may affect individual platforms, but not content. The bad news is that we cannot predict how many resources need to be blocked to reach the border of the unfragmented Internet and cross the red line. Today, we can only analyze post factum, but we need an accurate prediction. Experts suggest four ways to avoid Internet fragmentation. The questions are, which way is right? Why that one, and how to reach consensus? Looks like we can’t do without dull figures. We have a set of technical, political and commercial developments that may have an impact on fragmentation. Each development can be quantified in terms of its distribution, intentionality, impact and nature. Each case can be viewed as a function of these variables. The function value can be used to quantify one or more key dimensions. The question is, why not try to define a formula for fragmentation? We seem to be in dire need of scientists and science centers. Has anyone ever tried to define a formula for fragmentation? The good news is that the answer is yes. A part of this model is in front of you. The bad news is that the model was created in 1997.
The Internet has changed enormously since then. So, we don’t know if we can use this model or not. We need to check. But verbal descriptions are not always convincing. Sometimes they are complex and full of emotion. Dry figures have a more powerful impact. Let’s put aside irrationality. Let’s get scientists involved. Let’s start trying. Thank you. And here are some important references that I used to prepare my presentation. Thank you.
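The interplay of Metcalfe's law and Dunbar's number described in this talk can be made concrete with a toy calculation. The sketch below is illustrative only: the user counts, the two-way split, and the cap of 150 links per user are assumptions, not data from the session. It shows why splitting a user base devastates value under pure Metcalfe's law, yet barely matters once each user's links are bounded, which is the speaker's point about blocked platforms and preserved user experience.

```python
# Toy comparison of network value under Metcalfe's law with and without a
# Dunbar-style cap on links per user. All numbers are illustrative assumptions.

DUNBAR = 150  # suggested cognitive limit on stable social relationships

def metcalfe_value(n: int) -> int:
    """Classic Metcalfe: value grows with the number of possible links, ~n^2."""
    return n * (n - 1)

def capped_value(n: int) -> int:
    """Each user maintains at most DUNBAR links, so value grows ~linearly in n."""
    return n * min(n - 1, DUNBAR)

whole = 1_000_000            # one global platform
split = [600_000, 400_000]   # the same users split across two platforms

loss_metcalfe = 1 - sum(metcalfe_value(n) for n in split) / metcalfe_value(whole)
loss_capped = 1 - sum(capped_value(n) for n in split) / capped_value(whole)

print(f"value lost under pure Metcalfe: {loss_metcalfe:.0%}")  # ~48%
print(f"value lost with Dunbar cap:     {loss_capped:.0%}")    # 0%
```

With bounded ties, each large cluster already saturates its users' link budgets, so relocating clusters to another platform loses little value; only cross-cluster links are at risk, as the transcript argues.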

Moderator 1:
Thank you so much, Olga. That was a very, very interesting and comprehensive analysis. Thank you so much. I hope it will serve as a good basis not only for this discussion, but for many other deliberations on the topic. Because what we usually value in the Russian position is the comprehensive and inclusive approach with all the points, and we just witnessed a very, very profound approach to the topic. Thank you so much. We have Barrack Otieno online. Having covered the region of Eastern Europe, to which Russia belongs according to the UN classification even though it is one-eighth of the world’s surface, we can move to Africa. And Barrack, if you’re with us, can you please share the approaches of the technical community from your region?

Otieno Barrack:
Good morning. Good afternoon. Good evening, everyone. I hope you can hear me, Mr. Moderator.

Moderator 1:
Yes, yes.

Otieno Barrack:
Thank you very much. It’s 2am in Nairobi, but the beauty of the internet is that we have a common platform to be able to share in this discourse, especially on matters of global importance. I think, looking at the subject and taking up from where the previous speaker left off, I would like to look at this from the perspective of the global South, especially in terms of the issues that we are dealing with insofar as internet development is concerned. My background is largely in internet infrastructure and internet policy development, both at regional and at policy level, and I’m a believer in the mantra of the Internet Governance Forum of thinking locally and acting globally. I think internet governance is more important if it is relevant at local level, despite the fact that it’s a global public good. And what I would like to stress insofar as internet fragmentation is concerned is that, especially for global South nations or developing nations, it’s important that we take into consideration internet design principles. The Internet Society has continuously emphasized the right internet design principles. Most regions of the world, the global South not least, are using a system or a solution that was largely designed in the global North. And I think when I say design, context is very important. Take the designing of buildings: you may find, for example, in parts of the global South, some designs which take into consideration environmental factors, such that people don’t live in permanent houses, for lack of a better word. You find nomadic communities that build temporary structures that consider the harsh or hot weather in those particular areas. If I just juxtapose this, or compare this to the internet: what should be the recommended design principles that each of the regions should consider? I’m saying this because design is key, because it inevitably affects the structure of the internet and can easily result in fragmentation.
We have seen the rise of internet shutdowns, especially in global South nations, where probably the design is not robust and there are single points of failure, or single points where infrastructure can very easily be controlled or taken advantage of. Again, when we are looking at countries that have been affected by internet shutdowns, I think the Internet Society and other organizations have actually done extensive studies on the cost of internet shutdowns to the global internet economy. We also see a scenario in which areas where we witness a lot of internet shutdowns do not have established internet governance mechanisms. When I say internet governance mechanisms, I’m looking at national fora or opportunities such as this that bring together stakeholders to discuss, on an equal footing, matters that affect internet governance in those particular jurisdictions. Adding to this, it’s also important for all stakeholders to pay careful attention to their roles and responsibilities, because this also inevitably affects the issue of internet fragmentation. Let me just look at the stakeholders in a local internet governance ecosystem. I’ll start with the private sector. When the private sector does not invest in the right technological competence, you find that we have half-baked engineers who then build infrastructure that can easily be captured, for lack of a better word, or that can easily be compromised. When I say compromised, it can be compromised either locally, or by state actors, or by non-state actors. We have seen situations in which cyber criminals take charge of internet infrastructure in ways that affect various publics, or various private sector interests. I would also like to consider the role of government. Government is key because it creates a level playing field for all actors.
So you find that when governments don’t pay attention to internet governance conversations, there’s a likelihood that they will get a wrong impression, or a feeling that they are under threat, and they are likely to respond wrongly whenever they feel under threat by creating scenarios that result in internet shutdowns. As I have mentioned earlier, internet shutdowns have a profound effect on local internet economies, let alone the global internet economy. Let’s bring into perspective the role of academia. Academia shapes the skills of the engineers who build the local Internet and those who build the global Internet. So if academia is not paying attention to Internet architecture and to best practices, there’s a likelihood that we will end up with wrong architecture that can very easily result in fragmentation. And last but not least, I will talk of two more important actors: the media and the technical community. The media is an important watchman, and the media should continuously point out whenever any of the stakeholders is not in step with what they’re supposed to do, or whenever any of the stakeholders is misusing the privileged opportunity that they have insofar as Internet governance is concerned. So these would be my initial comments with respect to the subject of Internet fragmentation. And I must say that, especially for global South countries, there’s a scramble to implement various technologies, whether satellite-related or fiber optic cable, which, if we don’t pay attention to important Internet architecture development principles, is likely to result in a lot of Internet fragmentation. So I’ll stop at that and return the floor back to you, Mr. Moderator. Thank you.

Moderator 1:
Thank you so much for your very comprehensive and interesting speech. And I believe that the global South perspective is key when we are speaking about fragmentation, exactly to avoid situations where fragmentation may be a result of the lack of technologies and critical infrastructure in countries of the Global South. And that’s why we need international cooperation, to ensure that all countries in all regions have the same level of technological equipment. And I believe that we will continue with the Global South perspective now. And Roberto, please, we are now moving to the LAC region. Can you share the perspective of the technical community and civil society of the LAC region and tell us your insights? Thank you.

Roberto Zambrana:
Thank you very much, Roman. And I also want to say hello to everyone on this panel and attending the session, and in the distance to Barrack, a very close friend as well. Well, I would like to review a little bit of the history of the internet that many of us will know. If we remember, back at the end of the 60s, the first and most important motivation was to bring everyone together. At that moment, what I mean by everyone was the scientific and academic community. So nobody was thinking about security, nobody was thinking about sovereignty. No. The idea was to actually get everyone connected to this network that was starting to grow. It reached some other places in Europe and Asia and, well, as we all know, these big networks that started to be called the internet then tried to connect everyone. And then in the 90s, and that’s another fact that we have to remember, the private companies, of course, started to offer these kinds of services, not only for the scientific community, but for the citizens as a whole in all our nations. Once again, it was important to get everyone connected. I will say that the people wanted to be connected. The people wanted to have these services that we had, like email, access to information, et cetera. So suddenly, many, many people started to join this network. We are talking about not tens of thousands, but maybe hundreds of thousands and millions. And something that increased this growth was the invention of HTTP, the protocol that allows us to navigate the internet. But then, of course, some other issues started to appear as well. Security issues, people who were taking advantage of this kind of infrastructure to do some bad things. And I think that’s where society initially, of course, started to worry about these issues, and then, of course, the governments. And they deployed some sorts of actions that perhaps could be understood as various ways of fragmentation. We all know that now.
But I will say that those are not the only actions that we could worry about. I will say that in the technical dimension of the internet, claiming that we can have a better internet, maybe a more secure internet, and that by adding some other features to current protocols we could actually have better connections, more secure connections, more efficient connections, we can see other technical dimensions that could also be a threat to the way the internet was supposed to be from the beginning. As Barrack was saying, the architecture, with the principles of the Internet as we know it and as we want it to be in the future, could be threatened by this kind of new initiative. And one reason for that is that, if we remember as well, back in those years at the beginning of the Internet, one important entity coming from this technical community was the IETF, which is currently the organization that works with this large technical community and has allowed us, of course, to come up with very clever, very interesting, and very evolved protocols during all these years. And I could say, from the information that I got even recently, that there are some interesting proposals that we can find now, talking about these new protocols; of course, they didn’t come up in a community or from a community in this way. Those new protocols might be interesting, might be good, but, again, it’s difficult to think about the results of these initiatives if we don’t see a community behind them, a big community that can make, in this case, technical decisions coming from the bottom up. So that’s another thing that we need to reflect on. And finally, another form of fragmentation that I think particularly affects us in the global south is related to business models for providing Internet services, of course.
And in this case, I would not say it is an action that any of the multistakeholders is taking that might be another way of fragmentation. In this case, I think it is a lack of actions, actually. A lack of actions, independently of whether this is coming from the government or from the private sector, or even from civil society when they have to demand this kind of services. The problem is that this lack of action means that many people, which in the Global South I would say is more than half of the population, are still not connected. That is another big problem. And of course, at least for me, that is another important way of fragmentation, if we are trying to analyze all these different ways. So finally, we were listening to the other approach. And I understand very well about sovereignty. I understand the position of a government exercising the rights of its mandate. I mean, I am talking about the different governments in different places, particularly the ones that, in trying to face some particular problems regarding security or some other motivations, finally decide to pass laws that could be understood as another way of fragmentation. And that is something that I started to reflect on during the last year. Consider the internet as an entity, just as some years ago we started to consider the world, or Mother Earth, better said, as an active entity, as an entity that we need to respect, as an entity that we need to exchange with. If we analyze the internet as another complex entity in which we actually spend part of our lives, then we also need to respect some sort of rights, and I will relate those rights to the principles the internet was designed with from the beginning. We all need to keep them in the future, and we also need to talk about internet sovereignty as well, and I think with that concept,

Moderator 1:
it’s time to go back to Europe, and to Dr Jovanovic, who was also a speaker in the previous edition of this session. Can you please tell us something new, maybe what you didn’t mention last time, and maybe reflect on those interventions which we expected from Europe today; hopefully that will help you in thinking about internet fragmentation. Thank you.

Dr Milos Jovanovic:
Thank you, it’s my pleasure to be here in Kyoto to discuss this topic. When we speak about internet fragmentation, the whole idea is that it is a very complex issue, and we can distinguish three types of fragmentation: technical fragmentation, governmental fragmentation, and commercial fragmentation. Speaking from a geopolitical perspective, because I cannot separate all that is happening from the geopolitical perspective when speaking about internet fragmentation, I would put the focus on governmental fragmentation, because it restricts the ability of certain groups of users of the internet to create, distribute, or access information. So it is all about information. And you, Roberto, mentioned internet sovereignty and information sovereignty, and so on, so it is really important to discuss this. On the other hand, we have technical fragmentation, which concerns the conditions required for the underlying infrastructure and systems to fully operate. We saw some accidents in the past about it. And of course, we see commercial fragmentation, speaking about business practices which also prevent certain users from creating and spreading information across the globe, based on their own interests and what they think is right. So when we speak about technical fragmentation, there are many aspects: routing corruption, for example, which is really important; blocking of new gTLDs; alternative DNS zones. This is really important, speaking about the DNS system and who controls the DNS system. When we speak about the sovereign internet, of which there are examples in China and other countries that developed a sovereign internet, it is all about how we route our traffic inside the country.
And speaking about a small country, I am from Serbia. I think we have a challenge regarding our internet routes and all that is happening right now. And it is all about how we want to think about securing our own infrastructure. Because when we speak about technical aspects, we usually speak about critical infrastructure, and it is crucial for the sovereign internet of, I would say, every member state of the United Nations. After that, we come to a different approach, speaking about Tor, anonymization services, VPNs, and so on. This is also part of the technical fragmentation aspects. On the governmental side, there are also different points of view, and I will start with filtering and blocking of services, with some kind of censorship. But we should not say that it is censorship if some governmental organization says, okay, this is our right to protect the interests of our citizens. That is a good example, and we see what is happening right now from a geopolitical perspective, speaking about fragmentation processes between East, West, North, and South. For example, China is a good example: you cannot access many Western, I would say all Western, services in China. When we speak about Russia, it is also about Roskomnadzor, which protects the rights of citizens of the Russian Federation: all data should be stored domestically, and so on. This is part of the governmental aspects, and that is normal, because when we speak about the internet, I would not say that there is someone who has the right to say the ownership of the internet is in our hands; it is a decentralized network. That is how I see it. From a logical perspective, there are different aspects: attacks on national networks, cybercrime, architectural and routing challenges inside every country and between continents.
Last year when we were in Africa, we discussed the lack of connectivity in some parts of Africa; it is the least connected continent, so that is also an issue. Because if we speak about fragmentation, we should see the parts of the world where people do not have access to the internet. After that, as we discussed in recent years and in different forums as well, there are international frameworks. We should speak about a common approach, about how to solve some challenges, regardless of what is happening right now from a geopolitical perspective. Because if we want to achieve sustainability, which I think is really important, we should focus on building a minimum common framework for how to deal with such challenges. Living in the 21st century, many people do not think that such events are possible, I would say geopolitical confrontation and fragmentation and so on. But I think it is crucial to understand how important it is to think about how to make this sustainable and to allow all people across the globe to access services. And from the governmental side, when I say accessing different services, many people would think about social networks, about controlling information channels, traffic flows, and so on. But I think it is the sovereign right of every state to control its own information flow. And in these circumstances, we should think about a minimum common framework and how to make this all sustainable, because there are different interests of every player in this global arena, including the global east, global west, global north, and global south. We should focus on building a sustainable approach, and that is my perspective.
Moving back to commercial fragmentation, this is a challenge: interconnection agreements, policy interoperability, the internet of things and emerging technologies, artificial intelligence, blockchain; there are different approaches and aspects. So, blocking and discriminatory practices, net neutrality and what neutrality is, geo-blocking of content, and potential cyber attacks on critical infrastructure as well, because of the use of what some would call non-secure equipment. For example, in Serbia, the country where I belong, we have an agreement our government signed that we will not use equipment in our critical infrastructure which comes from non-secure countries. So what does that mean? This is also part of commercial fragmentation. And I will give an example from the United Arab Emirates: they signed a contract with Huawei regarding 5G. So speaking about commercial aspects, about infrastructure, about hardware, about all that is critical infrastructure in every country, it is also part of fragmentation at, I would say, an industrial level. So it is a huge discussion. If you want to use, for example, some Western hardware equipment and so on, do we belong geopolitically to some bloc and its policies, and should we respect this or not? That is always about what is happening right now, speaking about NVIDIA microchips and different server equipment and so on. So I would say that we see internet fragmentation processes; from my perspective, it all started around 2014 and 2015.
But now we are going deeper into these aspects, and we see three segments, I would say: technical, governmental, and commercial fragmentation. And it is not only about how we see it theoretically; it is also technical. And I know this forum, I mean the Internet Governance Forum, has, as Roberto mentioned, a huge technical community, and in the last days we discussed some techniques regarding anonymization and so on. It is also about how to secure your own information channels. So we speak about encryption techniques, which is really important, and another very important topic is how to secure the metadata of communication. This also includes ISPs as providers and other stakeholders in this process. So I want to conclude that right now I see, and I always mention this, my colleagues know, and Dr. Chukow and Mr. Glushchenko also know this, I always conclude a speech with what we see right now as evidence, as a real thing: we see three technical and, I would say, technological zones. We see a Western European zone, we see a Russian technological zone, we see a Chinese technological zone. And it is a good example that when you visit China, you cannot use Western services. In Russia there are strict laws: all data of Russian citizens should be stored in the territory of the Russian Federation. In the Western part of the world there is a huge discussion about Huawei equipment and non-secure equipment, ZTE, Chinese initiatives. We speak about 5G, and right after, I would say, China won the 5G battle, American companies founded the 6G alliance, where they brought together all American companies in a position to try to win the 6G battle. So it is all about automation, about the new emerging technologies, artificial intelligence. But, and I always ask this, who can define what artificial intelligence is exactly?
And a few days ago, actually I think it was Day Zero or the first day, WindSurf proposed that artificial intelligence is machine learning. So when we speak about artificial intelligence, we speak about different algorithms, techniques, machine learning, data mining, and so on. Just repeating "artificial intelligence" over and over is, I think, useless. So we see some emerging technologies, of course: machine learning, AI, blockchain, different processes. But it is all about wider aspects. It is all connected with 5G, 6G, automation processes, smart cities, sustainability, Agenda 2030, and so on. A global approach, speaking about fighting against pollution, for example; China is a good example, you see how Beijing was 15 years ago and how it is now. So there are initiatives on how we use technologies to fight against real problems. And I want to add at the end, as a conclusion, that these fragmentation processes will continue. I do not see that we are going in the direction, as I proposed before, of minimum common frameworks for how to deal with such challenges. I see a strong direction towards these fragmentation processes continuing and deepening, and this is all connected with geopolitical and strategic processes which have definitely started. And I would say that, speaking about internet technology and all its aspects, it is just a part of the shifting of power from the West to the East, and of course we see some tendencies and some processes of global north-south cooperation, because our colleague from Africa mentioned the challenges and so on. So, yeah, this will continue. I do not see that internet fragmentation will stop, and I mean technological fragmentation and all, and I see this as part of multipolar processes, the processes of the rise of a multipolar world. Thank you very much.
Thank you.

Moderator 2:
Thank you very much, Milos. This is indeed a very insightful presentation, and as we see, even judging by the attendance of today’s audience, this topic is still more interesting for global south representatives. We do not see the global north here, and when global north countries host discussions on the topic of fragmentation, they discuss completely different things. Global south representatives tend to discuss the fundamental aspects of the functioning of the internet, because they are still struggling to ensure the internet as a basic human right. This is the difference between approaches, and it lies not only in the sphere of the internet but even in the sphere of values, I would say, because very different levels of development always cause such, let’s say, existential disputes, and we are happy to continue to convey the points of view of the global south countries, even though the Russian Federation is geographically a northern country. At the same time, after the commonly known events in 2014, when Russia withdrew from the so-called G8, I, as an expert in this topic, in the sphere of the G8, G20 and BRICS, believe that it was a transition period, actually a turning point, of Russia going towards the global south, which is quite interesting. We will see historically where it will lead. History seems to be repeating itself. I asked my colleague, His Excellency Vadim Glushchenko, to summarize the discussion and share his vision and the vision of the expert community who participated in the stocktaking of the GDC process convened by the Center for Global IT Cooperation. Please.

Vadim Glushchenko:
Hello. Yes, good morning, everyone. Roman, thank you very much for giving me the floor. And I would like firstly to thank all our experts who expressed their very valuable and interesting views on such a hot topic, I would say, as internet fragmentation. Indeed, we have been discussing this topic for quite some time, if I am not mistaken.
The substantive discussions of internet fragmentation started at the IGF of 2019 in Berlin, and since then this discussion has really never stopped. Well, for me personally, I like the expression of one expert, I do not know which one, but he or she said that, indeed, the internet has never been unfragmented. So there has always been a problem of fragmentation. And this is why the Secretary-General of the United Nations, António Guterres, decided to suggest a Global Digital Compact. And one of the priorities of this future soft-law document is avoiding internet fragmentation. Really, it is difficult to say at the moment that the Global Digital Compact can do something to stop the fragmentation of the internet, but it is quite capable of formulating universal rules and principles for the decentralized development of national segments of the internet. This document, I hope, can launch an international dialogue on the future of the internet on the basis of a common vision, if it contains provisions with clear criteria for the responsible behavior of all interested actors in the digital sphere. To my mind, in the most optimistic scenario, the Global Digital Compact should define the framework and criteria for the operation and accountability of global digital platforms, ecosystems and metaverses. And it should also ensure respect for the right of UN member states to independently determine the parameters of the circulation of information and content within their jurisdictions. This will greatly reduce tensions in the international discourse on the principles of freedom of expression and self-expression in the digital age. It will make it possible to demonopolize the right and practice of individual countries and IT giants to censor the flow of information solely in their own interests. So, that was sort of a quote from the contribution of part of the Russian expert community.
I am not, of course, representing the whole Russian expert community, but the organizations that took part in the discussion of the Global Digital Compact. And I am sure that the discussion on internet fragmentation will, of course, continue. To my mind, the name of today’s session, You Snooze, You Lose, characterizes very well the state of the discussion around internet fragmentation. I am sure that the IGF community has been doing very good work in this sphere, and specifically I would like to thank the Policy Network on Internet Fragmentation for very substantive discussions and very interesting outcomes. With this, I would like to thank again our today’s speakers and experts, and I wish all of you a very fruitful IGF. Thank you.

Moderator 1:
Thank you. Thank you very much, Vadim. And to be time efficient and not to delay the session, let us please conclude here. Thank you, everyone, for this morning’s exchange of views. It was very insightful, very interesting. I believe that those experts globally who will watch the broadcast, and those who were online with us and present in the audience, had the chance to draw their own conclusions about some new ideas our speakers presented. And I imagine that this is certainly not the last discussion on this important topic, and I kindly invite everyone to continue enjoying this beautiful forum’s sessions and have a productive experience in the next workshops and sessions. Thank you very much, have a great day and a good ending of the forum. Thank you. And thank you, technical team.

Dr Milos Jovanovic

Speech speed

151 words per minute

Speech length

2080 words

Speech time

826 secs

Moderator 1

Speech speed

115 words per minute

Speech length

496 words

Speech time

258 secs

Moderator 2

Speech speed

135 words per minute

Speech length

850 words

Speech time

379 secs

Olga Makarova

Speech speed

112 words per minute

Speech length

1503 words

Speech time

807 secs

Otieno Barrack

Speech speed

130 words per minute

Speech length

990 words

Speech time

455 secs

Roberto Zambrana

Speech speed

144 words per minute

Speech length

1137 words

Speech time

473 secs

Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159



Full session report

Moderator – Hiroshi Esaki

The analysis covers a wide range of topics related to smart and sustainable solutions, the ethical use of technology, green designs, energy efficiency, the role of the younger generation in technological change, government-initiated smart cities, multi-stakeholder approaches, data ownership, and the future of education infrastructure. The overall sentiment of the analysis is positive, highlighting the potential benefits and necessary actions in each area.

One of the key arguments is the integration of smart and sustainable solutions in universities, which play a crucial role in shaping the minds of the next generation. The analysis emphasizes the need for universities to embrace the digital revolution and create campuses that are both state-of-the-art and environmentally friendly.

The importance of green designs and retrofitting existing structures to enhance energy efficiency is also highlighted. The panel stresses the significance of adopting net-zero footprint strategies and aligning with global standards, focusing on making existing buildings more energy-efficient rather than solely focusing on new construction.

Another area of focus is the G20 Global Smart Alliance, which aims to establish global norms for the ethical and responsible use of smart technologies in cities. The analysis expresses support for the alliance’s work and emphasizes the importance of setting global standards to ensure ethical use of technology for sustainable development.

The analysis also discusses the expansion efforts of the Global Smart City Alliance, which includes more than 36 pioneer cities globally. It highlights the importance of collaboration and knowledge sharing among cities to address common challenges and promote sustainable development.

The role of the younger generation in driving technological change is also emphasized. The analysis recognizes the power and potential of younger people in shaping the future and emphasizes the importance of investing in their education and empowerment.

There is also mention of the view that government-initiated smart cities can be a mistake, arguing for a multi-stakeholder, agile approach involving academia, industry, and government support.

The importance of data ownership is discussed, with a focus on individuals having ownership of their own data. The analysis highlights the need for discussions on data privacy and usage to ensure ethical and responsible data practices.

In terms of the future of education infrastructure, the analysis expresses optimism and discusses the role of advancing technologies in shaping educational settings. It mentions the Smart Campus Blueprint as an initiative to integrate technology into educational environments.

Overall, the analysis provides valuable insights into the various topics discussed. It emphasizes the significance of integrating smart and sustainable solutions, establishing global norms for responsible technology use, expanding smart city alliances, retrofitting existing structures, empowering the younger generation, adopting multi-stakeholder approaches, prioritizing data ownership, and embracing technology in education. The analysis encourages individuals to actively contribute to these efforts by joining initiatives such as the G20 Global Smart Alliance Network.

Audience

During the discussion, Taro emphasised the significance of STEM education, encompassing the fields of science, technology, engineering, and mathematics. He stressed the need to prioritise these disciplines in the education system as they play a crucial role in driving innovation, economic growth, and societal development.

Taro argued that STEM education offers students a comprehensive understanding of the world and equips them with the necessary skills to navigate challenges in the rapidly advancing technological landscape. By fostering an interest and aptitude for STEM subjects, students can develop critical thinking, problem-solving, and analytical skills highly sought after in today’s workforce.

Supporting his argument, Taro cited statistics highlighting the increasing demand for STEM professionals in the job market, as well as the higher salaries typically associated with careers in these fields. He also referred to studies demonstrating the positive impact of early exposure to STEM education on students’ academic performance, engagement, and career prospects.

Encouraging active participation, Taro invited the audience to pose relevant questions, creating an inclusive environment where different perspectives could be shared and discussed. This facilitated a deeper exploration of the topic and a more holistic conversation.

In summary, Taro’s emphasis on STEM education stems from the belief that it is crucial for preparing future generations to thrive in an increasingly technology-driven world. Through a focus on science, technology, engineering, and mathematics, students can acquire the skills and knowledge necessary to contribute to innovation, solve complex problems, and drive societal progress. The audience was encouraged to engage in the conversation by asking thought-provoking questions, leading to a more comprehensive understanding of the topic at hand.

Corey Glickman

The analysis focused on various aspects of sustainable urban development and energy efficiency in India and the United States. It highlighted the need for promoting equitable wellness and resilience in urban landscapes, acknowledging that smart monitors and controls in transport, buildings, environment, life, events, infrastructure, and utilities can enable communities to transform the urban landscape. The vision for a zero-carbon built environment includes the goal of achieving equitable wellness and resilience for all.

Decarbonization efforts were seen as requiring democratized action and support from all stakeholders to succeed. It was argued that enforced decarbonization standards at the government level without the involvement of the community, experts, learning institutions, and businesses can lead to failure. The transformation towards decarbonization takes place when there is participation from various stakeholders, ensuring that everyone’s needs and perspectives are considered.

The analysis expressed concern about the increase in building construction in India, which has led to a significant rise in building energy use. With India poised to become the fifth-largest economy in the world, the construction of new buildings at a rate of 8% annually has contributed to the escalating energy demands. However, it was also recognized that India has inherent advantages for building energy efficiency. These include a strong tradition of passively cooled buildings, a wide occupant tolerance to heat, a ready supply of local sustainable construction materials, inexpensive labor and craft costs, and careful use of resources.

Collaboration between the United States and India was emphasized, particularly in the field of building energy research and development. The U.S.-India joint center for building energy research and development, called CBERD, was highlighted as an example of such collaboration. It aims to develop building technologies that improve energy efficiency, comfort, and health safety. Through CBERD, significant collaborations between Indian and U.S. scientists have taken place, resulting in the development of nine new technologies, more than 100 peer-reviewed publications, and fostering mutual respect.

One notable aspect of the collaboration between the United States and India is the development of tools and resources for energy-efficient building design. These tools and guides aim to provide best practices for designing low-energy buildings and are specifically suited to the cultural, climatic, and construction context of India. They serve as valuable resources for the public and contribute to the advancement of sustainable building practices in the country.

The analysis also discussed the importance of digital transformation and leadership alignment in sustainable city development. Partnerships between the University of Tokyo and Microsoft were highlighted as contributors to this transformation. The adoption of technologies like digital twins and IoT devices was noted since these technologies already exist and can be utilized in the process of digital transformation. Furthermore, it was emphasized that alignment between visionary leadership and the actual implementers of policies is crucial for successful implementation.

The analysis advocated for using existing policies as a starting point for building sustainable urban environments, suggesting that the Green Sustainability City Alliance is working on embodied carbon for existing buildings and sustainable procurement as initial policies. However, it acknowledged that issues can arise due to complexities in zoning and challenges from local and national governance.

Localization was presented as an important factor when implementing policies related to sustainable urban development. It was acknowledged that what works in one city may not necessarily translate to another, and additional actions may be required upstream or downstream for policies to make sense in different contexts.

The discussion highlighted the positive role that policy discussion and collaboration can play in accelerating progress towards sustainable urban development. It was noted that policy leaders often have open attitudes towards discussions and are willing to share their networks, facilitating collaboration and the exchange of ideas.

Finally, the analysis acknowledged the significant role that global IT companies, particularly Microsoft, and other hyperscalers, will play in shaping the future of smart buildings and campuses. These global IT companies are viewed as instrumental in establishing the digital backbone necessary for sustainability and efficiency. The analysis also identified a potential winning formula for smart city development, which involves collaboration between university-based academic research, major IT service providers, and policymakers. This combination has been observed to be effective, particularly when implementing projects that involve academic-led investigations in controlled city areas or airports, supported by major IT service providers and policymakers.

Overall, the analysis offered valuable insights into the various aspects and challenges of sustainable urban development and energy efficiency in India and the United States. It emphasized the need for holistic approaches, stakeholder involvement, collaboration, and the leveraging of existing resources to achieve sustainable and resilient urban environments.

Hiroshi Esaki

The analysis highlights the potential of digital technology in enhancing energy efficiency, particularly through the use of cloud computing. It suggests that adopting digital technologies can result in over 80% energy savings. A footprints analysis reveals that following the EP100 plan can increase renewable energy usage to 25-30%. Therefore, digital technology can improve energy efficiency by up to 50%.

The analysis also emphasizes the positive impact of cloud computing and sharing economy in reducing energy consumption. Migrating from on-premise computers to data centers can lead to a 30-40% energy cut, thanks to high-performance HVAC systems. Additionally, cloud computing can save 70-80% energy through sharing economy.
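The two savings figures quoted above (a 30-40% cut from data-center HVAC after migrating off on-premise hardware, and a 70-80% cut from the sharing economy) can be combined in a rough back-of-the-envelope calculation, assuming, hypothetically, that the two effects compound multiplicatively; the function and the midpoint values below are illustrative, not from the session:

```python
def combined_savings(hvac_cut: float, sharing_cut: float) -> float:
    """Fraction of the original energy saved if both cuts compound multiplicatively."""
    remaining = (1.0 - hvac_cut) * (1.0 - sharing_cut)  # energy left after both cuts
    return 1.0 - remaining

# Midpoints of the quoted ranges: 35% from HVAC, 75% from sharing.
print(f"{combined_savings(0.35, 0.75):.0%} of the original energy saved")
```

Under this (simplistic) compounding assumption, the midpoints yield roughly 84% savings, which is broadly consistent with the "over 80% energy savings" figure cited earlier in the analysis.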

Digital twin technology is highlighted as a tool for optimizing energy usage in system operation. An implementation at the University of Tokyo twelve years ago achieved a 31% improvement in energy productivity, and today’s digital twin technologies can reduce energy use even further.
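As a rough illustration of how a digital twin enables this kind of data-centric operation, the toy model below mirrors per-room occupancy and switches HVAC from a fixed always-on schedule to sensor-driven control. All names and figures here are invented for illustration; this is not the University of Tokyo system:

```python
class BuildingTwin:
    """Toy digital twin: a live mirror of per-room occupancy and HVAC
    state that an optimizer can act on, instead of a fixed schedule."""

    def __init__(self, rooms: int):
        self.occupied = [False] * rooms
        self.hvac_on = [True] * rooms    # baseline: always-on schedule

    def sync(self, sensor_readings: list[bool]) -> None:
        """Update the twin from occupancy sensors."""
        self.occupied = list(sensor_readings)

    def optimize(self) -> None:
        """Data-centric rule: condition only the occupied rooms."""
        self.hvac_on = list(self.occupied)

    def energy_units(self) -> int:
        """One unit per room whose HVAC is running."""
        return sum(self.hvac_on)

twin = BuildingTwin(rooms=10)
baseline = twin.energy_units()                   # 10: everything on
twin.sync([True] * 7 + [False] * 3)              # 7 of 10 rooms in use
twin.optimize()
saving = 1 - twin.energy_units() / baseline
print(f"energy saving this hour: {saving:.0%}")  # 30%
```

Real systems add thermal models, weather forecasts, and machine learning on top, but the principle is the same: once the physical state is accurately mirrored in data, the computer can analyze and optimize the operation.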

Redesigning physical systems with digital technologies can significantly reduce the carbon footprint. Comparative cost analysis shows a gap of roughly two orders of magnitude between transporting electricity and transporting digital bits, and another two between transporting physical materials and electricity, so replacing physical transport with digital transport brings large energy productivity gains.
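The quoted ratios compound. Taking the talk’s round numbers at face value (they are illustrative orders of magnitude, not measured constants):

```python
# Round cost ratios quoted in the talk (illustrative only):
MATERIAL_VS_ELECTRICITY = 100   # moving goods vs. moving electricity
ELECTRICITY_VS_BITS = 100       # moving electricity vs. moving bits

# Replacing physical transport with digital transport therefore spans
# roughly four orders of magnitude in cost:
print(MATERIAL_VS_ELECTRICITY * ELECTRICITY_VS_BITS)   # 10000
```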

Collaboration between academia and industry is essential for effective decarbonization strategies. For example, the University of Tokyo achieved an over 30% decrease in energy consumption through such collaboration. Young students working alongside senior experts are seen as crucial for the future.

Hands-on experience and real technology usage are emphasized over purely theoretical study. A visit to Microsoft’s Redmond headquarters is cited to illustrate the importance of concrete, hands-on engagement with the systems involved.

Criticism is raised towards the government-initiated ‘smart city’ approach; a multi-stakeholder approach involving academia and industry is advocated instead.

The concept of democratization is discussed, particularly in relation to data privacy and ownership. It emphasizes the need for a multi-stakeholder discussion.

In conclusion, digital technology has transformative potential for improving energy efficiency and reducing energy consumption, with cloud computing, the sharing economy, and digital twin technology as key drivers. Collaboration between academia and industry is crucial, as are hands-on experience and real technology usage. The government-led ‘smart city’ approach is criticized, and democratization of data privacy and ownership is highlighted. Policymakers, industry professionals, and researchers can draw on these insights for a sustainable future.

Masami Ishiyama

Microsoft is leading the way in sustainability by adopting a comprehensive approach. By 2030, they aim to achieve carbon negativity, water positivity, and zero waste. This ambitious goal demonstrates their commitment to reducing their environmental impact and addressing sustainability challenges across their entire company. Microsoft is actively involved in various sustainability initiatives, including the G20 Global Smart City Alliance project, showing their dedication to collaborating with other organizations to drive sustainable change on a global scale.

Data and technology play a crucial role in Microsoft’s sustainability strategy. They have developed innovative solutions that leverage data analytics and technology to optimize energy usage and reduce their environmental footprint. For example, their smart building solution, in partnership with Ionic and equipped with Power BI, Azure IoT, and Dynamics 365, has shown a 6-10% reduction in annual energy consumption. Microsoft also utilizes one of the world’s largest corporate real estate data stores to optimize operations and save money, highlighting the value of data in driving sustainability efforts. Their operational platforms, Data and BI, along with Azure Digital Twin, contribute to enhancing sustainability by providing efficient data management and processing capabilities.
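One concrete example of combining such data sources, mentioned later in the session, is fusing badge data with Wi-Fi MAC address counts to estimate building occupancy. The sketch below is a minimal, hypothetical illustration; the feed names and the simple averaging heuristic are assumptions, not Microsoft’s actual implementation:

```python
# Hypothetical per-building feeds: badge swipes tend to undercount
# occupancy (tailgating), while Wi-Fi client counts tend to overcount
# it (many devices per person), so neither signal alone is reliable.
badge_swipes = {"B16": 42, "B17": 8}
wifi_clients = {"B16": 55, "B17": 21, "B99": 3}

def occupancy(building: str) -> int:
    """Crude fused estimate: average the two biased signals."""
    b = badge_swipes.get(building, 0)
    w = wifi_clients.get(building, 0)
    return round((b + w) / 2)

print(occupancy("B16"))  # averages 42 badge swipes and 55 Wi-Fi clients
print(occupancy("B99"))  # falls back to the only available signal
```

The point of the example is the one made in the text: the value comes not from any single feed but from the ability to combine sources for insight.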

Microsoft recognizes the importance of data ownership and privacy in the digital age. They are committed to safeguarding customer permissions and protecting their data against potential threats. By empowering customers to have control over their data, Microsoft ensures transparency and supports their data privacy concerns. This strong emphasis on data ownership aligns with the principles of industry innovation and strong institutions outlined in the Sustainable Development Goals (SDGs).

The implementation of effective smart campus strategies exemplifies Microsoft’s commitment to sustainability in both their internal operations and external collaborations. For instance, their partnership with Temple University has resulted in optimizing energy efficiency and reducing resource usage. Microsoft’s smart campus strategy involves streamlining processes, identifying clear Internet of Things (IoT) use cases, managing construction schedules, and maintaining accurate floor plans. By prioritizing energy optimization and resource management, Microsoft demonstrates their dedication to creating sustainable campuses and positively impacting the environment.

Furthermore, Microsoft provides software solutions, such as Azure Digital Twin, that have the potential to reduce electricity consumption. By utilizing this technology in buildings, energy efficiency can be improved, contributing to the goal of affordable and clean energy outlined in the SDGs.

Data ownership and governance concerns are major obstacles in today’s digital landscape. Microsoft recognizes the growing importance of generative AI and data and supports the need for clear data ownership and controls. They assert that data ownership belongs to the customer and that a multi-stakeholder decision-making process is crucial in addressing data ownership concerns. This stance aligns with the principles of peace, justice, and strong institutions highlighted in the SDGs.

Overall, Microsoft’s comprehensive sustainability approach is demonstrated through their goals of carbon negativity, water positivity, and zero waste by 2030. Their involvement in global sustainability initiatives, use of data and technology to optimize energy usage, commitment to data ownership and privacy, successful implementation of smart campus strategies, and software offerings for reducing electricity consumption all showcase their dedication to sustainability. Microsoft’s approach not only aligns with the SDGs but also highlights their commitment to responsible corporate citizenship and driving positive change.

Session transcript

Moderator – Hiroshi Esaki:
We’ll give him a microphone. Good morning, everyone. I’d like to warmly welcome all of you to this vital session where we delve into the concept of smart campuses and their potential to revolutionize the way our universities operate, not just technologically, but also with a perspective of social, economic, and environmental responsibility. Universities play an integral role in shaping the minds of the next generation. And as we stand at the cusp of a digital revolution, it is imperative for these institutions to integrate smart and sustainable solutions into their infrastructure. Today’s session will unveil the intricacies of creating campuses that are both state-of-the-art and sustainable. Today we are honored to have with us, sorry for that, today we are honored to have with us Mr. Corey Glickman, Task Force member from the G20 Global Smart Alliance, and Mr. Masami Ishiyama from Microsoft Japan, and Dr. Hiroshi Esaki from the University of Tokyo. And I’m Yuta Hirayama, moderator and advisor to the G20 Global Smart Alliance. This is a session overview. We will also shine a light on the inspiring new public-private partnership, or PPP, led by esteemed institutions and corporations. A notable highlight of this initiative is the collaboration between the University of Tokyo and Microsoft, alongside other key players. This initiative is facilitated by the G20 Global Smart Alliance, which I belong to, and aims to build a global campus network. The essence of this network is to harness the potential of IT, networking, data security, and governance practices to foster cutting-edge research on sustainable design and emerging technologies. We also explore the pathway to achieving a net-zero footprint through pioneering digital infrastructures that leverage IT, IoT, generative AI, and more. 
The focus is not just on creating new green designs, but also on retrofitting existing structures to make them energy-efficient, aligning with global standards and supporting the green economy. Okay. This is the session overview; I’m opening and introducing now, and after this I will try to explain what the G20 Global Smart Alliance is, and then I will move on to the other speakers. Good. So maybe you may not know what the G20 Global Smart Alliance is. This activity was born in 2019. At that time, the Japanese government held the G20 presidency, and we tried to make smart cities a topic in the G20 discussions. In 2020, the Saudi Arabian government also pushed to discuss the importance of smart cities. In 2019 and 2020, so many smart city projects were built all over the world. On the other hand, technology governance is an issue: for example, privacy, vendor lock-in, and fragmented business models are all very difficult. So our mandate was to bring together global stakeholders to establish and advance a set of global norms for the ethical and responsible use of smart technologies in cities. That is what we wanted to do. From 2019 to 2022, we developed five principles for responsible and ethical smart cities, and we also developed some model policies. There are so many technology governance issues out there, so we gathered many experts from all over the world, Esaki-sensei and Corey among the task force members, and discussed which issues in cities and which policies should be prioritized for adoption. We discussed a lot, and then we developed some policies. For example, one of them is the accessibility policy. 
There are so many accessibility issues out there, so we try to bring such policies to cities and reduce that kind of gap. We also developed privacy impact assessment policies. This is a very important policy for many cities; in Japan it was introduced by the Cabinet Office and is now gradually being implemented in some cities. For example, Tsukuba, one of the ‘super cities’, is implementing this policy. The open data policy is also very important, but our project was not only about developing policies but about how to implement them in cities, so we developed a city network. Globally, we now have more than 36 pioneer cities, and we also have regional alliances: in Japan we have more than 37 or 38 cities in the Japanese community, and we are now developing Latin American and ASEAN networks. In 2021, the Global Smart City Alliance received the Governance and Economy Award at the Smart City Expo World Congress. So our project has become fairly well known these days, but I know many people don’t know about it, so today I’m very honored to introduce it. Lastly, last March we had a joint event with the Japanese government; these photos are from the G7 official public-private high-level roundtable for the G7 Sustainable Urban Development Ministers’ Meeting. At that event, Dr. Esaki-sensei and Corey met, and now we are starting to discuss today’s main topic, green building policies. So I wanted to introduce what the Global Smart City Alliance has done. Okay, my story is too long, so I’d like to pass to Corey. So, Corey, are you okay?

Corey Glickman:
Yes, I am. Can you hear me? Yeah, yeah, I can hear you. Please. Yes, excellent. Okay, well, first of all, thank you very much. So I’m Corey Glickman, and I just want to spend a few minutes talking a bit about the transformation component. So first part is talk about the overview of the transformation of the built environment for wellness across multiple sectors, and that would include the idea of residence, agriculture, administration, industry and commerce, education and research, infrastructure services, and transportation and communication, and these components make up the diverse community activities that we all experience in our urban environments. And what works very well that we know is putting in smart monitors and controls across all aspects of cities, we would focus on areas of transport, buildings, environment, life, events, infrastructure and utilities. And when we do this, we enable communities to transform the urban landscape. Next slide, please. So there are four aspects that we synthesize or levers that we use in this idea of transforming the built environment. The first one is decarbonization. So radically reduce the emissions for a zero carbon built environment. Second is democratization. So provide equitable wellness for resilience for the living environment. The third is digitalization, having a digital backbone that smartly connects our buildings, our distributed energy resources, our people and our businesses. And the fourth is demonstration. The ability to visualize our hypothesis and our tests that sets the direction for the next generation of city transformation experts. These are absolutely vital for us to be able to show what progress can be made and what ideas can be put forward across this year. And then lastly, what I’d like to talk about very quickly is the vision. So we create this vision for a zero carbon built environment by promoting this equitable wellness and resilience. 
And probably the most important lesson that I can share with you, having done this for several years now in several cities around the world, and what we’ve done with the G20 and our partners here, is we know that decarbonization is actually a user-centric, multi-stakeholder approach. That will fail when it’s enforced by governments that are not supported by democratized action. That means you can set those standards as a government level and policy level, but if everybody does not contribute and participate, it is going to fail. We see that happen. So the action item that we can most leave you with is that you need to demonstrate by leading. You need to have the whole community participate, particularly those that are experts and those that are in their learning institutions and those in the businesses. And when that happens, that democratization, teams with government, teams with public and private entities, is when you truly see transformation take place. So with that, I’d like to thank you for your time, and I’d like to pass it on to the next speaker.

Moderator – Hiroshi Esaki:
Okay. Thank you, Corey, for those enlightening insights. Moving forward, it’s crucial for us to view these transformations through the lens of one of the tech industry giants. To discuss Microsoft’s vision for achieving net zero with digital, I’d like to welcome Mr. Masami Ishiyama. Over to you, Masami.

Masami Ishiyama:
Thank you. This is Masami from Microsoft Japan. So I’m going to introduce the Microsoft sustainability initiative and the smart campus matter very quickly. The reason why I’m here is that Microsoft is a task force member of the G20 Global Smart City Alliance project, as Yuta-san said. Another reason is that Microsoft just announced, agreed, and signed a strategic MOU with the University of Tokyo on green transformation last August. In this agreement, Microsoft is exploring ways to support the University of Tokyo’s effort to achieve net zero emissions through the use of our technology. I will touch on those details later in this session. So firstly, let me introduce how Microsoft has been tackling the sustainability agenda as a whole company. Here is a bit of history on our journey and our future goals. Back in 2009, Microsoft established our first carbon emission reduction goal. For more than a decade, we have steadily built on our commitment to innovation and investment in technologies. Onward to 2050, we will remove all the carbon the company has emitted, directly or through electricity use, since we were founded in 1975. Big commitment and big announcement. And this slide is a simplified view of our future goals: carbon negative, water positive, zero waste by 2030. We are also building a planetary computer to better monitor, model, and manage the world’s ecosystems and to protect more land than we use. Across the company, we are driving these ambitious goals internally and helping set best practices and new standards for businesses around the world with software-driven innovation. Already, we see a new area of solutions emerging, driven by data. Through our work with customers and partners, such as managing data using advanced analytics, machine learning, and virtual models in the cloud, we are helping organizations in many aspects. 
As you can see, we are working on the space topic, the supply chain topic, the circular economy topic, and also the smart grid infrastructure solution topic. When it comes to data, as the G20 alliance focuses on technology governance, discussion often centers on the ownership and control of data. At Microsoft, we have a fundamental principle: your data belongs to you. We don’t use your data for our business. When you or your customers desire to open up your data, we commit to safeguarding your permissions and protecting your data against potential threats. Today’s main theme is buildings and space, so let’s see our own example first. When it comes to the sustainability of campuses at Microsoft, we run the equivalent of a medium-sized city scattered across the globe. The vision is to build, deliver, and operate connected, accessible, sustainable, and secure workspaces that create the best employee experience. So this is customer number one for us for the smart building solution. Our initial effort to reduce power consumption in our buildings was focused on the headquarters, the Microsoft Redmond campus, which spans 125 buildings serving more than 60,000 people. Across the campus, there were multiple building systems and a 60 million annual utility spend. Microsoft used Ionic, a partner solution running on Azure and extended with Power BI, Azure IoT, and Dynamics 365, to remotely monitor and manage the buildings across the campus. As a result of this initial effort, Microsoft achieved a 6 to 10% reduction in annual energy usage with an implementation payback of less than 18 months. So when we think about the smart campus, the employee experience or student experience is very key, meaning productivity, hybrid work, wellness, and access. In order to improve the employee or student experience on the campus, we need platforms and operations that help optimize how we build and run our real estate. We have two operational platforms, Data and BI and Azure Digital Twin, and six operational functions on the right side. 
So today’s agenda is the smart campus, so my slides will touch on Data and BI and Azure Digital Twin today. The first one is Data and BI. We run one of the world’s largest corporate real estate data stores, which we rely on to optimize operations and save money. There are about 20 sources of data inputted. However, the real value comes from the ability to combine the data sources for insight. For a sustainability example, we have utility cost data for electricity, natural gas, fuel, including transport fuels, waste, including recycling, and water. The next level up is to apply machine learning to it. Two use cases: number one is space optimization, using badge data plus Wi-Fi MAC addresses; number two is energy efficiency, such as smart start. The other one is Azure Digital Twin, our other foundational platform, which creates digital replicas of our physical world. The digital twin is a mirror world: our physical world means things, places, people, and states, and the slide shows examples of each. Like data, having a digital representation of the physical world is only valuable when we use it, for example, a sensor system that detects environmental conditions such as temperature and air quality. We have a lot of smart campus practice and case studies around the world, but we’re going to introduce a university campus case study. This one is about Temple University in Philadelphia. Temple University’s facilities and operations team needed to create a smart building strategy to optimize operations across its 240 buildings to reduce cost and enhance service for its schools, businesses, employees, and students. Microsoft partner eMagic utilized the Microsoft Azure Digital Twin solution in five buildings on Temple’s Philadelphia campus as the initial phase of an integrated facility management solution. This solution enables the university to cut costs, optimize energy efficiency, reduce technology and resources, and improve service levels on the campus. 
So as I mentioned at the beginning, based on those technology components and case studies, we are exploring ways to support the University of Tokyo’s effort, as a first step, to achieve net zero emissions through our technologies. Of course, the University of Tokyo has been doing various activities around green transformation so far, such as the Sustainable Campus Project starting in 2008, participation in the Net Zero Race to Zero campaign, and the publication of the UTokyo Climate Action starting last year. The goal of our first campus GX project is to help them improve energy efficiency from a sustainability perspective. This has both environmental impact and a technology architecture that could apply to other smart campus scenarios outside the University of Tokyo, not only in Japan but all over the world. As we mentioned, the G20 Smart City Alliance focuses on technology governance. Microsoft sticks to a basic rule, as I said: your data is yours. As I stated in the bottom right corner of the slide, an open data environment. This one, yeah. And we started the campus GX project as a pilot, which aimed to reduce energy use through smart campus technology, and have been discussing the architecture and how to adapt the technology. With that, we will expand the current smart campus pilot project, which aims to reduce energy consumption with Microsoft technology, collaborating with GUTP, the Green University of Tokyo Project, which Esaki-sensei is leading, to create a smart building reference architecture, which would influence other smart building policies and the entire industry. So this is the last slide of my session. I’ll end by mentioning some lessons Microsoft has learned about the smart campus. Number one, start with data: begin by collecting and analyzing data from sensors and systems to identify campus issues and opportunities. The data insights form a foundation for an effective strategy. Number two, optimize processes. 
Before introducing the new technology, optimize the existing process for effective strategy. So number three is define IoT use case. So let’s specify a clear use case for IoT device, such as monitoring energy consumption or improving security. Number four, importance of the floor plan. So it is crucial for smart campus implementation. So let’s have an accurate floor plan, so that’s a key. Number five, lastly, the construction schedule. So properly manage construction schedule for new infrastructure and technology, meeting the budget and deadline requirement. So thank you for listening, and hand over to Dr. Esaki-san.

Hiroshi Esaki:
Thank you for the introduction. I want to share with you concrete numbers and concrete actions based on the vision that Microsoft, Hirayama-san, and the WEF are presenting. The important thing is that we should show what we can do using digital technology and the Internet. First, many of you may not know about EP100, that is, 100% improvement in electrical energy productivity, which means using digital technology to double efficiency, especially energy efficiency, so the same work can be done with half the energy. That is relatively easy in the case of digital. For example, when we use Google’s or Microsoft’s applications with cloud computing, more than 80% energy saving can be achieved. That is not a false number; we can really do that. This is the carbon footprint in 2022 for each country, and this is the ratio of renewable energy introduction in each country. Some countries are already at 90% or 80%. Most developed countries are probably at 30% or 20%, meaning a large percentage of renewable energy still has to be introduced. When you think about EP100, the amount of renewable energy you have to introduce is halved; that’s the real number. For example, in Germany, the UK, Spain, or Ireland, if every single industry, factory, and campus went to EP100, we could reduce power consumption to 50%, and then only a 25% increase in renewable energy would be needed. In the case of Germany, the UK, or Spain, you can think of this as a practical number you can achieve. For India, the USA, and Japan, we need just a 150% renewable energy increase. That would be possible to do, not five times or ten times more renewable energy. That is the power of digital and the Internet. Also, I want to put in front of you three techniques for decarbonization. 
The first applies to already-built systems: the as-is system solution. The second is energy gains through the digital twin for system operation; that means there are many opportunities to apply data-centric operation or artificial intelligence, the themes of this IGF, quite easily when we have accurate data. The third is ‘to-be’, for future infrastructure design. That is quite important for developing or emerging countries, and even for developed countries. In the design case, we must reduce the number of physical resources using digital technology, and we design the system, its construction, and its operation around how we use digital technologies. This is one example when you think about both ‘by IT’ as-is and to-be. The left-hand top, which was explained by Microsoft, is the digital twin: graphing the whole system’s behavior, how the system is performing. The important thing is that the computer itself is able to analyze and visualize the system operation when you have the digital twin. This is an example from 12 years ago: I hacked, I’m sorry, I digitized my university, building a digital twin, after the earthquake shock in Japan. My campus consumes 66 megawatts; my building consumes one megawatt. With the digital twin, we achieved 31% or 22% energy saving. I don’t want to call it energy saving; it is an energy productivity improvement of 30% or 20%. That was 12 years ago, and the technology has improved a lot, so more complicated and better digital twins can now be built. Also, at that time, we were academia and Microsoft is industry. An important function of academia is to ensure interoperability, so we hate lock-in, whether by Microsoft, Google, or Meta. The important thing is that a multistakeholder discussion should produce global standards for interoperability. The next one is the as-is of IT, yet another interesting thing you can do. 
This is an actual example, a practical solution, also from more than 10 years ago. BMW in Germany has its own set of IT facilities. They analyzed all of the tasks in their company and realized that only 20% of the tasks require low latency and involve very critical data, so they must stay in nearby facilities, while 80% of the tasks allow large latency and involve no critical data, like R&D simulation and others. That means 80% of the tasks could migrate to 100% renewable energy countries, namely Iceland and Sweden. Since the Internet and computer systems can be globally distributed, you can select whatever location or soil you want. That is a lesson learned from 12 years ago; technology can be applied to those kinds of things. So the lesson learned from this is that 100% renewable energy can be achieved somewhere on the earth. Also, some on-premise computers can move into data centers; then at least a 30 or 40% energy cut is possible, due to very high-performance HVAC. When you use the cloud, as I mentioned, 70 or 80% can be cut through the sharing economy. The sharing economy is good not only for power saving but also for resource reduction: physical resources like computers, HVAC, and other equipment, or the building itself, can be largely reduced. The other technique, especially for developing or emerging countries, is ‘to-be’: how you think about designing infrastructure. This is the cyber-first approach I mentioned, ‘by IT’ for the to-be environment: think about design assuming you have sophisticated, good digital technology. So this is one example. This is logistics about 200 years ago. It was an exclusive logistics system that every single industry and every single company had: exclusive use, exclusively built infrastructure. The very good invention by human beings was the container and the pallet. This turned physical package transportation into a sharing economy. 
When you have a container and a pallet, every single material can be put into the same package. The package can be transferred by airplane, train, ship, or car, whatever you have, which is a completely perfect sharing economy for existing materials and merchandise as well as future materials. One example using this particular infrastructure was Amazon. So this was before the Internet. What the Internet did was exactly the same thing as the container and the pallet. Digital information can be transferred everywhere over any technology, like Wi-Fi, glass fiber, or copper wire, and any digitized material can be transferred everywhere on the earth: text, video, voice, whatever you have, a program as well, or a recipe for a 3D printer. One other thing I want to share is the cost and carbon footprint of physical object transportation versus digital object transportation. The costs are hugely different; a huge energy productivity improvement can be achieved by replacing physical transportation with digital transportation. These are actual numbers: material versus electricity versus digital bits, two orders of magnitude apart each. These are real numbers I discussed with a power company in Japan, comparing operational cost, investment cost, installation, operation, and replacement: digital bits cost about one-hundredth compared to electricity, and electricity versus material is yet another two orders of magnitude difference. This is very interesting. The reason why I put up those slides is that we want to demonstrate what we can do with concrete numbers and figures. Thank you.

Moderator – Hiroshi Esaki:
Thank you very much, Esaki-sensei. We have now arrived at our interactive session. So this is a golden opportunity for all attendees to pose questions, share thoughts, or discuss any of the topics we’ve touched upon today. So does anyone have any questions here? Maybe a first note. I think, Corey, are you there? Maybe you wanted to introduce one video, right?

Corey Glickman:
Certainly.

Moderator – Hiroshi Esaki:
So could you introduce shortly about the video, and I will ask the IT operator to start the video.

Corey Glickman:
Absolutely. So this video represents a program that I worked on with Berkeley, with India, and with the U.S. government, looking at the transformation of cities and the use of the technologies in the areas that we’ve discussed. The way to view this video is as a program that ran for seven years and went across three countries, and it shares some of the lessons and some of the activities that took place. If you would like to run the video, that would be great. Yeah, okay. India is poised to become the fifth largest economy in the world. As more buildings are added at a healthy rate of 8% every year, building energy use is skyrocketing. Trends in Indian construction, especially new construction, the urban heat increase, and the high occupancy levels in India present unique challenges to the building ecosystem. India enjoys many advantages, including a strong tradition of passively cooled buildings, a wide occupant tolerance to heat, a ready supply of local sustainable construction materials, inexpensive labor and craft costs, and careful use of resources. At Lawrence Berkeley National Laboratory, we are committed to working with the Indian research community, industry, and government to develop building technologies that enhance building comfort, push the envelope for efficiency, and improve the health, safety, and life of building occupants in both countries. The United States and India have been collaborating on a U.S.-India joint center for building energy research and development called CBERD. CBERD is a dynamic public-private partnership that involves academic research institutions and partners in both countries that do collaborative research to bring new energy efficiency technology to both the U.S. and India. In CBERD, we deploy what we call a three-by-three model. 
The first three is making sure that we advance government policies, industrial practice, and research findings about energy-efficient buildings, and the second three is making sure that we understand how to design them right, how to build them right, and how to operate them right. Only when this happens are we able to implement, on a wide scale throughout the economy, energy-efficient buildings with technologies that are highly cost-effective and are able to reduce energy consumption per square foot by about a factor of five below the norm. Through the collaborative research between U.S. researchers and Indian researchers over the last five years of CBERD, we have developed nine new technologies, 40 significant exchanges between Indian scientists and U.S. scientists, more than 100 peer-reviewed publications, four patent disclosures, and more than 10 demonstrations. One of the guiding principles of doing that was to bring together information technology and physical systems. The U.S. has had a long lead in building world-class physical systems: facades, HVAC systems, high-efficiency chillers, and so on. India has fantastic depth and technical prowess in information technology. Our goal was to bring them together in a way that benefits both countries, so each country gets more than what they put in. Working shoulder-to-shoulder on common problems, developing joint publications and joint technologies, and having joint demonstration projects has led to such a deep mutual respect and understanding that I couldn’t have imagined we would be ending at this point. The expertise that the U.S. scientists brought into this Indo-U.S. collaborative project on building energy efficiency was very helpful. It helped in accelerating the research and developing products and processes which can be deployed and make a real difference in the building sector in India. Another way we collaborate between the U.S. 
and India is by developing tools and resources for the public that are available on our websites, as well as new facilities like this game-changing facility called FlexLab. FlexLab is the world’s most advanced testbed for energy-efficient technologies. FlexLab is also a testing system that allows us to integrate building systems with the electric grid, with batteries and photovoltaic systems. I want to mention the new Best Practices Guide, a tool for how to design energy-efficient buildings, with a lot of information on designing the façade, the HVAC systems, and other components for low-energy buildings. These best practices are particularly suited to the cultural, climatic, and construction context of India. The guide is based on three core principles. One, using a triple bottom-line framework for energy-efficiency decision-making, with financial capital, environmental capital, and enhanced working environments as a theme. Two, aggressive but achievable energy performance targets. And three, creating a shared set of values across all stakeholders, from building owners, developers, builders, architects, and engineers to policy makers. The strategic insight into design, the idea of integrating the building with its electromechanical systems in conceptualizing solutions, is a real lesson here. It is the technical depth, the analytical framework, and the advice that is given, as the guide goes across various climatic zones and looks at different technical solutions, that is extremely helpful indeed. I think it’s a great piece of work. I feel like India is being propelled into a digital and decarbonized future, and buildings are a prime opportunity to actually use this advantage and really make and shape the future.

Moderator – Hiroshi Esaki:
So Corey, thank you for introducing the video. As Esaki-sensei mentioned, India, the US, and Japan are not yet advanced in using renewable energy, right? So I think we have much room to grow in this kind of field. So Corey, I want to ask you, based on your experience in the digital transformation landscape: today we discussed green building policy in universities, but beyond the universities, in the broader business field, what do you believe are the primary obstacles? Do you have any thoughts?

Corey Glickman:
Sure. I would say experience has taught us that the vision really has to be led, I think, with a portion of the city. So just as you’re talking about the University of Tokyo teaming with Microsoft, that is a great place to start. You can define what is a smart space or a smart city. And so an obstacle would be: although you have to have very large ambitions, you need to choose a section that is doable, and you need to start fast, actually. And many of these technologies, these digital twins and these ideas of IoT devices, they exist, right? So I would start with tried and true technologies. If you think too far out, so that you depend only on technologies being discovered five or ten years from now, you’re not going to move very fast. You should start with known technology, do something that’s sizable, but also look at scale and do responsible R&D. And I think the biggest obstacle is ultimately aligning the visionary leadership to the actual implementers, right? It goes back to that democratization and getting people on the ground to do this. Ideas of digital twins and visualization are a huge way of overcoming this and really having success.

Moderator – Hiroshi Esaki:
Great. Thank you. So I think you are developing the green building model policy in the G20 Global Smart Alliance, right? So if possible, could you introduce some points about the policy you are developing?

Corey Glickman:
Certainly. So one of the programs that we are leading is looking at what we call the Green Sustainability City Alliance right now. And it’s about taking policies that, of course, would make sense for cities, but there’s a lot out there, right? Many organizations doing things. So what we looked at was saying, let’s look at existing policies and start with areas that would have the most impact, and build upon others’ work versus reinventing or going in a different direction. So our first policy is actually embodied carbon. And we said embodied carbon for existing buildings. 
We’re going to do new buildings eventually, but we take existing structures first. And then the second part that we’re going to be looking at for policy is actually procurement. So the idea of sustainable procurement. How do you choose the right materials? How do you get to the right economics coming across there? And then the third area we’re still exploring. It takes about six to eight months to do a policy. We’re just finishing the embodied carbon one. And we’re starting the sustainable procurement. We’ll likely be zoning. And zoning is so important, but it’s a very complex government issue, locality issue. And I would say the lesson that we’ve learned over and over again that we hear from everybody, it’s about contextualization or localization. You can take a great policy that works in London or that works in Tokyo. And does that translate to Kyoto? Or does that translate to another city? You probably have to do something upstream or downstream in order for that policy to make sense, right? And I would say the other one is that when you ask other policy leaders who are working on these programs, they’re very open to discussing and to sharing their networks. And that’s another very powerful thing. I think often policy groups try to work in their own silos, and they don’t reach out enough. And when they do, you can quickly accelerate what’s taking place. So that’s really what we’re looking at right now.

Moderator – Hiroshi Esaki:
Great. So thank you. So what role do you see for global IT companies in shaping the future of smart campuses or smart buildings?

Corey Glickman:
So they’re going to play a very key role. Because ultimately, these systems have to live in a digital backbone, right? They have to be digitalized for this to work. So that’s the hyperscalers. This is the Microsofts, right? These are the tool sets that come across there. So global IT, even as we talk about generative AI or other more traditional areas of running systems, think of this: all buildings already run off of systems. We already have systems that look at our economics, that look at our energy, that look at our mobility. However, as we look at sustainability, and we look for these efficiencies that Dr. Esaki was talking about, we have to build upstream and downstream connectors to those backbones. When he talked about BMW: unless it works in their centralized system, they’re building attachments. They’re not rebuilding things from scratch. And that’s what’s important for this consistency. Because it’s this specialized factory approach combined with academic R&D leadership, I think, that really does very well. And I will say that the winning formula that I see right now is what I’m seeing taking place at this table. And what it means is this: if you can take a university academic-led project and look at something like an airport or a controlled part of the city, and you can get a major global IT service provider with that, and with the policymakers, you have the chance to have that winning formula.

Moderator – Hiroshi Esaki:
Thank you very much. So back to the University of Tokyo’s case. I think you have already realized more than a 30% decrease in energy consumption, right? So what is the key point? I think you have more key issues in implementing such decarbonization decisions. Do you have any thoughts?

Hiroshi Esaki:
Well, the simple thing is we love technology, we love the Earth, and we love the globe. Also, we really love the students working together; they are the future power to change the world. So that’s an important thing when we have collaboration between industry and academia. In the case of academia, it is not only the senior professors; they don’t have the power anymore, right? The younger people have a lot of power and experience for the future. When I talked with a colleague who initiated a leading universities’ collaboration on such technological hackathons and demonstrations, his slides showed that demonstration is quite important, right? How we show the facts, knowledge, and experience, and how we share those things, is quite important: not only by document, but by real experience. Touching the computer system in a real building or campus is quite important. That is what we shared with Microsoft when we went to the Redmond headquarters office. Engineers and executives should touch the real system. Then they realize what’s going on, and then think about a real, concrete solution; we are not politicians. As my colleague first mentioned, the mistake of the smart city at this point in time is that it is government-initiated, not a multi-stakeholder action. So we must have a multi-stakeholder, agile approach with academia and industry, supported by government. That is the important model we want to share based on practical experience. That is what the IGF should do. The other thing is democratization. That is yet another point my colleague mentioned: not controlled by a single large company nor a large government. The data itself is owned by users, right? So how do we protect that privacy or intellectual property? We must have that kind of collaboration in the case of public-sector infrastructures and private sectors. 
That kind of very careful, very healthy multi-stakeholder discussion about how to manage data privacy and data usage is yet another thing. The important point is that it is not determined by government; it must be determined by multi-stakeholder discussion.

Moderator – Hiroshi Esaki:
So do you want to introduce? Okay. So thank you, Esaki-sensei. Then moving on, I’d like to ask Ishiyama-san. When I heard about the Microsoft Azure Digital Twin, I was very interested, because using IT software means we consume electricity, yet we can also reduce electricity, right? So this is kind of a compliment; this is very interesting. So as Esaki-sensei mentioned, Microsoft is definitely a giant. And if you provide such software to each building, maybe many building owners or developers will worry about that, right? So as a technology governance issue, what is the obstacle in your business field? If you have any thoughts, could you share them?

Masami Ishiyama:
Yeah, thanks for your questions. Well, as Yuta-san said, governance of IT and also of data is very important. We see that not only general IT but also generative AI is now appearing very rapidly. So, as I said, the ownership of data and control of data is really important, even more important than ever. So as Dr. Esaki said, multi-stakeholder decision-making is really important. To do that, we think about the ownership of the data, and that could be the obstacle. Microsoft has said that the owner of the data is the customer, but the multi-stakeholders need to recognize that in order to move forward smoothly.

Moderator – Hiroshi Esaki:
Thank you very much. But as Corey mentioned, as a global smart alliance we are developing green building model policies. I think for many companies, if we have such a guideline or model policy, it is much easier to discuss what the standard is, and much easier to implement such things. So I think we really need to bring such policies to the market. Thank you very much. So we still have three or four minutes. If the participants have any questions, I’d like to ask the speakers. No? Oh, yes, online also. Okay, I can’t see any questions. So maybe after the session you can communicate with each speaker. Okay, could you read this?

Audience:
Taro mentioned, I think it should be science, technology, engineering, medicine. STEM, the education thing. So please feel free to add any question.

Moderator – Hiroshi Esaki:
Okay, so could you go back to the slide? Sorry. So I just want to mention some points. I know that in this venue there are so many experts, and what we discussed today certainly involves a lot of expertise. If you want to join the G20 Global Smart Alliance Network, let me know. There are so many experts, policy makers, academics, and private-sector experts joining our project, and they are discussing what policies should be implemented in cities. You are always welcome, so let me know if you want to join. As a conclusion, thank you very much for participating today. What an enlightening session we have had. From understanding the Smart Campus Blueprint to discussing the role of cutting-edge technologies, it’s clear that the future of education infrastructure is on a promising path. A special thank you to our esteemed speakers for sharing their knowledge and to all attendees for their active participation. We don’t have any more questions, so let’s carry forward these learnings and insights to make our campuses smarter and our world a better place. Thank you, and see you in the next session of IGF. Thank you very much for coming.

Audience

Speech speed

114 words per minute

Speech length

26 words

Speech time

14 secs

Corey Glickman

Speech speed

161 words per minute

Speech length

2403 words

Speech time

898 secs

Hiroshi Esaki

Speech speed

118 words per minute

Speech length

1826 words

Speech time

931 secs

Masami Ishiyama

Speech speed

125 words per minute

Speech length

1653 words

Speech time

796 secs

Moderator – Hiroshi Esaki

Speech speed

143 words per minute

Speech length

2013 words

Speech time

844 secs

RITEC: Prioritizing Child Well-Being in Digital Design | IGF 2023 Open Forum #52

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

During the discussion, different concerns and questions were raised regarding various aspects of children’s digital life. One of the concerns highlighted was the issue of tokenism and the need for genuine child participation. The Belgian Safer Internet Center, which operates under the InSafe umbrella, was mentioned as actively working towards achieving a true representational group of young people. The sentiment expressed was one of concern, aiming to avoid using children as tokens and instead promoting their meaningful involvement in decision-making processes.

Another concern raised was the need to provide guidance on the evolving capacities of children. Jutta Kroll from the German Digital Opportunities Foundation mentioned the existence of a special group on age-appropriate design within the European Commission, indicating a recognition of the importance of tailoring digital content and experiences to suit children’s developmental stages. The sentiment expressed in this regard was one of questioning, suggesting a desire to better understand how to navigate the evolving digital landscape in a way that benefits children’s well-being and educational development.

The importance of involving parents in their children’s digital life was also emphasized during the discussion. Amy from ECHPAD International highlighted the importance of parents being actively engaged in their children’s gaming and digital experiences. Additionally, Carmen, a parent, expressed the view that online life is not a necessity for children, underscoring the critical role of parental education in safeguarding their well-being in the digital world. This sentiment emphasized the need for parents to stay informed and involved to ensure their children’s online safety and well-being.

Another worrisome issue identified was the lack of pedagogical understanding among developers. Carmen expressed concern regarding developers’ limited experience in educational theory and practice, highlighting the importance of incorporating pedagogical expertise into the development of digital content and platforms aimed at children. This worry reflected the need for developers to have a deep understanding of how children learn and develop so that digital resources can effectively promote quality education.

Finally, the speakers questioned the next steps to address these concerns. David from the Association for NGOs Insurance Group in the Asia-Pacific region specifically raised the issue of creating guidelines for parents, educators, and workers. This standpoint emphasized the necessity of establishing clear guidelines and engagement strategies to support parents, educators, and those working with children in effectively navigating the digital landscape and ensuring children’s well-being and educational growth.

Overall, the speakers stressed the importance of promoting online safety and well-being for children. Genuine child participation, appropriate guidance for evolving capacities, parental involvement, pedagogical understanding among developers, and the creation of guidelines for parents, educators, and workers emerged as key areas of focus. These observations highlighted a collective desire to ensure a positive and supportive digital environment for children, where their rights, education, and safety are prioritized.

Shuli Gilutz

Digital play is increasingly recognised as a crucial component of children’s well-being and development. Research has shown that digital play can provide positive experiences that promote children’s overall welfare. It is considered one of the most important ways for children to interact with the world. However, there is a pressing need for the design industry to prioritise the creation of safe, engaging, and beneficial digital play experiences specifically tailored for children.

Many designers are eager to create positive and empowering digital play experiences for children, but they lack the necessary training and guidance to do so effectively. Collaborative efforts are underway to work with designers and understand their requirements. The aim is to develop a comprehensive guide that will enable them to create positive digital experiences for children.

The project is built upon research, and the current stage involves consulting with designers from companies across the globe. The ultimate goal is to provide businesses with a guide that is grounded in real data about children and technology. The team hopes that this will dispel myths and misconceptions surrounding the topic and educate designers on best practices.

Creating a guide for businesses based on real data about children and technology is crucial in ensuring that child-friendly digital experiences are prioritised. By aggregating information from global companies, the team plans to develop a prototype that will serve as a valuable resource for designers. The final product, expected to be released in the autumn, will provide designers with the knowledge and insights necessary to create safe and beneficial digital play experiences for children.

In addition to the design industry’s responsibilities, there also needs to be a broader shift in designing for children. Instead of viewing it as a mere regulatory requirement, there should be an understanding that this is the future. Designers must embrace the challenge of creating a fully holistic environment for children to thrive in, focusing not only on safety but also on their overall well-being.

Companies that fail to adapt their design approaches to meet the needs of children may ultimately be left behind. The industry must pivot its perspective and prioritise designing for children. This shift in approach is vital to ensure that children have access to digital experiences that enhance their development and well-being.

Beyond the design industry’s role, parents also play a crucial part in supporting their children’s digital play experiences. Engaging in digital games with their children helps parents understand the gaming world and actively participate in their children’s activities, thereby contributing to their well-being. Furthermore, direct discussions between parents and children about concerns and motivations are proven to be effective in helping children understand the importance of activities such as playing outside or balancing their digital and non-digital pursuits. These conversations enhance children’s understanding and overall well-being.

In conclusion, digital play is a critical aspect of children’s well-being and development. The design industry needs to prioritise the creation of safe, engaging, and beneficial digital play experiences. Efforts are underway to develop a guide based on real data about children and technology for businesses to ensure child-friendly design practices. There needs to be a broader shift in designing for children, viewing it as the future and creating a fully holistic environment. Companies that fail to adapt may be left behind. Parental engagement and direct discussions with children are essential in supporting their well-being.

Adam Ingle

LEGO Group is committed to prioritising the well-being of children in their digital products. They actively avoid incorporating addictive qualities or manipulative design patterns into their games. By doing so, LEGO ensures that children can engage with their digital experiences in a healthy and balanced manner.

In addition to designing responsible digital products, LEGO Group is taking the initiative to improve overall digital experiences for children. They are collaborating with UNICEF to drive this effort and aim to elevate industry best practices. By working together with other industry leaders, LEGO Group intends to create a coalition that will promote better digital experiences for children worldwide.

Recognising the online safety crisis, LEGO Group is actively promoting proactive measures and cultural change within the digital industry. They understand that the failure to invest in children’s well-being can lead to potential harm and a loss of trust in the digital industry as a whole. By addressing the crisis head-on, LEGO Group demonstrates their commitment to protecting children and building a safer online environment.

Adam Ingle, a prominent advocate for children’s well-being, believes in a holistic approach to digital design. He emphasises the importance of not only focusing on safety and protection but also nurturing children’s creativity and imagination. Ingle argues that an overemphasis on addressing online harms could result in sterile digital environments. He believes that a certain level of flexibility and age-appropriate design is necessary to create engaging and beneficial digital experiences for children.

Moreover, Ingle calls for governments and policymakers to establish regulatory frameworks that incentivise the development of productive digital experiences for kids. He highlights that current discussions primarily revolve around addressing online harms and urges for a broader perspective that considers the impact on children’s well-being. Government intervention, according to Ingle, can play a crucial role in fostering child well-being in the realm of digital design.

To implement age-appropriate design, LEGO is actively applying the AADC (Age Appropriate Design Code) approach. This method allows tailoring privacy policies, default settings, and aspects of game design to cater to the specific social interaction needs of different age groups.

When it comes to teenagers, finding the right balance between their social connections online and the associated risks is crucial. It is acknowledged that some level of social connection is necessary for teens’ well-being, as it enables them to form organic friendships online. However, measures can be implemented to mitigate the risks associated with teens’ online interactions, such as disabling certain features for younger age groups and promoting online safety education.

In conclusion, LEGO Group’s commitment to prioritising children’s well-being in their digital products is evident through their conscious design choices and collaboration with UNICEF. They actively address the online safety crisis and advocate for a holistic approach to digital design that balances safety, protection, creativity, and imagination. Adam Ingle’s call for regulatory frameworks and the promotion of age-appropriate design further underscores the importance of creating productive and beneficial digital experiences for children.

Sabrina Vorbau

The strategy for a better internet for kids is being revised through a co-creation approach. This approach involves actively involving children by consulting them across Europe. Open discussions with adults, mainly focusing on parents and teachers, have also taken place. Additionally, experts from various fields including industry, academia, and policymakers from the national level have been invited to provide their insights. This collaborative effort ensures that the revised strategy takes into account the perspectives of all key stakeholders involved.

The importance of involving young people in policy decision-making is emphasized. By including children and young people in all aspects of the decision-making process, it ensures that the policies and tools implemented effectively meet their needs. This can be achieved through various means such as conducting consultations, involving young people in expert groups, and actively cooperating with them in organizing events like the Safer Internet Forum. This approach recognizes the expertise that young people possess and highlights the significance of their input in shaping policies that concern them.

Meaningful youth participation is considered vital in the pursuit of better internet policies. While progress has been made in this area, more efforts are needed to ensure that children and young people are involved as part of a multi-stakeholder approach. It is crucial to see young people as experts in their own right, rather than merely as a necessity in decision-making processes. By acknowledging their expertise and actively involving them, it maximizes the positive impact of policies and initiatives implemented.

Furthermore, there is a call for more stakeholders, particularly industry and policymakers, to implement the policies that have already been established. The BIK+ (Better Internet for Kids) strategy, which is seen as a significant policy framework, plays a crucial role in ensuring children’s well-being. It is essential that this policy is effectively utilised and applied to achieve its intended goals. By implementing these policies and involving key stakeholders, including industry and policymakers, a more robust framework can be created to address the challenges and concerns surrounding children’s well-being in the digital world.

In conclusion, the co-creation approach to revising the strategy for a better internet for kids involves the active involvement of children, consultations with adults, and engagement of experts from various backgrounds. The inclusion of young people in policy decision-making processes is essential to ensure that their needs are effectively met. Meaningful youth participation, along with the implementation of existing policies, particularly by industry and policymakers, is crucial for achieving a safer and more inclusive internet environment for children. The BIK+ strategy sets the framework for addressing children’s well-being, and it is vital that it is adequately implemented.

Josie

The session concentrated on the significance of prioritising children’s views and well-being in the digital environment. Shuli Gilutz, a renowned expert in child-centred design with over 20 years of experience, discussed the power and importance of designing technology that has a positive impact on children. Gilutz stressed the need to focus on three key principles: protection, empowerment, and participation.

Adam Ingle, the Global Lead for Digital Policy at the LEGO Group, explained the motivation behind prioritising this issue. He argued that businesses have a responsibility to uphold high standards of safety, privacy, and security in their digital products. Ingle advocated for policies that give children more agency online and highlighted the potential risks associated with neglecting to invest in the well-being of children.

Professor Amanda Third introduced the Ritech Responsible Innovation in Technology for Children framework, which aims to create a digital world that prioritises children’s well-being. She emphasised the importance of conducting research centred around children and their experiences in the digital age. Additionally, an ongoing research project on responsible innovation in technology for children was discussed.

The session concluded with panelists sharing their thoughts on taking action to achieve positive design for children’s well-being. They underlined the need for collaboration between government, industry, and young people, as well as the importance of taking tangible steps in the pursuit of this vision.

In summary, the session provided valuable insights into the importance of prioritising children’s well-being in the digital environment. It highlighted the role that design, policy, and research play in creating a positive and secure digital space for children.

Amanda Third

The analysis examines various aspects of children’s digital play experiences, covering topics such as wellbeing, safety, participation, and design. It explores both positive and negative elements, providing a comprehensive understanding of the subject.

On the positive side, the analysis highlights the diverse and enjoyable experiences that children have with digital play, emphasising the joy and connection it brings. It also acknowledges the positive impact of creativity on children’s wellbeing, underscoring the importance of involving children in design processes.

In terms of safety, the analysis recognises that children face challenges online, including encounters with inappropriate content and potential safety issues. It emphasises the need for measures to protect children from these risks.

The analysis also explores the concept of child participation, noting its role in developing protective capabilities in children. It stresses the importance of reaching out to vulnerable and diverse children through partner organisations with expertise in engaging these groups.

A key focus of the analysis is the development of a wellbeing framework that supports the enhancement of children’s wellbeing through digital play. This framework, based on data analysis and children’s experiences, proposes indicators and measures to evaluate the impact of digital play experiences. Ongoing research involves testing the effectiveness of this framework through real-world digital play experiences.

Additionally, the analysis emphasises the importance of understanding children’s digital play experiences comprehensively. It advocates for actively listening to children and incorporating their perspectives into the design and evaluation process. This approach ensures that the framework and subsequent considerations reflect children’s actual experiences and needs.

The analysis also touches on the rights of the child as a guiding principle in this context, suggesting that any actions or decisions should be taken consciously and with a strong commitment to upholding children’s rights.

In conclusion, the analysis underscores the significance of children’s digital play experiences, providing insights into both the positive and negative aspects. It emphasises the need to ensure children’s safety, enhance their wellbeing, promote their active participation, and consider their diverse needs. Through ongoing research and the development of a wellbeing framework, the analysis aims to provide evidence-based solutions that contribute to the optimal design and enhancement of children’s digital play experiences.

Session transcript

Sabrina Vorbau:
really a co-creation approach where we tried, where also the European Commission endorsed, to really make it a multi-stakeholder approach when we are talking about better internet for kids. Together with our colleagues from the INSAFE and INHOPE networks, some of them are sitting here, or the Safer Internet Centres, so really the contact point for us at national level, they did a consultation with children and young people across Europe, I think more than 750 children were consulted on their needs, on their priorities, and this was really the foundation of the revision of the strategy, to really take it to the young people first, to understand what they’re doing online, what they’re concerned about, but also what they enjoy online. In addition to this, we then also did an open consultation with adults, so mainly focusing on parents and teachers. This went mainly through social media, we developed a survey, we also translated the survey into all the EU languages, and we gave the opportunity to teachers and parents to complement what the young people had already mentioned to us. And then the last stage was of course also to invite other experts to reflect on what should be included in the policy, so that was of course industry, but also academia and policymakers from the national level. So we can already see that the process of revising the strategy really happened with everyone around the table, including children and young people. And then last May, the new strategy was adopted, and it really puts children and young people at its heart and front. It’s based on three pillars, child protection, child empowerment, and child participation. And I think especially pillar two and pillar three are really, really important. 
We do believe, and there’s really great endorsement and support from the European Commission to make sure that really young people are part of the action, that they’re considered as experts as well, that they have a seat around the table when decisions are being made, but also when new technologies are being developed. So it really encourages stakeholders to make sure when they work on Better Internet for Kids related policies or tools to really invite and include the young people in this process. Of course it’s a policy, it’s a policy document on Better Internet for Kids, so it was also very important to create it in such a way that children and young people are aware of what is written in the strategy, aware of their rights. So this is why we also worked on a youth and child-friendly version of the strategy. I brought one copy here, but you can find it online, which also really happened in a co-creation process with the young people. They advised us on the wording, how this child-friendly version should be formulated. They also advised us on the colors they chose, and they said okay, these icons, these colors, this is really like what attracts us, what we like. What was also interesting, they advised us to put a sort of glossary at the end to better explain some terms. I think for us the term policymakers, we all know what that means, but for the young people it was not clear, they didn’t understand what that meant. So that was really refreshing and helpful for us to really understand how we should go about it. This is also translated once again, because that’s also really important. Of course the common language is English, but we really want to reach young people at national and local level. So we also, with the help of our colleagues at the Safer Internet Centres, made sure this is translated into all the EU languages. 
What has happened since then, almost a year on, when it comes to implementation on our side, and again this is with support of the European Commission, is that we really try to include young people in all our actions. When, for example, we are doing consultations with stakeholders, when we form expert groups, we are inviting young people to be part of these groups. Those young people we are working with are young people that are working at national level together with the Safer Internet Centres. They are typically between the age of 13 to 18, 19 years old, and they have the opportunity through the Safer Internet Centres to also get involved in the work we are doing. Maybe I conclude with a very tangible example. Every year we are hosting our annual conference on behalf of the European Commission, which is the Safer Internet Forum, and what happened last time was that for the first time we involved the young people in the whole development process of the conference. We had a small group that we worked with really on the program. We discussed what should be the key topic of the conference, what should be the slogan of the conference, how should the visual identity look, and what should we do on this day, what kind of sessions do you think would be useful, what do you think works when engaging with stakeholders, who should we invite to speak at the conference. And I think this was a very, very refreshing process, and I think that’s also the point we’re trying to make, to really try to involve the young people from the beginning, from the early stages on, and not give them a finalized document or a finalized tool and a policy and say, okay, this, please use this, we feel it’s useful for you. So we have to co-create with children and young people and not for them or to them. So I think I conclude it here. Thank you. Thank you so much, Sabrina. I think we’ll

Josie:
return to a few of the concepts you’ve introduced. Firstly, you know, the three pillars of the strategy, protection, empowerment, participation, I think really speak to the spirit of this project, but also the importance of prioritizing children’s own views. And we’ll hear from Amanda about the first phase of this, which really embodied that, I think. For our final perspective to complete the triangle for this first part, it’s such a pleasure to introduce the newest member of our UNICEF team. Shuli Gilutz is a global expert in child-centered design with over 20 years of experience working both in industry and academia, leading UX research, design, and strategy of digital experiences for children and families. In the past decade, Shuli has served as a Google Launchpad UX mentor, a teaching fellow at Tel Aviv University, and a founding board member of the Designing for Children’s Rights Association, and is now a member of UNICEF’s Business Engagement and Child Rights team. So welcome, Shuli, taking us from the policy or government perspective to the industry perspective. Governments, as we know, have an essential role in creating the enabling environment for businesses to respect children’s rights. The actions of industry itself are another essential piece of the puzzle when it comes to prioritizing child well-being in the digital environment. Some of us in the room might be wondering, you know, why are we focusing on design specifically? What does this mean? Can the design of digital experiences really matter for children? And is good design possible? And what’s the power of designing positive technology for children? It would be great to hear your views on that.

Shuli Gilutz:
Thank you. Good morning, everyone. It’s great to see everyone here, and thanks for that, Josie. It’s always great to start talking about children being part of this, but after we hear from children and we hear their need for this, we really have to find a way to make impact in a broad sense. And regulation, legislation, and policy are important tools in children’s positive digital play experiences, and extremely important in guiding and limiting industry in protecting children online. However, in digital play, impact goes beyond mitigating harm. And while that is still the baseline and critical, research has shown, and we’ll hear more about it soon, that digital play can afford positive experiences that promote children’s well-being in different ways. And that is really what we’re trying to do and reach out to companies to help them achieve this. So I’d just like to mention a few terms we’re gonna all refer to so you know what we’re talking about, because they can be used in different ways. So there are many digital experiences for children online. Why do we talk about digital play? I mean, children do a lot of things. So first of all, play is one of the most important ways in which children interact with the world and develop essential knowledge, skills, and experiences. That’s also why it’s a child’s right, and everybody here knows that. And children treat digital play the same way they treat physical play. They don’t make those differences. That’s for us, the older generations. And they expect the same safety and joy they have from all the physical play. And of course, that’s not the case, as we know, because it came in later. So we want to help create the environment for them by guiding industry to do so. And RITEC, this project, looks at children’s well-being. So we define children’s well-being with a spotlight on children’s own lived experience. So their subjective experience with digital play. How do they view it? 
What makes it a good experience or a bad experience for them? We found, talking to children, that safety and security is key. But there are also additional outcomes that make up well-being in the eyes of children when it comes to digital play, like empowerment, social connection, competence, and creativity. In many cases, digital play is a critical lifeline for children’s well-being, enabling all these in a way no other context can. So when we talk about good design in RITEC, we talk about where designers and industry can help support these interactions and that kind of thriving for children. And most designers today want to create a positive and empowering digital play experience for children, but they don’t know how. I mean, they haven’t been trained in child rights or child psychology or in any way. They’re just designers. And they would like to do the right thing. So this is a complementary piece to policy work, which is like a top-down approach. We were looking at a bottom-up initiative to give designers and industry the tools to create positive digital play experiences and promote the benefits that those have for children. What we’re doing now is working with designers to understand their needs and develop a guide for business that they can implement easily in their design process. To create online experiences that are safe and private and also connective, creative, and that expand learning, competence, curiosity, and creativity. And of course, fun, exciting, joyful, and inspiring. Thank you.

Josie:
Thanks, Shuli. And that’s a great segue to the second part of the session. And next slide, please. Where we will dive in a little bit more to this particular project, RITEC, Responsible Innovation in Technology for Children. And I won’t give a long preamble, only to say that this is the question that we’re reflecting on. How can, practically, businesses and policymakers create a digital world that prioritizes the well-being of children and maximizes the opportunities and the potential for positive impact? And with that, I’d like to introduce the next speaker, Adam Ingle. Next to me is the Global Lead for Digital Policy at the LEGO Group, where he helps LEGO maintain high standards of safety, privacy, and security in their digital products, and advocates for policy that empowers children online. Previously, Adam led the Information Commissioner’s Office Emerging Technology Unit, assessing the data protection impact of emerging technologies, and advised both industry and government on how to mitigate privacy risks. Can you tell us, Adam, a little bit about what motivated the LEGO Group to prioritize this topic? And from your perspective, what are the potential pitfalls associated with businesses failing to invest in getting it right when it comes to designing for children’s

Adam Ingle:
well-being? Thanks, Josie. So, at LEGO, kids are at the center of everything we do. You know, they really are the DNA of the company. It’s wonderful to see a child here listening to this talk as well. I mean, really, we’re associated with our physical bricks. That’s what everyone knows us for. You know, even at the booth that we have out in the Exhibition Hall, everyone comes up to us and says, what is LEGO doing here? What is LEGO doing in the digital space? And yes, I mean, we have this great history of being there in physical play, but we also want to be where kids are. And increasingly, that is online. And, you know, we also need to carry over our commitment to learning, our commitment to safety, our commitment to child well-being from the physical to the online world. And while we’ve been online and, you know, building games and building digital experiences for many, many years now, we want to understand what best practice is. And that isn’t just best practice in safety and protection, that’s best practice in enabling children to grow, to learn, to thrive online. But you can’t just make that out of thin air. You’ve got to do the research, you’ve got to do the hard yards, you’ve got to work with fantastic people like Amanda and Shuli and others who have, you know, deep expertise in these areas. So that was really the impetus for starting this RITEC project. It’s, along with UNICEF, to understand, you know, fundamentally at a research level, what are the building blocks that support child digital well-being? And how can industry really commit to building these products in a way that’s empirical and measured and sustainable? So we want to be the flagship digital service, the flagship kind of industry provider building well-being into our digital products. We want to lift industry best practice, we want to build coalitions in this space. 
We all know that the, and I feel like this phrase has been said many many times at this conference, but the internet is not designed for kids. Digital experiences aren’t designed for kids. They should be. That should be the future. And I think there’s increasing consensus around this, so we want to drive that alongside UNICEF through this project. And I think it all starts with really embedding these things in our company first. So for example, we’ve already begun the process of, you know, internalizing the initial RITEC findings. So we have a responsible child engagement team. They actually run this project internally for Lego, but they’re also a horizontal team that consults on child rights, child well-being, child issues across all digital design and gaming experiences at the Lego Group. We’ve got responsible digital engagement managers, we’ve got responsible gaming managers. They’re all looking at the RITEC framework, and as our product teams build and develop experiences for kids, they’re consulting with these managers whose mandate is child well-being, making sure that these aspects are reflected in our digital design experiences. We have a responsible gaming framework, which is a kind of must-check box for any games that we make that includes healthy game design. So that talks about, you know, how do you build games that help children emotionally regulate, that don’t have addictive qualities, that don’t have negative enforcement cycles, that don’t have manipulative design patterns. So that’s already integrated into our gaming experiences there. We’re also building kind of digital design cards and digital design principles. So for example, these build on not just the RITEC work, but some of the work that’s come out of the Digital Futures Commission in the UK. So they have kind of key tenets like how do you ensure safety, how do you allow for open-ended play, how do you enhance imagination and creativity. 
So kind of building on those best practices, as well as the findings from the RITEC framework. And we’re also, you know, wanting to actually measure our company’s performance and the gaming performance that we have on well-being. So we’re building a well-being KPI at Lego to actually push product teams, developers, to meet well-being outcomes as a criterion for success. Now that’s difficult to do, and we’re in the process of doing that, but that’s, you know, a key aspiration of actually performing and measuring against well-being. And I think I can share briefly kind of an outcome from the initial research. So we used one of our games in the RITEC phase 2 research, Lego Builder’s Journey. And this is kind of a challenging puzzle game with a strong narrative. And initial findings kind of associated it with, you know, the experience of increased competence, relatedness, and belonging that kids had. Because they were able to interact with Lego minifigures, and they were empowered to explore the game, and were rewarded for kind of success. And they kind of had this open and imaginative play experience based in a Lego world. Because they had open-ended play, because they had this sense of autonomy and agency, that did increase kind of findings of competence. So we’re already kind of seeing the existing games designed and measured against the framework, but I think when this becomes much more formalized and robust, you know, and we build it in, you know, we can really augment and enhance those outcomes. So I think that’s all what we’re doing internally and I think, you know, the success of it so far and the sense of positive feedback we get is itself a reason to do it, but it’s also just the right thing to do. 
I think the pitfalls of industry not doing this are, one, creating potential for extreme harm for kids at this really kind of crucial developmental age, but two, just losing a sense of trust, and we already see a massive trust deficit in the digital industry at the moment. You know, there’s an online safety crisis happening at the moment. We’ve seen, you know, reports from the U.S. Surgeon General talking about teen mental health crises, issues across the board, and there’s a regulatory response happening to, you know, ensure that we mitigate some of these harms. But, you know, it’s not going to solve the challenge if regulation just gets handed down and industry is forced to do it. We need to be proactive, and you actually need a cultural change in industry in order to ensure that, you know, the harms are mitigated, and not just mitigated, but the well-being is enhanced. So that’s really what we’re trying to do. I’ll leave it at that. Thank you, Adam.

Josie:
Really interested to hear that experience of how do you build this into incentive structures within the company, you know, making it a KPI is a really interesting example and I’m sure we will have time to unpack parts of that in the discussion, but also a nice segue. You mentioned the framework. You might be thinking, but what framework? Well, next slide, please. This is, we will hear a little bit about this piece. It’s a little bit difficult to read on screen, but it’s reproduced in the handout in front of you and this massive banner. I would like to introduce Professor Amanda Third, who is a professorial research fellow in the Institute for Culture and Society, co-director of the Young and Resilient Research Center at Western Sydney University and a faculty associate in the Berkman Klein Center for Internet and Society at Harvard University. She’s an international expert in youth-centered participatory research and has led child-centered projects to understand children’s experiences of the digital age in over 70 countries, working with partners across corporate, government, not-for-profit sectors and children and young people themselves. It’s a real pleasure that you’re able to join us. Amanda, can you tell us a little bit about this framework, what we mean by this phase

Amanda Third:
one research and what does this tell us? Yeah, sure. Thank you, Josie, and good morning, everyone. It is really nice to see everyone, especially the younger members of our audience here this morning. Before I leap into talking about the framework, I would just begin with a little reflection that it’s been so nice over the last few IGFs to see our conversations progressively mature and move away from thinking only about protection and to think about protection and participation in tandem. It’s really, really refreshing and I’m really pleased that wellbeing is finally making a big splash on the agenda for children’s digital practices because, of course, the work that I have done and many others have done too shows that actually when children engage with digital media, whether that is scrolling through videos to watch, choosing which games to play or who to interact with online, that question of their wellbeing is really top of their mind constantly. They’re constantly reflecting on whether or not this is good for me at some level and they make their choices accordingly. So, it’s really time for us to take wellbeing very seriously. So, in this project, we were very excited to be able to work with almost 400 children across 13 countries, predominantly in the Global South, and to re-analyse the data from 30,000 survey participants to work out how children’s digital media practices impact their sense of wellbeing and what we can do to really augment their wellbeing through good design. So, basically what we did was we used a creative and participatory-based workshop method to engage with children in languages that they speak in their own contexts to really dig deep into their experiences of digital play. 
And what we found from that was that children have got very kind of diverse experiences of digital play, but one thing that really stood out across the sample was that digital play brings children a lot of joy and a lot of connection with others, and that there’s really a lot for us to work with there in terms of augmenting their experiences online and supporting their wellbeing. Also though, and as Aditi was gesturing towards in her opening words, children really do understand that there are limits to digital play. They’ve got a very strong sense that their safety is at stake. They do have unpleasant experiences, and actually what really came through as we spoke to them this time around is that diverse children meet with different kinds of obstacles online, discrimination, barriers to their good participation and culturally inappropriate content, things like this. So really we do need to pay very good attention to diversity online. They overwhelmingly talked about how wonderful digital play experiences are for connecting with other people online, and I think this is the reality for children. They do interact with other people online. They mostly interact with their friends. They occasionally interact with strangers, but, you know, those social dimensions are really things that we need to foster because they bring children a lot of joy, and that has positive impacts for their wellbeing. Safety for them is also a priority. So they are calling on governments and in particular our private enterprise to really safeguard their wellbeing online. They want us to do more to make sure that they are protected, and this includes everything from the most serious risks of harm right through to things like the ways that they might encounter advertising in those settings. 
They also talked about how games are one way for them to express their creativity and sort of talked about creativity as an integral part of their digital play experiences and clearly creativity comes along with a whole range of benefits from, you know, sort of feeling empowered and to take action to express oneself. These are all things we know are positively correlated with wellbeing. So, these are some of the things that came out of the interactions with children and then what we did was we sort of distilled, analysed this in conjunction with the survey data and we distilled it into this wellbeing framework that you see in front of you. So, the eight pillars of this interim framework and I stress that it is an interim framework, it is going to be revisited shortly but these are, if you like, the design principles that we need to take forward and to use to shape the digital play experiences that children have online and you can see they very closely correlate with the kinds of experiences I’ve just very quickly summarised for you. So, from here to the other thing that we’ve done to support this framework is we’ve developed a series of indicators and sample measures that we can then use to measure whether or not digital play experiences are hitting the mark. So, this is a sort of an attempt to, if you like, embed children’s experiences at the heart of our measurement processes to make sure that we are really, really making the impacts we intend. Okay, and so I think, you know, there’s still more work to be done here. This is only phase one that we’ve completed so far. 
We’re about to complete phase two, but I think what’s really come through very strongly is, as Sabrina was pointing to, well actually all of us have pointed to in different ways, the importance of engaging children in these design processes. And I think if you’re here in this room you’ve already got some inkling that this is important somehow, and I know I’m preaching to the converted here, but what I would urge you is to really stay attuned to the meanings of engaging children and young people. Let’s not get lazy about the ways that we think about participation. Let’s not turn it into a tick box. Let’s make sure that we continue to reflect on our practices, reflect on what value children can bring to these processes, and really continue to refine the ways that we do these things over time, because I think by doing so, not only do we get better results in terms of the design of products, but we also build the next generation of change makers. Thank you.

Josie:
Thank you so much, Amanda. If I may, I have a quick follow-up question which is to ask you a little bit about, you know, we keep saying phase one, phase two, research and of course research takes time and the project is ongoing but can you tell us a little bit about what does this phase two research actually consist of and what can we expect to see? Yes, so thank you, Josie and I’ll make it

Amanda Third:
quick because I know we’re under pressure but phase two is a new phase of research carried out by a range of different institutions around the globe, interestingly. So, the Centre for the Digital Child in Australia, New York University and the University of, oh I’m going to get this wrong, Sheffield, thank you. That was my instinct but, you know, I’m a little jet lagged and what they are doing now is they are taking the framework and testing that against a particular, you know, a set of real world digital play experiences and they’re doing that in a range of different ways using different methods to really understand how children’s experiences play out and how then we might need to refine the framework accordingly. So, we’re doing everything from measuring, you know, sweat and heart rates right through to sort of like the more ethnographic style of research which is talking to children about their experiences as they play and we’ll integrate all of that into a revised version of the framework and roll that out with designers through a range of

Josie:
initiatives. Great, thank you so much. We are coming close to the section where we will have a bit of interaction and invite you to chime in with questions but before we do that, very briefly, Shuli, can you tell us just for those in the room what can they look forward to in terms of the next steps and how they can be involved? Yes, so as Amanda mentioned, we

Shuli Gilutz:
really started this project based on research. We want to base everything we do on real data. There are a lot of, you know, myths going on around children and technology. But after we do that, we want to take that into practice and use that for impact. So the stage we’re working on now, in parallel to summarizing the research, is creating a guide for business. In order to create the guide for business, it’s not just about finding a way to summarize all the research, but it’s really to create something that businesses will use, and we’re talking about executive levels but also, like we mentioned, designers in practice. So what we’re doing is actually talking to designers from companies that create digital play all over the world. It’s very important for us to reach out and get a diverse group of companies, not only ones that create in English for English-speaking kids, but a large sample from different countries, and we’re working with country offices all over the world to do that, and talking to designers about their challenges and needs in designing for children. We’re going to have all that information aggregated and find a way to create a guide for them which will be something applicable for their design process, design tools, and assessment for applying the RITEC framework. So the next stage, after we finalize all the information from the companies, is actually to create kind of a prototype for the designers and test it, pilot it with different companies that are designing different digital experiences, and then hopefully by next fall we’ll have something to show everybody that has been developed together with all these companies from all over the world. If you would like to chat more about that, please visit us at our booth and I’m sure we

Josie:
will be able to discuss at greater length. We are challenged through these sessions to think about real actions and concrete things. So, to wrap up the panel part, I’d like to invite each of us, one by one, in 10 or 20 seconds, to name just one action that you think should be prioritized by any stakeholder group, whether that’s government or industry or young people, when it comes to achieving this vision of positive design for child well-being. Then we’ll throw it open, but this will really help us, I think, try and distill everything that we’ve spoken about. May I invite Sabrina to start?

Sabrina Vorbau:
Yeah, sure. I would say meaningful youth participation. As Amanda said, progress has been made, but more needs to be done. So I would wish for a multi-stakeholder approach where we consider children and young people to be an equal part of it, to consider them really as experts and not as a necessity. And coming back just to the BIK+ strategy, I think it’s a very great piece of policy. It sets the framework; it’s there. So I would encourage all the other stakeholders, especially industry and policymakers, to really implement it, to put it into action. It’s there, it’s meant to be used, so I think that’s the only way forward.

Josie:
Fantastic, thank you. Adam?

Adam Ingle:
I’m sure Amanda or Shuli will cover off the industry expectation, so I’ll be a bit policy-wonky and say that I, and I think Lego, would really welcome governments and policymakers actually recognizing the need for a holistic approach to digital design. Right now there is a lot of discussion, and rightly so, around addressing online harms, but an over-focus on harms can lead to sterile environments. We actually need to build experiences, and have regulatory frameworks that incentivize experiences, that allow us to tick off all eight competencies. Safety and protection is one, but there is also creativity and imagination, and you need some level of flexibility in design in order to achieve that. So: governments thinking about how to holistically increase child well-being in digital design, and creating frameworks that enable companies to design like that.

Shuli Gilutz:
Thanks. I’ll talk about companies and industry. I think there needs to be a shift away from looking at designing for children as just something that is regulated, that they need to do by law for different ages, complying with different frameworks like GDPR or COPPA or others. The shift should be to understanding that this is the future. There is no going back. We have to design a fully holistic environment for children to thrive in, not just to be safe in, and whoever isn’t doing this will simply be left behind. So I think industry really has to pivot the way it looks at designing for children, and I hope that will happen.

Amanda Third:
Okay, it’s always tough going last on this little tweet-length thing. I would challenge us to continue to really problematize some of the distinctions that we make. Often we pitch protection against participation. We talk about them as two separate things, and I think there’s a lot of value in thinking about how participation breeds protective capabilities. So that would be the first. The second would be to really look closely at young people’s practices, or children’s practices. Sometimes we dismiss their practices out of hand, saying they’re mindlessly scrolling or just mucking around, but actually we need to look closely at those things. There’s a lot going on in those little spaces that supports and sustains their well-being, and there’s a lot of fertile ground there for us to talk about. The last thing I would say, and this is really not tweet-length, sorry Josie, is that design is really, really important. But we’re also investing a lot of hope that design is going to solve a lot of problems. So we should think about what the limits of design are, and where the other pieces of the puzzle need to fit in.

Josie:
Fantastic. Thank you. Thank you to our panelists. Now is the time to please raise your hands. We will have roving microphones, and we’ll take a few questions at once together and then portion them out to two panelists. So

Audience:
let’s start and go around. Please. Hi everybody, I’m Niels from the Belgian Safer Internet Centre; we work under the Insafe umbrella, of which Sabrina is a part. Something that remains a constant struggle for us, in order to avoid using child participation as a sort of tokenism, as I said before, or simply a box to tick: how can we reach a truly representational group of young people? Without a constant focus on this, we reinforce the Matthew effect, where representation can even be a misleading thing, because when only privileged people are being reached, we get the wrong idea about a certain situation. So is there any interesting research, or are there findings or best practices, about this? For example, at the Belgian Safer Internet Centre we’ve been experimenting over the past years: when we were doing trainings with parents, we would allow them to bring their children, a small thing, but one which allows more people to be part of something. But I’m looking for more ideas here, because this stays a constant struggle. Thank you.

Yeah, thank you. My name is Jutta Croll, from the German Digital Opportunities Foundation. First of all, I want to thank you not only for the presentations but for the wonderful approach and project; I really believe in it. My question regards the principle of the evolving capacities of children. You’re talking about designing for children’s well-being, but they are not all the same, and therefore I’m really interested in how that can be done. In parallel to the BIK+ strategy, the European Commission has set up a special group on age-appropriate design which is working in this regard. I would like to know whether these efforts could be brought together. Thank you.

Great question. Thank you. I think we had a few on this side of the room. Oh, we have another mic, yes. Yeah, thank you very much. Well, one of my questions has been stolen, but just to add to Jutta’s point, sorry, my name is Amy, from ECPAT International. To add to Jutta’s point: how do we navigate the difference between platforms designed for children and platforms used by children, and how can we build in an experience that is flexible enough that older users aren’t left with an experience that doesn’t work for them, while children are also supported? And the second thing is about parents. We often hear that research shows the ongoing importance of parents being involved in children’s gaming and online digital lives and accompanying children in that. Does the framework address that in some way, to also bring parents on that journey? Thank you.

My name is Carmen, and I speak in my capacity as a mom today. I come from nuclear physics and internet systems, two different worlds, but I’m also a mom. You just raised my question, because it is very important to involve parents: when I gave birth to my children, they didn’t come out with a phone. We provide them a phone. I actually have two daughters and they don’t have phones; they only use their computer when they are at school. They don’t live online, they live outside. So we’re talking about something we take for granted, that children will live their lives online. They are not going to live only online, and, as you said, there is no turning back, but there is a turning back, because we can walk in parallel ways: the online life and the offline life. If they only live online, we take away all the senses, so we won’t feel pain anymore when we step on a Lego brick. And it’s very nice; I see them playing, and every year they get a lot of Lego from Santa, and this lets the children build up this new world together. And then I was pretty worried to hear that the developers have no pedagogical experience. We expect this from teachers, so I would expect developers to have this kind of knowledge as well; otherwise you just give something to the children and they have to figure it out. And you should educate the parents too, because we see a lot of parents give a telephone to children and think it’s the children’s babysitter, and they don’t explain all the threats that are online, so the children give away data and all these kinds of things. It’s pretty interesting what you said, and I loved your speech when you said you involve the children, which is extremely important, but first you should educate parents as well, because this is not a substitute for a parent. It’s like giving a nice Tesla to children and just saying, go out and drive. It doesn’t work like that. Thank you so much.

Thank you to you. We have five minutes left, but I notice we have one question from behind us, and then, okay. This is David from the Association; we are also working with NGOs in the Asia-Pacific region. My question is basically about the next steps and also the engagement of other stakeholders. First, about the next steps: knowing that right now you are creating the guideline, or the guide, for business and policymakers, and, as another audience member mentioned, parents’ engagement is very important, and also the workers’, I’m wondering, for the next step, will there be any guideline also for parents, as well as for the workers who work closely with children, and educators? That’s the first question. The second question is about phase two of the research: I’m wondering how NGOs and institutes from other regions can be involved in stage two, stage three, or afterwards. So that’s mainly about next steps and our actions. Thank you so much.

We’re going to have to be very economical with our answering. But I’ll be very quick, just to respect that we have online participation as well. We’ve got a young person who’s obviously very passionate and, it sounds like, is doing amazing things in Bangladesh. He’s been quite active in the chat, and he’s wondering how he can become involved in global initiatives like this, to represent children at a global scale. He’s also really agreeing with the points that have been made that a child understands children’s priorities best, so he’s reinforcing the importance of having developers gain this insight and really respect children. That one’s from the online chat. Brilliant, and it’s

Josie:
wonderful to see that engagement coming in live. We have three minutes left, and I know the next session is in the room preparing to get ready. I’m going to have to cluster this into: representation and diversity, and the access we have to children on the research side; evolving capacities; and parents. Any takers? Okay, we’re just doing a round where we’ll each have maybe 30 seconds to answer whichever question spoke to you most. Please, Amanda, and then we’ll go

Adam Ingle:
this way. I’ll be super quick, and before I forget I’ll mention that for everyone who attended this early session, I’ve got Lego loot if you want it at the end, so please come see me and I can give you some stuff. On the evolving-capacities matter, and age-appropriate design and designing for age brackets: there are methods to do this, and it’s already required by age-appropriate design codes; Lego is part of the EU age-appropriate design work as well. We need to think about what an appropriate level of social interaction is for a 10-to-13-year-old or a 13-to-16-year-old. You actually probably need, for teens’ well-being, some level of social connection, to form organic friendships online. However, that comes with risks; you have a contact risk with strangers, so maybe you disable certain features for 10-to-13-year-olds. Equally, the level of communication and the language that you use can be tailored, so privacy policies, default settings, and certain aspects of game design can be tailored in a certain way. So there are, through the wonders of technology, ways to really tailor these different things. I’ve got more to say, but I’ll stop there. Thank you. Thanks. Okay, I’ll quickly talk about the points that were

Shuli Gilutz:
raised about parents. I think it’s critical, and it always comes up, because it’s a big challenge for parenting today. Parenting is hard, and we all appreciate that. It’s hard to teach your children something that you didn’t have when you were growing up, and I think the two main recommendations that come out of a lot of research and work with parents and families are these. Number one: play with your kids. When parents don’t know what’s going on here, or on the Xbox, they tend to buy into all these myths, and they can’t really help and support their children in making good decisions, which is what we’re really doing in parenting. Once you play with the children, and this goes back to child participation, and learn what they’re actually engaging with, then you can have a meaningful discussion. You can see the well-being, you can see the good things, but you can also see when it’s not that great, and then you can really talk about it. That’s very, very important: even if you think you don’t want to play this game, go sit down, play, learn, and have a discussion; you may even enjoy it. And the second one: talk to children about what you’re worried about, and about playing outside. Why do you want them to play outside? That will be a very interesting discussion, and children appreciate it, because children just want well-being. They want to do what’s fun and good for them; they want to be healthy; they want to enjoy themselves. That’s why they still play with Lego and they still play outside: it’s fun, it’s great, it’s good for them. So have those discussions with children, rather than trying to tell them what not to do without really knowing what’s going on in their lives. Thanks. Very quickly, on

Amanda Third:
the representation question: spot on. I would say something controversial and say I don’t think representation is a useful idea when you’re doing child participation. I think what we need to pay attention to is reaching out through partner organizations who have deep expertise engaging vulnerable and diverse children, to reach the children who will give us a diversity of opinion. And then I think we have to really make sure that we are tailoring our methods so that we can speak meaningfully with different kinds of children. That often means letting go of the idea that there is a perfect research method and a perfect way of engaging with children, and instead going with the flow, being guided by your sense of the rights of the child, and moving forward consciously, I guess. Yeah, thank

Sabrina Vorbau:
you. And for the final word: no, I just wanted to thank everyone for the reflections, everyone who posted a question and made a comment. I think it’s our job to try to connect these dots. Children and adults need to have a conversation; we need to approach this as a conversation, not educating to them or for them, but with them. Thank you so much for all your

Josie:
participation, and let’s continue the conversation outside. Yeah, thanks.


Adam Ingle: speech speed 180 words per minute; speech length 1632 words; speech time 544 seconds

Amanda Third: speech speed 171 words per minute; speech length 1713 words; speech time 600 seconds

Audience: speech speed 148 words per minute; speech length 1354 words; speech time 550 seconds

Josie: speech speed 164 words per minute; speech length 1170 words; speech time 427 seconds

Sabrina Vorbau: speech speed 169 words per minute; speech length 1254 words; speech time 444 seconds

Shuli Gilutz: speech speed 178 words per minute; speech length 1433 words; speech time 484 seconds

Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403


Full session report

Albert Antwi Boasiako

Ghana has made significant progress in integrating child protection into its cybersecurity efforts. The country has passed the Cybersecurity Act, which focuses on child online protection. Additionally, Ghana has established a dedicated division within the Cybersecurity Authority to protect children online. This demonstrates Ghana’s commitment to ending abuse and violence against children, as highlighted in SDG 16.2.

Furthermore, Ghana has seen a remarkable improvement in its cybersecurity readiness, with a rise from 32.6% to 86.6% between 2017 and 2020. This progress aligns with SDG 9.1, which aims to build resilient infrastructure and foster innovation.

Research and data have played a crucial role in shaping Ghana’s cybersecurity policies and laws. Through research, Ghana has identified the challenges faced by children accessing inappropriate content online, leading to more comprehensive child protection strategies. This highlights the importance of evidence-based decision-making, as emphasized in SDG 9.5.

However, Ghana has faced challenges in implementing awareness creation programs, particularly in reaching a larger percentage of the population. With a population of 32 million, Ghana has only achieved around 20% of its awareness creation goals. Overcoming this challenge is crucial in combating cyber threats effectively.

Fragmentation within governmental and non-governmental spaces has been a significant obstacle in child online protection efforts in Ghana. To address this, Ghana needs to institutionalize systematic measures and promote collaboration among stakeholders. This will ensure a unified approach and enhance response effectiveness.

Albert Antwi Boasiako, a proponent of child protection, advocates for the integration of child protection into national cybersecurity frameworks. Albert emphasizes the importance of research conducted with UNICEF and the World Bank in shaping cybersecurity policies, aligning with SDG 16.2.

Public reporting of incidents is also essential for maintaining cybersecurity, as supported by Albert. The establishment of the national hotline 292 in Ghana has proven effective in receiving incident reports and providing guidance to the public. This aligns with SDG 16.6’s objective of developing transparent and accountable institutions.

Implementing cybersecurity laws can pose challenges, particularly in certain developmental contexts. Factors like power concentration and specific country conditions can hinder their practical application. Overcoming these challenges requires continuous effort to ensure equal access to justice, as outlined in SDG 16.3.

In the African context, achieving uniformity in cybersecurity strategies is crucial. Discussions on streamlining online protection and combating cyberbullying in Africa are vital for better cooperation and enhanced cyber resilience across the continent.

Ghana supports regional integration for successful cybersecurity implementation, sharing its expertise with other countries. However, fragmentation within the region remains a challenge that needs to be addressed for effective collaboration and coordination in countering cyber threats.

In conclusion, Ghana’s efforts to incorporate child protection, improve cybersecurity readiness, and promote evidence-based decision-making are commendable. Overcoming challenges related to awareness creation, fragmentation, law implementation, and regional integration will contribute to a more secure digital environment for children in Ghana and beyond.

Marija Manojlovic

Online child safety is often overlooked in discussions surrounding digital governance, which is concerning as protecting children from online harm should be a priority. This issue is further exacerbated by a false choice that is frequently posed between user privacy and online safety. This notion that one must choose between the two is flawed and hinders progress in safeguarding children in the digital realm.

The fragmentation within the digital ecosystem hampers progress in advancing child online safety. Marija, a leader in the field, has observed that collaboration and coordination among various stakeholders, including governments, the private sector, and academia, are crucial. However, there is an alarming level of fragmentation that impedes progress and the development of effective strategies to ensure children’s safety online.

One positive aspect that emerges from the discussions is the recognition that failures and learnings should be shared openly. Marija proposes that companies and organizations not only share what has worked but also what has failed. Transparency and the sharing of experiences can lead to better solutions and a more cooperative approach to addressing online safety challenges.

To truly drive change, it is essential to understand the root causes of digital challenges. Marija suggests moving upstream and examining the design and policy choices that contribute to online safety issues. This entails exploring how societal norms and technological design enable child exploitation, gender-based violence, and other online hazards.

Creating a unified digital agenda is crucial for maximizing the benefits of digital technologies and ensuring online safety for children. Misalignment in digital agendas can hinder progress, but engaging in meaningful discussions and sharing innovative solutions can help establish an internet environment that is beneficial for all, particularly children.

An evidence-focused and data-informed approach is necessary to effectively protect children online. Marija emphasizes the significance of testing, experimentation, and the sharing of results to inform decisions and shape policies. Building evidence through a cooperative spirit between different stakeholders is key.

Ghana serves as a unique example where child protection has been institutionalised in their cybersecurity work. This highlights the importance of countries actively integrating child protection into their cybersecurity strategies and policies.

However, it is disheartening to see that the innovation ecosystem is not always inclusive of individuals who require safety measures due to various reasons, including concerns for their well-being. This exclusion reinforces the need to address safety concerns to create a more inclusive and diverse innovation ecosystem.

The intersection of online child safety, inclusive digitisation, and gender balance should not be disregarded. Ensuring online safety is crucial for promoting inclusivity and achieving gender equality in the digital realm.

More work needs to be done in preventing gender-based violence and image-based abuse online. These serious issues require attention and effective strategies to protect individuals from harm.

Additionally, it is essential to challenge and address the prevailing narratives and perceptions of these digital challenges that are rooted in gender norms. Overcoming these deeply ingrained biases and stereotypes is crucial for creating a safer and more equitable online space.

While the internet presents numerous opportunities for young people, their participation and protection must be prioritised. Their experiences and perspectives need to be recognised and incorporated into decision-making processes to ensure their safety and well-being.

Moreover, it must be ensured that existing vulnerabilities, such as the gender divide, toxic masculinity, and extremism, are not exacerbated in the online world. Digital platforms should actively work towards a safer and more inclusive environment that nurtures positive interactions and discourages harm.

Lastly, increased investment in the field of online safety and protection is needed. Governments, industry leaders, and other stakeholders must allocate resources and finances towards robust initiatives that safeguard children from online threats.

In conclusion, addressing online child safety is essential and should not be overlooked within the digital governance discourse. It is imperative to dispel the false dichotomy between user privacy and online safety, overcome fragmentation, and foster collaboration among diverse stakeholders. Sharing successes and failures, understanding the root causes of digital challenges, building a unified digital agenda, adopting an evidence-focused and data-informed approach, institutionalising child protection, promoting inclusivity, challenging gender norms, ensuring youth participation and protection, and increasing investment in online safety are all integral to creating a safer and more inclusive digital environment for all, particularly children.

Mattito Watson

The analysis examines USAID’s strategies and initiatives related to youth and digital experiences. Firstly, it is noted that USAID’s digital strategy was released in 2020, indicating its adoption of digital technologies in development practice. As one of the largest development organizations globally, USAID’s digital adaptation is significant in terms of reach and impact.

Additionally, USAID has implemented a child protection strategy, demonstrating its commitment to safeguarding children’s well-being. Mattito Watson, who leads the child protection efforts within USAID’s children in adversity team, plays a key role in this area. Moreover, USAID has a youth strategy that emphasizes collaboration and partnership with young people, rather than a paternalistic approach.

The analysis highlights the importance of involving youth in decision-making processes. To facilitate this involvement, USAID established a digital youth council, which serves as an advisory body and nurtures future leaders. The council consists of 12 members, including a gender-balanced representation of seven girls and five young men, underscoring USAID’s commitment to inclusivity.

Understanding the digital experiences of youth is vital. Mattito Watson’s efforts to comprehend the digital experiences of different youth demographics have led to the establishment of the Digital Youth Council, reinforcing the commitment to engage and empower young people.

In conclusion, the analysis reveals USAID’s strategies and initiatives to involve youth and incorporate digital experiences. The release of the digital strategy, implementation of child protection and youth strategies, and the establishment of the digital youth council showcase USAID’s efforts to stay relevant and foster inclusive development practices. By recognizing the importance of involving youth and understanding their digital experiences, USAID is taking a forward-thinking approach that can drive positive change and reduce inequalities in line with the Sustainable Development Goals (SDGs).

Andrea Powell

The internet has brought both great opportunities and risks for children. On one hand, children now have more access than ever to knowledge, entertainment, and communities, empowering them in various ways. However, there are also troubling aspects of cyberspace, with the dark web being used for criminal activities.

In terms of digital diplomacy and internet laws, there is a call for coherence. The belief is that everything that is forbidden in real life should also be forbidden online, and everything guaranteed offline should also be guaranteed online. Efforts have been made to implement this belief, such as discussions on how to apply the UN Charter or Geneva Convention within a conflict.

Solutions to digital challenges should come from a cooperative effort involving all stakeholders. Governments, companies, civil society organizations, and researchers all have different responsibilities and prerogatives that can contribute to problem-solving in the digital sphere.

One pressing issue is the lack of attention and resources given to child protection online compared to other areas. The field of child protection online is weaker, with less funding and organization, especially in comparison to efforts against terrorist content.

Creating an environment where there is effective testing and sharing of solutions to digital issues, such as age verification, is crucial. Different approaches to age verification exist, each with different levels of privacy, efficiency, and centrality. Finding the right balance is important.

Image-based sexual violence is a growing global issue that disproportionately affects vulnerable groups. There are over 3,000 websites designed to host non-consensually shared intimate videos, and young people are increasingly exposed to this form of violence. Survivors often experience psychological distress, trauma, anxiety, and even suicidal thoughts. Shockingly, over 40 cases of child suicide as a result of image-based sexual violence have been uncovered.

There is a need for better knowledge and public awareness of image-based sexual violence. Most law enforcement agencies lack knowledge of the issue, and public misunderstandings perpetuate victim-shaming attitudes. Global regulation and policies need to be harmonized to tackle this issue effectively. Barriers to addressing the issue include the need to prove the intent of the abuser, and it is argued that online sexual violence should be classified as a serious crime.

Tech companies are also called upon to take more accountability and engage proactively. Currently, there are over 3,000 exploitative websites that could be de-indexed, and survivors are left to remove their own images, effectively cleaning up their own crime scenes. Tech companies should play a more active role in preventing and dealing with image-based sexual violence.

In order to support victims of image-based sexual violence, global standardization of support hotlines is necessary. The InHope Network provides a model of global hotline support for child online sexual abuse, and this approach could be expanded to address the needs of victims of image-based sexual violence.

In conclusion, while the internet provides numerous opportunities for children, it also poses risks that need to be addressed. There is a call for coherence in digital diplomacy and internet laws, solutions to challenges should involve a cooperative effort from all stakeholders, child protection online requires more attention and resources, image-based sexual violence is a pressing global issue that demands better knowledge and regulation, tech companies should be more accountable, and global standardization of support hotlines is crucial.

Henri Verdier

The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation, security and safety, and violence and gender issues. It reveals that a significant portion of online crime occurs on the dark web rather than on social networks, with real-time videos of crimes for sale. To combat this, the analysis suggests increasing police presence, investment, and international cooperation. It also highlights the issue of internet fragmentation at the technical layer, which needs to be addressed.

Additionally, there is a disparity in trust and safety investment by internet companies, with greater investment in larger markets and less in smaller ones, especially in Africa. The analysis argues for equalizing trust and safety investment. Market concentration is also opposed, with a call for a more balanced approach to internet companies.

Contrary to popular belief, the analysis argues that innovation and regulation can coexist, with regulations sometimes driving innovation. Furthermore, it emphasizes that security, safety, and innovation are not mutually exclusive, and solutions can be found by considering all three.

The analysis also explores the interconnectedness of violence and gender issues, noting that social networks play a role in radicalization and that violence often targets gender and minority groups. Ignoring gender issues can lead to overlooking other interconnected issues. In conclusion, the analysis provides a comprehensive examination of various topics and offers valuable insights for addressing these complex issues.

Cailin Crockett

The analysis highlights unanimous agreement among the speakers on the importance of addressing gender-based violence, particularly online violence. They argue that all forms of gender-based violence stem from common root causes and risk factors, often driven by harmful social and gender norms. Furthermore, they emphasize that these crimes are significantly underreported.

The Biden-Harris administration strongly supports efforts to end all forms of gender-based violence. They have taken a comprehensive approach to tackle the issue, including setting up a White House Task Force dedicated to addressing online harassment and abuse. This demonstrates their commitment to promoting accountability, transparency, and survivor-centered approaches with a gender lens. The administration acknowledges that gender-based violence has ripple effects on communities, economies, and countries.

In combating online violence, the speakers underline the importance of prevention, survivor support, accountability for both platforms and individual perpetrators, and research. These pillars form the basis of the strategy against online violence. The task force comprises various government departments, such as USAID, the Justice Department, Health and Human Services, Homeland Security, and more. The Biden-Harris Administration has already outlined 60 actions that federal agencies have committed to taking to address online harassment and abuse.

The speakers note that the United States’ federalist nature leads to multiple approaches being taken across different states and territories to address abuse issues. This diversity reflects the unique challenges and needs of each region. Additionally, they assert the need to balance the interests of children with the rights of parents, as parents may not always be inherently able or willing to represent the best interests of their children.

Investing in prevention and adopting an evidence-informed approach are crucial in addressing gender-based violence. The administration recognizes the importance of maximizing options and support for survivors of abuse to effectively prevent and combat violence.

The CDC’s analysis, titled ‘Connecting the Dots’, aims to identify shared causes of violence across the lifespan. This research contributes to a better understanding of the various forms of interpersonal violence and helps inform prevention strategies.

Finally, the speakers call on civil society to demand government investment in tackling these issues. They emphasize the importance of allocating resources to effectively combat gender-based violence and online violence. This partnership between civil society and the government is crucial for making progressive changes and achieving the goal of ending all forms of violence.

Overall, the analysis emphasizes the urgent need to address gender-based violence, with particular emphasis on online violence. It acknowledges the comprehensive measures taken by the Biden-Harris administration and stresses the significance of prevention, survivor support, accountability, and research. The speakers’ insights shed light on the diverse approaches taken across the United States and highlight the importance of balancing the rights of children with the rights of parents. Investing in prevention and evidence-informed policy is considered essential, and the CDC’s efforts to identify shared causes of violence are valued. Lastly, civil society plays a vital role in advocating for government resources to effectively combat these issues.

Salomé Eggler

The extended summary of the analysis highlights the significant role played by GIZ in integrating child online safety into its projects. GIZ is committed to incorporating child online safety from the outset of its projects, ensuring that the protection of children in the digital space is a top priority. This proactive approach underscores GIZ’s commitment to safeguarding children’s rights and well-being.

Furthermore, GIZ takes a comprehensive approach to ensure child online safety is embedded in every aspect of its projects. By integrating safety requirements at every stage, GIZ creates genuine child online safety projects specifically designed to address the unique challenges and risks faced by children online. This holistic approach is crucial in effectively protecting children from online threats and promoting their digital well-being.

To aid in the implementation of child online safety, GIZ utilises user-friendly tools that do not require extensive expertise in child protection. The Digital Rights Check tool is one such example, helping to assess projects in terms of human rights considerations, including child online safety. This tool allows GIZ to evaluate the extent to which its projects uphold fundamental rights and make necessary adjustments to ensure the protection of children’s rights.

However, the analysis highlights the challenges faced in implementing child online safety. Various cross-cutting issues, such as gender, climate change, and disability and inclusion requirements, need to be balanced with child safety considerations. This requires GIZ practitioners to find a delicate balance between these competing priorities to ensure that child online safety is not compromised. Moreover, limited budgets and time constraints further complicate the implementation process.

Nevertheless, the analysis indicates that increasing digitalization projects present an opportunity to mainstream child online safety. As GIZ’s digital projects continue to expand, there is a chance to incorporate child online safety into more frameworks and tools. By leveraging the digital rights check and other appropriate measures, GIZ can ensure that child protection considerations are integrated into larger projects, leading to a safer online environment for children.

Overall, the sentiment towards GIZ’s efforts in integrating child online safety is positive. GIZ’s commitment to embedding child online safety into its projects and using tools to assess projects in terms of human rights, including child online safety, demonstrates a proactive approach towards protecting children’s rights in the digital age. However, the challenges associated with implementing child online safety, along with limited resources, highlight the need for ongoing commitment and collaboration to overcome these obstacles.

In conclusion, GIZ’s role in integrating child online safety is crucial. By prioritising child protection from the outset of projects, adopting a comprehensive approach, utilising user-friendly tools, and capitalising on digitalisation opportunities, GIZ demonstrates its commitment to creating a safer online environment for children. Continued efforts, collaboration, and resource allocation are essential to overcome challenges and ensure the effective implementation of child online safety measures.

Moderator

Omar Farouk, in collaboration with UNICEF and the UN Tech Envoy, is actively involved in Project Omna, aiming to tackle pressing digital issues such as cybersecurity, bullying, and privacy on a global scale. The project is focused on addressing the challenges faced by children in the digital space and ensuring their safety.

The importance of balancing child safety and economic growth in the digital realm is a key aspect of the discussion. It is evident that as the world becomes increasingly interconnected, it is crucial to protect children from the potential harms that exist online while fostering an environment that promotes economic growth and innovation.

One of the primary arguments put forward is the need for strong partnerships between government, businesses, and civil society to effectively address child safety in the digital space. Collaborative efforts among these stakeholders are crucial in developing strategies and implementing measures that protect children from online threats. By working together, they can leverage their respective expertise and resources to create a safer digital environment for children.

The summary highlights the related topics of child safety online, government-business partnerships, civil society, and the digital space. It is evident that these topics are intertwined and interconnected. Effective child protection in the digital space requires cooperation and collaboration among all these stakeholders.

Furthermore, the discussion emphasizes the role of partnerships in achieving one of the Sustainable Development Goals (SDG 17: Partnerships for the Goals). This demonstrates the global recognition of the importance of collaboration in addressing complex challenges like child safety online.

The summary does not mention any specific supporting facts or evidence. However, the involvement of UNICEF and the UN Tech Envoy in Project Omna provides a strong indication of the credibility and importance of the initiative. Additionally, the fact that the summary mentions the need for partnerships suggests that there is evidence supporting the argument for such collaborations.

In conclusion, the expanded summary highlights Omar Farouk’s involvement in Project Omna, undertaken in partnership with UNICEF and the UN Tech Envoy, to address critical digital issues. The discussion emphasizes the necessity of balancing child safety and economic growth in the digital space and calls for strong partnerships between government, businesses, and civil society. By working together, these stakeholders can effectively tackle the challenges faced by children online and create a safer digital environment for all.

Julie Inman Grant

The issue of online safety for children is a significant concern that requires attention. Children make up one-third of global internet users, and they are considered more vulnerable online. The sentiment towards this issue is mainly negative, with arguments emphasising the need for safety measures and awareness to protect children.

One argument highlights that the internet was not designed for children, and thus, their safety should be considered. This emphasises the negative sentiment regarding the lack of adequate safeguards for children online. The related Sustainable Development Goal (SDG) is 3.2, which aims to end preventable deaths of newborns and children.

Another argument focuses on the long-term impacts of children becoming victims of online abuse. Victims of child abuse are more likely to experience sexual assault, domestic violence, mental health issues, and even become offenders themselves. This negative sentiment highlights the serious societal costs associated with online abuse of children. The related SDGs are 3.4, which promotes mental health and well-being, and 5.2, which aims to eliminate all forms of violence against women and girls.

Education and awareness are seen as crucial factors in addressing online safety for children. The positive sentiment is observed in the argument that prioritising education and awareness regarding internet safety is essential. Programmes and initiatives aimed at parents and young people demonstrate the commitment to promoting safety. The related SDG is 4.7, which focuses on education for sustainable development and global citizenship.

The inadequacy of age verification on online platforms is highlighted, with a negative sentiment towards platform responsibility. The argument is that platforms need to improve age verification, as even eight and nine-year-olds are reporting cyberbullying. It is emphasised that young children lack the cognitive ability to handle risks on such platforms. The related SDG is 16.2, which aims to end abuse, exploitation, trafficking, and violence against children.

The importance of developing technology with human safety, particularly children, as a core consideration is emphasised. The positive sentiment is expressed in the argument that the welfare of children should be considered from the beginning of technology development. Anticipating and mitigating risks is crucial to ensure their safety. The related SDGs are 9.5, which promotes enhancing scientific research and technological capabilities, and 16.2, which aims to end abuse, exploitation, trafficking, and violence against children.

The effectiveness of self-regulation in dealing with cyberbullying and image-based abuse is questioned, expressing a negative sentiment. It is argued that self-regulation is no longer effective; by contrast, formal regulatory schemes have achieved a 90% success rate in removing cyberbullying content and image-based abuse. The related SDG is 16, which focuses on peace, justice, and strong institutions.

Cooperation between regulatory bodies and industry is advocated as necessary for prevention, protection, and proactive and systemic change. The positive sentiment is observed in the argument that such cooperation is essential to effectively address the issue. Initiatives and networks have already been established to work together in removing abusive content. The related SDG is 17, which emphasises partnerships for achieving goals.

It is noted that there is no need to start from scratch when building regulatory models for online safety, expressing a positive sentiment. The argument is that localized materials have been developed in multiple languages to ensure wider accessibility, and sharing experiences, including mistakes, can help prevent future harm. The related SDG is 16, which focuses on peace, justice, and strong institutions.

Lastly, it is argued that online safety must be a collective responsibility, reflecting a positive sentiment. The argument emphasizes that no one will be safe until everyone is safe. This highlights the importance of individuals, communities, and organizations working together to ensure online safety for all. The related SDG is 16, which focuses on peace, justice, and strong institutions.

In conclusion, the importance of online safety for children is a pressing issue. The negative sentiment arises from concerns over their vulnerability and the long-term impacts of online abuse. Education and awareness, improved age verification, technology development with child safety in mind, and cooperation between regulatory bodies and industry are crucial for prevention and protection. The limitations of self-regulation are observed, and the need for collective responsibility is emphasized. Addressing these issues is vital to ensure a safer online environment for children.

Audience

During the discussion on the protection of children’s rights, several key points were raised by the speakers. One speaker emphasised the need to draw practical measures to prioritise child rights. This is particularly important in addressing issues such as abuse, exploitation, trafficking, and violence, which are central to SDG 16.2. The speaker highlighted their work at the Elena Institute, a child rights organisation, and their involvement in the Brazilian Coalition to End Violence.

Another speaker emphasised the importance of laws and design in avoiding fragmentation and effectively implementing new ideas. This is crucial in the context of child rights, as effective implementation requires a holistic approach. The speaker did not provide any specific supporting facts for their argument, but the need for coordination and coherence in policy and legislation is broadly recognised in this field.

The discussion also touched upon the need for better cybersecurity strategies and laws to protect online users, especially in African countries. The speaker highlighted the progress made by Ghana in this regard and stressed the importance of addressing cybersecurity in the context of digital inclusion and progress. They suggested gathering best practices and suggestions at both the national level and civil society level to combat issues such as cyberbullying.

There were also concerns expressed about balancing parental supervision tools with a child’s right to information and seeking help. The speakers pointed out the high rates of online abuse in Brazil, and the potential risks of violence coming from within the family, highlighting the need for caution with supervision tools.

The debate over prevention measures, such as sexual education, in conservative countries was mentioned as well. The discussion highlighted the challenges faced in advocating for such strategies, as they can be seen as taboo in conservative countries. The importance of finding practical approaches to deal with child abuse and exploitation, while considering cultural and social contexts, was emphasised.

In conclusion, the discussion emphasised the importance of practical approaches in safeguarding children’s rights. It called for the development of effective strategies and laws to address issues such as abuse, exploitation, and violence in both physical and online contexts. It highlighted the need for coordination, coherence, and best practices at multiple levels, including national and civil society. The debate also shed light on the challenges of balancing parental supervision tools with a child’s right to information and the difficulties in advocating for prevention strategies in conservative countries. Overall, the discussion underscored the need for comprehensive and contextually sensitive approaches to protect and promote children’s rights.

Ananya Singh

The USAID Digital Youth Council plays a crucial role in involving youth in digital development. The council has been created by USAID to ensure that the voices of young people are incorporated in the implementation of their digital strategy. They provide a platform for youth to have their voices heard and influence the development strategies. This initiative is aligned with SDG 4: Quality Education and SDG 8: Decent Work and Economic Growth.

The speaker, who is part of the USAID Digital Youth Council, actively works towards providing the platform for youth to have their voices heard and influence development strategies. This highlights the importance of giving young people a voice in shaping digital development. The sentiment is positive towards this argument, as it recognises the need for youth to have a platform to be heard.

Furthermore, the council has been instrumental in guiding the implementation of the USAID’s digital strategy and raising awareness about digital harms. They have co-created sessions on emerging technologies, which indicates their active involvement in shaping the digital landscape. This is in line with SDG 9: Industry, Innovation, and Infrastructure, and SDG 17: Partnerships for the Goals.

Moreover, the council members have designed apps to educate young people about digital harms, showcasing their creativity and commitment to addressing challenges in the digital world. This demonstrates the council’s dedication to empowering young people and equipping them with the necessary knowledge to navigate the digital space safely.

Involving youth in decision-making processes has been found beneficial, and the Digital Youth Council exemplifies this. Ananya Singh, a member of the council, was invited to share the stage with the USAID administrator and U.S. Congress representatives, indicating the recognition and importance given to the council’s involvement. Additionally, young council members were involved in planning and speaking at multiple sessions of the USAID at the Global Digital Development Forum, further highlighting their active participation in decision-making processes. This aligns with SDG 16: Peace, Justice, and Strong Institutions.

Overall, the Digital Youth Council’s work has been a success story in empowering youth and promoting digital engagement. By providing a platform for young people’s voices to be heard, guiding the implementation of digital strategies, raising awareness about digital harms, and actively participating in decision-making processes, the council is contributing to the advancement of SDGs and ensuring that youth are active and equal partners in digital development.

Session transcript

Marija Manojlovic:
Welcome, everybody. Welcome to people in the room around this huge roundtable at the end of the day. My name is Maria Manojlovic, and I’m director of SAFE Online. This event is called SAFE Digital Futures, Aligning the Global Agendas. I want to welcome participants as well, and if you’re joining us online, please, we have an online moderator, my colleague, Natalie Shroop, so please drop in the chat quickly where you’re joining us from and feel free to drop in the questions throughout the session. We will be monitoring the chat and making sure that we can respond to your questions. As I said, my name is Maria. I lead the work on SAFE Online as part of the End Violence Global Partnership. We are the only global fund focused on the safety of children issues online. We fund system strengthening across different sectors. We fund research and data, and we fund technology tools that are looking into tackling harms and risks to children in digital environments. So far, we have invested around $100 million in over 100 projects, making impact in over 85 countries. So this work, we interact with a wide variety of players and stakeholders from various sectors and fields of engagement. We interact with governments, private sector, and industry. We work with child protection organizations, with civil society organizations, as well as industry and academia. And through that engagement, we have realized one thing, and this is the reason why we have organized this session today on the alignment of various digital agendas, which is that there is alarming level of fragmentation in this ecosystem, which is truly hampering progress in many aspects. But in particular, when it comes to safety of children, children are too often left aside and not even considered in the discussion of digital governance and development. So some of the common reactions when you work in this field are the following. 
We have had literally people turn their backs on us when we mentioned children in our interactions in relation to online issues. They would say, well, we work on infrastructure or protocols, or we work on connectivity or access, but we really don’t deal with kids. Our engagement is focused only on women and girls. Sorry, we can’t really speak about kids more broadly. Or we just work on human rights more broadly, but kids are not really part of that. Or we really just care about education. Education is really critical in access, but safety is not that critical for what we do. But somehow placing children and safety in the overall global agenda on digital development and human rights has been particularly hard. Then there is the famous privacy and safety dichotomy, the tension between how do you assure privacy of users at the same time ensuring that users can. And not only you, I mean, I actually hate the term users. It’s humans and people. Like users, it’s not some other category of creatures roaming around. So when we think about privacy and safety, we need to be thinking more broadly about how do they interact at the level of humans. But when you speak about prevention and response to online child sexual exploitation and abuse, that’s even more harsh distinction. So you will find, if you find yourself at the end of the spectrum who cares about online safety, you will end up being accused of various things. Like the latest really fancy thing that we’ve been accused of is that we are trying to end privacy online, which sounds really cool. But it’s just, it’s unbelievable. So we believe that this dichotomy is really false. And we believe that more nuanced conversations are needed. We believe that we should not be forced into choosing which one matters more. And when we know that we can and should have both. So as we were discussing this yesterday with Maria Ressa and Justina Arden was saying, you can and should have both. 
And this should not be a matter of choice that we should be making. And sometimes having much more deep and upstream discussions is going to be needed for us to be making some nuanced and meaningful contributions in that regard. So now that I can vent and complain a lot, let me be more positive. What are the causes of this misalignment? And I believe there are a few things that we can think about. But in order to advance the state of the internet, which is beneficial for humans, and in order to maximize the benefits of digital technologies, we have to invest efforts to understand where these misalignments come from and how we can overcome them so that we are in fact more aligned and more impactful. That is why I believe that the most important, this is the most important discussion that we can have. And in order to do that, I want to ask you to do a couple of things. First one is let’s move more upstream. Instead of focusing on manifestations of the issues in these various fields, like technology facilitated at GBV or gender-based violence, lack of connectivity and access, cyber crime, and so on, to actually upstream design infrastructure and policy choices that enable these things. We are repeatedly seeing that the driving forces and engagement techniques behind radicalization, violent extremism, political extremism, misogyny, child sexual exploitation and abuse are very similar, both from societal and norms perspective, as well as from the technological point of view, in terms of how the design choices of digital platforms are enabling these phenomena and how not only they’re enabling them, making them worse and exacerbating them. Second thing that I want to ask you to do today is share learnings and failings openly, not only what has worked and succeeded, but what has not in your previous engagements, so we can do better. You will hear a lot from our speakers today about that, but also speak about solutions and approaches that work across the landscape. 
How to engage with governments, how do you create political will, what will it take to do that, how do you engage with industry and create incentive for more action, accountability and transparency will be critical. So today we have brought a lot of speakers, eight speakers and experts from various fields to help us frame this discussion. We will not make them speak all at once, so don’t be scared. I will introduce them throughout the session, but we want to make the session as interactive as possible, so we’re going to split the session in three segments. We’re going to have speakers introduce for five minutes their catalyzing, igniting remarks, and then we will open the floor for discussion. We have asked people, and again asking people to please come to the table if you want to join us at the roundtable, but also people online, please drop in your questions in the chat and we’re going to be making sure that you can participate. For all of those online, yes, please get engaged. And finally, there’s a huge diversity of perspectives and expertise in the room, so please be respectful when you speak, and this is a safe space for people to express their opinions. For people who are new to this field of online child sexual exploitation and abuse, there may be some sensitive things said here and some triggering facts, so we’re just giving some warning to you. Take care of yourself. If people need to step out, please step out at some point. And then again, be mindful that we all want to speak, so please keep your remarks concise and focused. And with that, let’s dive in. So, the first segment we’re going to talk about today, we have labeled cybersecurity and online safety, but again, these are such fluid agendas, and you will see how we are going to try to unpack all of that. 
I will kick off with my question to Ambassador for Digital Affairs, Henri Verdier from France, and what we want to do is see how various agendas around cybersecurity and online safety interact around issues of child online safety. So, Henri, in many ways, we spoke this morning as well, but you basically sit at the intersection of this issue that you want to discuss today. You are someone who has worked in private sector, public sector. You have worked in academia. You have worked on digital commons. You have worked on counterterrorism. You have been one of the instrumental people leading on the Christchurch call from French side, but also on the Paris call for cybersecurity. And most recently, and that’s how we started interacting, you’ve been leading the work on child online protection lab. So, as somebody who is wearing literally like 15 hats, can you tell us a little bit more about, from the global perspective, but also French perspective, how do you see all of these issues aligning, and what have you learned throughout these engagements, and what are the opportunities and challenges around these potential efforts to make these things more aligned? Over to you.

Henri Verdier:
Wow, in five minutes. Thank you very much for the invitation and for the opportunity to exchange with such a panel. Yes, as you said, we try, like our friends of the US, for example, to build a global and coherent digital diplomacy, because everything is interconnected and you can start from cybersecurity or education or else, at the end of the day, you have to to be coherent. And since I’m the first speaker, probably you, all of you will say the same, but let’s recognize that internet is something great, even for children, that they have access to more knowledge, more entertainment, more communities, more empowerment than ever, and that something is disbalanced now. So first, we have some troubles with the cyberspace itself, with the dark web, which is a very efficient tool for criminal activities. We did commoditize a lot of things, like payment or taking a room, which is very efficient for a lot of businesses, even for crime business. We have big companies that are very, very big, monopolistic, and why not, but sometimes they have unexpected negative externalities from their business model. And for example, we can observe filter bubble or echo chambers or radicalization. And regarding all of this, we have to find solutions that does respect the promises of internet. That’s the first point. And for this, yes, I was thinking this morning during another panel, 30 years ago, John Perry Barlow did write the Declaration of Independence of Cyberspace, because at this time, we could consider cyberspace as something external to the society. There were a place somewhere named cyberspace. Today, we could say cyberspace did eat the world. So it did contaminate and transform everything. And so to start to answer to your question, we have two principles for diplomacy. First, everything that is forbidden in our life should be forbidden online. And everything that is guaranteed in our life should be guaranteed online. So freedom of speech. 
So we have to forbid what is forbidden and to protect what should be protected. That seems very simple, but we all remember how difficult it was to implement. For example, when I go to New York to discuss in the UN about international law in the cyberspace, we are speaking about few very simple laws, like Geneva Convention. But we did discuss during 25 years to be sure that we do understand in the same way how we will apply the UN Charter or the Geneva Convention within a conflict, which is just one topic. So this idea that seems simple is not so simple to implement. But the second thing is that we, government, we didn’t build this system. We don’t understand how it does work. I’m an entrepreneur. I did create three internet companies, small, but I understand, but I don’t build it. I did never seen the algorithms themselves. I don’t access to the source code. So the companies and of course, civil society and researchers, but the companies has to be part of the solution. So we need not even a multi-stakeholder approach, but an efficient multi-stakeholder approach, which cannot just be a room where we discuss politely. We need to put the pressure, we need to ask for results. We need to, everyone in the room has responsibilities and prerogatives, and sometimes a business model or mandates, but we don’t have any other way. We need to be sure that we will find the solution all together and that the companies will contribute to find the solutions. Here I’m speaking generally about terrorism, harassment, gender balance, and child protection. If we go to child protection, what I did learn in this journey is that this is a very difficult topic, maybe one of the most difficult. First, as you say, this is very difficult to engage the conversation on those issues. People don’t want to recognize that, I don’t know, in France, for example, 25% of children less than 10 years did accede to pornographic content. 
That’s a big problem, and we know that 20% of adults were victims of some kind of sexual offense in their life. That’s one person out of five. People don’t really want to recognize this, because it would force them to change a lot of things, and we have to recognize that there is much less money in this field than, for example, in the fight against terrorist content. Regarding terrorist content, you have strong organizations, a lot of technology, a lot of money. Or consider this: if you try to publish a small part of a Hollywood movie online, in 10 minutes it will be removed, because Hollywood financed solutions to detect this and to intervene very quickly. So this is a weaker field with less money and, am I running too long? Okay, I’ll finish, with a wide range of issues. And that’s the second thing, because everyone agrees on protecting children, but here we can speak about strong and heavy criminality, like human trafficking or whatever you can imagine, child abuse, but you can also go to online harassment, something lawful but harmful. You can even speak about the consequences of some algorithms on the way you perceive your body, for example, and the connection between the over-representation of certain pictures and anorexia, and we should pay attention to this. So this is a very wide range of topics, not all at the same level of heaviness, if I may. So I will conclude, and we’ll continue, but you did ask for a project, something more positive. As you know very well, we are trying to launch the Child Online Protection Lab. The idea here is to build evidence all together, in a spirit of cooperation between companies, civil society organizations, researchers, and governments, because I feel that part of the issue is that this is a very ideological conversation: everyone says we should do this or that, and no one tests, no one experiments, no one shares the results.
So for example, and I’ll finish with this: if we just speak about age verification, which should be normal, you should be able to verify the age of someone trying to access a pornographic website, you have dozens of approaches, and some of them are better for privacy, others are more efficient, others are centralized or decentralized, etc. So we need to look at the details, to test, to implement, and to share the results. That’s one approach France will encourage strongly over the next years. Thank you.

Marija Manojlovic:
Thank you, Henri. And I really like the focus that you put on evidence and data, which can really help us bridge these debates, but also bring the actual work on solutions back home, not only at the level of principles. I think that’s something we can also jointly think about, as one of the ecosystem’s pushes to be more evidence-focused and more data-informed in our discussions, and to experiment more cross-sectorally as well. So thank you for that. Moving on from this very global and interesting initial intervention, I would like to turn to Dr. Albert from Ghana. Dr. Albert is the Director-General of the Cyber Security Authority of Ghana, and Ghana is unique in many ways, but the one we are particularly interested in is that it is a unique example in the world where issues of child protection have been fully mainstreamed into the work on cybersecurity at the national level. You are the head of the Cyber Security Authority, and you have been at the center of those developments. Can you tell me a little bit more about the key factors leading to this outcome? The fact that you’ve managed to institutionalize child protection as part of the cybersecurity work: what was it, the political will, the ripeness of the issue, public attention, the institutional setting, legislation? What were the key driving forces behind that? In five minutes. Thank you.

Albert Antwi Boasiako:
Oh, certainly. Thank you. First of all, a pleasant afternoon to my colleagues here and participants, but also our colleagues who have joined virtually. Maria, I want to thank you on behalf of my government for the invitation, but not just that: for the support your institution, End Violence Against Children, has rendered to us over the last few years. You’re right, I think I’ve been around for a while. For the past six years, I’ve been leading Ghana’s cybersecurity development, first as a national advisor and then as Director-General of the agency responsible for cybersecurity development. You are right, there are a number of competing factors there. There’s a national security interest. Of course, the issue of terrorism, cyber terrorism, comes up. There’s a private sector interest, the issue of protecting critical information infrastructure, the intelligence aspect of cyber, the civil society and academic part, but you can’t take away the critical concerns around children. At a national level, one always expects to have a 360-degree view of these developments. I think we’ve achieved some successes. When we started this process, Ghana’s cybersecurity readiness according to the ITU ranking was around 32.6%. That was the middle of 2017. At the end of 2020, Ghana’s rating jumped to 86.6%, basically, in university grading terms, from an F to an A. A number of things have been done; permit me to highlight some of them. One approach we adopted, of course, is that political commitment is key. I keep telling my colleagues that I’m lucky to be running Ghana’s cybersecurity because I have the support of my government. My minister runs with it when she’s presented with a sound policy or personnel matter. We don’t delay. And of course, my government, the president, is committed to that. Within the past six years, it’s been quite an exciting journey, notwithstanding the challenges, including financial challenges. We also had a unique approach in basing this process on data. Research is key.
We had to conduct research for this process to reference. We worked with UNICEF in 2017 to look at the opportunities, but also the risks, for children, and there were interesting dynamics. On one hand, as you all know, there is a trend of children increasingly using the internet and devices. We also established that four out of 10 children had come into contact with sexual content. So on one hand, you have a positive development with respect to opportunities for children, even in underserved communities, using the internet. On the other hand, you have this disturbing trend in which they are coming into contact with content that certainly has the potential to impact their well-being. We also did research with World Bank support, with Oxford University, using the Cybersecurity Capacity Maturity Model, which also highlighted the gaps around the protection of children. This research led to a number of interventions. The first one was legislation: we passed a Cybersecurity Act that incorporates child online protection as a whole division within my authority, and the law also criminalizes certain sexual offenses. We were able to tackle quickly within our law what has become known as sextortion, and that has had a lot of positive impact afterwards. Awareness creation was also put into the legislation, to make it mandatory for the state to lead that process. That is one aspect of the institutionalization of child online protection, but we also had to look at the policy aspect of things. We developed a child online protection framework, which incorporated a number of best practices, including the WeProtect framework, but also the ITU guidelines. They’re pretty important. As part of the institutionalization, as I’ve mentioned, my agency has a division for child online protection headed by a director, a very senior person. And it’s not just that.
Through the work we did with UNICEF, and with support from my agency, we established a child online protection forensic lab, which was the first one in the sub-region. It was to help investigative bodies with forensic evidence to support the work, because deterrence is key. The criminal justice response is also one of the areas we needed to look at as part of a national response mechanism. Most importantly, and this is where I draw a lot of inspiration from, is the institutional arrangement. Certainly, in my experience, somebody needs to lead. You need a champion, but you need to carry people along. Different agencies: the gender ministry has a responsibility, the education ministry, civil society, academia, the telecommunication service providers. We needed to bring all these actors together. I think anybody who has visited us has seen we’ve achieved a lot of success. There is a consensus at the table on a way forward to be able to address child online protection. There are two last areas where we’ve also achieved success. The first is incorporating awareness creation around those risk areas for children into our national program. Ghana launched what we call a safer digital campaign. We came out with four pillars: government, business, the public, but also, specifically and deliberately, children. This has been institutionalized. We don’t treat awareness creation around the risks that children are facing as a sub-theme. No. I think that’s one area where we’ve achieved a lot of success, going through the schools across the whole country to raise awareness, in collaboration with UNICEF. The last one is reporting. We need to empower the public and the children to report. In Ghana, when you call 292, it’s free. You can call that on a smart device or any other device, and you can report incidents. And we’ve been lucky. This is just to conclude: initially, when we set up this national hotline, we thought it was going to receive only incidents.
In other words, we thought citizens and children were going to report only after they had been affected. No, it has changed. It’s becoming more like a tool for them to seek guidance. When somebody says, send me your nudes, or click on this link, they’re able to call. We advise and encourage them: please call 292, it’s free, don’t pay anything, it’s 24 hours, and at least conduct some minimum due diligence. For me personally, as a public servant, that has been the most important deliverable, a good service for the public. I really want to recommend that we look at those options as a best practice. Of course, there have been challenges. I wish we could speed up awareness creation. Ghana is big, 32 million people. I don’t think I’ve been able to achieve even 20% of my awareness creation mandate. I feel very uncomfortable; there’s a huge gap. I think we need to scale up our efforts, and the needs are there. Thank you.

Marija Manojlovic:
Thank you so much, Albert. That was really good. I do want to say, this morning, when we talked about the work in Ghana, one thing that really struck me was how eloquently you described your strategic intent behind legislating immediately, because you wanted to remove the uncertainty of: there is political will now, but there may not be in the next political cycle. How do you immediately ensure that you institutionalize this, and also create incentives for ecosystem ownership? Not only that it’s you, with the political will, who leads on these things, but making everybody take their bit of responsibility and accountability, so that ecosystem responsibility is shared. Thank you for that. Julie, I want to come back to you now. Your work is globally known. You lead the first governmental regulator, an independent agency focused on online safety. You’ve done tremendous work, both for Australians and for the global population. We use your resources all the time, and they are always of the highest possible quality. eSafety is a regulator, but it is also an agency that works on the prevention of various forms of crime. You have a wide range of powers and functions that you try to apply really comprehensively. What is interesting about your agency, and many people don’t know this, is that it started as being focused only on children. It went from children to everything else. It’s really great to have you here to give us a sense of how, because it was centered around kids, you see kids’ issues now being embedded in this broader risks-and-harms ecosystem, and what the challenges and opportunities are for us to make that, as you did, part of a big, joined-up effort. Over to you.

Julie Inman Grant:
That is a great and very hard question to encapsulate in five minutes, but really, it was actually a political decision that it would be focused on children initially. There was a well-known media personality who was open about her mental health struggles. She had a nervous breakdown. She was very active on Twitter. At the time, I was interviewing for a role with Twitter to start their trust and safety and public policy functions across Southeast Asia, Australia, and New Zealand. She tragically ended up taking her life. It became known as the Twitter suicide, and a petition to government started that just said: government, you need to step in and do something. This was in 2014. Because of concerns about freedom of expression, the ICT minister at the time, Malcolm Turnbull, who later became prime minister, said: we’re going to start small, with children’s e-safety, because nobody can argue that children aren’t more vulnerable than adults. We took a bunch of functions from across the government and put them into the Children’s eSafety Commissioner. That included being the hotline for Australia for child sexual abuse material and taking reports on terrorist content, but we also set up the world’s first youth cyberbullying scheme, where we serve as a safety net when the platforms fail or miss cultural content, and the seriously harassing, intimidating, humiliating content targeting children doesn’t come down. When I took the role in January 2017, I was asked to set up the revenge porn portal. I said, no, I’m not going to call it revenge porn. Let’s call it what it is: image-based abuse, for everyone. That’s how that started. But I think it’s really important to know that we take a vulnerability lens to everything we do, and nobody can argue, again, that children aren’t the most vulnerable cohort online, because the internet clearly was not made for children, although children make up one-third of global internet users. And young people today don’t differentiate between their online and offline lives. It is their playground.
It is their schoolroom. It is their friendship circle. All that said, we had a landmark piece of national research, the Australian Child Maltreatment Study, which found that a stunning 28.5% of Australians have experienced sexual abuse by the time they’re 18. That’s more than one in four. We need to go beyond thinking about it as an online issue or an individual issue, which is why we take down content, because it’s retraumatizing, and think about the comorbidities that follow a child throughout their entire life: they’re more likely to experience sexual assault later in life, to be in family and domestic violence situations, to have drug and alcohol dependencies, to have serious mental health issues and suicidal ideation, and also to become sex offenders themselves. So we need to think about this in terms of the long-term societal costs as well. And did you know that our Canadian counterparts found that 20% of survivors are recognized on the street from the child sexual abuse series they’ve been seen in? You can imagine how traumatizing that is. So when we have that debate about adults’ privacy versus a child’s dignity or a child’s right to be free from online violence, I think: what about a child’s right to privacy when they’re being tortured and abused? We need to rethink how we rebalance this. So what have we done, in three broad areas, to address this? We have complaint schemes where we’re doing trends analysis all the time. We know that kids are actually coming to us younger to report cyberbullying, because kids are on TikTok and Instagram and Snap at eight or nine, so now we’re getting reports of cyberbullying of very young kids. And this goes back to Henri’s comment about age verification. We need the platforms to do a better job. Eight and nine-year-olds have no business being on these platforms. They don’t have the cognitive ability to deal with this. So we do the fundamental research. We’ve got the programs.
We know that 94% of Australian children have access to a digital device by the time they’re four years old. So parents need to be the front line of defense. We’ve got a program for parents of under-fives: to be safe, to be kind, to make good choices, and to ask for help. And then, when they get into the primary years, it’s about the four Rs of the digital age: respect, responsibility, digital resilience, and critical reasoning skills. We have youth advisory committees so that we can hear from young people about what is going to work for them. We have them running our scroll campaign, so it’s authentic and it’s resonating. We have them writing letters to big tech saying: this is what we want from you. We want you to take abuse seriously. We want you to take action. We are your future customers, users, humans. But then we also have systemic and process powers, where we’re compelling more transparency from the major platforms on what they’re actually doing to address child sexual exploitation, sexual extortion, and harmful algorithms. And next week we’ll have a major announcement and enforcement action; we’ll be holding five more companies to account in this area. The more we can shine light on what is and isn’t happening, the more we can push safety standards up. And that goes to the whole idea of safety by design as well. We can’t have safety be an afterthought, the welfare of children be an afterthought. We really need to revolutionize the way that technology is developed, with humans and safety at the core, not after the damage has been done. We need to get ahead of technology changes so that we’re anticipating the risks. We’re never going to get a hold of generative AI if we’re not focusing the scrutiny on how the data is chosen and how it’s trained. If we wait until it’s out in the wild, we’re going to be playing a huge game of whack-a-mole, or whack-a-troll, as I like to say. There we go.

Marija Manojlovic:
Thank you, Julie. And thank you for always grounding us back in the research and data that you collect, and for always thinking in terms of long-term engagement. By engaging with kids as young as zero to five, we are building a foundation for healthier engagement later on, and from the prevention lens, that is really critical, because we are seeing that perpetration is also starting earlier and earlier, yet we keep engaging with just a certain group of kids, adolescents, with no engagement with younger generations. So thank you for that. I know that you and Dr. Albert will need to leave at some point, so I’m just going to give people that heads-up. But with that, I’m going to give the floor to anybody who wants to ask a question at this point, after the first round of interventions. If there are any questions or comments, please raise your hand now and we can pass you the mic. Or if there is anything coming in online, do you want to… So is there anybody in the room who has… Oh, there it is. There is one. I think you can use the mic over there. Yes, thank you.

Audience:
Hello, I’m Ana from Brazil. I work at the Alana Institute, a child rights organization, and we are part of the coordination team of the Brazilian Coalition to End Violence. I would like to hear your thoughts based on INSPIRE. How can we draw some practical measures to think about the priorities in this area? Is it the law, is it the design? How can we think about standards to avoid fragmentation and to implement all these new ideas you were talking about?

Marija Manojlovic:
I’m looking at Julie, but anybody can pick up the mic, please.

Julie Inman Grant:
I think we all will probably agree that self-regulation is no longer enough. And this sounds strange coming from a regulator. I don’t think purely regulation is going to be enough either. And that’s why we have this 3P model with prevention, protection, and what I call proactive and systemic change. And that does mean working cooperatively with the industry to achieve outcomes. We have a 90% success rate in terms of getting cyberbullying content and image-based abuse down because we work informally and cooperatively with the networks. And that is the way we get that content taken down more quickly. To sort of solve this issue as more governments are thinking about how they set up either their own independent regulatory authorities or how they start small if they don’t have the political will, we’ve started the Global Online Safety Regulators Network. We now have six members of the network. I’m going to be calling you Dr. Albert soon. But we also have observers who don’t yet have independent regulators who can learn from these models. But please go to esafety.gov.au. We have a strategy. We’re trying to do as much capability and capacity building as we can. We were the only ones for a while doing it, trying to write the playbook as we go along. And we’ve made a lot of mistakes. We’re happy to share those as well. But I don’t feel like anybody needs to start from scratch. Even if it means we’ve localized a lot of our materials into multiple languages, take it, use it, localize it in a way that works for you. We’ve got to be in this together. None of us are safe until all of us are safe.

Marija Manojlovic:
Thank you so much, Julie. And Dr. Albert, do you want to?

Albert Antwi Boasiako:
Just a quick one. I felt the sentiment of my Brazilian colleague, especially when she used the word fragmentation. That’s the reality. She’s speaking from a context, and I think I saw this when I was first appointed. Again, it is a problem, because this is what I call ad hoc. Ad hoc happens even in the non-governmental space; ad hoc activities are happening in government settings. I think that recognition is key for an effective response. So institutionalization essentially means you are taking systematic measures. What Ghana did, and I speak from my own experience in the developmental context, again, there are a lot of things specific to our context, was to start putting the necessary structures in place. A champion is key. You need to have someone with the drive to bring all this together. In Ghana, among the civil society institutions, I identified one that was quite active and very respected, and we used it: we, the CSA, worked with that institution and mobilized others around them. In government, we had to carry the gender and children ministry and the education ministry along. That was deliberate, intentional. Other countries haven’t succeeded; I will mention, it’s a struggle. And the power concentrations, and I’m saying this from the developmental context, are real. Without being conscious of this and identifying what I call champions in all sectors, it is likely to be a little bit problematic. You may still have the law. Some of my Western colleagues are sometimes surprised: you have this law, it’s been there, but nothing is working. In my context, the law is good, but frankly, getting people even to sit at the table can be a challenge. And that is why, regarding the Brazilian situation, I really felt that picking the wrong champions is a risk. Of course, the child online protection ecosystem is a collection of different players, and I think the first step is to look at those who are quite active and respected within the ecosystem and able to mobilize others; they will drive it. Thank you.

Marija Manojlovic:
Thank you, Dr. Albert. And just one more note for Brazil: I know that you’re a member of the WeProtect Global Alliance as well. The Model National Response is one of the frameworks that you can use to start thinking about and charting the different areas of engagement and how that needs to happen. But again, we’ll be happy to chat with you afterwards as well. I will excuse Dr. Albert and Julie, who need to go to the next session. Yes, I know. Can I make a brief comment? Ambassador, sorry. Thank you so much. My name is… Sorry, I’ll make a brief comment. I want to make this in front of you.

Henri Verdier:
So first, let’s recall that a large part of the issues we are speaking about is not on social networks. On the dark web, for example, if you want to buy a real-time video of the rape of a baby, because that does occur, it’s not on commercial platforms; it’s on the dark web. And here we need more police, more investment, more international cooperation. This is not about company regulation; this is about fighting crime. Regarding company regulation, I understand that it would be better to have a world with one common set of rules. But this is not what I call fragmentation. The fragmentation of the internet is a fragmentation of the deep technical layer. We have to fight that. But we are democracies. We have the right to have our own proper rules, or we are not democracies anymore. And we are not here to build a single market for five companies. And I want to say there is another fragmentation, and that is very important: the fragmentation of investment in trust and safety. Most of those companies, and we can understand why they do this, invest in proportion to their sales. So they invest a lot in big markets and much less in small markets. In Africa, for example, they don’t invest a lot. And we should ask them, and we could do this within the framework of the UN, to equalize the investment a bit, and to take a small part of the investment made in Europe or the US and invest it in Africa, for example, or Brazil, why not?

Marija Manojlovic:
Thank you. Thank you.

Audience:
Maybe first in the room, and then we can do the online questions. But also, we will need to move to the next segment. But go ahead. Okay, thank you. My name is Peter King. I’m from Liberia. I would like to thank the NCH boss from Ghana. The question is an open question, but I would like him to help with suggestions that can be best practices for other countries like Liberia, which are struggling to put in place cybersecurity strategies and laws to protect people online. What suggestions can he offer to African countries that are not at the level Ghana has reached, on issues like the tools that are used to create awareness of cybersecurity issues? The reason is that we look at inclusion, and then at the level of progress; I’m thinking about uniformity. I just want him and other panel members to share some of these best practices or advice on how to actually streamline online protection and cyberbullying responses in our African context. The European context may be different: maybe a four-year-old in Europe has an idea of how to use a mobile phone, while in our African context, he doesn’t even know what it is. Can we look at these dynamics, and what are the best practices and suggestions at the national level, civil society tools that can be used, and also at the level of the security sector, to combat these kinds of issues? Thank you so much. With your permission, Dr. Albert, I will put you in touch with…

Albert Antwi Boasiako:
In fact, I brought a card, and just a quick one, because I’m being moved to another session. The good thing is that we are in touch with Liberia. A number of African countries have been reaching out to us, and they keep on coming. We share what we’ve achieved; I think we’re sharing within the region. The only problem I’ve seen within the region is fragmentation: you have one ministry visiting you, and others are left out. That’s why I was stressing the Brazilian situation. So there has been contact with the body in Liberia. Unfortunately, we haven’t been able to really integrate the structures. But please, we will discuss. Thank you very much.

Marija Manojlovic:
Thank you so much. Nathalie, do you want to move ahead or do you want to ask a question? One brief question from online.

Moderator:
So, from Omar Farouk, a 17-year-old from Bangladesh, passionate about child safety online. He started Project Omna and works with local and international organizations like UNICEF Bangladesh and the UN Tech Envoy to tackle digital issues like cybersecurity, bullying, and privacy, not only in his country but globally. His question: given the rise of cyberbullying and privacy concerns for children, how can we strike a balance between protecting kids online and fostering innovation and economic growth in the digital space? What strategies can be developed to create strong partnerships between government, businesses, and civil society, ensuring child safety is a top priority? So, perhaps speaking to that balance between economic growth and innovation and child safety. Perhaps, Ambassador, if you don’t mind speaking a little bit to that.

Marija Manojlovic:
Ambassador Verdi, do you want to take that one? We’re kind of looking at you trying to avoid the look. That’s the eternal question, the big question.

Henri Verdier:
As a former entrepreneur myself, I just want to say two things. First, sometimes if you forbid something, you simply forbid it; that is not a problem of innovation. For example, a century ago, when we forbade child labor, the private sector said, ah, we cannot work like this, et cetera. But finally, we all adapted. So that’s important: there is not always a contradiction between innovation and regulation, and some regulation can be a tool for innovation; good standardization, for example, can be regulation and good for innovation. The second thing is that very often people set security, privacy, and innovation in opposition. But very often, if we work a bit more, we can find solutions. You have to take into consideration that you are looking for three goals at once, for example, security and safety and innovation, so probably your first idea won’t be a good one. You need to work a bit more, but you can still find solutions. And that is why we need those efficient multi-stakeholder processes, to work all together and find solutions. I could, I won’t, but I could share dozens of examples of how we fine-tuned some good balances between all those goals. It was not the first idea, but we did find solutions. Thank you for that.

Marija Manojlovic:
And thank you for the question from Bangladesh. I think one of the things that I think neatly ties into the next segment that I want to open now is that sometimes innovation ecosystem is not inclusive of people who need to be part of it because various reasons, including safety. So women becoming part of the innovation ecosystem was for a long time not an option because they just didn’t feel welcome in certain environments. So making sure that innovation is not separate from ensuring safety in various environments. So now we want to move to a segment on gender based violence and image based abuse. One of the key things that we really want to unpack right here is how can online child safety be better positioned as crucial to inclusive gender balanced digitization. And another thing that we always struggle with is how can more be done in prevention work to address common narratives and perception of these issues grounded in gender norms and better center survivors. So with that, I will introduce Kaylin. Kaylin, you’re a senior advisor to the White House Gender Policy Council. You’re working on issues of technology facilitated abuse and harms. And you’ve been involved in the development of some of the landmark principles, guidance and coalitions in this space, including the Global Partnership of Action to Tackle Defacilitated GBV. So how do you see convergence of these various agendas from the White House perspective? And also from the perspective of the drivers of abuse, harassment and other harms? And how do they intersect with child safety and protection? Huge question, but over to you for five minutes. Thank you so much, Maria, for that question. And to Maria, Natalie, safe online for hosting this critical discussion that is really so important to be present at the Internet Governance Forum.

Cailin Crockett:
As Maria mentioned, I'm Cailin Crockett. I am a senior advisor with the White House Gender Policy Council. I'm also director for military personnel and defense policy with the National Security Council. And for the past two-plus years, I've coordinated the Biden-Harris administration's efforts to address sexual violence in the military and also to counter online harassment and abuse as a feature of our domestic and foreign policy. These two portfolios might seem quite distinct, but they actually share a lot in common and, I think, speak to the heart of our discussion today. First and foremost, all forms of gender-based violence and interpersonal violence across the life course share root causes and common risk and protective factors that perpetuate, and are driven by, harmful social and gender norms. And they are some of the most underreported crimes and abuses, because survivors are too often shamed, silenced and made to feel invisible. This certainly has been true for survivors of sexual violence in the military, as well as for survivors of child sexual abuse. There are core values, also, that I think bind together the child online safety agenda with the ongoing work we must all do to promote a safe, secure and inclusive digital ecosystem for all people, but particularly for women and girls, children and LGBTQ+ people. This really means three things in particular: accountability, transparency, being survivor-centered with a gender lens and, of course, prevention. I'm really fortunate to work for an administration led by a president and vice president who have been lifelong champions of addressing gender-based violence and standing with survivors. The administration really understands that the consequences and costs of gender-based violence impact not only individual survivors but whole communities, and that the ripple effects of gender-based violence and all forms of abuse are felt across our communities, our economies and our countries.
And it must be said in this conversation that women and girls from marginalized communities, including people of color, LGBTQ+ people and individuals with disabilities, among others, are disproportionately impacted. And it's important to also be clear here, excuse me, that online violence is violence, and it can result in dire consequences for victims, ranging from psychological distress, self-censorship and decreased participation in political and civic life to economic losses, increased self-harm, suicide, and forms of physical and sexual violence. In his campaign, President Biden made a commitment to convene a national task force to develop recommendations for federal and state governments, technology platforms, schools, and other public and private entities to prevent and address all forms of online harassment and abuse, with a particular focus on technology-facilitated gender-based violence. And in June of 2022, the president issued a memorandum that established a White House task force to address online harassment and abuse, which I've had the fortune to coordinate. This is an interagency effort that I think really speaks to the ecosystem approach that other colleagues have raised. It is co-led by the Gender Policy Council and the National Security Council, and involves many diverse government departments and agencies, from USAID to the Justice Department to Health and Human Services, Homeland Security, and several others. And the senior representatives across the agencies that comprise the task force have met regularly with justice system practitioners, public health professionals, researchers, advocates, parents, youth, and, importantly as well, partner governments to identify best and promising practices, gather recommendations, and learn from lived experiences to inform a blueprint for action.
The initial actions were previewed in an executive summary that the White House released this past March, and they will ultimately be fully captured in a public final report and blueprint of the task force that we're working to compile towards the end of this year. And again, most importantly, we've met with survivors, and especially youth, who shared how experiences of online violence have disrupted their lives, impacting their well-being, their health, relationships, careers, and career aspirations. And while each of their stories is unique, they share common threads and lessons that inform the work of the task force, and they have outlined concrete, measurable actions, 60 and counting so far, that federal agencies have committed to in order to address online harassment and abuse. And I know I'm already over time, so I'll just briefly mention the four pillars of the blueprint, which are inherently multisectoral: prevention, survivor support, accountability for both platforms and individual perpetrators of harm, and research. And as an administration, we're working truly across the whole of government. We've committed to updating and expanding resources to address gender-based violence online, including child sexual exploitation. For example, the Justice Department is dedicating an unprecedented amount of resources to address cybercrimes that particularly impact women and girls, including image-based sexual abuse. And we've also really recognized the outsized impacts and harms of online harassment and abuse on children and youth, including, in May, the Surgeon General issuing an advisory on youth mental health and social media, which particularly emphasized the intersection of gender-based violence and child sexual exploitation online. So with that, I will look forward to sharing more in the Q&A. Thank you. Thank you so much, Cailin, and thank you and the Biden administration for really

Marija Manojlovic:
taking such a strong lead and position on these issues, because, as everybody has been saying, the majority of the platforms and companies we speak about are based in the U.S., and what the U.S. does is really going to matter for a lot of other people across the world. So we are really looking to you for action on this. In particular, thank you, and the team and everybody else in the global partnership, for making sure that we are not siloing the work on child online protection and the issues of gender-based abuse and violence. Unfortunately, Andrea Powell from the Panorama image-based abuse program has not made it in time from the airport, so if she comes, we'll just include her in the discussion; if not, we will move ahead. I'll just open the floor for one or two quick questions or comments on this, and then, if there are none, we will move ahead with the next segment. I'll wait for a little bit. Natalie, is there anything coming in online, or anybody in the room? Oh, there is. Please come on in.

Audience:
Hello, everyone. First, thanks for the great session. I'm really, really happy to be listening to you all. I'm Emanuela. I'm from Brazil as well. I work at Instituto Alana on child rights, and I have two questions when we talk about this theme of gender-based violence and also about child abuse and exploitation. In Brazil, we have high rates of abuse that happen online, and also at home. So my first question is about parental supervision tools. How can we balance this complicated debate when we have supervision, but we also know that the violence can come from the family, and this could be a risk to a child's right to information and a child's right to seek help? And how do we do this in a practical way? The second is that we also have a very conservative country, and when we talk about prevention measures like sexual education, this can be a tough debate that raises a lot of different issues. I would really like to hear your approaches to advocacy for these kinds of prevention strategies, given the taboo that this theme can evoke in more conservative countries. Thank you.

Marija Manojlovic:
Thank you, Emanuela. Cailin, over to you.

Cailin Crockett:
Thank you so much for that question. As many of the experts in this room are aware, the United States is a federalist country, so we have 50 diverse states as well as territories, and there is a multitude of approaches coming up across the states on how to address these issues. So, from the administration's perspective, we want to be really careful about balancing what you've said and recognizing those concerns, given that parents may not always be able, or willing, to represent the best interests of their children, and we always want to maximize options and support for survivors of abuse at any age. So I think it's a very timely question, and, in line with your second question, I think it's really important that we take an evidence-informed approach and really focus on prevention as well. One of the areas that we've continued to invest in, through our Centers for Disease Control and Prevention, is taking a public health approach to recognizing the shared causes of violence across the lifespan. The CDC has done an analysis called Connecting the Dots that I quite like, because it connects the dots between multiple forms of interpersonal violence: sexual violence, intimate partner violence, child abuse and neglect, cyberbullying, youth violence, and community-based violence. So that's one area where we've seen promise. But, of course, as with everything, resources are so important too, and so the voice of civil society, really demanding that governments invest proportionally in these problems, is so critical. So thank you for your work.

Henri Verdier:
A brief comment. Of course, there is a tendency everywhere, including in France, to say that those are questions for woke, decadent, and very liberal people. But actually, no, everything is connected, and I share your point. This is about violence, and maybe I can share two examples. First, within the crisis group, we are now speaking about algorithmic radicalization. I don't mention Israel, which is a different situation, but in the EU, most of the terrorist attacks we have had to face were done by very young people, with the social networks playing a role in the radicalization process. And many of the terrorist attacks we have had in Europe were coming from masculinist movements; it was not jihad, it was masculinist people against LGBT people. So all those issues are connected, and if you pretend to avoid gender balance and gender protection, you will miss a large part of the other parts. Thank you so much. This is exactly why this session exists, to make these links and make

Marija Manojlovic:
them really clear in everybody's mind, but also in our ability to create policy and other responses to these phenomena. Andrea has made it. She literally just ran into the room, so she's still in time for her intervention in the same segment. Perfect timing, Andrea. I hope you've had time to take a breath. So, Andrea Powell is the director of the image-based abuse initiative at Panorama Global. In your work, you're building partnerships and mobilizing efforts to ensure that no one experiences the enduring trauma that results from image-based abuse and other types of online harm. We are very proud to be part of this coalition and your work, and have been above all so impressed with how you've ensured that lived experience experts are an essential part of this coalition. What I want to ask you is: what opportunities do you see for better alignment between the image-based sexual abuse work and online child protection? And, from your perspective, what have been some of the definitional and content-related issues, as well as the practical tools and best practices, we can build between these two fields? So, over to you. Thank you very much. My apologies for

Andrea Powell:
being late. Very happy to be here with all of you. Again, I'm Andrea Powell, and I am the director for the image-based sexual violence initiative housed at Panorama Global. We most recently launched a new coalition, the Reclaim Coalition, that brings together over 50 stakeholders from 23 countries, most notably from civil society, academia, law and policy, as well as lived experience experts, often called survivors. What I mean in that context is not just individuals who've endured this ongoing trauma, but individuals who are active in the field of addressing image-based sexual violence. Image-based sexual violence is the act of creating and sharing intimate images without someone's consent. It is a form of online sexual violence and a violation of privacy that disproportionately affects women and girls, LGBTQ+ people, and indigenous and BIPOC individuals. Anyone who deliberately views, shares, or recreates these non-consensual images is participating in a sex crime whose unique feature is that the abuse lives on long after it's over, growing in magnitude for the whole of the world to see. When non-consensual imagery is shared over text messages or online forums, or posted on social media platforms, it can quickly reach a global audience via uploads to pornographic websites that do not or cannot reliably verify age or consent. Those who are victimized live in a state of constant trauma and fear that their victimization may happen again. Will their parents find out? Will their friends, co-workers, college admissions counselors, or future employers? It is never post-traumatic stress disorder, because the trauma lives on continuously. This type of technology-facilitated gender-based violence is growing in global prevalence. There are over 3,000 websites online that are purely designed to host non-consensual intimate videos and images, reflecting a vast enabling environment that facilitates this form of abuse.
And what we know from the survivors in the Reclaim Coalition is that younger and younger children are being exposed to this form of violence. Those who are impacted, whether they are adults or children, frequently experience elevated levels of psychological distress, trauma, extreme and prolonged anxiety, and suicidal ideation. In the early stages of the formation of the coalition, we uncovered over 40 cases of children who ended their lives as a result of image-based sexual violence, often within 24 hours of their abuse, leaving their parents little to no time to intervene. As a woman who was, as a child, a victim of sexual violence, I chose not to reach out for help. I chose to live in silence. And I never thought that silence was a privilege. Yet the survivors who bravely advocate in the Reclaim Coalition never got a chance to make that choice. Their sexual abuse is there for all the world to see. And thus, this is a global problem, but there is hope and there are global solutions. Many child victims of image-based sexual violence are adults when they discover their victimization. And many survivors who are now adults and are part of the Reclaim Coalition experience repeated abuse online every time they dare to advocate publicly on this issue. As a matter of fact, as I boarded the plane, I was working with an individual who had just come out publicly and had all of her images re-uploaded to a site called Pornhub. This trauma does not stop on their 18th birthday. The very real harms do not go away, and abusers continue to share and re-upload more content. Leading up to the launch of the Reclaim Coalition, we hosted a private summit with lived experience experts from eight countries.
What I thought was going to be an informative, well-agendaed program became a witnessing session where survivors shared their stories and created the basis for 17 recommendations, which we shared with our colleagues, most notably at the Gender Policy Council, and which also formed the basis for our first landscape report, I Didn't Consent. That report centers this issue around privacy and consent in an innovative way that eliminates the questions of why the image was put up there and what the intent of the abuser was. Because, in all reality, we don't ask domestic violence victims why their husband hit them. We don't need to ask online survivors of image-based sexual violence why their abuser abused them. I came up with five core areas of intervention where I think we can take lessons from the area of child protection and build upon them, to look at this issue not as siloed, intermittent interventions across children's spaces and adult spaces, but as things that we can do across those divides. First, we need to build knowledge. The public misunderstands and lacks awareness about online sexual violence. Without knowledge, survivors don't know where to get help, law enforcement don't know how to intervene, and, frankly, the public continues to shame victims instead of the abusers. We need to harmonize global regulation and policies. The policies to address both child and adult online sexual violence can and should be more harmonized. This includes removing the barrier of proving the intent of the abuser, as well as classifying this as a serious sex crime. In fact, we should ensure that, across the globe, the online sexual violence of children and adults, most affecting women and girls, is taken seriously and given the same serious criminal penalties that offline sexual violence carries.
We also need to standardize global hotline support, to ensure that hotlines addressing adult image-based sexual abuse are held to the same global standards as those in the child space. There is an allied network that may have been brought up today, the INHOPE network. It is a phenomenal network of, I believe, over 80 hotlines across the globe addressing child online sexual abuse. We need to do this in the adult space as well. We also need more tech accountability. Those 3,000 websites that I mentioned earlier could simply be de-indexed and go away. So why haven't they been? There needs to be an opportunity for tech to engage proactively with civil society, learning from lived experience experts, and this is quite possible. We also know that image removal is a critical piece of justice and healing for survivors. It's very difficult to heal if your abuse continues to be placed online, where anyone can Google your name, your address, your school, and learn everything about your exploitation. Image removal should not be different across different platforms and sites, but what we hear from survivors is that they're effectively left to create their own digital rape kit and clean up their own crime scene, and that is an unacceptable standard. In closing, I want to say that we have the will, we have the solutions, and our children depend on us. If we address image-based sexual violence for everyone today, there truly will be fewer child victims tomorrow. Thank you.

Marija Manojlovic:
Thank you so much, Andrea. There is so much I want to pick up on, but there is literally no time, so we're going to just leave you to have conversations after this literal mic drop. So, thank you for that. I need to move immediately to the next speakers, because there is not enough time; we have only 15 minutes left. What I'm moving on to now is the digital innovation ecosystem. That was a question we had from online, but we also just want to discuss this entire ecosystem a little more broadly. Salomé, I want to go to you. You come from the German development agency, GIZ, and you are the director of the Digital Transformation Center in Kenya. GIZ is famous for investing and working in the fields of digitization, innovation, cybersecurity, and skills. I'm very curious to hear from you a bit about the challenges and opportunities of integrating child online safety into this work across all of these areas, whether that has been done successfully so far, and what the plans are for the future.

Salomé Eggler:
Four minutes? So sorry. I'll try my best, Maria. Thank you so much for that question. I'll start with two disclaimers. I'm not sitting here as a child online protection expert; I'm sitting here as a practitioner whose goal is to mainstream, to bring in, child online protection into our activities. And, as you were saying, they are manifold, right? They range from digitizing government services, to working with the tech ecosystem and tech entrepreneurs, to SMEs going digital, to working on a transition, and everywhere there are angles around child online protection. And maybe that's the entry point I'll take: we have a twofold approach in GIZ to how we try to work around this question. The first part of the approach is really that mainstreaming idea, and I like to use the image of a braid. When you braid hair, you ideally want to braid in child online safety measures and considerations from the onset of a project, and not, and I'd say we've also been guilty of that as GIZ in the past, add them as a bow at the end of your braid; you really have them by design in all of your activities. And the second part of the approach is genuine child online safety projects that focus not only on integrating and mainstreaming the topic into other activities, but on tackling a certain topic directly. For instance, one of the activities that we have created jointly with children, for children, is a set of online training nuggets where children from the ages of 10 to 15 can explore how to navigate the online world safely in a very easygoing way. And this is one of the aspects as well where we try to have that initiative and these trainings available in up to 10 languages by now.
Also in Kiswahili, for instance, for Kenya. And we saw how important it is to translate all these phenomenal tools that we have by now, by ITU, by UNICEF, et cetera, into other languages as well, to make them accessible to all the children and youth growing up around the world. So that's the twofold approach that we are pursuing at GIZ at the moment. And now maybe I'll come to your questions around the challenges and opportunities I see. What struck me in your introduction was your point where you were saying: you talk to someone working on infrastructure, and they say, oh, that's not about children; you talk to someone working on digital skills, and they say, oh, no, no, that's not really what we're doing. Reflecting on that, within GIZ it might be a slightly different variation. I would call it more an attention economy. As a practitioner on the ground, I'm interacting with my colleagues who work in the child protection unit, and they tell me how important it is that we mainstream these activities and considerations and safeguards, et cetera. And I see the importance. And at the same time, I talk to my colleagues in the gender department, in the climate change department, in the disability and inclusion department, et cetera. So in the end, I know that all these considerations are so important, and yet my reality on the ground is that I have a highly dynamic political environment, technological debates that evolve very quickly, a limited budget and limited time. And often, and maybe that's a lesson learned for myself as well, I end up going with those who scream the loudest. Or, and maybe that's the positive side, with those where I have resources that are easy to use off the shelf, where I don't have to become half a child protection expert in order to implement the activities, but have tools that I can really take and implement.
And that has been really helpful for some of the activities that we have been able to do, for instance on data protection: take these tools and apply them without having to become an expert in the field itself, because we're working at a very interdisciplinary level in the end. So that's maybe one of the challenges slash opportunities that I see. And, okay, that was the challenge; let me talk about an opportunity that I see as well. Development agencies, financial assistance organizations, et cetera, are setting up bigger and bigger digitalization projects. When I started at GIZ, we had a 3 million project, and that was it. Now, the Digital Transformation Center in Kenya alone is a 30 million project, right? So we are getting bigger and bigger, and I think it's an opportunity for us to mainstream into our own frameworks and our own tools ways to include these considerations. And maybe a best practice that I could mention here: we've developed what we call the digital rights check. It's an online tool. It takes you, as a practitioner, 30 minutes to go through the tool to assess your project, either at the project design stage or at the implementation stage. The check is a bit broader, it's about human rights in general, but there's a specific part on child online protection, and it tells you exactly: have you thought of X, Y, Z? This is what you could do. These are further resources. These are people you can contact. And that has been highly valuable to me, because it caters to all these needs, the importance is clearly there, but it meets the daily environment in which I operate. So maybe that's also an opportunity: to have these kinds of hands-on tools, like the digital rights check, to guide our activities on the ground. Thank you so much, Salomé,

Marija Manojlovic:
because I think you brought some reality back into the context, in terms of the lack of resources and trying to align all of the resources across different agendas. And when you speak about how you actually decide what to work on, I think I have a perfect answer for you, because Mattito and Ananya are going to tell us a little bit more about USAID's work. Let me just introduce that for a minute. Mattito, you are USAID's lead on child protection within the Children in Adversity team. Most recently you have been leading a cross-agency effort to define USAID's approach and roadmap on digital harms. And I think you embarked on the process with a set of premises, and then you switched everything around when you started involving young people. And that is really what I thought was the most thoughtful thing USAID could have done: to engage with young people at an early stage. So tell us more about that journey and what you've learned through it. You've established your digital youth council, and Ananya, who is here, was part of the first cohort and became an advisor afterwards. I'm going to give you both six minutes. Sorry to give you only that. Please. Thank you so much. I'm so sorry that this is going to become like a

Mattito Watson:
running game. First of all, thank you to everyone; we are the last ones speaking. The good news is that a lot of people have already said things I wanted to say today, so I can zip through my talking points. The bad news is that we've lost part of our crowd. So thank you, everybody who has stayed to the very end; you're going to get the best part of the session right now. I also want to thank you for saying that USAID was thoughtful; we're not always called thoughtful over here at USAID. We're one of the largest development organizations; we're the international branch of the U.S. government. Our job is really to save lives, reduce poverty, strengthen democracy, and help people move beyond assistance. And to do that, we've got to always be looking towards the future. USAID came a little late to the party in terms of our digital strategy; it just came out in 2020. It's very comprehensive, it's very robust, and I recommend people go online and read it. But when it was being developed, the question came up from our team, the child protection team: where are the children and youth? They are our future. They are going to be picking up whatever we lay down, and they're going to be driving it as the next generation moving forward. So, as the child protection person at USAID, or one of them, but leading in terms of our Children in Adversity team, they asked me to lead on our digital strategy. I am a field practitioner. I spent 25 years in Africa working with children and youth. I am not a digital person, and that ended up being a good thing, for reasons I'll skip over for the moment. But I said to myself: how do I get around my blind spot? How do I really understand what's happening? How do I understand what's going on with a 16-year-old girl online in Brazil, or with a 12-year-old boy in Ukraine?
And so my brilliant idea, which I somewhat stole from Microsoft, I'll give them that nod of credit, was to develop a digital youth council. At USAID, with our youth strategy, we want to work with youth, not for youth. That means bringing youth to the table, listening to what they have to say, and incorporating their viewpoint in our strategy and our implementation. So, two and a half years ago, I created a 12-member digital youth council, consisting of seven young women and five young men, to not only advise us in terms of whether our strategy is on point and where we are going, but also to build the next generation of changemakers, the next generation of leadership. We're in our second year now. Putting my money where my mouth is, and also to make this last part go as quickly as possible, I am going to turn the floor over to Ananya, the voice of the youth, to tell us: what was it like working with USAID? Did you see us responding to your voice? And how did you feel about the overall process?

Ananya Singh:
First of all, thank you very much for inviting me today. Not only is this topic very close to what I'm deeply passionate about, but I've relentlessly worked on this for the past three years, and this session provides us an opportunity to reflect on some of the best practices in this area. And as the youth advisor to the USAID Digital Youth Council, I am very happy to have been invited to shed light on the success story that our council has been. And hence, as I speak, I hope that the story inspires more people to take action for the future, with the future. As a generation of young people born into the digital age, we understand how digital technologies impact or impair our aspirations and rights. All we need is a platform to actually be heard. Given that digital technology helps to enhance our capacity to engage with and empower the youth, there is no excuse anymore not to reach out and actually seek input from the youth in a more participatory way, treating them as the active and equal partners in digital development that they are. Recognizing this, USAID, which has long prioritized positive youth development, established the Digital Youth Council in 2021. I consider it my absolute privilege to have been a part of the Digital Youth Council since its very first day. Over the past two and a half years, the council has not only served as an important voice in helping to guide the implementation of USAID's digital strategy, but has also helped to raise awareness about digital harms in many countries and to influence national leaders, the private sector, civil society, local communities, and other youth on how best to keep safe while learning, playing and exploring in the digital world.
We have co-created and led sessions on innovation and emerging technologies, such as machine learning, artificial intelligence, large language models and ChatGPT, and tried to establish their connection with digital harms that target young children, including our young council members. With the support, training, mentorship, resources, and encouragement provided through our extremely carefully designed program, our council members have been able to design apps that educate young people about digital harms through interactive games and other modern features. In fact, one of these apps is about to go live on the Google Play Store by the end of this year. We're also very proud to have involved our young council members in planning and speaking at multiple USAID sessions at the Global Digital Development Forum in 2022 and 2023. Personally, I had the opportunity to speak at the USAID Youth Policy Launch in 2022. USAID enabled a young person like me to share the stage with, and ask questions directly to, the USAID administrator and U.S. Congress representatives. I also had the opportunity to emcee USAID's International Youth Day event in 2022, where 1,200 people from across the globe joined us to celebrate young people and engage in a panel discussion on intergenerational solidarity, inclusion, protection, and mental well-being. But our magnum opus, the first virtual symposium on protecting children and youth from digital harms, attracted the attention of thousands of leaders in government, civil society, and the private sector. We organized it in collaboration with Save the Children and TechChange. The event brought together influential policymakers and our young council members for panel discussions on themes including, but not limited to, online harms, hate speech, and cyberbullying. This symposium helped to further the U.S. government's APCA strategy and USAID's digital strategy.
Thank you everyone for being with us this early afternoon here in beautiful Kyoto, Japan, and welcome to all the remote participants that are following from all over the world, including my colleagues in Latin America, where it must be extremely late at night, but I’m sure that some friends of ours are there. My name is Olga Cavalli, I am the National Cyber Security Director of Argentina, and I also chair the South School of Internet Governance, here with me, my dear friend, Tracy Hawkshaw. Thank you.

Mattito Watson:
And as you can see, she makes my life a lot easier, because the voice of the youth are us being able to really provide that platform. We’re actually seeing that change start to happen, and we’re actually getting it right in terms of where we should be investing in U.S. government, in terms of dollars, in terms of protecting children and promoting employment for them. Thanks.

Marija Manojlovic:
Thank you so much, Mattito, and Ananya, you’ve made my job also easier, because that was a really perfect closure to this discussion. I do want to thank all the participants. I’ll just take two minutes, or maybe one minute, to try to sum up some of the main takeaways. I think we all agree, and young people tell us, that the Internet is great. They love it. They like to be online. They like to engage online. It’s opening so many opportunities for them. But online and offline worlds are not, for them, separate. This is just the way the world is. And we need to make sure that the rules that apply in the offline world are applicable in the online world, and vice versa. And one thing that is going to help us align across different agendas will be a much more rigorous and strong focus on participation of people who have lived experiences, young people who can tell us what the needs are, but also on really using a vulnerability lens to understand the trends and threats online, to make sure that, as we are building this great online world, we are not exacerbating existing vulnerabilities: the existing gender divide, existing issues around gender norms and toxic masculinity, and issues around radicalization, extremism, and all other expressions of violent behavior and power dynamics that exist in the offline world. And the last thing I want to say is that we have really seen, and are calling for, action in terms of increased investment in this particular field, because it is sorely lacking dedicated investment from governments, but also from industry and other players, whether it’s investment in foreign policy goals, investment domestically, investment in internal organizational infrastructure, or in the frontline services that we all need to have. 
So with that, I will thank you all for participating in this discussion. I have definitely been too ambitious in terms of the topics we wanted to cover and the people we wanted to hear from, but I’m really grateful that you are all here. I will run to my plane right now, but I will leave you all to chat a little bit more. Hopefully you go for drinks or something. Those who are online, please reach out. We will be happy to engage with you. Go to safeonline.global and follow us on social media, and we will be happy to engage with all of you. Thank you for the session, and thank you all for joining us; we look forward to seeing you again tonight.

Cailin Crockett

Speech speed

149 words per minute

Speech length

1232 words

Speech time

497 secs

Albert Antwi Boasiako

Speech speed

156 words per minute

Speech length

1675 words

Speech time

644 secs

Ananya Singh

Speech speed

165 words per minute

Speech length

738 words

Speech time

268 secs

Andrea Powell

Speech speed

164 words per minute

Speech length

2426 words

Speech time

886 secs

Audience

Speech speed

162 words per minute

Speech length

608 words

Speech time

225 secs

Henri Verdier

Speech speed

166 words per minute

Speech length

731 words

Speech time

263 secs

Julie Inman Grant

Speech speed

166 words per minute

Speech length

1386 words

Speech time

500 secs

Marija Manojlovic

Speech speed

194 words per minute

Speech length

4378 words

Speech time

1352 secs

Mattito Watson

Speech speed

190 words per minute

Speech length

655 words

Speech time

207 secs

Moderator

Speech speed

179 words per minute

Speech length

131 words

Speech time

44 secs

Salomé Eggler

Speech speed

177 words per minute

Speech length

1057 words

Speech time

359 secs

Safeguarding the free flow of information amidst conflict | IGF 2023 WS #386


Full session report

Rizk Joelle

Digital threats and misinformation have a significant negative impact on civilians residing in conflict zones. The dissemination of harmful information can exacerbate pre-existing social tensions and grievances, leading to an increase in violence and violations of humanitarian law. Furthermore, the spread of misinformation can cause distress and a psychological burden among individuals living in conflict-affected areas. This hampers their ability to access potentially life-saving information during emergencies. The distortion of facts and the influence of beliefs and behaviours as a consequence of the dissemination of harmful information also contribute to raising tensions in conflict zones.

One concerning aspect is the blurred line between civilian and military targets in the context of digital conflicts. Civilians and civilian infrastructure are increasingly becoming targets of digital attacks. With the growing emphasis on shared digital infrastructure, there is an increased risk of civilian infrastructure being targeted. This blurring of lines undermines the principle of distinction between civilians and military objectives, which is a critical pillar of international humanitarian law.

Moreover, digital threats pose a threat to public trust in humanitarian organizations. Cyber operations, data breaches, and information campaigns not only damage public trust but also hinder the ability of humanitarian aid organizations to provide life-saving services. This erosion of trust compromises their efforts to assist and support individuals in need.

To address these challenges, it is crucial for affected communities to build resilience against harmful information and increase awareness of the potential risks and consequences in the cyber domain. Building resilience requires the involvement of multiple stakeholders, including civil society and companies. Information and communication technology (ICT) companies, in particular, should be mindful of the legal consequences surrounding their role and actions in the cyber domain. It is important that self-imposed restrictions or sanctions do not impede the flow of essential services to the civilian population.

In addition to community resilience and awareness-building efforts, policy enforcement within business models is crucial. Upstream thinking in the business model can help reinforce policies aimed at countering digital threats and misinformation. However, the discussion around policy enforcement in business models is challenging. It requires expertise and a feedback loop with tech companies to find effective and efficient solutions.

In conclusion, digital threats and misinformation have dire consequences for civilians in conflict zones. The dissemination of harmful information exacerbates social tensions and violence, while digital attacks on civilians and civilian infrastructure blur the line between military and civilian targets. These threats also undermine public trust in humanitarian organizations and hinder the provision of life-saving services. To tackle these challenges, it is essential to build community resilience, increase awareness, and enforce policies within business models. Collaboration between stakeholders and tech companies is key to addressing these complex issues and safeguarding the well-being of individuals in conflict zones.

Speaker

In conflict zones, technology companies face a myriad of risks and must carefully balance the interests of multiple stakeholders. These companies play a critical role in providing essential information and functions but can also unintentionally facilitate violence and spread false information. One major challenge is responding to government demands, such as granting access to user information, conducting surveillance, or shutting down networks. These demands can come from both sides of the conflict and may lack clarity or have excessively broad scope.

The options available for dealing with government demands in peacetime are more limited in conflict situations due to the associated risks. Companies can request clarity on a demand’s legality, respond minimally or partially, challenge the demands, or disclose them publicly. However, in conflict settings, each of these actions may pose significant risks.

To navigate these challenges, technology companies can implement various measures. These include establishing risk management frameworks, clear escalation procedures, and consistent decision reviews. By doing so, companies can better manage risks of operating in conflict zones. Collaboration with other organizations in coordinating responses in conflict regions and consulting with experts to understand potential implications of decisions can also help.

Respecting international humanitarian law is a key principle of corporate responsibility in conflict situations. Companies are expected to respect human rights and require guidance on respecting international humanitarian laws when conducting business in conflict-affected areas. Enhanced due diligence, considering heightened risks and negative human rights impacts, is recommended by the United Nations Guiding Principles on Business and Human Rights.

Further articulation is needed on what international humanitarian law means for technology companies in practice. To address design issues in platforms, companies should consider building the capacity to apply a conflict lens during product development, so that issues arising in conflict zones can be better identified and resolved.

Addressing information topics requires considering both upstream and downstream solutions. This comprehensive approach takes into account the flow of information from sources (upstream) to distribution and consumption (downstream).

Overall, technology companies operating in conflict zones face unique challenges and must navigate complex risks. Implementing effective risk management frameworks, respecting international humanitarian law, and incorporating a conflict lens into product development can better address the multifaceted issues they encounter. Further guidance is needed in certain areas to ensure operations in conflict zones align with established principles and standards.

Chantal Joris

The analysis delves into the challenges surrounding the free flow of information during conflicts. It starts by highlighting the digital threats that journalists and human rights defenders face in such situations. These threats include mass surveillance, content blocking, internet shutdowns, and other forms of coercion aimed at hindering the dissemination of information. The sentiment towards these challenges is negative, as they pose a significant threat to the values of freedom of expression and access to information.

Another significant aspect explored in the analysis is the role of tech companies in conflicts. Digital companies have become increasingly important actors in these situations, and the analysis argues that they have a responsibility to develop strategies to avoid involvement in human rights violations. This neutral stance reflects the need to address the complex ethical dilemmas faced by tech companies, balancing their business interests while safeguarding human rights.

The analysis also discusses the reliance of civilians on information communication technologies (ICT) during conflicts. Civilians often use ICT to ensure their safety, gain information on conflict conditions, locate areas of fighting, and communicate with their loved ones. This neutral sentiment highlights the significance of ICT in providing vital communication channels and access to information for affected civilians.

The analysis further sheds light on the attempts made by the army and political parties to control the narrative and shape the discourse during conflicts. Conflict parties often aim to manipulate information and control the narrative for various reasons. This negative sentiment highlights the detrimental impact of information control on the public’s understanding of conflicts and the potential for shaping biased opinions.

A key observation from the analysis is the necessity of a multi-stakeholder approach in conflict contexts. It stresses the importance of different actors, such as ICT companies, content moderators, and organizations like the International Committee of the Red Cross (ICRC), working collaboratively to tackle the diverse threats to information flow. This positive sentiment reflects the recognition that no single entity can address the complexities of information challenges during conflicts alone.

Moreover, the analysis calls for identifying gaps in understanding and addressing the issues related to information flow during conflicts. This neutral sentiment highlights the need for more clarity and targeted efforts to bridge these gaps. The conclusion emphasizes the importance of comprehensively addressing the challenges and harnessing the potential of information communication technologies to ensure the free flow of information during conflicts.

In conclusion, the analysis explores the various challenges and dynamics surrounding the free flow of information during conflicts. It highlights digital threats, the role of tech companies, civilian reliance on ICT, information control by conflict parties, the necessity of a multi-stakeholder approach, and the need for identifying gaps for clarity. With this comprehensive understanding, stakeholders can work towards developing strategies and policies that uphold the values of information access and freedom of expression in conflict situations.

Khattab Hamad

Sudan is currently embroiled in a civil war between two formerly allied forces whose alliance dates back to 2013. That alliance was riddled with challenges and disagreements, particularly regarding security arrangements and the unification of Sudan’s armies. These disagreements culminated in the outbreak of fighting on 15 April. Unfortunately, the sentiment surrounding this war is negative.

Information control has played a significant role in the conflict, with internet disruptions and the spread of misinformation being notable events. Internet shutdowns during exams and civil unrest have been used by authorities to manipulate public opinion. The sentiment towards these events is negative.

Another issue in the conflict is the misuse of social media platforms, which have been exploited by both sides to spread their own narratives and manipulate public opinion. This misuse has prompted concerns about information imbalance and led platforms like Meta to take down accounts associated with the Rapid Support Forces. The sentiment towards this misuse is negative.

The Rapid Support Forces (RSF) and the Sudanese Armed Forces have been criticized for their harmful practices towards civilians and the nation’s infrastructure. Privacy violation cases, including the use of spyware, have been reported. The RSF imported the Predator spyware of Intellexa, while the National Intelligence and Security Service (NISS) imported the remote control system of the Italian company Hacking Team in 2012. The sentiment towards these privacy violations is negative.

The conflict has also had a significant impact on the ICT (Information and Communication Technology) sector in Sudan. Power outages have impaired network stability and e-banking services, forcing ICT companies to rely on uninterruptible power supply systems and generators. The sentiment towards this situation is negative.

On a positive note, telecom workers have been recognized as crucial for maintaining access to information infrastructure during conflicts. It is argued that they should be given extraordinary protection, similar to doctors and journalists, due to their vital role in ensuring the continuous flow of information. The sentiment towards this proposal is positive.

In conclusion, Sudan’s civil war has had far-reaching consequences, impacting security agreements, information control, privacy rights, the ICT sector, and the protection of key players in the information infrastructure. Efforts to address these challenges and protect these key players are essential for promoting peaceful resolutions and mitigating the impact of future conflicts.

Tetiana Avdieieva

During the armed conflicts in Ukraine, there have been severe restrictions on free speech and the free flow of information. Since the war began in 2014, the country has witnessed a decline in the protection of free speech and access to information. This has resulted in mass surveillance, content blocking, Internet shutdowns, and sophisticated manipulation of information.

Digital security concerns have also arisen during these conflicts. Attacks on media outlets and journalists largely originate from Russia, with DDoS attacks on websites disrupting connectivity. Coordinated disinformation campaigns on social media and messaging platforms further exacerbate the situation, influencing public opinion and spreading false narratives.

One key issue highlighted is the control over narratives and the free flow of information during armed conflicts. The ability to shape public opinion becomes a powerful tool in these circumstances, with the potential to influence the course of the conflict and its outcomes. It is crucial to address this issue by formulating an exit strategy that lifts restrictions from the outset of the armed conflict. This strategy should consider the vulnerability of post-war societies to malicious narratives and work towards reestablishing human rights that were restricted during the conflict.

Another significant concern is the gap in international law regarding the handling of information manipulation during peace and conflict. Current legal frameworks do not adequately address the issue, leaving room for exploitation and the spread of disinformation that incites aggression and hatred.

There have also been attempts to shift the focus away from the harm inflicted upon civilians and the suppression of opposition during these conflicts. These attempts to change the narrative divert attention from the atrocities committed and the need to protect the rights and safety of civilians.

The extensive support for the invasion among the Russian community is a cause for concern. According to data from Meduza, a significant portion of Russian citizens, ranging from 70% to 80%, support the invasion. This highlights the challenge of countering misinformation and disinformation within Russia and addressing the narratives that drive aggression and illegal activities.

The role of ICT companies in moderating harmful content in conflict settings is crucial. These companies need assistance, both globally and locally, to effectively combat harmful information. This includes distinguishing between harmful information and illegal content, as well as understanding the localized contexts in which they operate. Local partners can provide valuable insights into regional issues, such as identifying and addressing local slur words and cultural sensitivities.

However, it is important to approach the role of tech giants with caution, avoiding a strategy of blaming and shaming. Over-censorship and driving people to unmoderated spaces can be unintended consequences of such an approach. Instead, a collaborative approach that involves ICT companies, multi-stakeholder engagement, and responsible corporate practices is necessary to foster a safer online environment.

In conclusion, the armed conflicts in Ukraine have led to significant restrictions on free speech and the free flow of information. Digital security concerns, information manipulation, and the spread of disinformation within Russia pose additional challenges. It is crucial to adopt an exit strategy that lifts restrictions and safeguards vulnerable post-war societies from malicious narratives. Efforts should also be made to address gaps in international law regarding the handling of information manipulation. The support for the invasion among the Russian community and attempts to divert attention from civilian harm and opposition suppression further complicate the situation. ICT companies play a crucial role in moderating harmful content, and a collaborative approach is necessary to strike a balance between curbing misinformation and ensuring freedom of expression.

Audience

An analysis conducted by Access Now reveals that prevailing trends in content governance are endangering freedom of expression and other fundamental rights. Several issues have been identified in relation to parties involved in conflicts, highlighting the dangers faced by these rights.

During times of crisis, content governance has been exploited in various ways, breaching international humanitarian law. One concerning practice is the intentional spread of disinformation as a warfare tactic. Additionally, platforms have been used for population movement, and sharing content depicting prisoners of war illegally has been observed. These actions not only violate international laws but also contribute to the erosion of freedoms.

While internet restrictions exist in conflict zones, it is interesting to note that Russia maintains significant accessibility to various platforms. Many Ukrainian media and telegram channels continue to be effectively available in Russia. Furthermore, despite restrictions, information can still flow through various social media and messaging platforms. This highlights the complexity of internet restrictions and the need for further examination.

The analysis also underlines the need for international laws addressing informational warfare. Both Russia and Ukraine face internet warfare, yet there is a lack of legal frameworks specifically designed to address this issue. The absence of such laws creates a significant gap in addressing and countering the threats posed by disinformation campaigns and cybersecurity breaches.

Russia particularly faces numerous cybersecurity threats and disinformation campaigns, primarily originating from Ukraine. Instances of Russian citizens’ personal data being leaked and published online have been identified, along with the identification of over 3,000 disinformation narratives against Russia. These threats pose challenges to the integrity and security of information in the country.

Social media platforms’ over-enforcement is flagged as a major problem for media and journalists, with many legitimate news sources having their accounts suspended or restricted. This issue is particularly prevalent in cases involving conflict settings, such as Palestine and Afghanistan, where the presence of dangerous organizations contributes to heightened enforcement measures.

The complexity of platform rules is highlighted as a concern in conflict settings. In such situations, rules can be confusing and easily violated, with typical infractions including the posting of images depicting dead bodies. This observation sheds light on the challenges faced by content creators and users as they navigate restrictive guidelines during conflicts.

Addressing misinformation requires the implementation of upstream solutions, as highlighted by Maria Ressa. This approach focuses on addressing misinformation at its root causes, rather than solely addressing its dissemination. By focusing on upstream solutions, it is possible to create more effective strategies to combat misinformation and its harmful effects.

The analysis raises questions about the design of platforms and the role of algorithms and business models in managing information. It suggests the need to reconsider and possibly redesign these aspects to ensure fairness, accuracy, and accountability in content dissemination. This observation emphasizes the ongoing need for innovation and improvement within the digital landscape.

BSR, a leading global organization, provides a toolkit for companies on how to conduct enhanced human rights due diligence in conflict settings. This initiative aims to promote the respect and protection of human rights, even in challenging circumstances. The toolkit, developed in collaboration with Just Peace Labs, offers detailed guidance, making it an invaluable resource for responsible business practices.

Furthermore, the analysis advocates for human-centered approaches in digital transformation, particularly in conflict zones. Stakeholder consultation can be challenging in war zones, highlighting the importance of ensuring that the interests and needs of all individuals are considered and that no one is left behind in the process.

There is a noted lack of focus on countries like Afghanistan and Sudan in discussions surrounding these issues. This observation emphasizes the need to broaden the scope of discourse and pay equal attention to conflicts and human rights violations occurring in these regions.

Global media platforms play a substantial role in shaping public opinion, primarily through their recommendation algorithms. However, concerns arise regarding the impartiality and bias of these algorithms. The analysis reveals that global media platforms often alter their recommendation algorithms to favor one side in informational wars, despite presenting themselves as neutral. This highlights the potential influence and manipulation of public opinion through these platforms.

Given the significance of global media platforms, the analysis argues that global society should exert more pressure on these entities. Increased accountability and transparency are necessary to ensure that these platforms operate in an unbiased and fair manner, considering the critical role they play in shaping public discourse.

In conclusion, the prevailing trends in content governance pose a threat to freedom of expression and fundamental rights. Exploitation of content governance during times of crisis, the need for international laws addressing informational warfare, and the over-enforcement by social media platforms are among the challenges highlighted in the analysis. The complexity of internet restrictions and the design of platforms also warrant further consideration. Additionally, the importance of upstream solutions, human-centered approaches, and the inclusion of marginalized regions in discussions emerge as key insights. Efforts towards increasing platform accountability and transparency are crucial to safeguarding a fair and unbiased digital landscape.

Session transcript

Chantal Joris:
Good afternoon, everyone, all the participants in the room, and also good morning, afternoon or evening to those who join online. My name is Chantal Joris. I’m with the freedom of expression organization Article 19, and I will be moderating the session today. In today’s session, we want to explore some of the current challenges posed to the free flow of information, specifically during armed conflicts. And I want to start by making a couple of opening remarks as to where we are at. We do know that conflict parties have always been very keen to control and shape the narrative during conflicts, perhaps to garner domestic and international support, to maybe portray in a favorable light how the conflict is going for them. And of course, also often to cover up human rights violations and violations of international humanitarian law. So this is nothing new, yet what has changed, of course, is what armed conflicts look like in the internet age. We see an increased use of digital threats against journalists and human rights defenders, mass surveillance, content blocking, internet shutdowns, and even the way that information is manipulated has become much more sophisticated with the tools that parties have available today. And of course at the same time, civilians really rely at an unprecedented level on information communication technologies to keep themselves safe, to know what’s going on during the conflict, where fighting takes place, and also to communicate with their loved ones and see that they are okay. 
And also I want to emphasize a little bit that these issues are not necessarily limited to just the top 5 to 10 conflicts that tend to make the headlines; there are currently about 110 active armed conflicts in all regions of the world. And beyond conflict parties, even states that are not party to a conflict have to grapple with questions, for example, as we have seen recently, whether they should sanction propagandists or ban foreign media outlets. So this is really an issue that concerns all states and the whole world. And also what we have seen is that digital companies have become increasingly important actors as well in conflicts, and they do need to find strategies to avoid becoming complicit in human rights violations and violations of humanitarian law. So to discuss some of these challenges, I’m very happy to introduce the panelists of today. I also want to make a quick remark in this context that we notice that many of our partners from conflict regions have not been able to come to IGF and have these discussions in person, although we talk a lot about the need for an open and secure internet, including of course during conflicts; they are often the stakeholders that are most affected, and they are not really able to join these discussions except online. Similarly, most of our speakers on this topic that we really wanted to have at the table are also joining us online today. The first speaker joining us online is Tetiana Avdieieva. She is Legal Counsel at the Digital Security Lab Ukraine, an organization that has been established to address digital security concerns of human rights defenders and organizations in Ukraine. We also have Khattab Hamad, an independent Sudanese researcher focusing on digital rights and internet governance, who is working with the Open Observatory of Network Interference and Code for Africa. We have Joëlle Rizk joining us. 
She is Digital Risks Advisor at the Protection Department of the International Committee of the Red Cross. And next to me here in person is Eleni Hickok. She is Managing Director of the Global Network Initiative, of which Article 19 is also a member. She will also introduce what this multi-stakeholder initiative is all about. Also, we were supposed to have here Irene Khan, Special Rapporteur on Freedom of Expression. Unfortunately, she had to be in New York at the same time in person, and we were struggling to remove her from the program, so apologies for that. But she has been focusing on these questions as well, and I encourage you to read also her report from last year on disinformation in armed conflicts, and she continues to engage in this discussion as well. So, a quick breakdown of the format of the session. We have about 75 minutes to discuss these challenges. I will address a couple of questions to the speakers, but it is really meant as an interactive discussion, it is meant to be a roundtable, so I will also be asking some of the questions to you as well, after the speakers have been able to express themselves on the issues, so throughout the discussion. And then at the end there will also be a chance, obviously, to give input on what we might have missed and what open questions there are for the speakers. So perhaps let’s start with discussing the main digital risks that we see, and also the risks to the free flow of information during conflicts. I will first have Tetiana from Ukraine and Khattab from Sudan talk about this, but then I will also be very keen to hear from you what, in your areas of work or in the regions you are from, you have been observing as the key challenges in this respect. So Tetiana, if I can start with you.

Tetiana Avdieieva:
Yeah, hi everyone, and it’s my great pleasure to be here today and to talk about such an important topic. First of all, I wanted to share a brief overview of what is going on in Ukraine currently regarding the restrictions on free speech and the free flow of information and ideas, which were introduced long before the full-scale invasion, since the war in Ukraine started in 2014 with the occupation of Crimea, and after the full-scale invasion as a rapid response to the changing circumstances. Basically, restrictions in the Ukrainian context can be divided into two parts. The first part concerns restrictions related to the regime of martial law and derogations from international obligations. The second part relates to so-called permanent restrictions. For example, there is a line of restrictions based on origin, particularly concerning Russian films, Russian music and other related issues. Also, there are restrictions serving as a kind of follow-up to Article 20, for example the prohibition of propaganda for war, the prohibition of justification of illegal aggression, etc. The problem, especially with the restrictions which were introduced after the full-scale invasion, is that restrictions drafted in a rush are often poorly formulated, and therefore there are lots of problems with their practical application. However, what concerns me the most in this discussion is the perception of restrictions of this kind by the international community. The problem often is that people don’t take into account the context of the restrictions. And when I’m speaking of the context, it is not only and purely about missiles flying above someone’s head. It is about the motives which drive people to be involved in armed conflicts. And that is a very important reservation to be made at the very beginning of this discussion, because we have to speak about the root causes.
And I often make this comparison: for me, armed conflicts can be compared to the law of conservation of energy, in that armed conflicts do not appear from nowhere and they do not disappear into nowhere. So when, for example, a certain situation starts, we have to understand that there are motives behind the aggression on the side of the aggressor. And therefore we have to work with those motives to prevent further escalation and to prevent repetition of the armed conflict, to prevent re-escalation basically. In this case, assessment of the context is, unfortunately, not basic math, it is rather rocket science. For example, in the Ukrainian context, the preparation of fertile ground for propaganda, for Russian interference, has been done in the information space for at least the last 30 years of Ukrainian independence, when at the entire European level it was said that Ukraine is not really a state, that there is no right to sovereignty and that statehood was basically a gift to the Ukrainian nation, that all representations in front of the international community on the side of the post-Soviet countries were done by Russia, etc. What does it mean? It means that there was a particular narrative which was developed, and a narrative with which we have to work. Why is this important? Because usually restrictions are treated, I would say, rather in a vacuum. We are trying to apply the ordinary human rights standards to the speech which is shared, to the narrative which is developed, in the context of the armed conflict. And it is very important because at the end of the day, what any country which is in a state of war faces is the statement that as soon as the armed conflict is over, all the restrictions have to be lifted. And here we miss a very important point, the point about the transition period, the so-called exit strategy, which is very frequently substituted by automatic cancellation of the restrictions.
And that actually is part of the discussion on the rebuilding of Ukraine in terms of reinforcing democratic values, re-establishing human rights which were restricted, etc. So at this particular point, it is very important to mention that we have to think about the transition period of lifting the restrictions from the very beginning of the armed conflict. Because when the restrictions are introduced, we have to understand that they cannot end purely when there is a peace agreement. Otherwise, it won’t make any sense from a practical standpoint, because the narratives will still be there in the air. Therefore, we have to develop this exit strategy and understand that post-war societies are very vulnerable to any kind of malicious narrative. They cannot be left without protection even after the end of the war. And finally, a brief overview of the digital security concerns. I will try to summarize it in one minute, not to steal a lot of time. Currently, there are lots of problems on the digital security side. For example, there are attacks on databases and attacks on media, which not only target the media as websites for sharing information, but also target the journalists, which is more important, because people experience a chilling effect and are afraid of sharing any kind of idea because they might be targeted. I mean, from the side of the aggressor state, because currently, at least in the Ukrainian context, the biggest threat stems from Russia, especially for those journalists who are working on the frontline and who can be captured, tortured, killed. And there have been many examples of such things happening. Also, there is the problem of DDoS attacks on websites, which interrupt the work of the websites and disable sustainable connection.
There were also attempts to spread malware and spyware in order to track individuals, to check what they are working on, and to prevent the truth from being distributed to the general public. And finally, there are coordinated disinformation campaigns on social media platforms and messaging services, including Telegram, which is another important topic, and probably a topic for a separate discussion. So I won’t dwell on that for my entire speech, but I mention it for you to understand that this discourse is very extensive and there are lots of things to talk about. I will stop here and give the floor back to Chantal. Thank you very much for listening, and I will be happy to share further ideas in the course of the discussion.

Chantal Joris:
Thank you very much, Tetiana. Khattab, if I can bring you in and have you share your observations about the situation in Sudan as well, following the recent outbreak of hostilities a couple of months ago.

Khattab Hamad:
Thank you, Chantal. Hi, everyone. I want to welcome you and the other participants, and it’s really an honor for me to speak at the IGF. To keep the attendees updated: Sudan is going through a war between two forces that had been allied since 2013. The alliance came to an end on April 15th due to differences over the security agreements related to the unification of the armies in Sudan. This has put the Sudanese people in a bad position, because the parties to the war are not following the laws of war, in addition to the war’s impact on basic services, including electricity and communication. This has contributed to widespread manipulation of the war narrative and the spread of misinformation, in addition to intense polarization. So to answer your question, in Sudan right now we have internet shutdowns, we have targeting of telecom workers, we have disinformation campaigns, and we also have privacy violations. And unfortunately, these practices are used by both sides of the war, not only one side, whether the Rapid Support Forces (RSF) or the Sudanese Armed Forces (SAF), the official military. Regarding internet disruption, it is not a new experience for the people of Sudan. The authorities used to shut down the internet during exams and civil unrest. And this time, due to the ongoing conflict, there have been numerous and periodic internet disruptions in Khartoum, the capital of Sudan, and the cities of Nyala, Zalingei, and Al-Junaina. These events are considered an effort at information control during the war. However, some disruption cases in Khartoum are related to the security concerns of telecom engineers and other telecom workers, as they may face violence when moving around for maintenance. So the absence of internet connection has opened a wide door to disinformation, as people cannot verify the information they get from local sources.
Moreover, disinformation during the conflict also exists in cyberspace, and it has several actors, but there are two main players here: the SAF, the Sudanese Armed Forces, and the RSF. Both parties are using proxy accounts and influencers on social media platforms to promote and propagate their narrative regarding the war. This practice puts civilians at risk, because getting wrong information may affect their decision to move around their neighborhood or their decision about displacement. Moreover, what I have observed is that disinformation is threatening the humanitarian response. For example, the ICRC office in Sudan posted on Facebook warning people not to follow disinformation. Also during this war, several privacy violation cases have happened, such as physical phone inspection, with many cases of inspection by soldiers from both sides, and also the use of spyware. Actually, we have not been able to verify the use of spyware until now, but there are claims of it. The important thing to mention here is that the RSF imported the Predator spyware of Intellexa, an EU-based company providing intelligence tools. And this is not the first use of spyware in Sudan: the NISS, the National Intelligence and Security Service, imported the Remote Control System of the Italian company Hacking Team in 2012. So I think that’s it from my side, Chantal. Back to you.

Chantal Joris:
Thank you very much. And thank you also for this account and for explaining how these information threats can really lead to offline violence and concrete harms to civilians. So, the same question to the people in the room: what have you seen or perceived, in your experience, as the main risks to the free flow of information, be it through surveillance, propaganda, or internet shutdowns? What’s your perspective?

Audience:
Hi. Thank you so much for the great presentations. I’m Eliška Pírková from Access Now. We are also working on the issue of content governance in times of crisis, and we have recently been mapping a number of prevailing trends in the field that, in one way or another, put freedom of expression and other fundamental rights in danger. We looked at this issue specifically from the perspective of international humanitarian law, and we are witnessing several issues, especially from parties to the conflicts that are very much the instigators of those. One of them is of course the intentional spread of disinformation as part of a wartime tactic. We have different case scenarios that we are supporting with case studies of things that really happened in the field, such as, for instance, claiming or warning that an invasion will take place when in reality this invasion never occurred. There is a very specific example from Israel in 2021 where even international media were convinced that the invasion was taking place and reported on it, which was just part of a military strategy, and there are a number of other examples from different regions around the world. Another one is of course using platforms for the purpose of moving parts of the population from one territory to another, which from the perspective of international humanitarian law, at least in the context of non-international armed conflict, is not even permitted, and we see those cases as well.
Then there is the entire issue of content depicting prisoners of war, which was very widely reported and which can again endanger the privacy and identity, and thus the safety and security, of the individuals depicted in the video content being shared. And there may be another two or three case scenarios that we identified in the field for which we are still gathering case studies. This will all be summarized in our upcoming report, which we hope to publish in the coming weeks. I don’t want to take up too much time, but I am happy to elaborate further and give space to others as well.

Chantal Joris:
Thank you very much for the excellent points. Anyone else?

Audience:
Thanks for giving me the floor and the opportunity to speak and express myself. I’m Tim from Russia, and what I can say about internet shutdowns and internet restrictions in the context of conflict is that it’s pretty obvious that any country involved in a conflict will ensure that there are some restrictions on internet websites, media and so on. But frankly speaking, it is not as restricted as it might seem from abroad, as information keeps flowing through Telegram, messengers and some social media, and lots of Ukrainian media and Ukrainian Telegram channels are still effectively available in Russia. So I can’t say there is a super restricted environment in the Russian media sphere. At the same time, as the Ukrainian speaker said, we face lots of cybersecurity threats, obviously coming from Ukraine in the same way: denial-of-service attacks, sophisticated attacks on governmental and non-governmental private web services and companies, and lots of data leaks. For example, recently Ukrainian hackers published a leaked database from a company that was a service provider for airline tickets and airline connections. So basically all imaginable personal data, including names, dates and all the flight information of Russian citizens, was published on the internet, on Telegram, and was available to any malicious actors. And we see a lot of threats and insecurity from disinformation campaigns, threats and fakes, which are used as weapons in the informational war happening alongside the real war between Russia and Ukraine. And it is sad that this kind of informational war, and the kind of weaponry used in it, is not described in any international law and is not even really envisaged or prescribed anywhere.
There is international law for real wars and real warfare, but there are no international laws for informational warfare, and the citizens of both our countries, Ukraine and Russia, suffer from this internet warfare. So the situation is that both parties use these kinds of weapons in the informational war between our countries. For example, this year, working in a non-profit organization which focuses on countering disinformation and fakes in Russia, we have found more than 3,000 disinformation narratives threatening the Russian Federation and Russian citizens in different ways. And that is the number of narratives; separately, we have counted each post and message in social media, and the number of messages, posts and reposts placed in social media reaches an overwhelming 10 million copies in the Russian media sphere.

Chantal Joris:
Thank you. I think there will probably be quite some disagreement in the room, and I will also let Tetiana respond and react to some of the remarks. Certainly there is a gap in international law as to how to deal appropriately with information manipulation, both in times of peace and in times of armed conflict.

Tetiana Avdieieva:
I don’t know if we have any… Yes. Yeah, just a brief response. First of all, I find it particularly interesting when the discussion around incitement to aggression, propaganda for war and incitement to hatred turns into a discussion around disinformation campaigns spread inside Russia, which to me is slightly shifting the context. When we are speaking of the aggression issues per se, we have to take into account the narratives which are primarily aimed at actually instigating the conflict, and also the narratives shared inside Russia connected, for example, to inviting people to join the Russian armed forces, or to incitement to commit illegal activities, which are predominantly shared in Russian media, especially those which are state-backed. Also, as regards the digital security threats and concerns, what concerns me the most is the attempt to basically substitute the actual topic of harming civilians, and the topic of trying to suppress activists, opposition, human rights defenders and journalists, with the fact that there are restrictions which affect the entire community in Russia. First and foremost, because among the Russian community itself there is extensive support for the invasion. Even the Russian independent media outlet Meduza stated in its research that 70 to 80% of Russian citizens actually support the invasion. When assessing the restrictions in this context, the proportionality analysis, in my opinion, would differ a little bit compared to the situation when we are just stating facts without providing the appropriate context. So I will stop here, and I won’t turn this discussion into a battle.
But I think it is very important to clearly define the things we are talking about, to clearly indicate in which context they are done and to whom they are attributable, and to establish what the specific consequences of the actions taken are and what the reasoning behind them is. Thank you.

Chantal Joris:
Thank you. Hello? Yes, thank you very much. As mentioned, when we go into the factual scenarios of specific conflicts, for sure there can be a lot of disagreement as to what specifically the issues are. I will take one more contribution, and then let’s hear from Joëlle Rizk from the ICRC.

Audience:
Hi, I’m Rafik from Internews here. This may be more of a niche issue potentially, but one of the biggest frustrations that we hear from our media and journalist partners particularly, though also from civil society, is around over-enforcement by social media platforms, where legitimate news reporting or commentary on conflict is taken down and legitimate news sources have their accounts suspended or restricted from amplifying or boosting content. Sometimes it’s through automation, in cases like Palestine or Afghanistan where you can’t report on the news without mentioning dangerous organisations; we find a lot of media outlets wind up getting their pages restricted. Other times it’s through mass reporting and targeting of these news sources, resulting in their pages being incorrectly taken down. Sometimes people do actually violate the rules of the platform too, maybe posting pictures of dead bodies and things like that, but in a conflict setting it’s often complicated. So yeah, in terms of the free flow of information, that’s another issue.

Chantal Joris:
Thank you, yes, absolutely. Promoting a certain narrative or sharing violations for propaganda purposes, for example, is obviously something very different from reporting on them to make them publicly known, but given how often automated tools are involved in content moderation, it is very difficult to make that distinction properly. Joëlle, let me turn to you. Hearing about the situation in Ukraine and Sudan, are those also the sorts of threats that you have perceived globally as a humanitarian organization, and what specific risks has the ICRC identified in terms of how these digital threats can harm civilians?

Joelle Rizk:
Thank you, Chantal, and thank you, Tetiana and Khattab, for the contributions on Ukraine and Sudan. I will maybe focus a little bit more on the harms to civilians rather than on the nature of the threats, because of course our concern is not only about the use of digital technology but also the lack of access to it, especially to connectivity, particularly when people need reliable information the most to make potentially life-saving decisions. The information dimension of conflict has also become… I’m sorry, you’re breaking up a little bit. I don’t know if it’s the connection or if there’s anything you can do with the mic. Let me change the mic setting; is it better like that? Okay, yes, I see you nodding. All right, great, thank you. Sorry, it was a mic setting, I believe. So I was saying that the information dimension of conflict has also become part of, in a way, the digital front lines, because digital platforms are used to amplify the spread of harmful information at a wider scale, reach and speed than we have ever seen before. And that is a concern because it compromises people’s safety, their rights, their ability to access those rights, and their dignity. The difficulty is that this happens in various ways that are very difficult to prove; Tetiana spoke of attribution a little bit. It is indeed very difficult not only to do that, but also to prove how harmful information is actually causing harm to civilians affected by conflict, and I will try to speak about that a little bit.
And I see that different actors, whether state or non-state, are leveraging the information space to achieve information advantages, as you said earlier, but also to shape public opinion, to shape the dominant narrative, and also to influence people’s beliefs, interests and behaviors, which in situations of conflict really becomes an issue of risk, potentially, to civilians. The information space in that sense is an extension of the conflict domain, and it impacts people who are already in a vulnerable situation because they are already affected by conflict. So the digitalization of communication systems becomes basically a convergence of the information and digital dimensions of conflict. That being said, not all harmful and distorted information, whether it is misinformation, disinformation, malinformation or hateful information, is the result of organized information operations; not all of it is state-sponsored. The use of digital platforms really involves a mix of state and non-state actors, and an organized spread of narratives but also an organic spread of harmful information. And maybe just a caveat: that is what makes it very complex from a humanitarian angle, to identify and detect that a narrative is harmful, to assess what the harm to civilians is, and then to think of an adequate response to these complexities. And what I have seen in the past years is how, in countries affected by armed conflict, the spread of misinformation and disinformation, and also hateful and offensive speech, can aggravate tensions and intensify conflict dynamics, which of course takes a very important toll on the civilian population. For example, harmful information can increase pre-existing social tensions and pre-existing grievances.
It can even take advantage of pre-existing grievances to escalate social tensions and exacerbate polarization and violence, all the way to the point of a disintegration of social cohesion. Information narratives can also encourage acts of violence against people or encourage other violations of humanitarian law, and quite a few examples were already mentioned, including a couple by Eliška. The spread of misinformation and disinformation can increase the vulnerabilities of those affected by conflict, as can the distress and psychological weight it causes, which is often invisible. For example, think of how harmful information may feed anxiety, fear and the mental suffering of people who are already under significant distress. We fear that the spread of harmful information can also trigger threats and harassment, which may lead to displacement and evictions, and I think a couple of examples were already given in the room. We also worry about stigmatization and discrimination. Think of survivors, for example, of sexual violence. Think of families that are thought of as belonging to one or another group, one or another ethnic group, for example, where they may be stigmatized, or of people being denied access to essential services only because they belong to a group that is the subject of an information campaign or a narrative. We also fear that, with distorted information in times of emergency, people’s ability to access potentially life-saving information is heavily compromised. People may not be able to judge what information they can trust, at a time when they really need accurate and timely information for their safety and for their protection.
For example, to understand what is happening around them, where danger and risks may be coming from, whether roads are open or safe, the locations of checkpoints, et cetera, and how and where they may find assistance, whether medical or other types of assistance, or take measures and make timely decisions to protect themselves or even to search for help. So the digital information space can also become a space where behaviors that run counter to international humanitarian law may occur, including, and I will not give contextual examples, incitement to target civilians, to kill civilians, and making threats of violence that may be considered as terrorizing the civilian population. Information campaigns, whether online or offline, and I want to underscore online and offline, can also disrupt and undermine humanitarian operations. Khattab spoke a bit about that, but I want to say that when this happens, undermining humanitarian operations may also hinder the ability to provide humanitarian services to the people most in need of them, and of course also compromise the safety of humanitarian aid workers. One last point I would make on this is that even the approaches adopted to address this phenomenon, and Chantal, you mentioned that in the beginning, may themselves, intentionally or not, impact people’s access to information. They may fuel crackdowns, more surveillance, more tracking of people, crackdowns on freedoms, on media and journalists, and of course also on political dissent and potentially on minorities. So as a humanitarian actor, we believe that this is an issue that requires
a bit of specific attention, not only because of the implications it has for people’s lives, their safety and their dignity, but also because of how complex the environment is. And from that angle, a conflict-sensitive approach will be necessary. We are used to discussing the impact of disinformation from the point of view of public health campaigns, election campaigns, freedom of speech, et cetera. When it comes to conflict, a conflict-sensitive approach will be necessary: in other words, an approach that really helps us ask how to best assess the potential harm in the information dimension of conflict, and how that may impact civilians who are already affected by several other types of risks, mostly offline. And of course, to think of adequate responses that will not cause additional harm or amplify harmful information, whatever the type of that information. And I am happy, of course, to talk a little bit more about that and how it connects to other risks later in the hour. Thank you.

Chantal Joris:
Thank you very much, Joëlle. I do find this point very interesting: as a freedom of expression organization, we look at something like disinformation, obviously, through the lens of the human rights framework and the test to apply when restricting freedom of expression. But it is interesting to think about it from the perspective, again, of the potential harm, of what the adequate responses are, and of whether they are the same ones we would normally identify as a freedom of expression organization as the adequate responses to disinformation that do not have any unintended negative consequences. With that, let me move to Elonnai. I know that some GNI members are telecommunication and internet service providers or also hosting platforms, so I am curious to hear what discussions you have had at GNI specific to conflicts, and perhaps you can talk a bit about what pressures companies have reported facing from the conflict parties if they operate in these conflicts.

Elonnai Hickok:
Yeah, sure, thanks, Chantal, and thanks for the opportunity to be on this panel. Maybe to start, just to say: GNI is a multi-stakeholder platform working towards responsible decision-making in the ICT sector with respect to government mandates for access to user information and removal of content. We bring together companies, civil society, academics and investors, and all of our members commit to the GNI Principles on Freedom of Expression and Privacy; our company members are assessed against these principles in terms of how they are implementing them in their policies, their processes and their actions. We also do a lot of learning work and policy advocacy. As part of our learning work, we started a working group on the laws of armed conflict to examine responsible decision-making during times of conflict and the challenges that many of our member companies were facing. We are also holding a learning series, organized by GNI, the ICRC and SIPRI, which is meant to enable an open and honest conversation around the ways that ICT companies can have impact and be impacted in the context of armed conflict. And that is really to say that I am coming to this conversation as GNI, not necessarily being an expert in IHL or in working in times of armed conflict, but we are trying to bring together the right experts, ask the right questions and have the conversations that are necessary to help companies and other stakeholders navigate these really complicated situations. So to answer your question, Chantal, as we have heard from a number of our speakers today, armed conflicts are really complex and there is a lot at stake. Technology companies may offer services that support critical functions and provide critical information for citizens, but they can also be used to directly or indirectly facilitate violence, spread false information, and potentially prolong and exacerbate conflicts. And that is just a few of the potential impacts.
There are a number of different risks that companies may need to navigate during times of conflict, and they often have to take difficult decisions that require balancing a number of stakeholder interests. This includes risks to people: individual users, journalists, vulnerable communities, societies. As well as navigating risks to the company, including its infrastructure, services and equipment, but probably most importantly its personnel. Especially for telecom companies, which have offices on the ground, their personnel are often at risk. And companies may need to navigate a whole range of questions about whether they operate in a context and what the impact of that might be. I don’t think there is a clear-cut answer. On the one hand, they may be providing access to critical information and might be a more rights-respecting alternative, but they also might be used to facilitate violence. They have to navigate questions about how they operate and function during times of conflict, including how they respond to government demands. These can take many different forms, including requests for access to user information, giving access to networks for surveillance purposes, shutting down networks, carrying messages on networks, removing content, and more. We have seen that these demands may be informal. The legal basis for a demand may be unclear. The duration of the measure being required may not be specified; for example, it might not be clear when a network shutdown should end. The scope of the demand may be extremely broad. And I think something another speaker said that is important is that these demands can come from both sides of a conflict, not just one government. So as companies manage risks to people and to the company, their ability to respond to government mandates in the ways that might be available to them during times of peace can be really limited.
For example, during a time of peace, you could say a company should request clarity on the legality of the request and communicate with the government to determine exact requirements. They should respond in a way that is minimal; refuse to comply, partially comply, or challenge a request through legal channels; disclose information about receiving the request to the public or notify the user; and maintain a grievance mechanism for when the privacy and freedom of expression of users is impacted by complying with the request. But in times of conflict, as companies face these different risks that they have to manage, it can be really difficult for them to undertake these measures. And just from discussions that we've heard, things that are useful include companies having risk management frameworks in place, clear escalation channels, clear thresholds to understand what triggers different actions, working with other actors to understand the legality of requests, working with other companies to coordinate actions in a specific context, and, importantly, engaging with experts, including to understand the implications of different decisions and to ensure formal and constant review of decisions on how to improve their actions going forward. And another challenge that we've heard in our discussions is that it can also be difficult to understand when to pull back or de-escalate different measures that are in place, because it's not always clear when a conflict ends.

Chantal Joris:
Thank you very much. And I do really support, in these contexts, the necessity of a multi-stakeholder approach, because perhaps, say, the ICRC might not classically be an expert in content moderation, or maybe not yet, maybe that's still to come; ISP providers are not necessarily experts in conflict settings; and both maybe don't understand the typical threats around disinformation. So I do think it's extremely important that different actors work together. Let me go back to Tetiana and focus this second half of the discussion a bit more on trying to identify gaps where we need more clarity, and also have Tetiana and Khattab speak to the role of ICT companies specifically in the context of armed conflict. Tetiana, over to you.

Tetiana Avdieieva:
Yeah, thank you very much, and I particularly like how the discussion is currently going. What I wanted to briefly follow up on, and maybe start the discussion around how ICT companies, how platforms generally, have to respond, is that we have to make a clear distinction as to when the organic spread of harmful information turns into the spread of actually illegal content, and probably this line has to be specifically identified for the context of armed conflict, where the effect of organic harmful information is amplified by the very context in which it is put. As regards the ICT platforms, for me, since in Ukraine there is no actual mechanism to engage with the platforms on the state level, in the sense that we do not have jurisdiction over most of the tech giants, that creates the biggest problem, because there is no opportunity to communicate with the platforms otherwise, except through voluntary cooperation from their side. That is probably the biggest challenge we as an international community have to resolve, because usually states which face armed conflicts or civil unrest, and we can expand this context even to other emergency situations, do not have the legal mechanisms to communicate with the platforms, and that is the primary stage for the discussion. We have to understand when companies have to respond to governmental requests, to the governmental requests of which governments the companies have to respond, especially when there is suspicion, or when we actually know, that the government, for example, is an authoritarian one, when the state generally has a very high index of human rights breaches: whether the companies have to be involved in discussions with such governments, with such states, at all. So that is the primary point we probably have to think about. The second thing is to what extent IHL and IHRL have to interact when we are speaking about the activities of ICT companies.
For example, and I can share the link in the chat, our organization, Digital Security Lab Ukraine, has done extensive research on disinformation, propaganda for war, international humanitarian law, international criminal law and international human rights law. There is a big discourse about what the definitions are, which legal regime is applicable, and how states generally and the international community have to react when these kinds of speech are delivered. With companies, it is even more difficult, just because, and I can absolutely understand why it happens, they are rather waiting for international organizations, for example UNESCO, the OSCE, the Council of Europe, to say, well, there is incitement to genocide here, whether the threshold is reached or not. And that is actually a big plus for multi-stakeholder collaboration, because there are certain actors which are empowered, which are put in place, to call a particular legal phenomenon by its proper name. I mean, I wish I could say that there are incitements to genocide in what Russia does in Ukraine, but unfortunately domestic NGOs probably won't be the most reliable and most trustworthy source in this case. So that's the point in time when international organizations have to step in, both intergovernmental organizations and international NGOs who can elaborate on those issues. And that might be a potential solution for how ICT companies might deal with prohibited types of content and prohibited kinds of behavior, which is usually called coordinated inauthentic behavior online. So most probably they need assistance on the more global level, as well as assistance on the local level, in order to better understand the context. For example, when we are speaking about slur words, most probably it is more reasonable to resort to the assistance of local partners.
And finally, it is about the issue of enforcement. And here, my main point at any discussion is that we are, unfortunately, usually trying to blame and shame companies which are already the good-faith ones. For example, we are constantly pushing Meta to do even more and more and more, and it is nice that Meta is open to a discussion. But on the other hand, we have such companies as Telegram and TikTok, which are more or less reluctant to cooperate, or, in the case of Telegram, absolutely closed to cooperation with either governments or civil society. We also have to solve this issue in particular, because there is a big problem of people migrating from the safe spaces, which are moderated but have certain gaps in moderation, to the spaces which are absolutely unmoderated, just because people feel over-censored in the moderated spaces. And this over-censorship is often caused by our blaming and shaming strategy. The very same approach has actually been seen when Meta, for example, was blamed for its increased moderation efforts in Ukraine. I mean, it is good that the ICT companies finally started to do something. And our main task is not to blame and shame them for not doing the same in other regions, but rather to encourage them to apply the very same approach in all other regions and situations, to develop crisis protocols, to initiate discussions about IHL and IHRL perspectives, to say publicly what kinds of problems they face, and probably to launch public calls for cooperation where local NGOs can apply and can themselves engage with content moderation teams, policy teams, and oversight teams, in case the ICT company has any.
So that's my main point, probably, to all the actors involved: when we see a good behavioral pattern on behalf of an ICT company, we have to encourage them to expand that good behavioral pattern to other contexts, rather than shame them for having acted in this way in only one situation.

Chantal Joris:
Thank you very much. And I do echo the calls on companies to take all situations of conflict equally seriously, and not focus more on the ones that tend to make headlines or where there are bigger geopolitical pressures behind them. So also over to Khattab, and then I have two last questions for Elonay and Joelle. If you can keep your interventions relatively short, we will have a couple of minutes also for any questions from the audience, which would be appreciated. Khattab, over to you.

Khattab Hamad:
Thank you, Chantal, and thank you, Tetiana, for the great intervention. So I will start with the challenges that ICT companies face during the conflict, in Sudan to be specific. The major challenge that the ICT companies are facing in Sudan during the war is electricity, to be honest. Before the war, the national electricity grid was providing only 40% of citizens with power, and after the war, it is clear that there has been a huge shortage in power supply. This impacted network stability (by network, I mean the telecom network, not the power network) and the availability of data centers, which affected the e-banking service in Sudan and other basic governmental services. However, the ICT companies mitigated the power shortage by equipping their devices, stations and data centers with uninterruptible power supplies (UPS) and power generators. But due to the circumstances of the war, as I mentioned earlier, the companies could not deliver fuel to the power generators because of security concerns for the workers. This led a company like MTN Sudan, an ISP in Sudan, to announce that they had a service failure due to the inability to deliver fuel for the generators. And I will move on to the role of social media platforms in the ongoing conflict. Social media platforms actually played a major role in ousting the National Congress Party of Sudan, which had been ruling Sudan for 30 years, and they also assisted us in our pro-democracy movement. However, these platforms are the main tools of opinion manipulation during the ongoing conflict, as both conflict parties are using these platforms to promote their narrative of the war. The new element here is that there is a foreign actor which is playing a major role in the cyberspace in Sudan, which is Meta.
Meta took down the official and other related accounts of the Rapid Support Forces, and justified that by saying RSF is considered a dangerous organization, according to the Middle East Eye website. And yes, I confirm that RSF is a dangerous organization, and we know how bad its human rights record is. But this step from Meta contributed to the efforts of SAF to control the information and the narrative of the war, as nowadays there is only one channel of information: you can get information from SAF while RSF is suppressed. My concern is that, yes, both sides are bad, but we should be creating a free environment of information where people can get the information that they want and filter it by themselves, not making decisions that contribute indirectly to prolonging the war and assist in the process of polarization. So taking a decision without considering the local context is a big mistake. I also have another concern: RSF itself was a part of SAF, as SAF founded RSF in 2013, so it makes sense that both are dangerous organizations. How can you take down one organization and leave the other? Also, the decision impacted the free flow of information. For example, fact-checkers cannot find information to verify claims, as there is only one channel of information, and it also has a security impact on the people on the ground. So there are some gaps that I want to raise and that I think should be filled. In this era, the right to access information is tied to cyberspace, and the front liners of access to information are the telecom workers, the telecom engineers, and other telecom-related workers, because they are the people who provide and operate the infrastructure which allows us to access information. Those workers should be given special protection under international law, like doctors, journalists, and human rights defenders.
Moreover, in Sudan we need more and more training for our people, because unfortunately we don't have enough human resources to grow our internet governance, and this knowledge is limited to specific people. And unfortunately, these people are using their knowledge to restrict the free flow of information and freedom of expression. We also have to amend our laws, like the Right to Access Act, the Cyber Crimes Law and the Law of National Security, as they have been abused against victims by the same people who have this knowledge. So I think that's it from my side; back to you, Chantal, thank you.

Chantal Joris:
Thank you very much. Yeah, it's interesting, we've now heard twice of these complications around ICT companies potentially being de facto asked to choose sides between the parties to a conflict, as Elonay also mentioned earlier. And I think it's a very interesting point about the key importance of the staff in charge of keeping these ICT systems going, and about them perhaps even needing specific protections to be able to do that. Elonay, the GNI does refer to the Guiding Principles on Business and Human Rights, which are also key to the GNI Principles as to how companies should respect human rights. They only make very brief reference to humanitarian law, so maybe just an open question: do you feel that there is a sense from companies that they need more guidance as to what it means for them to respect humanitarian law in addition to human rights?

Speaker:
I mean, yes. I think that is very central to a number of conversations that happen at GNI. I would say many technology companies approach risk identification and mitigation through the lens of business and human rights, and this includes relying on frameworks such as the OECD Guidelines for Multinational Enterprises and the UN Guiding Principles, as you just mentioned. And I wanted to highlight that there are a couple of relevant principles and parts of the commentary of the UNGPs for companies and states with respect to operations in conflict-affected areas. Importantly, according to the UNGPs, a core principle of the corporate responsibility to respect human rights is that in situations of armed conflict, companies should respect the standards of international humanitarian law. The UNGPs also state that when operating in areas of armed conflict, businesses should conduct enhanced due diligence, resulting from the potentially heightened risk of negative human rights impacts. And there is emerging guidance from civil society organizations on how companies can undertake this enhanced human rights due diligence (EHRDD) through a conflict lens. I think IHL can help inform tech companies operating in situations of armed conflict about the risks to which they might expose themselves, their personnel, as well as other people. But like you mentioned, I think that more guidance is needed as to how due diligence processes can incorporate IHL, and more work can be done on articulating what IHL means for ICT companies.

Chantal Joris:
Thank you very much. Joelle, as the main guardian of IHL, I know the ICRC is looking into some of these legal and policy challenges that have arisen through these cyber threats. Can you talk a bit about the global advisory board which has supported the ICRC in addressing some of those? Can you perhaps share some of the initial findings?

Rizk Joelle:
Of course. Would you like me to focus more on ICT companies, since that's where the discussion went? Yes, yes, sure. Okay. So yeah, thanks, it's a good question, Chantal. The ICRC set up a sort of global advisory board about two and a half years ago. Between 2021 and 2023, we brought together, at a really senior level, experts from the legal, military, policy, tech company, and security fields to advise the president and the leadership of the ICRC on emerging and new digital threats, and to help us improve our preparedness to engage on these issues, not only with parties to armed conflict, but also with new actors that we see play a very important role in complex situations, including, of course, civil society, but also tech companies. Throughout these two years, we hosted about four different consultations with the advisory board, and hopefully next week, on October 19th, we will publish the discussions and recommendations. They won't be ICRC recommendations; they will be the advisory board's recommendations on digital threats to civilians affected by armed conflict. So I will maybe broadly mention the four different trends that were discussed in these consultations between the global advisory board and the ICRC, and then I will focus a little bit on the recommendations linked to the information space and then to ICT companies. And I'll try to be quick, because I'm aware of time. The first trend that was discussed between the ICRC and the global advisory board is the harm that cyber operations have on civilians during armed conflict: focusing on the emerging behavior of parties to armed conflict in cyberspace, but also of other actors in that space, in disrupting infrastructure, services and data that may be essential to the functioning of society, but also to human safety.
And there we consider that there is a real risk that cyber operations will indiscriminately affect widely used computer systems that connect civilians and civilian infrastructure, in a way that goes beyond the conflict. As a result, they may interrupt access to essential services, but also hinder the delivery of humanitarian aid and cause offline harm, injury and even death to civilians. The second trend discussed is the question we are discussing today, and that is connectivity, the digitalization of communication systems, and the spread of harmful information. Similar to what we have already discussed at length in this session, and recognizing that information operations have always been part and parcel of conflict, the digitalization of communication systems and platforms is amplifying the scale, reach and speed of the spread of harmful information. That, of course, leads to the distortion of facts, influencing people's beliefs and behaviors and raising tensions, and all that we have already discussed, but really stressing that the consequences of this are online as well as offline. The third issue discussed, and this is really an issue that we hold very close to heart at the ICRC, is the blurring of lines between what is civilian and what is military in the digital dimensions of conflict, and seeing civilians and civilian infrastructure increasingly becoming targets of attacks in the digital dimension of conflict. Of course, this is an issue of growing concern as digital front lines are really expanding, and they are also expanding, let's say, conflict domains. The closer that digital technologies move civilians to hostilities, the greater the risk of harm to them.
And the more digital infrastructure or services are shared between civilians and the military, the greater the risk of civilian infrastructure being attacked and, as a consequence, of harm to civilians, but also of undermining the very premise of the principle of distinction between civilians and military objectives. And finally, and by no means the least important, the fourth issue, very important to us as a humanitarian actor and to all humanitarian organizations, is the way in which, in the cyber domain, cyber operations, data breaches and information campaigns are undermining the very trust that people and societies put in humanitarian organizations, and as a result, the ability to provide life-saving services to people. The board had 25 recommendations. I will of course not go through them now, but I will invite you to have a look and read the report that will be launched on October 19th. I think it is really the beginning of an important conversation between multiple stakeholders in that field. I will maybe speak a little bit on the recommendations in relation to the spread of harmful information, and, after listening now to you, I will also add a few recommendations specific to ICT companies. So, of course, in addition to recommendations on parties to respect their international legal obligations, there is a recommendation to assess the potential harm that their actions and policies are causing to civilians and to take measures to mitigate or prevent it. This is, of course, a broad recommendation. But more specifically, there is a recommendation to states to build resilience in societies against harmful information in ways that uphold the right to freedom of expression, protect journalists, and really improve the resilience of societies.
And by a resilience approach we understand, of course, a multi-stakeholder approach that also involves civil society and companies alike: thinking about it as a 360-degree approach to addressing the information disorder. Another recommendation, to the platforms, recognizes the fact that a lot of this misinformation and disinformation is spreading through social media and digital platforms, and calls on them to take additional measures to detect signals, analyze sources, analyze methods of distribution and different types of harmful information, in contextual approaches to managing what may exist on their own platforms. Particularly in relation to situations of armed conflict, I think Khattab's example is a classic example of the importance of contextualizing these policies. These policies and procedures, including when it comes to content moderation, as Khattab mentioned, should also really align with the humanitarian law and human rights standards that Chantal has also mentioned. And lastly on that, there is a recommendation to us and to humanitarian organizations at large to strive to detect signals of the spread of harmful information, but also to assess the impact on people, keeping in mind that any response to harmful information must not amplify the harmful information itself or cause additional or other unintended harm. And of course, there is a call to contribute, again, to the resilience building of affected people in conflict settings. If I still have a couple of minutes, I'll maybe just mention some of the recommendations to ICT companies that are more linked to the cyber domain and not necessarily to information operations or harmful information. Some of these recommendations include the segmentation of data and communication infrastructure between what serves military purposes and what is used by civilians.
So, segmentation of communication infrastructure where possible. Also, awareness of risk for companies and awareness of the legal consequences around their role, their actions and the support they may provide to military operations and private clients, and awareness of the consequences that their involvement and the use of their products and services in situations of conflict may have. Also, ensuring that restrictive measures that may be taken in situations of conflict, such as sanctions or other restrictions, as well as self-imposed limitations, do not impede the functioning and maintenance of medical services and humanitarian activities, and of course the flow of essential services to meet the needs of the civilian population. I'll stop here. Thank you, Chantal, for giving me the opportunity to elaborate on that.

Chantal Joris:
Thank you very much. I know we're basically out of time, but before we get kicked out, I do want to see if anyone has something they would like to add, something that you think has been missing from the discussions and should be taken into account by the people working on this, or of course questions to the speakers, if they can stick around for five more minutes.

Audience:
Yeah, thank you. My name is Julia, I work for German Development Cooperation, and I would have one question. Yesterday morning, Maria Ressa said we need more upstream solutions for the disinformation topic. And we have heard a lot now about downstream solutions: content management, taking down certain profiles, et cetera. So my question would be, what are your views on questions of platform design? How do we talk about redesigning algorithms, business models, et cetera, and what are your perspectives on these aspects? Thank you.

Speaker:
I mean, I would just say that I think it's really important that companies start to build in the capacity to apply a conflict lens to the development of their products. And I know that the ICRC, for example, is working with companies to build out this capacity. So I think we have to consider both upstream and downstream solutions.

Chantal Joris:
Khattab, Joelle, Tetiana, do you want to come in on this question quickly?

Rizk Joelle:
I will just say very briefly, it is in line with a 360-degree approach, of course, which involves, in the upstream, thinking about how the very business model reinforces, in a way, the way that these policies can be enforced. So from that angle, I would tend to agree, but realistically, I think this would be a very challenging discussion that also requires expertise that may not be in the hands of those who are currently conducting that feedback loop with the tech companies.

Chantal Joris:
Thank you very much. I will perhaps see if there are any other quick questions in the room? Yes, go ahead.

Audience:
Hi, I'll be super quick. Lindsay Anderson from BSR. For those who don't know, we help companies implement the UNGPs and conduct human rights due diligence. I just wanted to flag a resource that might be useful for folks on this topic. About a year ago, we published a toolkit for companies on how to conduct enhanced human rights due diligence in conflict settings, which we developed alongside Just Peace Labs and other organizations. It's very detailed and obviously targeted at companies, but it might be useful for those who are advocating with companies and want to understand, under the UNGPs specifically, what companies should be doing and what enhanced human rights due diligence looks like in practice. So if you Google BSR, conflict-sensitive due diligence, you'll find that resource.

Hi, I'm Farzaneh Badi. I'm working on a project related to USAID, who are looking at human-centered approaches to digital transformation. They want to know and understand what that can look like, and how they can actually engage with local communities when they are doing this digital transformation work. One part of that is dealing with crisis. But the challenge that we see with human-centered approaches and human rights analysis is that, especially in countries that are war zones, getting in touch with the communities, receiving their feedback and having that kind of stakeholder consultation is extremely difficult. I want to know if there are actual recommendations out there, and also how we can use these human rights mechanisms and human-centered approaches so as not to leave anyone behind, because we are not talking about Afghanistan anymore. So thank you so much for this session, because I've been thinking about Sudan and I've been thinking about Afghanistan, and how sanctions affect them and how they're in crisis.
But in this meeting, we need to talk more and more about them so that they won't be forgotten. So thank you for this session; but also, recommendations on getting in touch with the community and addressing their needs, both when we are doing digital development and afterwards, during a crisis, would be great. Thank you.

Chantal Joris:
Thank you very much. A lot of material has been mentioned that will come out, and some of it, I think, also focuses on stakeholder engagement, but I think you're absolutely right: there is still a lot more to be learned and improved. So, if anyone has anything in this sense to offer. Yes.

Audience:
Yeah, thank you for giving me space. I want to support Tetiana's words, and I think that international society should put more pressure on global media platforms, because they basically control what people think with their recommendation algorithms. Facebook could actually start a revolution with a click, by altering the news feed of certain social accounts in a given country. We analyze this, and we see that global media platforms are extremely opaque, as they are extremely against publishing their recommendation algorithms. It was mentioned before that some global media platforms take sides in the informational wars happening all across the globe, and that is a bad situation, because they ought to be neutral: there is no bad and good side, there is side A and side B in every conflict. And we see that global media platforms tend to take sides, tend to alter recommendation algorithms to the benefit of one of the warring sides, but they are not doing it publicly; they try to shadow it out, they pretend to be unbiased and neutral, but they are not. So I think that global society, and here I support Tetiana one hundred percent, should put more pressure on global media platforms globally. Thank you so much.

Yes, thank you very much. And I do think there have been long-standing calls for more transparency when it comes to recommender systems; we have had the Digital Services Act just adopted in the EU, so let's see if this will bring improvement, and I know that Eliska has strong views on this as well. I mainly wanted to mention, since a couple of us have referenced several resources, the Joint Declaration of Principles on Content Governance and Platform Accountability in Times of Crisis, which was kindly co-drafted together with Article 19, and Tetiana actually. We did not manage to come up with a shorter title. The document is available on our website; it is a joint effort of a number of civil society organizations that either have first-hand experience with crisis or,
similarly to Access Now and Article 19, have global expertise in this area. Even though it is a declaration, we still managed to put together ten pages of, in some instances, detailed rules for platform accountability. Why am I mentioning the declaration? It is specifically addressed to digital platforms that find themselves operating in situations of crisis. It has different recommendations for what should be done prior to escalation, during escalation, and post-crisis, emphasizing correctly, as the speaker from GNI mentioned, that there is no clear start or end point of any crisis. So there are a couple of detailed rules, without going into the details. The document was launched at the IGF last year, so it is already one year old, but I think some important principles and rules can be found in there that can serve at least as a guiding light. Thank you.

Chantal Joris:
Thank you so much. I’ve been told to close, so perhaps just to say that Article 19 is also working on two reports, one specific to propaganda for war and how it should be interpreted under the ICCPR, and the other trying to identify and address some of the gaps that exist when it comes to the digital space and armed conflict. So as you can tell, a lot more material is coming out, though still not quite enough yet, or it is just the start of a process. So thank you to our excellent speakers, Joelle, Tatiana, Khattab, Eliska. Thanks, it was a pleasure to have you. And thank you to everyone in the room and online who participated. We will be speaking about this topic for years to come, for sure. Thank you so much. Thank you.

Audience: speech speed 162 words per minute, speech length 2320 words, speech time 861 secs

Chantal Joris: speech speed 157 words per minute, speech length 2388 words, speech time 911 secs

Khattab Hamad: speech speed 118 words per minute, speech length 1460 words, speech time 743 secs

Rizk Joelle: speech speed 157 words per minute, speech length 3062 words, speech time 1168 secs

Speaker: speech speed 169 words per minute, speech length 1338 words, speech time 476 secs

Tetiana Avdieieva: speech speed 150 words per minute, speech length 2699 words, speech time 1080 secs
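As a quick sanity check, the reported speech speeds are consistent with the reported lengths and times: speed is simply words divided by minutes, rounded to the nearest integer. A short sketch, with the figures copied from the statistics above:

```python
# Verify that each reported speed equals words / (seconds / 60), rounded.
# All figures are copied verbatim from the per-speaker statistics above.
stats = {
    "Audience": (2320, 861, 162),
    "Chantal Joris": (2388, 911, 157),
    "Khattab Hamad": (1460, 743, 118),
    "Rizk Joelle": (3062, 1168, 157),
    "Speaker": (1338, 476, 169),
    "Tetiana Avdieieva": (2699, 1080, 150),
}

for name, (words, secs, reported_wpm) in stats.items():
    computed = round(words / (secs / 60))
    assert computed == reported_wpm, (name, computed, reported_wpm)
```

All six entries check out, so the three columns are internally consistent.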

Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279


Full session report

Ololade Shyllon

The utilization of sandboxes, regulatory frameworks that permit controlled experimentation and innovation in the financial technology (FinTech) sector, has encountered challenges in Africa and the Middle East. Presently, there are only one or two FinTech-related sandboxes in the region, a slow start that is viewed negatively.

However, there is recognition of the necessity for positive outcomes regarding sandboxes across the entire region. Sandboxes can provide a conducive environment to test new ideas, products, and services. Fostering innovation in the FinTech sector is considered crucial for economic growth and development.

In terms of regulatory collaboration and policy-making, there is a positive sentiment towards regional cooperation. This collaboration can enhance the understanding of the FinTech ecosystem and enable stakeholders to learn from one another. By working across borders, stakeholders can share insights and enrich their collective understanding. Moreover, the existence of global treaties provides a basis for common rules, despite variations in individual legal systems. This regional collaboration is seen as a proactive step towards achieving the Sustainable Development Goals (SDG) related to industry, innovation, and infrastructure (SDG 9) as well as partnerships for the goals (SDG 17).

Advocates for a harmonised approach to regulation and policy-making believe that this method can yield positive outcomes. Specifically, the organisation META supports and promotes a harmonised approach, emphasising the importance of collaboration and experimentation. By identifying basic principles applicable globally, a harmonised approach can help to create a more cohesive regulatory environment.

However, the likelihood of increasing harmonisation beyond the national level is deemed to be complex. This complexity arises from various challenges, such as differences in legal systems and the unique data governance challenges faced in the region. Despite the challenges, sandboxes are considered crucial in stimulating innovation within Africa and the Middle East. Implementing sandboxes requires significant resources and time, given the nascent stage of data governance in the region. Nevertheless, the potential benefits and importance of fostering innovation drive the push for sandboxes.

In conclusion, sandboxes in Africa and the Middle East have faced challenges in their establishment. However, there is a recognised need for positive outcomes regarding sandboxes across the region. Regional collaboration in regulation and policy-making is seen as a means to better understand the FinTech ecosystem. Advocates for a harmonised approach believe it can contribute to a more coherent regulatory environment. Despite the complexity and challenges, sandboxes are seen as crucial for stimulating innovation.

Denise Wong

Singapore has been implementing sandboxes as a policy mechanism to experiment with uncertain applications and technologies. These sandboxes are widely used to explore frontier technologies and collaborate with the industry to ensure clarity and compliance. The use of sandboxes has proved beneficial, providing confidence in data protection, accelerating the deployment of technologies, facilitating regulatory guidance, and promoting business collaborations. Sandboxes also contribute to regulatory understanding and transparency, as the findings from sandbox experiments are published, offering insights into regulatory issues.

It is important to note that sandboxes are not designed for volume but for specific cases with clear objectives. They are intended to provide a safe environment for experimentation and to understand the underlying technology and industry needs more clearly. Sandboxes also facilitate the publication of the experimental findings, enabling regulators and other interested parties to gain a deeper understanding of the regulatory landscape.

However, the process of identifying technology players for companies can be time-consuming, particularly when companies have specific requirements such as a need for privacy-enhancing technology. In such cases, the process becomes longer and more involved.

While sandboxes offer valuable insights and guidance, there are also other policy innovation tools like policy clinics that can provide quicker advice on accountabilities. Policy clinics can expedite the process by offering timely guidance on accountability matters.

Coordinated efforts among regulators are crucial to address sector-specific challenges. If a regulatory question arises in the finance or healthcare sector, the respective authority is brought in to work jointly on addressing the issue. This emphasizes the need for collaboration and partnerships among regulators.

Furthermore, discussions related to sandboxes are primarily domestic but include industry players who operate globally. The sharing of learning and experiences from sandboxes is seen as essential, with the transferability of such knowledge being highly valued by stakeholders.

Denise Wong, the Data Protection Deputy Commissioner and the Assistant Chief Executive of IMDA, supports broad conversations and principles that everyone can agree on. As interest in sandboxes as a regulatory tool grows, it leads to more tech conversations and meetings with interested regulators, promoting international collaboration.

It is important to understand that the regulatory sandbox is not a decision-making or exemption-providing mechanism. Instead, it serves as a dialogue-based guidance tool to explore areas of regulation where there may be uncertainty. The emphasis is on dynamic and agile regulatory development involving ongoing engagement and a back-and-forth process, rather than providing a final answer at the end.

To conclude, Singapore’s use of sandboxes as a policy mechanism for experimentation and regulation has proven beneficial in facilitating innovative solutions, promoting compliance, and fostering collaboration between industry and regulators. The findings from sandbox experiments offer valuable insights into regulatory issues, supporting the development of transparent and effective regulatory frameworks. Coordinated efforts, both domestically and internationally, are necessary to address sector-specific challenges and promote the transferability of knowledge gained from sandboxes. The regulatory sandbox, as a guidance tool, contributes to dynamic and agile regulatory development by facilitating ongoing engagement and dialogues.

Kari Laumann

During the discussion, the speakers emphasized the significance of learning from the experiences of others when it comes to implementing and operating sandboxes. They highlighted the importance of reaching out to experts in the field, such as the UK data protection authority, the Information Commissioner’s Office (ICO), to gather insights and knowledge. The speakers stressed that sharing information and learning from established sandboxes, like the one implemented by the ICO, can greatly contribute to the success of a new sandbox.

The speakers also highlighted the need to adapt sandboxes to fit specific contexts when transferring them from one place to another. Cultural and other differences were cited as factors that necessitate customized adaptations. The speakers shared their experience of ensuring that the sandbox they learned from ICO was tailored to suit their own context, making it more effective in achieving their objectives.

Another key point raised during the discussion was the importance of tailoring the sandbox to the needs of the target audience. The speakers emphasized that while sharing information is crucial, it is equally important to create a sandbox that is tailored to the purposes and needs of the group it is meant for. This ensures that the sandbox effectively addresses the specific challenges and requirements of the target audience, maximizing its impact.

The regulatory sandbox was explored as a tool that offers guidance and clarity to companies. It allows for the exploration of areas of regulation where uncertainty exists. The speakers clarified that regulatory sandboxes do not provide exemptions or approvals, but rather facilitate the examination of regulatory gray areas within laws like the General Data Protection Regulation (GDPR). It was emphasized that regulations, including GDPR, continue to apply within the sandbox, ensuring that the applicable regulatory framework is not compromised.

Additionally, it was noted that regulatory bodies’ powers, such as those of the Norwegian Data Protection Authority, are themselves strictly regulated by the GDPR. This serves to maintain the integrity and accountability of regulatory bodies, ensuring that their case handling and enforcement actions comply with the GDPR.

In conclusion, the discussion highlighted the importance of learning from others’ experiences and adapting sandboxes to specific contexts. Tailoring the sandbox to the needs of the target audience and ensuring compliance with relevant regulations, like GDPR, are crucial factors in the successful implementation and operation of sandboxes. The exchange of insights and lessons learned from established sandboxes can greatly contribute to the effectiveness and impact of new sandboxes.

Pascal Koenig

During an online discussion on regulatory sandboxes, the participants emphasized the importance of learning from experiences and promoting international collaboration. There was a consensus on the need for sharing knowledge and transferring sandboxes from one context to another, while also acknowledging the need for adaptation. One example cited was Denise’s sandbox, which provided inspiration to others. The significance of cross-border data flows and enabling collaboration between regulators and authorities were also highlighted. The possibility of increasing harmonization of sandboxes on a regional level was discussed, with different perspectives on likelihood. Overall, the discussion focused on the importance of learning, collaboration, and potential harmonization to advance regulatory sandboxes globally.

Lorrayne Porciuncula

This comprehensive analysis delves into the topic of regulatory sandboxes, which are viewed as a means of policy prototyping for experimentation purposes. It highlights several key points that demonstrate the significance and potential of sandboxes in various contexts.

One important aspect discussed is the diverse skills required for the successful deployment of sandboxes. The analysis emphasizes that there is no single skill or set of skills that is universally applicable to all sandboxes and use cases. Instead, the skills needed depend on factors such as national jurisdiction, institutional framework, and the specific issue being addressed. This insight underscores the flexibility and adaptability of sandboxes, allowing them to be tailored to different circumstances.

Stakeholder engagement is another critical factor highlighted in the analysis. It argues that sandboxes should be designed to engage stakeholders from the very beginning, during the design phase. This approach fosters institutional trust and ensures that the sandboxing process is inclusive and representative of diverse perspectives. The analysis contrasts this approach with the current state of sandbox development, which often involves merely posting a consultation online and then leaving it. Instead, it suggests a more iterative and hands-on process that actively involves stakeholders throughout the sandbox implementation.

The analysis also focuses on the importance of capacity building and the creation of a community of practice to share best practices and reduce the cost of implementing sandboxes. It mentions a project in Africa that aims to build such a community through a Sandbox Forum. The forum’s approach prioritizes direct engagement and practical application over theoretical discussions, reinforcing the need for a collaborative and action-oriented approach to sandboxing.

Evaluation of sandbox implementation is another crucial aspect discussed in the analysis. It emphasizes the need to measure and monitor sandbox success using different methods. Factors influencing sandboxing success include stakeholder involvement, risk mitigation, and the technology used. Sharing this knowledge and evaluating sandbox outcomes can lead to improvements in the sandboxing process overall, enhancing its effectiveness in promoting innovation and achieving desired outcomes.

The analysis also explores the role of sandboxes in regulatory frameworks, particularly in the fintech sector. It highlights how sandboxes allow regulators to go beyond traditionally regulated entities, as exemplified by the success of open calls for different companies and innovative solutions in fintech sandboxes, such as Brazil’s PIX payment system. Ensuring fairness and avoiding regulatory capture are identified as important considerations in sandbox implementation.

Mitigating the risk of bias and regulatory capture in sandboxes is further discussed in the analysis. It suggests that regulatory frameworks should be aware of these risks and develop appropriate measures to anticipate and address them. Open conversations about best practices and framework setup are considered essential in this regard.

The analysis also underscores the impact of international collaboration in the deployment of regulatory sandboxes. It highlights the potential of cross-border perspectives to enhance the understanding and deployment of privacy-enhancing technologies and data intermediaries. Furthermore, it notes that new trade agreements can create opportunities for testing business, societal, and regulatory issues among participating countries. This observation emphasizes the crucial role of international cooperation in addressing complex issues related to innovation, data protection, public health, and climate change.

In conclusion, this analysis advocates for a comprehensive and inclusive approach to regulatory sandboxes. It emphasizes the need for diverse skills, stakeholder engagement, capacity building, evaluation, fairness, and international collaboration. By adopting such an approach, regulatory sandboxes have the potential to foster innovation, reduce inequalities, and tackle complex global challenges. The analysis provides valuable insights and recommendations for policymakers, regulators, and stakeholders involved in the design and implementation of regulatory sandboxes.

Moraes Thiago

The speakers highlighted several important points regarding sandbox initiatives in the analysis. One of the main points emphasized the need to foster dynamic discussions on strategies that stimulate innovation while upholding human values. It was acknowledged that sandbox initiatives play a significant role in promoting innovation and ensuring adherence to fundamental values of humanity. The primary goal of this session was to encourage a dynamic discussion among all relevant stakeholders.

Another significant point discussed in the analysis was the launch of the ANPD Regulatory Sandbox on AI and Data Protection. This initiative was created in collaboration with partners like CAF Consultants, aiming to provide a space for innovative ideas while safeguarding individual privacy and data protection. It was recognized that striking a balance between promoting innovation and protecting privacy is crucial in the development of sandbox initiatives.

The importance of international collaborations in shaping the future landscape of sandboxes was also emphasized. It was acknowledged that international collaborations play a crucial role in shaping the future of data governance and AI innovation. Collaboration among different countries and stakeholders is seen as a key driver for advancing regulatory sandboxes and ensuring collective progress.

Furthermore, the analysis highlighted that the call for contributions for the ANPD Regulatory Sandbox will be inclusive by accepting submissions in English. This inclusivity in language aims to make the dialogue more accessible and enable a broader range of stakeholders to participate. By accepting submissions in English, the call for contributions aims to reduce barriers and promote a more inclusive and diverse discussion.

In conclusion, the analysis underlined the significance of sandbox initiatives in stimulating innovation and upholding human values. The launch of the ANPD Regulatory Sandbox on AI and Data Protection aims to strike a balance between innovation and privacy protection. International collaborations were recognized as an essential element in shaping the future of data governance and AI innovation. Lastly, the call for contributions being inclusive and accepting submissions in English adds to the accessibility and diversity of the dialogue.

Axel Klapp-Hacke

Data is considered a critical asset for economic growth and sustainable development. It provides valuable insights for decision-making in areas such as food security, climate change mitigation, and health policies. Data empowers policymakers and private organizations to allocate resources effectively, solve problems, and prepare for risks. However, to ensure the fair and responsible use of data, regulatory frameworks that protect data sovereignty and security need to be strengthened. These frameworks strike a balance between reaping the benefits of data utilization and safeguarding citizens’ rights. Additionally, data and artificial intelligence (AI) have great potential in achieving the Sustainable Development Goals (SDGs). They can facilitate the delivery of medical services, increase efficiency in agriculture, and improve food security, contributing to broader sustainable development objectives. Regulatory sandboxes are also discussed as a means to promote a free, fair, and open data economy. These sandboxes provide a controlled environment for testing and developing innovative solutions while complying with regulatory requirements. By embracing the full potential of the data economy through regulatory frameworks and innovative approaches like sandboxes, we can harness the transformative power of data for economic growth and sustainable development.

Agne Vaiciukeviciute

GovTech sandboxes have emerged as a key component of Lithuania’s innovation ecosystem. These sandboxes, initiated in 2019, have received recognition at the European level for their positive impact on public governance. They provide a controlled environment for testing and implementing innovative solutions in the government sector. Several artificial intelligence (AI) solutions are currently being used by the Lithuanian government, demonstrating the success of GovTech sandboxes in driving technological advancements.

Lithuania places great emphasis on the potential of 5G technologies for innovation. With 90% coverage of the population, Lithuania has invested over 24 million euros in 5G-based projects, with more than 53 projects worth over 124 million euros in the pipeline. The government’s proactive approach to investing in 5G technologies reflects their commitment to harnessing the power of emerging technologies.

The Lithuanian government advocates for a flexible and adaptive regulatory framework that responds to technological innovation. The Sandbox regime in Lithuania enables the government to adapt regulations in line with advancements in technology. This fosters a regulatory environment that supports innovation and allows for the exploration of new possibilities.

To ensure unbiased and inclusive solutions, Lithuania mandates the participation of diverse stakeholders, including higher education institutions and civil society, in the sandboxes. This approach prevents a one-sided approach in the sandbox solutions and promotes fair outcomes in the innovation process.

In Lithuania, sandboxes primarily focus on mature technologies and ideas rather than early-stage testing. This strategic approach ensures that the sandboxes are effectively used to advance technologies with strong potential for real-world implementation.

While collaboration with other countries, such as the United Kingdom, for the establishment of sandboxes is valued, Lithuania recognizes that harmonization may not be necessary in the short-term. Cross-border collaboration is seen as more beneficial, allowing countries to work together and learn from each other’s experiences.

Learning from experiences and sharing knowledge is considered crucial for the regulation of innovations. Collaborations with the UK have provided valuable insights into the establishment and operation of sandboxes. The importance of learning from experiences is highlighted, although it is too early to implement harmonization as the concept of sandboxes is still actively being discussed.

Sandboxes are viewed as a vital tool in Lithuania to test and validate innovations. Government policies are closely aligned with the process of sandbox testing, and policy-makers work closely with those involved in testing systems. This reflects the country’s commitment to fostering innovation and ensuring that policies and regulations are effective in real-world scenarios.

The need for regulations to be dynamic and adaptable to reality is emphasized in Lithuania. Existing regulations without practical use cases indicate a disconnect with the evolving technology landscape. Additionally, sandbox testing may uncover failures or unforeseen challenges, further highlighting the necessity of regulatory adaptability.

In conclusion, GovTech sandboxes have become a central part of Lithuania’s innovation ecosystem, receiving recognition and awards for their positive impact on public governance. The country’s focus on 5G technologies, flexible regulatory frameworks, diverse stakeholder participation, and testing mature technologies in the sandboxes demonstrate their commitment to fostering innovation. Collaborations with other countries, learning from experiences, and the importance of dynamic regulations contribute to Lithuania’s progressive approach to driving technological advancements.

Audience

The discussion focused on the use of sandboxes in different sectors and explored the advantages and concerns associated with their implementation. One pertinent aspect was the AI Act, which stipulates that a national authority should operate a national sandbox. However, concerns were raised regarding the practicality of implementing this legislation. Specifically, there were apprehensions about the significant amount of time and resources required to study and create a test base for each use case.

Sandboxes were also discussed in relation to their potential role in combatting misinformation. CNET, for example, has developed a sandbox specifically designed to address this issue. An audience member raised a question about how civil society can utilise CNET’s misinformation sandbox beyond government use. This prompted consideration of the broader applications and benefits of sandboxes, including their potential use in tracking and analysing the spread of technology-driven misinformation, as well as developing countermeasures.

The value of sandboxes as a space for companies to engage with civil society and build trust was highlighted. It was suggested that sandboxes could serve as a crucial preliminary step before implementing regulations. This approach allows for flexible collaboration between companies and civil society to find appropriate solutions and establish trust-building efforts.

The sandbox approach was deemed particularly useful in the early stages of policy development or policy interrogation, particularly for framing the problem at hand. This experimental tool offers a unique opportunity to explore different policy options, and was seen as an effective way to address complex regulatory challenges.

However, limitations in participation were raised as a potential issue. Due to their nature, the number of firms that can participate in a sandbox is inevitably limited. This could restrict the diversity and inclusivity of the sandbox ecosystem.

Ensuring fairness and preventing distortion of competition were also identified as important considerations when implementing sandboxes. It was questioned how to guarantee that participation in sandboxes does not result in unfair advantages for certain companies. This issue underscores the importance of maintaining a level playing field and reducing inequalities.

Moreover, concerns were expressed about potential regulatory capture in the sandbox process due to the close interaction between regulatory authorities and participating companies. It was highlighted that mechanisms need to be established to prevent regulatory capture and maintain impartiality.

Additionally, the timeframe for operationalising a sandbox was raised as a concern. Some participants questioned the readiness and strictness of regulators in intervening effectively and efficiently.

Overall, the discussion called for advocacy towards adopting flexible, scalable, and dynamic regulatory methods. Sandboxes were viewed as one of the tools to achieve these objectives. While they offer important benefits, such as fostering conversations between regulators and breaking concentration in smaller financial sectors, the limitations and challenges associated with their implementation must be carefully considered to optimise their potential impact.

An interesting observation from the discussion is that sandboxes can facilitate the growth of digital banks and electronic money issuers. As seen in the case of Pakistan, sandboxes enabled these emerging financial entities by providing a regulated environment in which they could operate.

In conclusion, the use of sandboxes in various sectors offers both benefits and challenges. While they provide a space for experimentation, innovation, and collaboration, concerns exist regarding implementation, participation limits, fairness, regulatory capture, and operationalisation. Efforts must be made to address these concerns, and sandboxes should be integrated into a broader regulatory framework that promotes inclusivity, fairness, and effective policy development.

Armando Guío

Regulatory sandboxes are gaining attention as an effective solution for addressing regulatory concerns related to Artificial Intelligence (AI) and data. These sandboxes, which are being implemented worldwide, provide a controlled environment for testing and evaluating innovative technologies without rigid regulatory constraints. They have the potential to facilitate the development and implementation of responsible and ethical AI and data practices.

Different countries have adopted unique approaches to implementing regulatory sandboxes. The fintech sector, in particular, has been a strong advocate and driver of regulatory sandboxes. The experiences of countries such as Brazil, Lithuania, Ethiopia, Germany, Norway, and Singapore have been discussed in relation to their sandbox implementations. These discussions aim to learn from the successes and challenges faced by these countries and inform the development of best practices.

Regulatory sandboxes offer the opportunity for authorities to better understand the real impact of emerging technologies, such as AI and data, particularly in areas like privacy protection, misinformation, and digital power concentration. By providing a controlled environment, sandboxes enable authorities to assess the effectiveness of their regulatory measures and develop capacities to effectively tackle these major regulatory questions. However, there are ongoing debates about whether regulatory sandboxes alone are enough to develop the necessary capacities, and whether expensive and time-consuming sandboxes are beneficial for all authorities.

The value of data is highlighted as an important consideration in future discussions regarding regulatory sandboxes. The experiences of Latin American governments, who have been studying the Singaporean Sandbox, have been particularly influential. The Singaporean Sandbox is regarded as pivotal and offers a balance between flexibility, responsibility, and unlocking data value. By studying its implementation, other countries can gain insights into how to effectively leverage data and strike the right balance between innovation and regulation.

In addition to addressing AI and data concerns, sandboxes also play a crucial role in tackling misinformation. They provide a flexible and neutral space for collaboration between companies, governments, and civil societies to explore and develop effective measures to address the harmful impact of misinformation. By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implementing robust regulatory measures.

Advocates stress the importance of a multi-stakeholder approach in tackling misinformation, involving civil societies, companies, and governments. Civil societies, in particular, have been recognized for their valuable contributions in this area. By working together, these stakeholders can collaboratively develop effective strategies to combat misinformation and promote responsible information sharing.

Overall, regulatory sandboxes are regarded as valuable tools in building trust and understanding before introducing regulatory measures. They create a space for experimentation and collaboration, allowing authorities to assess the impact, feasibility, and effectiveness of their regulations. However, caution must be exercised in terms of their costs and effectiveness. It is crucial for countries to consider their individual capacities and circumstances before implementing sandboxes as a regulatory solution.

Session transcript

Axel Klapp-Hacke:
all of you here to today’s session on Sandboxes for Data Governance: Global Responsible Innovation. My name is Axel Klapp-Hacke. I’m a Director for Economic and Social Development and Digitalization at GIZ headquarters, based in Germany. Data, for sure, is one of the most strategic assets for both economic growth and sustainable development. It can provide key insights for better decisions around food security, climate change mitigation, or health policies. Hence, data can help policymakers and private organizations to better allocate resources, solve problems, and prepare for risks. And as the backbone of AI applications, its potential for the achievement of the SDGs cannot be underestimated. But for the use of data to benefit all, data sovereignty and data security need to be strengthened. We need regulatory frameworks that help reap the benefits of data while protecting citizens, and I think that is the key assumption and the starting point of this session. This panel gathers experts from around the world to discuss how regulatory sandboxes can unlock the value of data for all and promote responsible innovation in AI. I’m very delighted to welcome on this panel today Deputy Minister of Transport and Communications Agne Vaiciukeviciute. I knew it would be very challenging, and I was trying to pronounce it at least a bit correctly. You are very welcome; very happy to have you here. The Deputy Minister from Lithuania focuses, among other things, on innovation and open data, and she will share her perspectives in a few minutes’ time. We also welcome Denise Wong, the Deputy Commissioner at the Personal Data Protection Commission of Singapore. She manages the formulation and implementation of policies relating to the protection of personal data. We also welcome, joining virtually, Kari Laumann. She is the Head of Section for Research, Analysis and Policy and Project Manager for the regulatory sandbox at the Norwegian Data Protection Authority.
She has collaborated with stakeholders in the AI industry in Norway and is one of the team members working ahead of AI regulation in her country. And then we also welcome Lorrayne Porciuncula, who is here on the panel. She is the Co-Founder and Executive Director of the Datasphere Initiative, an international non-profit foundation with a mission to responsibly unlock the value of data for all. She is an affiliate at the Berkman Klein Center for Internet and Society at Harvard University. And last but not least, we have Ololade Shyllon, also here on the panel. She is the Head of Privacy Policy across Africa, the Middle East and Turkey for Meta, and a human rights lawyer who has focused on privacy, access to information and freedom of expression. Our panel this afternoon will be moderated by our friend Armando Guío, who is an affiliate at the Berkman Klein Center for Internet and Society and a doctoral candidate at the Technical University of Munich, focusing on social sciences and technology. And finally, we also welcome our online moderator, Pascal Koenig, a GIZ colleague. Hello, Pascal. He is a Planning Officer at GIZ headquarters. He has served as the John F. Kennedy Memorial Fellow at the Minda de Gunzburg Center for European Studies, and he is also a postdoctoral researcher at the Technical University of Kaiserslautern. Together, they will discuss, first, the role of regulatory sandboxes in the promotion of responsible data governance and AI innovation; secondly, a regional perspective on the enablers and challenges of implementing those sandboxes; and thirdly, the issue of international collaboration on those regulatory sandboxes. As GIZ, we are very, very happy to facilitate this discussion and to support this session. Regulatory sandboxes can really be a great tool to promote regulation for a fair, free and open data economy. In this way, the potential of data and AI can be used to achieve the SDGs. 
They can facilitate medical service delivery, increase efficiency in agriculture and improve food security. Thank you very much, and please enjoy this wonderful session. And now, over to you.

Armando Guío:
Thank you very much. Thank you, Axel, for your kind introduction. It's a real pleasure to be here on such a distinguished panel with these experts in the area of regulatory sandboxes, which are gaining a lot of attention and a lot of traction now. There is a lot of buzz about regulatory sandboxes becoming more important nowadays to deal with many of the regulatory questions there are regarding AI, data, and many other technologies and innovations. And here, briefly, as an introductory remark, I would like to provide some context on regulatory sandboxes. It is not a comprehensive one, because, and that's one of the biggest challenges we have right now, there are a lot of definitions of what a regulatory sandbox is, how they work, and how they are being implemented, and the kinds of questions that we're going to be answering today will perhaps open the floor for those discussions to take place. So, to start, one of the basic elements that we have to bear very much in mind is that regulatory sandboxes have a lot of definitions, and there are many different ways of defining what a regulatory sandbox can be. You can see regulatory sandboxes that look like innovation labs, or that look like many other projects which are not necessarily even related to regulation. Some others are related to regulatory questions, but deal with them in a very different way. So here, just to take the approach of the UN Secretary-General's Special Advocate for Inclusive Finance for Development: a sandbox is a regulatory approach, not even a space, but an approach, typically summarized in writing and published, that allows live, time-bound testing of innovations under a regulator's oversight. That's a definition, and perhaps a definition that some share, while some will say not necessarily. 
Some will say: I don't see that it has to be a regulatory approach; perhaps it's a regulatory experimentation space or an ecosystem of experimentation. That's one of the challenges that we are facing right now, and that authorities around the world are facing in their approach to this kind of tool for innovative regulatory measures. From there, we have this big question: how have authorities designed and implemented regulatory sandboxes around the world? That's a very interesting thing to analyze, and I have been able to look into this in some of my previous work. I have seen sandboxes that have been developed mainly by two people within an authority working on learning more about a technology, and this is called a sandbox. In some other countries, a whole sandbox unit is set up for developing these kinds of projects and deploying an adequate sandbox, and we will hear from experiences from all around the world. And this is something interesting: of course, we're going to talk more about data sandboxes, but we have seen sandboxes developing in the fintech sector; in generative AI, where there is more and more attention on why sandboxes can be beneficial for understanding many of the challenges posed by generative AI systems; and, of course, in GovTech and the public sector. So we have seen these areas of work as areas that can be of interest for many stakeholders. The fintech sector, of course, has been one of the leading sectors in developing regulatory sandboxes around the world, and has perhaps been one of the biggest promoters of having sandboxes. Other authorities are trying to follow the same path now on many questions about IP, data protection, antitrust, and many other topics. We have seen, for example, in Latin America, sandboxes being developed. 
For example, in Brazil now, we have this public announcement, and we will hear from the colleagues from the Brazilian Data Protection Authority. They're going to tell us a little bit more about this new sandbox on generative AI and data protection that is going to be developed. At the same time, Colombia has its fintech regulatory sandbox, which has also been quite big, and a privacy-by-design and by-default sandbox is being developed there. We also have sandboxes all around the globe. In Ethiopia, for example, we have seen a sandbox unit being developed, which is going to be a big unit within the Central Bank of Ethiopia creating a kind of regulatory experimentation environment. Germany, of course, is also promoting many sandboxes, almost all of them at a regional level, and, of course, with the sandbox handbook that was developed some years ago, which has been quite influential not only in Germany but in many other countries. At the same time, we have seen sandboxes in Kenya, where the Capital Markets Authority is working on a very interesting fintech sandbox, which has also been quite important for developing the fintech ecosystem in the country, and Lithuania, of course, with the GovTech regulatory sandbox for the public sector that we will hear more about from the Vice-Minister. So that's perhaps the whole representation that we want to have here, and many of the experts that are here have been very much involved in these kinds of projects and have been working on them. We also have, for example, the experience of Norway and Singapore working on data protection sandboxes: Singapore developing one of the first frameworks on how to have a regulatory sandbox on data protection and on AI governance, which was also very interesting, and Norway trying to open the black box and develop this idea of more transparency with a regulatory sandbox for this specific purpose. 
So with this brief introduction and this brief context and definition of what a sandbox can be, we are now facing this big question about the relationship between regulatory sandboxes and internet governance. What's there? Why are we talking about regulatory sandboxes in this specific forum, when we are talking about technologies such as AI, and when we are talking about the future of data and data protection? Basically, because we have a lot of questions, for example, around three big topics: privacy protection, mis- and disinformation, and digital power concentration, which we definitely have to analyze. How are we, and the authorities, going to analyze that? That's the biggest question. What are the regulatory decisions to be made? That's where sandboxes can perhaps be helpful: to understand the real impact of these technologies, and what can be achieved with the current regulatory frameworks that we have. But that's the question, perhaps: are regulatory sandboxes enough for authorities to develop capacities to deal with many of these big regulatory questions? What has been the experience of the countries we have here, and of many other experts who have been working in different contexts, that can help us to understand a little bit more about that? And that's perhaps one of the other big questions that we have: are sandboxes for all authorities around the world? Are sandboxes effective in any country, or do there have to be some initial capacities and some initial elements within countries for these kinds of projects to be developed? With GIZ, the German development cooperation agency, we have also been working on this, and with my colleague Pascal Koenig, trying to answer some of these questions, because we believe that sandboxes can be expensive. You can spend a lot of time working on them. Are they effective? 
Are they going to be effective in answering many of these internet governance questions, and many other questions about the regulation of technologies such as AI, the use of data, cross-border data flows, and many other big questions about the future of these technologies? That's what we would like to answer and discuss today. So, with that, I would like to start briefly with a video from the Data Protection Authority of Brazil, which they were very generous to send to us. They were very much involved in the preparation of this event. Unfortunately, they were not able to join us, but I think it's also good to hear from them, and then we will start with the questions for the experts here and the experts in the Zoom room. So, I think we can start. Thank you.

Thiago Moraes:
Ladies and gentlemen, esteemed colleagues and distinguished guests, I stand before you today on behalf of the ANPD, the Brazilian Data Protection Authority, filled with immense gratitude and excitement as we co-organize this workshop in collaboration with our esteemed colleagues from the Berkman Klein Center and the Datasphere Initiative. It's a privilege to have the active engagement of representatives from various government bodies. Together, we are embarking on a journey that's not only significant, but crucial for the future of data governance and AI innovation. Our primary goal in this session is to foster a dynamic discussion among all relevant stakeholders. We aim to deliberate on strategies that can pave the way for the development of sandbox initiatives. Initiatives that not only stimulate innovation, but do so while upholding the fundamental values of humanity. In this session, we will delve into three key areas. First, we will explore the pivotal roles that regulatory sandboxes play in promoting responsible data governance and fostering innovation in the realm of AI. Second, we will examine a regional perspective, shedding light on the enablers and challenges faced in implementing these sandbox initiatives. Lastly, we will discuss the importance of international collaboration in shaping the future landscape of sandboxes. I am thrilled to announce a significant milestone in our journey towards responsible innovation: the launch of the call for contributions for the ANPD Regulatory Sandbox on AI and Data Protection. This initiative, crafted in collaboration with esteemed partners like CAF consultants, including the distinguished Armando Guío, who is today's moderator of this session, seeks to create a space where innovative ideas can flourish while ensuring the safeguarding of individual privacy and data protection. I invite our esteemed panelists and the entire audience to contribute actively to this endeavor. 
Your valuable insights can shape the very foundation of how we approach AI and data protection. You can submit your contributions via our webpage, which you can access via the QR code presented on this screen. I am delighted to inform you that submissions can be made in English, allowing for a broader and more inclusive dialogue. As we embark on this collective journey of exploration and innovation, let us remember the profound impact our discussions can have on the future. Let us collaborate, ideate, and inspire one another. Together, we can create a future where innovation and ethics coexist harmoniously, fostering progress that benefits all of humanity. With that, I wish you all a very productive session. May our discussions today be illuminating, and may they pave the way for a future that we can all be proud of. Thank you.

Armando Guío:
Thank you. With that, we have this invitation from the Data Protection Authority of Brazil for this very exciting sandbox. We can move then to our first question, and perhaps here, for our panelists, I would like to start with you, Vice Minister, and your approach to sandboxes and your experience in this work. What is your practice concerning sandboxes? What are the benefits of sandboxes that you have seen in your experience in Lithuania and in the work you are developing right now? It will be very interesting to hear how sandboxes have been evolving in your experience and what you have learned from that.

Agne Vaiciukeviciute:
Thank you very much for having me here. I think sandboxes are one of my passions, and while it's very important to speak about the future of the Internet, it's sometimes very important to speak about the practical matters of how all those innovations will be brought closer to us. In Lithuania, as you mentioned, one of the good practices is the GovTech sandbox. This is a little bit more on my colleagues' side, but it is already an award-winning way of looking into problem solving. It started in Lithuania in 2019, and I think last year it got an award at the European level for sandboxes that help public governance, solving issues within governance and making it more accessible to customers. I figured I will just tell you some of the examples: there are solutions based on AI to measure the quality of digital government in an innovative way, the Kodami solution to automate the detection of illegal gambling operations online, the Burbi solution to improve the environmental risk assessment of companies, the Open Assessment Technology solution to perform remote examinations for civil servants, and many, many solutions that are already used in Lithuanian governance in one way or another. That platform was so successful that, from the government side, the investments into these kinds of sandboxes grew, and it has now become a huge part of the innovation ecosystem in Lithuania. But what I would like to talk about a little bit more is on the communication side. Countries these days invest a lot into infrastructure, especially infrastructure for 5G technologies, and we are doing the same. In Lithuania, we have coverage of 90% of the population, almost the same as here in Japan. But when we look at the value cycle, at the demand side, we do not see enough technologies there. So I think that's where the need for a sandbox comes from. 
So what we did in this sense: we dedicated more than 24 million euros for applications and solutions based on 5G. And it concerns not only innovations in the transport sector, but in any sector. So we are very happy to have the possibility to do it in a bit of a niche way: it is not coming from the whole innovation policy within Lithuania; the initiative comes from the Ministry of Transport and Communications. We really want to see what 5G technology is capable of. And there is a lot of interest from the business side. When we called the tender, just imagine, 53 projects went into the pipeline, more than 124 million euros' worth of projects testing those new technologies and applications within the sandbox regime in Lithuania. I think it was so interesting for the companies because we created the sandbox in such a manner that the technology and the result of the innovation will belong to the owners. The only wish from the government side is that the application and the testing take place in Lithuania. And the idea is that, as policymakers, we want to be able to be very flexible and dynamic and respond to all the innovations and changes needed in the regulatory framework. So this is not only about creating more applications based on 5G technology and solving some of the problems in Lithuania; it is just as much an exercise for the government to adapt on the regulation side as well. So we are very, very excited about this sandbox regime, because we believe that we are now filling the whole value chain: we're not only creating the infrastructure, but we're encouraging the private sector as well as public companies to participate and create applications in autonomous driving, in healthcare, and in all other industries. And we'll see what's going to happen. I'm very happy, and I hope that by the middle of next year we will see some very great results and will be able to share them. 
So maybe that's it for a first intervention, and later on we can continue. Thank you.

Armando Guío:
Thank you, Vice Minister. Very interesting to hear some of those points, especially on flexibility, attracting the private sector, and presenting the results of a sandbox, which seems sometimes to be an easy task, but is not as easy as we might imagine. And from there, I would like to move to perhaps one of the sandboxes I have studied the most. I have been working with governments, especially in Latin America, and they always say: look at the sandbox in Singapore. What are they doing in Singapore? How is the Singaporean sandbox working? How were they able to achieve these results? And on that, Denise, we would like to hear from you, because your experience with sandboxes has, of course, been pivotal for sandboxes to become a reference around the world. We would like to hear perhaps some elements of that experience, and how you think a data protection sandbox in particular has been helpful in achieving this balance between being responsible and being flexible, while at the same time unlocking the value of data, which is also very important for many of these future conversations that we're having. So the floor is yours.

Denise Wong:
Thank you. Thank you very much. And thanks for having me. As you’ve seen, Singapore has experimented in Sandboxes for quite some time. It’s been a very useful tool for us in policy experimentation, and also in experimentation of frontier technologies generally. We tend to use it as a policy mechanism where there are uncertainties in application, as well as use cases. And it’s very much a tool that we use in partnership with industry, where we need clarity on certain technologies or solutions surrounding different types of use cases. We also look at it where organisations need support for compliance, and also to understand the integrity of their business use cases, and their intended sort of business commercial pathways forward. I wear two hats, both as the Data Protection Deputy Commissioner, but also as the Assistant Chief Executive of IMDA. And in that role, I also look at data promotion and growth. And those are, to us, two sides of the same coin. And so we view Sandboxes as a crucial tool to support industry, but to also help them to find appropriate safeguards, guardrails, and protections for the end user. We’ve had a few Sandboxes for a while now. We specifically had a Data Regulatory Sandbox that eventually grew to become the Privacy Enhancing Technology Sandbox. And that’s been something that’s been running for about a year now. We’ve just closed the first stage of it. And I’d just like to highlight sort of pockets of benefits that we saw. There were certainly benefits to individuals because it gives them assurance and confidence that data’s not being misused. It helps with transparency and to flesh out sort of questions of ethical use. We find that with Sandboxing, experimenting in a safe environment cuts down time and efforts for technologies to be deployed. We also see benefits to the organizations that participate in our Sandboxes because they can safely experiment with cutting edge technologies that give them a competitive advantage. 
And of course, I mean, realistically, that’s what companies are trying to do. We find that organizations very often come to us to provide regulatory support and guidance. They want to understand the potential of technology solutions, but they also want to comply with what the regulator wants. And I think, interestingly, we also find, and this is talked about a little bit less, it also creates opportunities for B2B data collaborations. Very often, companies come with their own use case. They may not necessarily understand the ecosystem the way we see it from a more central point of view. And a lot of what we do in Sandboxes is also putting together different parties within that ecosystem, matching them to technology providers or to end users or to intermediaries that allows that sort of ecosystem to be created in a specific sort of sector or specific use case. That’s not to say we don’t benefit at all. We benefit a lot because it helps us as regulator understand about technology, understand what industry needs, and it allows us to focus on areas that could potentially require regulatory guidance. But I just want to clarify that we don’t necessarily think that Sandboxing must lead to regulatory guidance. For us, it’s just one of a broad range of policy levers and tools that we have. We do, as a modality, I don’t know whether I’m jumping forward a little bit, but do tend to publish use cases and reports at the end of each sort of experiment. And that in itself, sometimes it just ends there, but it gives the sort of sector and people who are interested a sense of what were the regulatory issues, what were the obligations and allocations of responsibility that arose out of us working through that use case. I would just say that as regulator, we do get our hands quite dirty. We do spend a lot of time working through the mechanics of each individual use case to try and understand what the concerns are, what the issues are. 
We bring other regulators on board where there are issues that don’t fall within our sort of purview. So it is quite an intensive process for us. Thank you.

Armando Guío:
Thank you, and it's really an amazing experience, with very valuable elements that you shared there. With that, Lorrayne, we have heard about this case; I don't know if we can already call it a successful case of a sandbox being applied to data protection. We have seen some of the elements that have been used in Lithuania and in the development of the sandbox in Singapore. In your experience, you have worked at the Datasphere Initiative with different governments and on reports on how to build these kinds of projects. What do you think governments should do? What is the checklist of elements for developing sandboxes that have the capacities and the impact that we would like to see in such projects, which require a lot of work and a lot of resources? We want them to be effective. What do you see as the best practices, perhaps?

Lorrayne Porciuncula:
Thank you so much for the question. It's a pleasure to be here on this panel, because sandboxes are a passion of mine too, and seeing a workshop where we get to discuss this at the IGF is just a pleasure. So, on the question of skills: I think there isn't one particular set of skills, or one skill, that is needed for you to deploy a sandbox. There are as many skills as there are sandboxes, and there are as many sandboxes as there are use cases, because no sandbox is going to be the same. It depends on the national jurisdiction where it's located, the institutional framework, the core partners that need to be involved, the issue that you're trying to solve, and the timeframe. The complexity of all of this just makes the number of different skills that you need to have, and the people you need to bring in house, exponential. And I think that's an important step in demystifying what sandboxes are, and that's the campaign that I'm trying to lead from my own corner at the Datasphere Initiative. We have a report that we published last year, called Sandboxes for Data: Building Agile Spaces Across Borders, addressing this issue. In that report, we try to look into the good practices. I consume a lot of the reports that are coming out of experiences such as the one from the Singaporean government, but I also look at what other actors are doing in different countries, trying to be systematic about understanding what has worked and what hasn't. We're still at the early stages of understanding how that can be deployed to other use cases, right? But there is a maturity in terms of trying to understand what sandboxes are, and we can all agree that it's an umbrella term that captures a whole lot of things. And depending on who you ask what sandboxes are, they're going to have a different kind of definition, and that's okay. 
And we should be okay with it as well, in terms of seeing it as an anchor for policy prototyping and experimentation. The second aspect that we're looking into is the potential of using this internationally, which I'm going to come to later in the panel. What I realized, having done that study and that analysis of the experiences internationally, and then talking to a number of governments, is that people are still very much afraid of what it means in terms of the resources and skills that are necessary, because they're under the impression that it's something only a very sophisticated regulator is able to deploy. And I think the first step is exactly to say that actually it should be simpler. It should be about looking in a different way at how you engage stakeholders before you design policy and then regulation, rather than doing something where it's sufficient for you just to post a consultation online and then forget about it. How do you actually engage stakeholders from the design phase onwards? And how do you build that institutional trust between the private sector, civil society, the technical community, government, and regulators, in order to come together and, as Denise said, get their hands dirty? That's not something that a lot of institutions are prepared to do, or have the frameworks that allow them to do. So for me, it's less about the skills in themselves than about being allowed to do that, to actually engage purposefully with stakeholders. And this is an important part of the capacity building that we're doing now. With the support of the Hewlett Foundation, we have started a project in Africa, the Africa Sandboxes Forum, where we're bringing together stakeholders to create a community of practice for sharing what can be done and what issues they would like to solve in a multi-stakeholder, iterative fashion. 
As part of that, we have a course which we designed that takes you through what sandboxes are and their potential. That's an important part of building that skill, in terms of the words and vocabulary that we are using in the space, but also in terms of how we turn this into practice. So rather than just being a talk shop where we're talking to them about sandboxes and what they should be, we are actually, in the best spirit of a sandbox, bringing them together: can we identify an issue that we can address, and can we do so in a way that helps with issues that are relevant across different countries at the same time, or across different stakeholders, through dedicated sandboxes that we are piloting and will be simulating from next year onward? And I think that is a step towards being able to define what the appropriate stakeholders are that need to be involved depending on the use case, and what technologies might be necessary if you're looking at operational sandboxes where data is to be transferred, as was mentioned, but also what the arrangements are for mitigating risks that may emerge, and what the different ways are of measuring, monitoring, and evaluating the success of that sandbox as well. And so we are in a process where, and I like to say this, and I'm not joking, we are really sandboxing sandboxes, in terms of how they can best function. And I would like to see a space where we are able to share more of those good practices, so that we can reduce the cost of actually implementing those sandboxes by sharing resources with each other.

Armando Guío:
Thank you, Lorrayne. And I really like this idea of sharing; we should definitely talk a little bit more later about a global forum for sandboxes, for sharing these kinds of ideas and having these kinds of forums for interaction. We have three colleagues who are on the Zoom connection, I think in Africa and Europe, and we would very much like to say good morning to them. So I will start with Kari Laumann from the Data Protection Authority in Norway. Is Kari there?

Ololade Shyllon:
In terms of having a good outcome with sandboxes across the board. And just to flag, as I said earlier, that there have been challenges with getting this going in the region. We have one or two fintech-related sandboxes in Africa as well as in the Middle East, but the only data-related one so far started a couple of months ago in Saudi Arabia and is still at a very early stage of development. So there's not much to share yet about the lessons learned in that regard. I think I'll stop here. Thank you very much.

Armando Guío:
Thank you. We also wanted to finish this first round with your remarks, which I think are very interesting, because that idea of building trust among different stakeholders is always a big challenge, and I think it has a lot to do with the design of many of the sandboxes and the participation spaces that we have. And talking about participation, we would like to open the floor for this first round of questions. You can stand up at the microphones over here, and please introduce yourselves briefly, give your name and your question, and we will be more than happy to hear you.

Audience:
Good morning. My name is Claudio Agosti, I'm a platform auditor, and my question is mostly for the experts from Singapore and from Norway. I'm concerned because soon the AI Act will be in place, so there will be a national authority, and this national authority will need to run a national sandbox. So the question is: on average, for a use case, how many person-days are necessary to study it and to create the test base? Because it seems that this is the potential bottleneck in handling a lot of cases.

Dennis Wong:
I think you’re right, we don’t handle a lot of cases. So to us, a sandbox is not a tool for volume. It’s not like a framework or policy where you set things at the general principle level, or even an obligation level, and then it applies to thousands or hundreds of thousands of cases. It’s a tool for a small number of cases. In a year, maybe we work on six to ten cases where we are really just working through what the use case is. One of the things we find helps a lot is to set very clear use case objectives. If it’s fairly tight in scope, the parties already know what they want to do, and it’s really just about working through the accountabilities, then it is more straightforward and easier to do. If it’s about helping companies find the technology players they know they need, they have a data problem, they want to use a privacy enhancing technology but don’t know which one, that becomes a longer, more involved process, and it can take many months to sort through. So I would say that, the way we do it, it’s fairly customized to the use case, and it can usually take an average of maybe three to six months to work through a use case, sometimes even longer than that. But of course, we have other policy innovation tools, such as policy clinics, where we’re just giving quick advice on accountabilities, and that one can be much faster.

Armando Guío:
Thank you. I also have an additional small remark, but later.

Audience:
Good afternoon. This is A H M Bozulu Rahman, from Bangladesh IGF. Thank you panel, thank you moderator and honourable minister. We learned so many things about sandboxes from this session. I learned from your presentation, Mr. Moderator, that CNET has developed a sandbox on misinformation. So how can we utilize this sandbox on misinformation from the civil society side, apart from the government? Thank you.

Armando Guío:
Thank you. Well, that’s a big question. With sandboxes on misinformation, what we would like to do is gather good evidence on how these technologies are actually spreading misinformation and on what kind of measures can be used against it. I think that’s one of the biggest questions we have right now: what kinds of measures can be used, and how to implement some of them. That’s where sandboxes become so attractive, because you have this kind of flexible space in which you can interact with companies and try to get them involved in these kinds of questions and concerns. Let’s work together, let’s involve civil society, which has been doing some great work in this area, and let’s try to show what measures could be put in place. I don’t know if a sandbox on misinformation is actually a sandbox about providing flexibility; I think it’s more about providing trust-building efforts and perhaps this multi-stakeholder approach, but that’s how I see it. I think there’s interest in many countries in starting this kind of work even before regulating, because of course there’s also a lot of regulatory pressure: why don’t we regulate these kinds of practices? Sandboxes are seen perhaps as a first step before going into that. That’s how I think we will see more and more sandboxes on misinformation in the near future. Thank you.

Audience:
Good afternoon. My name is Bertrand de la Chapelle. I’m with the Data Sphere Initiative. I just want to make a quick comment. The word “umbrella term” has been used, and I think it illustrates the fact that the sandbox approach is a spirit of experimentation, and that there is a growing toolkit or toolset for governments to experiment with various approaches depending on the topic. You mentioned the clinics and so on. The consequence is that it is particularly adapted to the early stages of any policy development or policy interrogation, namely agenda setting and issue framing, a stage that is usually skipped: the moment people have identified a problem, they run to say “my solution is A, my solution is B”, instead of taking enough time early on to frame the problem as one that people have in common rather than one that they have with each other. Thinking about sandboxing as sometimes an early tool to identify how to shape the problem, before you get into drafting guidelines, regulation, or just a code of conduct, is probably an important element of the sandbox approach.

Armando Guío:
Thank you. I don’t know if there are any reactions to that. Okay. Thank you.

Audience:
Yes, please. My name is Christian Rumsfeld from the OECD, and I have a question related to one of the risks, or potential risks, of sandboxes. Given their very nature, the number of firms that can participate in a sandbox is obviously limited, so the question is: how can we make sure there is no distortion of competition favoring the companies participating, and how can we avoid regulatory capture, given exactly that closer interaction between the regulator and the companies? In general terms, how can we make the sandbox more fair and non-discriminatory? Thank you.

Lorrayne Porciuncula:
Thank you so much, Christian, for a great question. I know this very well, having worked on and written about sandboxes and all the risks that we need to balance, and that’s one of them, in terms of competition and regulatory capture. I think that’s part of the process of trying to ensure that you’re building trust with a broad spectrum of stakeholders, and what’s interesting about sandboxes is that they usually allow the regulator the flexibility to go beyond the traditionally regulated entities. That’s been the case with fintech. For those of you who know the fintech experience, it’s a very regulated sector: central banks have banks and financial institutions that they regulate, and that’s a very tightly knit group. With fintech sandboxes, what happened is that they did open calls for different startups and companies to come in and provide innovative services to answer a demand or a problem, and there were telecom companies that came in, startups, a whole bunch of innovators. The solutions that came through those fintech regulatory sandboxes have been really impressive: in the case of Brazil, for example, an instantaneous payment system that four out of five adults now use. It’s called PIX, and it’s the fastest growing payment system in the world, growing faster than the ones in India and China, surprisingly, and it was a concerted effort that went beyond the traditionally regulated companies. So in spirit, it is meant to be more encompassing than what a traditionally regulated sector looks like.
Of course, there is a risk that won’t be the case, that we’ll choose our champions and just invite the ones we know best, but I think being cognizant of that risk is a first step toward mitigating it, particularly so there isn’t regulatory capture, which is always a concern when we look at healthy regulatory frameworks. How do you build the governance of these spaces? Having more conversations about good practices, and about the frameworks we need to set up at least the minimum conditions for regulatory sandboxes, is, I think, the first step toward mitigating and anticipating those risks.

Agne Vaiciukeviciute:
If I may add very briefly from the Lithuanian perspective: what we’ve done so far is make it an obligation, in order to participate in the sandbox and get financing for any testing purposes, that a group of stakeholders be involved. So it’s obligatory to involve higher education institutions and someone from civil society; there is a required range of participants. I clearly understand the threat there: we don’t want to see only one side of sandboxes and solutions. Therefore a broader stakeholder group has to be involved, and we clearly put that into the rules of participation precisely to avoid this obstacle.

Armando Guío:
Thank you. I will perhaps have space for only one more question, and of course at the end we will have more time. I’m so sorry about that, but we also have the online moderator and everything, so thank you.

Audience:
Thank you. In my experience, sandboxes actually break away the concentration that usually takes place in a smaller sector, like the financial sector. I was on a committee as a tech lawyer in the Middle East, for Pakistan’s central bank; we ran, exactly right, an innovation challenge fund, so there was money as well as the ability to have your idea sandboxed and approved. What we noticed was that by going through that process we got startups and so on, and nobody was interested in the money as much as they were interested in the approvals. The most amazing thing was that it had a multiplier effect, and I’ll speak about that in a second, but the more important thing was that it started conversations between regulators, saying: you’re not the only one, applicants actually need approval from another regulator as well. So the conversations broadened, and that was helpful for the ecosystem. As a result, the central bank, which had been unsure about things like whether to allow cloud in the financial sector, or whether to run core treasury systems on cloud, worked through those questions, and electronic money issuers and digital banks were enabled because of this exercise. So that was very, very helpful. But I have a question, about what I just mentioned regarding the learnings between regulators. Have you also experienced that there’s one regulator, maybe doing financial services, while other regulatory approvals are required? How do you interact and coordinate that effort when you run a sandbox? I’d love to know, thank you.

Dennis Wong:
It’s a great question. We work, I would say, more domestically, because as IMDA we hold the horizontal regulations for data protection, but use cases are often sectoral and vertical. So for example, where we have a finance use case and a finance regulatory question comes up, we will bring in the monetary authority to work out joint guidance; if it’s a healthcare one, we’ll bring in the relevant regulator. Very often, from the business’s or the industry’s point of view, there are regulatory questions, and they don’t really care which regulator is going to answer them, or they realise that the questions cross different silos. So that has also been a fairly interesting way to solve problems, and it’s been quite a helpful exercise. Not always the easiest, but I think quite important to move things forward.

Agne Vaiciukeviciute:
If I may, just very briefly: it’s a very interesting topic, and we could talk about it for hours. Once again, in Lithuania’s case, we were focusing mostly on technologies or ideas at a very high TRL (technology readiness level). So we are not talking about sandboxes where ideas are tested at a very immature stage, because the money involved is quite large; we’re talking about the last TRLs, which would later be scaled up. So it’s sometimes really important to be clear about which side of sandboxes, and which level of idea maturity, we’re talking about.

Armando Guío:
Thank you, thank you for all the questions, and hopefully we will have some final minutes for those questions that are left, and many others. I will then give the floor to the online moderator, Pascal Koenig from the GAC. Pascal, the floor is yours; I know you also have some interesting questions and the challenge of doing this in 30 minutes or less. Thank you for joining as well, I know it’s early morning for you.

Pascal Koenig:
Yeah, thank you, Armando. It’s my pleasure to join you online and to guide you through the next set of questions. I would like to shift the attention a bit and pick up on something that Lorraine in particular has already commented on: I want to adopt more of a regional perspective and look at aspects of international collaboration and cooperation. So my first question is: how important is it to learn from other experiences when implementing and operating a sandbox? And perhaps more specifically, how transferable are sandboxes from one context to another, and how much work has to go into adapting them when you transfer them? Since I’m online, I would like to direct the first question to Kari.

Kari Laumann:
Yes, so I think we were one of the first data protection sandboxes in Europe, but there was one before us, and that was the ICO, the British data protection authority. So when we were starting our sandbox, we reached out to them, and they were very generous in sharing their experiences and even documents, so we learned so much from them. Of course we had to adapt; we didn’t just implement it as-is, because there are cultural differences, there are so many differences. But that was super useful, and I think we have carried that spirit of sharing with us. We’ve had so many different countries, in Europe and beyond, reaching out to us because we were one of the first sandboxes, so we’ve also tried to share all that we can from what we have learned and what we have built. I think that’s been really useful, since the sandbox concept is still kind of new and a little bit fuzzy for a lot of people, so sharing the experiences that are out there is very important. I also agree with what has been said earlier in this panel: there’s no single definition of a sandbox; you can make it your own and make it fit your own purpose. So sharing is important, but listening to the needs of the target group you’re trying to reach, and tailoring it to your own purposes, is very important too.

Pascal Koenig:
Thanks very much for these insights. For the panellists present, I will also direct the question at you, perhaps at Dennis, since your sandbox has been an inspiration to others, as we’ve heard before. What’s your perspective on the importance of sharing and learning from experiences, and on the transferability of sandboxes?

Dennis Wong:
No, it’s a great question. I would say honestly that so far a lot of it has been domestically focused; there isn’t an APEC or ASEAN framework in the areas we operate in, and a lot of it was about helping industry. We do work with industry players who operate all over the world, so there is an international element. But more and more, as we have tech conversations like this, as we meet more and more interested regulators, and as interest in sandboxes as a regulatory tool grows, I think there is a lot we can learn from each other, and a lot we can learn from the use cases that we all get our hands dirty on. So I’m very supportive of the broader conversations and principles that we can all buy into, and I think a lot of these questions, about data protection or misinformation or AI, are absolutely transferable, just by the very nature of the theme, so we have a lot we can learn from each other.

Pascal Koenig:
Thank you very much. I would go one step further and also ask: in what ways can international collaboration and exchange on regulatory sandboxes be most helpful for regulators and authorities? Which areas do you think are currently especially important to advance through collaboration? Since Lorraine has already said a bit about the importance of exchange and collaboration, I would direct the question to you.

Lorrayne Porciuncula:
Thank you so much for the question. I think it’s important to consider that while sandboxes have been deployed nationally, there’s so much potential not only for sharing those experiences internationally, but also for co-constructing and building them internationally from a cross-border perspective. In the report I mentioned, which we published last year, we list a number of different areas where they could be tested: for example, testing privacy-enhancing technologies, which was already mentioned here, but from a cross-border perspective; or looking at issues like the new data intermediaries that are emerging. Think about the role of data fiduciaries, or about data commons and data collaboratives that may exist in one country and may want to be certified or recognized in another jurisdiction. How do we do that? How do we create the space that actually allows for an exchange on what the minimum requirements are, and how do you get that transferred across borders? So we can think through technologies and issues that are more transversal and emerging within the digital space, but also, within more vertical sectors, how cross-border sandboxes could be used, for example to address issues that are already included within trade agreements. Actually DEPA, the Digital Economy Partnership Agreement, one of the new trade agreements, which Singapore is a signatory to together with New Zealand and Chile, with Canada also acceding to it, already includes a provision on the potential of having a data sandbox within DEPA.
Now, no one knows how to do that yet, but it’s already included as a provision, and I see that this new generation of trade agreements may go beyond the lengthy process of negotiating and balancing multiple interests into a static text: it actually creates the fora for us to test the issues that businesses, society, and regulators in those different countries care about, care about enough to work together on a solution. So it’s very much about how we operationalize a lot of those issues that we spend a lot of time negotiating behind closed doors, and trade agreements are one of the areas we include in the report. The other is health: think about the issues around transferring sensitive data across borders, but also about the opportunities of using that data for research and innovation, particularly in moments of pandemic, and about the complexity of balancing those objectives of innovation and research in public health with data protection and other regulatory systems that interact with health objectives. And then there’s climate change, the most transversal challenge we have on our planet: how are we going to work out solutions if we don’t have the space to collaborate together? What is very encouraging for me is that we can use this as a blueprint to think about international cooperation in a different way. I have a career having worked in different international organizations, at the ITU and at the OECD, before I co-founded the DataSphere Initiative, and for me, we need to think not about ways to supplant multilateral processes, but at least to collaborate with them and create a space where we think about solutions and are concrete about it.
So for me, that’s where the opportunity for cross-border sandboxes lives: creating that space between just doing nothing and regulate-and-forget. We find the sweet spot, sort of the Goldilocks spot, where we can actually work and test solutions.

Pascal Koenig:
Yeah, thank you so much for these interesting comments. These are certainly important issues, and I have several GSF colleagues who were also interested in this question of enabling cross-border data flows, so that’s certainly something to continue the discussion on. Now I would also like to invite a private sector perspective on the question of international collaboration and the areas that are especially important. Ololade, would you say a bit on that, perhaps?

Ololade Shyllon:
Thank you, thank you so much, Pascal. I fully agree with what Lorraine has said. By their very nature, sandboxes require stakeholder collaboration, and there’s a lot that can be learned across the board if they’re given a chance. So broadening this kind of collaboration across borders will definitely enrich the learnings and help policymakers better understand the ecosystem and figure out the kinds of policies and rules that would apply in different contexts and environments. And this, in a way, would help with harmonization. For us at META, we believe in a harmonized approach to regulation and policymaking. While we know that different countries will have different rules, laws, and legal systems, there’s a lot to be learned from working together and collaborating on these kinds of approaches, because at the end of the day there are a lot of treaties that exist globally even though each country has its own domestic legal system. In the same way, working together across borders with regulatory sandboxes and the like is, for us, very important to ensure widespread collaboration and consensus. Of course there are cultural and other specific nuances, but at a high level there are basic principles that apply across the board and that one can learn from experimenting and collaborating in this space.

Pascal Koenig:
Thank you, Ololade. And maybe going a bit further in that direction: what are your observations regarding the need for, but also the likelihood of, increasing harmonization of sandboxes beyond the national level, either through new sandboxes being created at the regional level, or perhaps through stronger harmonization of existing sandboxes?

Ololade Shyllon:
Oh, likelihood is a very tough question, because I think it’s a complex issue; a lot of factors come into play. Like I mentioned, there are differences in legal systems. But zeroing in on the region that I cover, which is Africa and the Middle East, there are a lot of challenges with sandboxes that are probably more acute in the region: things around the time it takes for them to be executed, things around the costs involved. And the reality is that data protection, if we’re talking about data governance related initiatives or sandboxes, is fairly nascent in the region. Most of the regulators are literally trying to figure out how to build their infrastructure and their organizations. At the same time, there’s a lot of impatience from ordinary people about whether regulators can enforce and show that they’re actually relevant in the ecosystem. So you find many of them asking: how do we prioritize being legitimate and doing what we were established to do? If we have to prioritize that, we don’t have enough resources, financial or technical, to focus on sandboxes, which take too much time for us to see any benefit. That, I think, is one of the challenges we’re seeing in the region, but we’re hopeful that with organizations such as Lorrayne’s working on this issue in the region, we can see some push and some movement towards having sandboxes, because there’s no doubt that they are very important for ensuring innovation in the ecosystem in the region.

Pascal Koenig:
Okay, great, thanks. And maybe, to get a perspective on a different region and some insight into the perspective from Lithuania: Agne, what is your view on the need for harmonization at the regional level, and how likely is it?

Agne Vaiciukeviciute:
Thank you very much for the question. There has already been a lot of discussion, and a lot of good things have been said. I think that if we talk in a short-term perspective, harmonization is maybe not the way to go; maybe I would use a better word, collaboration across borders. That’s what I would expect to happen more in the short term. I think harmonization is always better for those who are not first movers; for countries like Singapore and others, who already have a lot of experience and openly share it with other countries, it would maybe not be so interesting in the short term. We are talking about innovations at this point, and innovations need not only a safe space to be tested but also the freedom to explore their potential. In our experience, we were also not unique with our sandboxes, and I’m proud to say that we got the experience from the UK. We had very close collaborations: we went there, we invited them, we held a huge conference on sandboxes just to share their experience, because there were a lot of things they said we should not do, which was also very valuable for us. So I think that maybe it’s too early to ask about harmonization at this point. Today we’re talking a lot about what the concept of sandboxes is and what kinds of sandboxes we have; then we have some good initiatives already. What we really need is to catch up on the scale of sandboxes at so many different levels, and to show other policymakers how valuable it is. I’m convinced already, but that’s not enough. If we want to make big changes within governments, we need to think further. During this panel, I got so many ideas about how fast we need to go to Singapore with our minister, and so on.
So I’m joking, of course, but thank you very much.

Armando Guío:
Thank you, Minister. Thank you, Deputy Minister. Yeah, I have more questions and would love to hear more from you, but keeping an eye on the clock, I think we should leave some time for another round of questions from the audience. And I can see questions online, but of course I cannot see them in the room. So, Armando, you can gladly go ahead. Great. So I will start with a question here in person, and I would like to get the questions from the Zoom room because I don’t have them. I don’t know if you can help me with that, but please.

Audience:
Thank you very much. Thanks for the very insightful panel. I’m Claudio Lucena, from Paraíba State University Law School in Brazil, and I’m also the co-coordinator of the Open Loop experience in Brazil; we are addressing privacy enhancing technologies. I’d just like to add a bit to Lorraine’s comment about the happiness of having sandboxes discussed in a privileged space like this. For years, we have talked about the necessity of regulating in a more adequate, dynamic, flexible, scalable way, and bringing sandboxes into this privileged space means that we’re considering them one of the tools to operationalize that smart regulating. Yes, for the digital space, but definitely not only for it. My question is a little more mundane, though: it’s a question about timeframes, and I’d like to hear the experiences of Lithuania, and maybe Norway and Singapore. You have a framework to operationalize a sandbox, and there is a point where you weigh back in as a regulator to say if and which measures are going to be taken out of the experience. The question is: how strict do you intend to be, or have you been, with these measures? Do you wait until the whole process is finalized, as the regular framework foresaw, or are you ready to intervene at a point where something stands out as too important to wait for? Thank you very much.

Armando Guío:
Thank you. I don’t know, Kari, if you want to start there.

Kari Laumann:
Yes, I think this is a very good question and very relevant for us as regulators. For META it’s a bit different if you’re a private actor with a sandbox, but as a regulator, our powers are very strictly regulated in the GDPR: we are to handle cases, to carry out enforcement action, and to give guidance. For us, the sandbox is a guidance tool; we call it dialogue-based guidance. So it’s very important for us to be clear that this is not a decision. We only give guidance in the sandbox, and the participating company can then decide for itself what it will actually do. We are also very clear that we don’t give any exemptions from the regulation: even inside the sandbox, the regulations still apply. So our sandbox is more about exploring those areas of the regulation where there might be questions or uncertainty about how it should be implemented in practice. It’s not about giving exemptions or a stamp of approval; it’s basically guidance. So I think it’s important to clearly define what the sandbox is for anyone who participates or wants to take part in it.

Dennis Wong:
It’s a great question. I would say that for us it’s a fairly dynamic process, because we weigh in right from the start of the use case. Very often, right from the get-go, we’re trying to understand what regulatory issue they’re trying to solve for, and at the end of the process we come up with the case study or the published report. So obviously there is that process. But throughout the engagements, we are working on the ground with them to work out what the regulatory issues are, where the inter-jurisdictional issues are, where the interdisciplinary issues are, and we are going back and forth on that all the time. So it’s definitely in the realm of guidance. For us, it is a fairly agile and dynamic process, and I agree completely with what you said earlier in your speech: it’s really about agile policymaking. It’s very much in that space for us. We don’t see this as: okay, you go figure it out and then we’ll give you an answer at the end. It doesn’t really work like that.

Agne Vaiciukeviciute:
So thank you very much, very good question. I think our perspective comes from a slightly different angle, because I’m not from a regulatory authority; I’m from the policymaker’s side, and this was initiated from our side. So we understand sandboxes as part of working very closely with the ones who are testing all those systems, all those innovations. Once again, it’s obviously a very dynamic process; nobody wants to implement or change rules that are absurd. But the idea was to open it up and be dynamic in regulation as well, because we have some regulations already in place but no usage cases. It’s written on paper, but in reality it does not work. So that’s what we have, and our perspective with sandboxes is to try to close this gap. And obviously we understand that nothing can be taken for granted within the process, because we did not even touch on the fact that, in doing these sandboxes, there could be some unusable cases in the future; you’re just testing, and there could be some failures as well. So we are looking at this in a more relaxed manner, just to see what’s going to happen and to create a playground for everyone. Thank you.

Armando Guío:
Thank you. This has been an amazing panel, very much on a topic that is still in the making. I think there are many things coming on the way: the global forum on sandboxes, Lorraine, perhaps, of course, that will be coming, and projects on different sites; Lithuania working on these; Norway continuing the good work, although I think META is also going to be a very important actor in many of these conversations and as a participant of the sandbox; Singapore, continue the great work. So this has been already amazing. Also with GIC working on this assessment on how to help countries be more efficient in the implementation of sandboxes, something that we're working on with Pascal. So this has been already an amazing experience. Hope that you continue the great work. Hope that you continue with all the great questions, and thank you again for joining. This has been an amazing experience. Thank you again. Thank you very much.

Agne Vaiciukeviciute — Speech speed: 151 words per minute; Speech length: 1687 words; Speech time: 670 secs

Armando Guío — Speech speed: 193 words per minute; Speech length: 3113 words; Speech time: 969 secs

Audience — Speech speed: 172 words per minute; Speech length: 1071 words; Speech time: 375 secs

Axel Klapp-Hacke — Speech speed: 169 words per minute; Speech length: 718 words; Speech time: 255 secs

Dennis Wong — Speech speed: 189 words per minute; Speech length: 1707 words; Speech time: 542 secs

Kari Laumann — Speech speed: 166 words per minute; Speech length: 541 words; Speech time: 196 secs

Lorrayne Porciuncula — Speech speed: 173 words per minute; Speech length: 2284 words; Speech time: 794 secs

Moraes Thiago — Speech speed: 169 words per minute; Speech length: 464 words; Speech time: 165 secs

Ololade Shyllon — Speech speed: 199 words per minute; Speech length: 756 words; Speech time: 228 secs

Pascal Koenig — Speech speed: 164 words per minute; Speech length: 491 words; Speech time: 179 secs

Quantum-IoT-Infrastructure: Security for Cyberspace | IGF 2023 WS #421


Full session report

Wout de Natris

The lack of cybersecurity measures in Internet of Things (IoT) devices is a pressing issue that demands attention. While the technical community has made efforts to address this concern, the majority of governments and industries have not yet prioritised security by design in IoT. This oversight has resulted in widespread vulnerability and the potential for malicious attacks.

Initially, cybersecurity was not a concern during the early days of the internet, as worldwide connectivity was limited. However, with the rapid expansion and integration of IoT devices into our daily lives, the need for robust security measures has become increasingly evident. Unfortunately, IoT devices are often designed without adequate security measures, making them susceptible to cyber threats and potentially compromising users’ personal data.

One argument put forth is that governments and large corporations should play a crucial role in setting the standard for security in IoT. An example of this proactive approach is seen in the Dutch government, which has taken the lead by imposing the deployment of 43 different security standards. This demonstrates the importance of demanding high levels of security in IoT devices.

Another concerning aspect is the lack of rigorous security testing before new technology, including ICT, enters the market. The fast pace of innovation and the urgency to bring products to market often result in inadequate security measures. It is argued that security should be a fundamental consideration and undergo formal testing before any form of ICT is released, minimising risks for users.

On a more positive note, international cooperation and information sharing are emphasised as pivotal factors in staying ahead in terms of cybersecurity. The power of the internet lies in its ability to facilitate global discussions, enabling the sharing of knowledge and experiences across borders. Governments and larger industries need to be made aware of their role and potential influence in addressing cybersecurity challenges, fostering collaboration and cooperation on a global scale.

In conclusion, the lack of cybersecurity measures in IoT devices poses a significant challenge that needs to be addressed urgently. Efforts from both the technical community and various stakeholders are required to push for security by design and the implementation of robust standards. Governments and large corporations hold the responsibility of leading the way, setting the standards for security in IoT. In addition, rigorous security testing should become a prerequisite before any form of ICT is introduced to the market. Furthermore, international cooperation and information sharing are critical for staying ahead in the ever-evolving landscape of cybersecurity. Only through collaboration can we tackle the challenges and vulnerabilities inherent in the interconnected world of IoT.

Moderator – Carina Birarda

This extended summary highlights the main points and arguments presented in the given information on cybersecurity. It also provides more details, evidence, and conclusions drawn from the analysis.

The first argument states that there has been a significant increase in cybersecurity incidents at the international level, which is viewed as a negative trend. This can be attributed to the global connectivity that has become a key factor behind this increase. Additionally, the emergence of sophisticated criminal activities, such as crime as a service, has further contributed to the rise in cybersecurity incidents. The supporting evidence for this argument is the fact that cyberattacks are often conducted by actors in multiple countries, indicating the global nature of the issue.

The second argument emphasizes the fundamental challenge of adopting internationally-recognised cybersecurity best practices. It is highlighted that only a few organisations currently practise these standards, and the lack of adoption is a global issue. The evidence supporting this argument includes the observation that just a small number of organisations implement these best practices, indicating a need for widespread adoption to enhance cybersecurity at both national and international levels.

The third argument stresses that cybersecurity is a global issue that necessitates international collaboration for effective mitigation. The fact that cyberattacks do not respect borders or jurisdictions is put forward as evidence for the need for international cooperation. Additionally, it is stated that information sharing at the international level is imperative for combating cybersecurity threats. This argument highlights the importance of collaboration between countries to establish a robust global cybersecurity framework.

The fourth argument suggests that understanding the threats facing IoT, web, and quantum technologies is essential for implementing proper cybersecurity practices. By gaining a comprehensive understanding of these threats, appropriate best practices can be selected and implemented. The evidence supporting this argument is the observation that proper implementation of cybersecurity practices can only be achieved by addressing the specific threats posed by emerging technologies.

In conclusion, the extended summary highlights the increasing number of cybersecurity incidents on an international scale as a negative trend. The adoption of internationally-recognised cybersecurity best practices is identified as a fundamental challenge, with only a small number of organisations currently practising these standards. It is established that cybersecurity is a global issue requiring international collaboration for effective mitigation. Understanding the specific threats posed by emerging technologies is emphasised as crucial for implementing proper cybersecurity practices. Overall, the analysis underscores the need for international cooperation and comprehensive measures to address the growing cybersecurity challenges.

Maria Luque

Quantum technologies, specifically quantum computing, present challenges and opportunities in terms of cybersecurity. The concern is that quantum computing has the potential to break current cryptographic systems and expose sensitive information. To combat this threat, researchers are developing technologies like Post-Quantum Cryptography (PQC) and Quantum Key Distribution (QKD). PQC, although not yet standardized, can be applied today as a software-based solution, while QKD requires substantial investment and the creation of new secure communication infrastructures.
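Because PQC is software-based, it can be layered onto existing systems during the transition, commonly as a hybrid key exchange: a classical shared secret and a PQC shared secret are mixed so the session stays secure if either algorithm survives. Below is a minimal sketch of the mixing step only, using HKDF (RFC 5869) from the Python standard library; the placeholder byte strings are hypothetical stand-ins for real ECDH and ML-KEM (Kyber) outputs.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): PRK = HMAC-SHA256(salt, input keying material)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand (RFC 5869); loop handles outputs longer than one hash block
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

def hybrid_session_key(classical_ss: bytes, pqc_ss: bytes) -> bytes:
    # Concatenate both shared secrets: an attacker must break BOTH
    # exchanges to recover the derived session key.
    prk = hkdf_extract(salt=b"hybrid-kex-demo", ikm=classical_ss + pqc_ss)
    return hkdf_expand(prk, info=b"session key", length=32)

# Placeholder secrets for illustration: in practice these would come from,
# e.g., an ECDH exchange and an ML-KEM (Kyber) encapsulation.
classical = b"\x01" * 32
post_quantum = b"\x02" * 32
key = hybrid_session_key(classical, post_quantum)
print(len(key))  # 32-byte session key
```

The design point is that the combiner, not the individual algorithms, is what makes the scheme transitional: either secret can later be swapped for a standardized PQC primitive without changing the surrounding protocol.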

It is argued that governments and the technology industry need to continuously and significantly invest in quantum technologies to ensure data security in the face of the quantum threat. QKD, in particular, requires high investment and the establishment of entirely new infrastructures for secure communication. On the other hand, tech companies have already started implementing PQC into their solutions, showing their recognition of the need to adapt to quantum technologies.

Organizations also need to assess and adapt their information security structures to prepare for the quantum threat. They should understand their information architectures, level of encryption, and capabilities necessary for transitioning to quantum security. The approach for organizations may vary depending on their size, with smaller ones potentially adopting PQC and larger ones engaging in quantum communication networks.

For small tech companies, the infrastructure provided by large tech companies like AWS, Microsoft Azure, and Google is crucial for addressing the challenges posed by quantum technologies. These platforms serve as a foundation for smaller companies to navigate the complexities of quantum computing.

Deploying PQC algorithms in the cloud is considered a potential solution for securing data for small companies in the next five to ten years. Despite not being favoured by some, it is argued that deploying PQC algorithms in the cloud offers optimal data security for small companies. However, there is debate regarding this approach, with some opposing the practice for maintaining data security.

Countries are encouraged to focus on their strengths and specialties when planning their national quantum strategies. For example, Spain has chosen to invest in areas where it excels, such as optics and mathematics, to drive its quantum technology development.

In conclusion, quantum technologies pose both challenges and opportunities in cybersecurity. Addressing the quantum threat requires significant investments in quantum technologies, assessments and adaptations of information security structures, and consideration of alternative solutions like deploying PQC algorithms in the cloud. Additionally, countries should strategically focus on their strengths and specialities to plan effective national quantum strategies. Ongoing research and discussions are needed in this rapidly evolving field.

Olga Cavalli

Latin America faces unique technological and internet infrastructure challenges due to economic and distribution inequalities. These challenges stem from the disparities in wealth and resources within the countries of the region. As a result, access to and the quality of technology and internet infrastructure vary greatly across Latin America.

To address these challenges, there is a need for increased participation in policy dialogues related to the internet in Latin America. Olga Cavalli, a university teacher at the University of Buenos Aires, has played a key role in creating a training program for professionals to learn about the rules of the Internet, understand its challenges, and participate more actively in policy dialogues. This initiative aims to empower Latin American countries to have a stronger voice in shaping internet policies that are suitable for their specific needs and circumstances.

Furthermore, the rapid adoption of Information and Communication Technology (ICT) and Internet of Things (IoT) devices in Latin America has raised concerns about increased vulnerabilities due to the lack of initial security designs. It is estimated that there will be between 22,000 million and 50,000 million IoT devices next year. The fast pace of adoption leaves little time for proper security measures to be implemented, which could lead to potential breaches and threats in the future.

Argentina has taken proactive steps in addressing cybersecurity concerns. The national administration has implemented binding resolutions that require the preparation of a security plan, the assignment of a focal point for contact, and information sharing in the event of a cyber incident. Additionally, a manual has been developed to guide the national administration on how to respond to such incidents. A new cybersecurity strategy has also been approved, showcasing Argentina’s commitment to ensuring security in the digital realm.

Developing countries and small to medium enterprises (SMEs) face significant challenges in keeping up with rapid technological changes. These challenges include restrictions on importing certain products and hardware, as well as a lack of human resources, as trained professionals often migrate to developed countries in search of better opportunities. The combination of limited resources and a lack of technical expertise hampers their ability to understand and afford new technologies, creating a widening technology gap.

Moreover, developing economies and small to medium enterprises are often consumers of technologies developed elsewhere, which raises concerns about the global technology gap. While major technology companies like AWS, Microsoft Azure, and Google are expected to provide solutions based on emerging technologies like Post-Quantum Cryptography (PQC) algorithms and cloud computing, developing economies and SMEs rely on these technologies without actively contributing to their development. This dependence on technologies developed elsewhere puts them at a disadvantage.

To address these challenges, capacity building and awareness are advocated as essential measures. By investing in the development of local technological capabilities and creating awareness about the importance of technology, Latin American countries can reduce their reliance on technologies developed by other countries. This would help narrow the global technology gap and allow them to actively contribute to technological advancements that suit their specific needs.

In conclusion, Latin America faces unique challenges in technological and internet infrastructure due to economic and distribution inequalities. Increasing participation in policy dialogues, addressing cybersecurity concerns, and bridging the technology gap are crucial steps towards creating a more inclusive and technologically advanced region. Additionally, capacity building and raising awareness about technology will empower Latin American countries to shape their own technological future.

Nicolas Fiumarelli

During the discussion, the speakers emphasised the necessity of implementing security technologies, such as RPKI, DNSSEC, IoT security standards, and quantum-resistant algorithms, through legislation. They pointed out that the rising number of Internet of Things (IoT) devices and the advancements in quantum computing pose significant security risks. These risks can be mitigated by the adoption of robust security measures.

The speakers also highlighted the existence of security standards developed by the Internet Engineering Task Force (IETF) specifically for IoT devices. These standards provide guidelines and best practices to ensure the security of IoT networks and data. However, one speaker questioned why these security technologies are not universally enforced in all Information and Communication Technology (ICT) systems through legal obligations.

It was acknowledged that the implementation of advanced security technologies comes with a high cost. This cost may pose a challenge to widespread adoption. Nonetheless, the importance of safeguarding critical infrastructure and personal information against cyber threats and data breaches justifies the investment in these technologies.

Overall, the sentiment during the discussion was neutral, indicating a balanced examination of the topic. The speakers’ arguments and evidence provided a comprehensive understanding of the urgency to implement security technologies, alongside the challenges associated with their implementation. The discussion aligned with SDG 9: Industry, Innovation and Infrastructure, as it emphasised the need for secure and resilient ICT systems to support sustainable development.

Through this analysis, it becomes evident that the adoption of security technologies through legislation should be encouraged and prioritised. This will help ensure the protection of IoT devices and networks, while also addressing the growing threat of quantum computing to traditional encryption methods. Additionally, the development and enforcement of security standards can play a crucial role in enhancing cybersecurity practices across various industries.

In conclusion, the discussion underscored the significance of deploying advanced security technologies and standards to safeguard ICT systems. Although challenges such as high implementation costs exist, the speakers highlighted the urgency to address these concerns and apply security measures throughout the industry. By doing so, they aimed to emphasise the need for a comprehensive approach to cybersecurity, simultaneously addressing both technological advancements and legal enforcement.

Carlos Martinez

The discussion centres around the vital role of DNSSEC (Domain Name System Security Extensions) and RPKI (Resource Public Key Infrastructure) in securing the fundamental structure of the internet. These security protocols are instrumental in safeguarding the integrity and authenticity of DNS responses and BGP (Border Gateway Protocol) announcements, respectively.

DNSSEC and RPKI operate by utilising digital signatures to verify the legitimacy of DNS responses and BGP announcements. This verification process ensures that the network delivers data packets to the correct destination, maintaining the proper functioning of the internet. The speakers unanimously recognise the crucial importance of DNSSEC and RPKI, highlighting their shared responsibility in both signing and validation processes.
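The DNSSEC chain of trust can be sketched very roughly: a parent zone publishes a digest (the DS record) of its child zone's key, so a resolver configured with only the root trust anchor can validate keys all the way down. The toy model below is an illustration, not real DNSSEC; keys are plain byte strings rather than actual DNSKEY records, and signatures are omitted.

```python
import hashlib

def ds_digest(dnskey: bytes) -> str:
    # A DS record in the parent zone holds a digest of the child's DNSKEY.
    return hashlib.sha256(dnskey).hexdigest()

# Toy zone data: each zone's key, and the DS digest it publishes for its child.
root_key = b"root-zone-key"        # trust anchor, configured in the resolver
org_key = b"org-zone-key"
example_key = b"example.org-zone-key"

zones = {
    ".":            {"dnskey": root_key,    "ds_for_child": ds_digest(org_key)},
    "org.":         {"dnskey": org_key,     "ds_for_child": ds_digest(example_key)},
    "example.org.": {"dnskey": example_key, "ds_for_child": None},
}

def validate_chain(path, trust_anchor: bytes) -> bool:
    # Walk from the trust anchor down; each zone's key must match
    # the DS digest its parent published.
    if zones[path[0]]["dnskey"] != trust_anchor:
        return False
    for parent, child in zip(path, path[1:]):
        if ds_digest(zones[child]["dnskey"]) != zones[parent]["ds_for_child"]:
            return False
    return True

print(validate_chain([".", "org.", "example.org."], root_key))  # True
```

If any key in the chain is swapped out (say, by a spoofed server), its digest no longer matches the parent's DS record and validation fails, which is the property the summary describes as verifying the legitimacy of DNS responses.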

On a related topic, there has been a debate concerning the potential weakening of cryptographic algorithms and the inclusion of backdoors to enable access. However, Carlos, one of the speakers, expresses a negative sentiment towards this notion. He asserts that such actions would be unwise, potentially compromising the security of cryptographic systems. This viewpoint aligns with SDG 16, which focuses on ensuring peace, justice, and strong institutions.

A positive aspect discussed is that both DNSSEC and RPKI have algorithm agility built into their design. This feature ensures that they can adapt to incipient post-quantum cryptographic scenarios. Consequently, when post-quantum cryptographic algorithms are standardized, they can be effectively incorporated into DNSSEC and RPKI, providing continued security measures against quantum threats.

The debate also encompasses the challenge of mandating technology, with the speakers highlighting instances where such endeavors have proven unsuccessful. They note the issues surrounding cost and benefit discrepancies, particularly in the context of the Internet of Things (IoT) and DNSSEC/RPKI implementation. Furthermore, while post-quantum algorithms have been proposed, they have not yet achieved a satisfactory level of performance.

In conclusion, the speakers collectively emphasize the importance of DNSSEC and RPKI in securing the core infrastructure of the internet. Their positive sentiment towards the efficacy of these protocols underscores their significance in maintaining a properly functioning internet. Nonetheless, there is a negative sentiment towards weakening cryptographic algorithms, highlighting the potential risks associated with such actions. The speakers also acknowledge the need for flexibility and tailored approaches when addressing different technologies, rather than enforcing a one-size-fits-all mandate. Ultimately, this discussion highlights the ongoing challenges and complexities associated with internet security and the need for continued research and adaptation to effectively counter emerging threats.

Session transcript

Moderator – Carina Birarda:
Okay. We are going to start. Good morning, everyone. Good afternoon. Good night. I want to express my gratitude for sharing this workshop, Quantum IoT Infrastructure: Security for Cyberspace. It's an honor to moderate such distinguished colleagues and friends. I am Carina Birarda from Argentina, a member of the Multistakeholder Advisory Group of the IGF, co-facilitator of the Best Practices Forum on Cybersecurity. I'm passionate about technology and all things related to digital protection. As we know, in recent years we have seen a significant increase in cybersecurity incidents at the international level, as alarming statistics consistently show. Global interconnectivity, dependence on technology, and the sophistication of criminal models such as crime-as-a-service are the key factors behind the trend. So maybe we have more work. The lack of adoption of internationally-recognized cybersecurity best practices is one of the fundamental challenges. Recognizing cybersecurity as a global issue is essential, as cyber attacks do not respect borders or jurisdictions. Organizations such as the UN, the World Economic Forum, and the IGF promote internationally-recognized cybersecurity standards, such as the NIST Cybersecurity Framework and ISO 27001 information security guidance, which provide a solid framework for protecting digital assets. Collaboration and international cooperation are equally essential, as a cyber attack often involves actors operating in multiple countries. Sharing information about threats and cybersecurity tactics is vital to stay a step ahead in the fight against these attacks. In summary, the increase in international cybersecurity incidents is a challenge that requires a global response. The adoption of cybersecurity best practices and international collaboration are the fundamental pillars for addressing this growing threat and protecting our digital assets in an increasingly interconnected world. 
In order to determine which best practices can be implemented, it is essential to understand the threats we are facing. So we have three opening questions for all the panelists, which are as follows. Number one: what are the leading cybersecurity threats across IoT, critical internet infrastructure, the web, and quantum technology, and what are the existing best practices to counter these threats? Number two: how can diverse stakeholders, including the IGF community, the Best Practices Forum on Cybersecurity, dynamic coalitions, and other relevant groups collaborate and contribute actively to the development and implementation of these best practices? And number three: in the context of the continually evolving cybersecurity landscape, what key considerations are essential to ensure a safer and more trustworthy internet for users across these areas? I kindly request that each of you introduce yourself. You have a 10-minute limit for your presentation. Number one, please, Wout de Natris. Your turn. Thank you.

Wout de Natris:
Thank you, Carina. My name is Wout de Natris, and I am a consultant based in the Netherlands. As such, I am the coordinator of a dynamic coalition at the IGF called the Internet Standards, Security and Safety Coalition. This coalition has one primary goal: to make the internet more secure and safer for all users, whether public, private, or individuals. We do that through different working groups, and these working groups focus on different topics within cybersecurity. So we have one on the Internet of Things, on security by design built into the Internet of Things, and I'm sure that Nicolas will tell more about that later. We published our first report yesterday morning here in Kyoto, which can be found online. We have a working group on procurement and supply chain management, and I think that's what we're going to focus on most in a moment. We have one on education and skills, to make sure that tertiary education delivers what industry needs in this field and not coding programs from 20 years ago. We have one on data governance, one on consumer protection, a working group on emerging technologies, and one on the deployment of two specific standards, focusing not on the technical side, because what we're discussing here is not about the technique: it's about the political, economic, social, and security choices that we have to make as a society. What we aim to do, and I think that answers one of the questions that I heard, is that when governments and larger industries start demanding security by design when they procure their ICT services, devices, or products, any company that is not able to meet these demands will not get big assignments. And that would be a major driver for getting everything, including IoT, more secure by design. 
What I think is important to understand is that the internet works as it does, and let's face it, it works fantastically: anybody in the world can at this moment follow us, ask questions, and use the chat to interact with us, all because of the way the internet functions and the way it is scalable. But unfortunately, when these rules were built, security was not an issue, because the people then connecting worked at either the U.S. Department of Defense or at some U.S. universities, and everybody knew each other, so there was no need for security. And then the world came online on the same principle, which showed that it was inherently insecure. The technical community has made repairs; they made changes to the code that runs the internet, and that code running the internet is the public core of the internet that people talk about. So when you talk about protecting the public core of the internet, you're not just protecting undersea cables or land cables or server parks, you're also protecting the software that makes it work. And that is the weird thing about this story: the software that makes the internet and IoT more secure is not even recognized as such by any government in the world. So if you talk about standards, they talk about government bodies making standards, or about organizations like ISO making standards, but not about the internet standards. They are made by the technical community on a voluntary basis, but that is what makes the internet run, not ISO, because that is an administrative ticking of boxes. So if we get governments to understand that it's these other standards they have to recognize formally as well, and also use them when they procure their services, their products, their devices, the world will change. And what is the current situation? The current situation is that there is no level playing field for industry. 
When industry is not asked for a level of security to be built in, apparently they don't do it. And what if I was a single company and I decided I am going to deploy all these standards? That costs me money, it costs me time, it costs me effort. I have to train people. And if the competition does not do it, it means my product becomes more expensive, and most likely governments won't buy it because they go for the cheapest option. So in other words, I would be out of business. So there's no level playing field. There's no demand from the big players. So there's no interest to deploy. So the IoT devices coming to the market are usually insecure by design, and from that moment on are a threat factor for everybody in society. So if we don't put this pressure on industry to deploy, most likely nobody will, except a few that are more idealistic. And this is shown in the research that we've done on IoT security by design. I will not take anything away from what Nicolas will be telling us, but we found that there's no pressure to make IoT secure; there's no pressure from the outside. We've seen it also in the procurement study we've done. We've analyzed documents around the world on procurement, and if security is mentioned, it is not always cybersecurity, and if it's cybersecurity, it's seldom about internet standards. There's one big example that does: the Dutch government. They mandatorily have to deploy 43 different standards when procuring, or explain why they cannot do that, and that is reported to the Dutch parliament once a year. So why is this relevant? I think this is extremely relevant because we're discussing our future. IoT is already among us. AI has been among us for far longer than most people realize. And who knows what is coming with the metaverse or quantum, and who knows what will be invented tomorrow, because we're in a society that changes every two hours. And it looks like, time and time again, we make the same mistakes. 
The product is invented and it comes to market, usually untested for security. So is that something we should be discussing: that when a new technology enters the market, it should at least be tested formally in one way or another? Probably not legislated, because you can't legislate what you don't know, but you can at least demand a certain amount of testing. So ICT in whatever form is allowed onto the market untested from the outside, and usually it's also almost irreparable: when the flaws are found, it's almost too difficult to repair them in some cases, so they remain a threat factor for sometimes decades. With AI, and perhaps with quantum or the metaverse and all else that is in store, we can demand at least security from the outset. Demand it before we start procuring it, and certainly before we buy it. Large corporations and governments can set that example, and when they do, it becomes a standard and the security will become available for all of us. So if we make governments and larger industry aware of their role and their potential influence, and provide them with the information they perhaps lack now, they will change the world for us. And that's our IS3C goal: to make the internet more secure and safer through the widespread deployment of security-related internet standards and ICT best practices. If you're interested in joining, you can do that at is3coalition.org, and the three is the number three. Our reports are there, also the report Nicolas will be telling you about. And I think that is about what I would like to contribute for now. So thank you very much for the opportunity.

Moderator – Carina Birarda:
Thank you very much. The second panelist is Carlos Martinez. He’s online. Carlos, I can see you online. Hello, how are you? I am very well, thank you.

Carlos Martinez:
Can you guys hear me? Yes. Okay. I have four or five slides that I would like to share. I hope that I can share my screen. Yes. Okay. So, I'll get right to the point. Well, my name is Carlos Martinez. I work for LACNIC, the Regional Internet Registry for Latin America and the Caribbean. I've been working for LACNIC for the best part of the last 15 years. I'm currently the head of technology, or the CTO, for LACNIC. One of the things that initially caught my attention when I started working for LACNIC was the need for deploying two technologies that at the time were just not very well-known. These are DNSSEC and RPKI. I'm grouping them because I believe there's a common theme between them, which is securing the infrastructure, or securing the core of the internet. I would describe, I would say, a bit of a dire situation regarding security in IoT, but that's one part of things. When you have devices, the devices may be secure themselves, but you still have to traverse the internet to get information from one point to another. I will try to go through this very quickly. When I speak about internet infrastructure, I'm not thinking about the physical layer in this case, not about fibers, cellular, or satellites; I'm thinking particularly about what I like to call the three pillars of a properly functioning internet. The internet, to work as we know it, depends on three functions, basically. One is routing; another is forwarding, basically the ability of the network to take a packet on ingress and deliver that packet to the proper destination; and a complementary function, which is domain name resolution, or DNS. So all three are necessary. There's a subtle difference between routing and forwarding. 
Forwarding is the actual decision a router makes when it has a packet: it analyzes the packet and decides out of which interface it should be sent. Routing is a control function through which the router learns a table that it uses to decide how to forward packets. Both are necessary and, of course, complementary. So here is a very high-level threat overview of these three functions, and for each you could probably identify more threats than these. Name resolution, for example, suffers from domain spoofing, where a server pretends to host a DNS zone that it shouldn’t, or is not authorized to hold, and this is widely used, for example, for phishing attacks. Cache poisoning is another very well-known threat to DNS, where a specially crafted packet can poison a server and allow an attacker to instruct that server to lie to its clients. This has been widely discussed in the industry and has caused, I would say, a bit of a loss of trust on the part of users, something we have seen in different industries and in different ways. Routing suffers from something similar, if you will. Route hijacking is probably one of the most well-known attacks on the routing system: an autonomous system publishes a network it shouldn’t, or doesn’t have authorization to announce. Recently we have witnessed some instances of internet instability due to hijacks, or to a related situation called route leaks, where a network within the internet announces some prefixes but cannot fulfill the promise of actually carrying the traffic to its destination. It usually happens when a small network announces the whole routing table of the internet and simply cannot transport all the traffic that every other network starts sending through it. So, as was mentioned previously, security in some of these protocols was in a way an afterthought.
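The countermeasure to such hijacks, RPKI route-origin validation, boils down to a check like the following toy sketch in Python (the ROAs, prefixes, and AS numbers below are invented purely for illustration):

```python
import ipaddress

# Toy Route Origin Authorizations (ROAs): each one says which AS may
# originate a prefix, and up to what prefix length. Invented data.
ROAS = [
    {"prefix": ipaddress.ip_network("203.0.113.0/24"), "max_length": 24, "asn": 64500},
    {"prefix": ipaddress.ip_network("198.51.100.0/22"), "max_length": 24, "asn": 64501},
]

def validate_origin(announced_prefix: str, origin_asn: int) -> str:
    """RFC 6811-style origin validation: valid / invalid / not-found."""
    prefix = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa in ROAS:
        if prefix.subnet_of(roa["prefix"]):
            covered = True  # some ROA covers this address space
            if roa["asn"] == origin_asn and prefix.prefixlen <= roa["max_length"]:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate_origin("203.0.113.0/24", 64500))  # legitimate announcement
print(validate_origin("203.0.113.0/24", 64666))  # origin hijack: covered, wrong AS
print(validate_origin("192.0.2.0/24", 64500))    # no ROA covers this prefix
```

A real validator fetches and cryptographically verifies the ROAs from the RPKI repositories first; the sketch only shows the decision a router applies once the validated data is in hand.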
These protocols were created when the internet was a much more naive place, and security had to be, I would say, backported into them. For DNS we have the DNS Security Extensions, or DNSSEC, which introduce digital signatures into DNS responses, and this allows a resolver to actually verify a response. This is, of course, not meant to be a complete explanation of DNSSEC; it is just the general idea. RPKI does a similar thing for routing. Again, some cryptography is introduced, and some additional decision points are added to the BGP algorithm that allow a router, based on signed objects called ROAs (Route Origin Authorizations), to decide whether a route is a correct one or not. RPKI in particular has a lot of complexity that I’m not describing and don’t have time to get into, but there is a lot of documentation on the internet. Now, a few considerations regarding the use of cryptography in these protocols. Some people have the misconception that every use of cryptography is there to provide encryption, to ensure secrecy in a way. Both RPKI and DNSSEC make heavy use of cryptography, but they do not encrypt messages, and they are not intended to provide privacy per se. Maybe privacy is a consequence of implementing these protocols, but cryptography in DNSSEC and RPKI is not used for providing secrecy. What is it used for? It is used for authenticating and verifying signature chains that ensure either a correct DNS response or a correct BGP announcement. There is a slight difference between them. RPKI requires a well-defined PKI, a public key infrastructure with a trust anchor and CRLs, and all the complexity that comes with a PKI. The RIRs have taken the role of operating the trust anchors of the RPKI.
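The chain-of-trust idea behind that verification can be sketched with a toy model. Here HMAC stands in for the public-key signatures (DNSKEY/RRSIG/DS records) that real DNSSEC uses, and all names and keys are invented; the point is only how trust flows from a trust anchor down to an answer:

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> str:
    # Stand-in for a real digital signature (real DNSSEC uses public-key crypto).
    return hmac.new(key, data, hashlib.sha256).hexdigest()

root_key = b"root-zone-key"          # the validator's trust anchor
org_key = b"org-zone-key"
example_key = b"example.org-zone-key"

# What a resolver receives alongside the answer: each zone's key,
# vouched for (signed) by its parent.
chain = [
    ("org.", org_key, sign(root_key, org_key)),              # root vouches for org.
    ("example.org.", example_key, sign(org_key, example_key)),  # org. vouches for example.org.
]
answer = ("example.org. A 192.0.2.10", sign(example_key, b"example.org. A 192.0.2.10"))

def validate(chain, answer, trust_anchor):
    """Walk the chain from the trust anchor; reject any broken link."""
    current = trust_anchor
    for zone, key, signature in chain:
        if sign(current, key) != signature:
            return False  # a link in the chain fails to verify
        current = key
    record, record_sig = answer
    return sign(current, record.encode()) == record_sig

print(validate(chain, answer, root_key))   # unbroken chain: accepted
forged = ("example.org. A 203.0.113.66", answer[1])
print(validate(chain, forged, root_key))   # spoofed answer: rejected
```

This also makes the shared-responsibility point concrete: the signatures exist only if the zone operators sign, and they matter only if the resolver runs the validation walk.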
On the other hand, DNSSEC uses a simpler chain of trust, because it can depend on some features the DNS already has, for example its tree-like structure. These technologies are basically useless unless the community realizes that there is a shared responsibility here. In both RPKI and DNSSEC there are two functions: the signing, either of the DNS records or the routes, and the validation. Both are necessary. Signing becomes useless if no one validates, and the other way around: if you are validating but have nothing to compare the signatures with, again, it’s useless. There is a shared responsibility here, and if you remember one thing of what I’ve been saying, please remember this message of shared responsibility. It is something we need to get across the industry. Regarding quantum: the previous panelists mentioned that security was sort of an afterthought, and that’s completely true. But there is a silver lining, which is that this afterthought was implemented in the form of an overlay. The core protocol remains unchanged, and there is, I would say, a layer of cryptography applied over it. The cryptography here didn’t exist before; it was added afterwards, and it was added in a way that can be replaced. There is a term for this: algorithm agility. Both DNSSEC and RPKI have algorithm agility built in. So eventually, when a post-quantum cryptographic algorithm is standardized, it will be possible to apply it to both DNSSEC and RPKI. I don’t have it in the slide, but there is one more thing I would like to mention: I have a strong position against initiatives that point towards the weakening of cryptographic algorithms. There have been discussions in governments and other fora regarding the necessity of weakening algorithms or providing backdoors to them.
And I think that would be a very poor decision to implement something like that. So that’s all I have for now. Thank you.

Moderator – Carina Birarda:
Thank you very much, Carlos, for your presentation. Very clear. I am thinking the same; I support that very strongly. And the third panelist is Maria Luque. She’s online. Maria, the floor is yours.

Maria Luque:
Good morning, everyone. Good morning from Madrid, actually. Very glad to be here with you today; it’s 2 AM in Madrid. And today it seems that we are going to speak about software, which is a key point of our discussion. So give me a second to find my presentation and see if I can share my screen. Okay. Can you see it? Yes. Perfectly. Okay, I take it as a yes. So, we’re starting. I was saying that we were speaking about software, and software is at the core of my presentation about quantum security. First of all, I am Maria Luque, and for the past 10 years I have been advising national governments and local government agencies, mostly in Spain and in the European Union, on what to do with emerging technologies, for example new connectivity, space, or quantum technologies, and how to do it so that whatever we do with these technologies can benefit society in great ways. I have also been working with quantum organizations, quantum startups, and national quantum strategies for the past three years, and I’m very glad to be here. So, the focus of today. Today, for me, we have a challenge, and the challenge is understanding how quantum technologies are going to disrupt not only cybersecurity but our entire conception of how we process, store, and communicate information. As you have probably seen in the media, the protagonist is quantum computing. Its potential to bring new solutions to all sorts of challenges, computational or not, is immense. But once it is live, it will somehow imply that our current cryptographic systems are unsafe and won’t be able to safeguard our privacy. So let’s try to understand today, in 10 minutes, how to look at the quantum threat and how to take advantage of quantum to actually be quantum safe.
Now, we’re at the IGF, and the IGF’s motto this year is an internet for everyone. An internet for everyone is possible through universal access and privacy, and the fact that our communications can be kept secret is the base of our integrity as individuals and, of course, as nations. To keep the confidentiality of our online interactions, we trust what we call cryptographic algorithms, which Carlos was speaking about. And this trust is built on something we call computational hardness assumptions: the expectation that they will be able to withstand a cyber attack no matter what. But the truth is that a breakthrough in cryptanalysis can make a system vulnerable overnight. Now, we all know of a company that suffered a cyber attack in the past three or four months. And as my colleagues were saying, when it’s not a cyber attack on a company, it’s a cyber attack on a national health system or a critical infrastructure. We do live in cyberspace. Thanks to 5G, among others, we rely ever more on cyber-physical systems, such as IoT, critical infrastructure, and the web. And the more digital our infrastructure is, the more attack vectors we have to withstand, and each domain is vulnerable in its own very unique way. For example, as Carlos was saying before, critical infrastructures depend on SCADA systems that are normally very outdated. IoT environments have very limited computing resources by design and very limited security schemes by design, as Wout de Natris was saying. And when we’re speaking about the internet and telecom networks, we are shifting subtly to software-defined networks, meaning they will be more susceptible to cyber attacks. So we can say that the cryptographic systems that protect our data infrastructure are on shaky ground. Today we can really say that they are a weak point to watch. And during the past decades, we’ve discovered quantum algorithms.
Quantum algorithms with a cryptanalytic potential that can break the cryptographic techniques we use today to protect our data. We just need quantum processors that are big enough to run them. Quantum processors, meaning quantum computers: a new type of computing device, you’ve heard about it, capable of performing very specific calculations, some of which are actually intractable for current classical computers. A quantum computer is truly a game changer. It uses the principles of superposition and entanglement, whatever those mean, to change the way we store and process information. And while large-scale quantum computers are not a reality yet, the fact is that a strong enough quantum computer can dramatically accelerate the breaking of the schemes we use in public-key cryptographic algorithms to protect our data. I’m going to give you an example. Thanks to a quantum algorithm like Shor’s, we could break RSA encryption. And this can destabilize us. It’s not just about data breaches, and it’s not only about financial loss. It’s about losing the integrity of digital documents, all of them, losing the sanctity of our personal data, and losing control over the health and financial systems that keep us together. And the truth is that we don’t have to wait for quantum computing to come, because with harvest now, decrypt later, which I assume you’ve heard a million times by now, someone can store encrypted information today to decrypt it once quantum technology becomes more advanced. This means that the impact of quantum computing truly started yesterday, as we can say. Now, the paradox is that quantum can also give us the key back to our integrity. In fact, quantum technologies, together with some classical techniques, are the bet of the tech industry and governments when it comes to the cybersecurity of the future.
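How completely RSA rests on the hardness of factoring, which is exactly what Shor’s algorithm removes, can be seen with a deliberately tiny key. This is a toy sketch: real keys are 2048+ bits and cannot be factored by trial division, which is the whole point.

```python
# A deliberately tiny "RSA" key pair (real keys are 2048+ bits).
p, q = 61, 53
n = p * q                      # public modulus: 3233
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

ciphertext = pow(42, e, n)     # encrypt the message 42 with the public key

def factor(n):
    """Trial division: trivial for toy keys, infeasible for real key sizes.
    Shor's algorithm would make it feasible for real key sizes too."""
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("no nontrivial factor found")

# An attacker who can factor n recovers the private key, and the message.
fp, fq = factor(n)
recovered_d = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(ciphertext, recovered_d, n))  # 42, the "secret" message
```

Everything the attacker needed besides the public key was the ability to factor n; a sufficiently large quantum computer running Shor’s algorithm supplies exactly that ability.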
Now today, as you can see in the presentation, we are going to focus, since we don’t have much time, on the tools we are developing today to be quantum safe in the short term and in the midterm. The first one is post-quantum cryptography, which Carlos was talking about before, and the second one is quantum key distribution. Let’s focus on the solution we have more at hand. We were saying that encrypted communication that is intercepted today can be decrypted in the future by a quantum computer that is strong enough. What post-quantum cryptography offers us is new classical algorithms that we believe to be secure against a quantum threat. There is nothing quantum in these algorithms, but they rest on computational hardness that can withstand the brute force of a quantum computer trying to decipher them. PQC is software, and PQC is a short-term solution. We are making an effort to standardize these algorithms, guided by NIST in the U.S. You have probably heard of them: there is Kyber for secure key exchange, and there are Dilithium, SPHINCS+, and Falcon for digital signatures. And the interesting thing here, talking about best practices, is that the tech industry can build these algorithms into the solutions they offer us today, even though they haven’t been fully standardized. And in fact they do, which is interesting, for example, for government agencies that use technologies in the cloud or store sensitive data in the cloud. Here we can see a couple of examples of major tech companies taking a hybrid approach via the cloud. For example, AWS has a commercial cloud environment, but it allows you to apply the Kyber algorithm within your secure connections, and that’s nice. Google has started combining classical cryptographic algorithms with potentially quantum-resistant algorithms for the FIDO2 standard, the standard you use to authenticate yourself when you sign in on a website.
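The hybrid approach these deployments take, running a classical key exchange alongside a post-quantum KEM and deriving the session key from both shared secrets, can be sketched as follows. The combiner below is a simplified stand-in for the HKDF-style constructions real deployments use, and the two secrets are mocked with random bytes; in practice they would come from, say, X25519 and Kyber.

```python
import hashlib
import secrets

# Mocked shared secrets; in a real handshake these come from the two
# key-establishment schemes run side by side.
classical_secret = secrets.token_bytes(32)   # stand-in for an ECDH output
pq_secret = secrets.token_bytes(32)          # stand-in for a Kyber KEM output

def hybrid_session_key(classical: bytes, post_quantum: bytes) -> bytes:
    # Concatenate-then-hash combiner (simplified stand-in for a real KDF
    # such as HKDF). The session key depends on BOTH secrets, so it stays
    # safe unless both the classical and the post-quantum scheme fall.
    return hashlib.sha256(b"hybrid-kdf-v1" + classical + post_quantum).digest()

key = hybrid_session_key(classical_secret, pq_secret)
print(len(key))  # 32-byte session key
```

The design choice here is belt-and-braces: if the PQC algorithm later turns out to be weak, the classical exchange still protects the session today, and if a quantum computer arrives, the PQC secret protects the recorded traffic.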
And Cloudflare, for example, has done something more or less the same, right? So, what I want you to get from this is that PQC requires new software stacks, and it can be implemented starting now. And due to its comparatively low cost, the private sector can take the lead, guided by standards. Now we get to QKD, which is the crown jewel to me; it’s my favorite. QKD, quantum key distribution, can be the midterm solution to the quantum threat to cybersecurity. It is hardware-based; it is not software-based. QKD uses the principles of quantum mechanics to establish a shared secret random key between two parties over a secure communication channel, and it alerts you to any eavesdropping attempts. For QKD, because we love to talk about the quantum internet but we are not close to that, what I’d like you to imagine is an entire infrastructure like that of the internet’s ISPs, the tier 1, 2, 3 telecom networks, but using quantum information processing techniques. That is a quantum network, and if we are successful in implementing quantum networks, we are going to have unhackable networks for secure communications. Now, I’m optimistic about the future of QKD, but it is definitely not a silver bullet, and there are many challenges to solve before it is deployed at scale. It’s a bumpy road for a start, and it is very costly. QKD is a moonshot, because we need entirely new infrastructures for secure communication. There are still limitations: for example, in a very large quantum network the quantum states of the photons can degrade and the information may not make it, so we have to work on that.
Also, these quantum networks have to be integrated into classical telecom networks, because that is the interesting thing we can do, and that requires compatibility; it requires us to work on interoperability, and this is quite a technical challenge. There is also scalability, and the need for the service to work 99% of the time. Why? Because quantum networks are going to be designed with secure government communications as the first use case. It’s going to be defense, it’s going to be intelligence, and they need to work. But despite the limitations, I want you to understand that quantum networking is starting to work. We can see that in Madrid, in the Madrid quantum communications infrastructure, which is able to send information over a radius of some 40 kilometers. We can also see it in New York, with Qunnect and NYU, because they have a quantum network that actually works. And also in China: you have already seen the news, they are very good at ground-segment-to-space-segment communication with quantum teleportation. So alongside PQC for the short term we have QKD, where the investment needs to be very big and very sustained, and only nations and federations can kickstart the design and deployment of these technologies. For example, the European Commission has the EuroQCI programme, and the strongest use case, as I was telling you, is secure government communications. Now, I have one minute for this. What I want you to get from this presentation is that, of course, there is a threat that may come with a quantum computer in 10, 15, 20, or 25 years, but there are techniques that we can implement, standardize, and use together in a phased approach in these 20 years until quantum computing comes. The first one, to me, is going to be PQC, because it is classical and we can do it now. The second one is going to be quantum networking.
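The eavesdropping-detection idea behind QKD comes from protocols like BB84, whose key-sifting step can be simulated classically as a toy. Ordinary random numbers stand in for photons here, so nothing quantum is actually happening; the sketch only shows the sifting logic, where the two parties keep just the bits for which they happened to choose the same measurement basis.

```python
import random

random.seed(7)  # reproducible toy run
N = 32

# Alice picks random bits and random bases; Bob picks random bases.
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]   # + rectilinear, x diagonal
bob_bases   = [random.choice("+x") for _ in range(N)]

# Bob "measures": same basis gives the correct bit, a different basis
# gives a random result (as quantum mechanics dictates for real photons).
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: the bases are compared over a public channel and the
# positions where they differ are discarded.
sifted_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
sifted_bob   = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]

print(sifted_alice == sifted_bob)  # the sifted keys agree
# In real QKD a sample of these bits is compared next: an eavesdropper
# forced to measure in a randomly guessed basis disturbs the photon
# states, and that disturbance shows up as errors in the sample.
```

This is why QKD can promise eavesdropping detection rather than merely computational hardness: the alarm comes from physics, not from an assumption about an attacker’s computing power.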
And the end game is going to be full deployment of quantum communication infrastructure networks, and also the quantum internet: sensors, computers, everything connected and protecting your data. So, taking this into mind, how can we participate in making this happen? We can do many things, right? But first of all, for me, it is always about thinking of yourselves: if you have an organization, you need to think about how it can be quantum safe. And the way you can do this is by understanding what you have in terms of information architecture. Note that we are used to mixing on-premise and cloud services to house and communicate our data. Understand which information security scheme you are following and your level of encryption: as Carlos was saying, is it robust or not? Have an inventory of your cryptographic algorithms, and also see how much you can invest in your transition to quantum security. If you are a small organization, you may get to PQC and that is all for the next 10 years. If you are a stronger, bigger organization, maybe you can also try to understand how to engage in quantum communication networks. The industry is already busy working on interoperability and compatibility, together with governments, for PQC and also for quantum networking. Governments are already launching national strategies and bringing quantum solutions into their cybersecurity strategies; the European Union is working on this right now. There are sandboxes in PQC and QKD to develop software stacks and hardware that actually work. And for the IGF community, and I count myself in the IGF community, I would tell you that quantum is still a mystery to most of us in the policy community. So what I think we need is to engage: we need to learn, we need to study this, we need to understand this, we need to create spaces for discussion and engagement.
I think it is on us to contribute something beyond policy thoughts: how to collaborate, and then somehow standardize these technologies. And let me finish with this. I think that quantum technologies bring both light and darkness to our lives, because our lives are digital, and our privacy is our health, is our identity. The digital rights of the people cannot be lost in translation in a global race towards being quantum safe and unhackable that no one understands. So I hope we can work together on this. Thank you very much for listening.

Moderator – Carina Birarda:
Thank you very much, Maria, for your presentation. And we thank you for sharing your ideas. And we invite you to ask questions, to have an interactive session. And Olga is our next panelist. The microphone is yours.

Olga Cavalli:
Thank you both. Thank you for inviting me; this is extremely interesting, and I have a question for the experts once we get to the questions and answers. I would like to bring you a different perspective now, first from the capacity-building angle and then from the public policy angle. First, let me tell you: my name is Olga Cavalli. I am a university teacher at the University of Buenos Aires, where I teach internet and telecommunications infrastructure, which is where I worked for most of the first stage of my career. Then for 20 years I worked in public policy at the Ministry of Foreign Affairs, and I am now in the Secretariat of Innovation in Argentina, where I am the National Director of Cybersecurity. So I want to bring you some ideas from these two perspectives. The South School of Internet Governance was created 15 years ago because we realized that the participation of Latin America in all the dialogue spaces where internet-related policies are defined was very scarce, and participants were perhaps not well prepared to take part in the dialogues, comments, and shaping of those policies, where the perspective of Latin America is totally different from that of other regions. Latin America has different challenges from other regions: it is extremely unequal in terms of economic and infrastructure distribution, so our problems are not the same as those of other regions. This is why we created this space: to train professionals of any age and any background, whether technical people, policymakers, journalists, or lawyers, to learn all the rules that make the Internet work, how to participate, and how to understand the problems and challenges that Latin America has. We have been doing that for 15 years. We rotate among the Americas, and we held one edition totally focused on cybersecurity at the venue of the Organization of American States.
That was very interesting. This year, for the first time, we moved away from big cities and went to a city in the interior of a Brazilian state, Campina Grande, with 400 fellows. You can find information on our website, governanceinternet.org. What I would also like to talk about is the extremely fast pace of adoption of ICT technologies by human beings. There are different estimations; maybe Nico will know more details. I saw a report from Ericsson saying that next year we will have 22 billion IoT devices, and then I found another from Cisco saying the number will be 50 billion. The difference is interesting, but the number of devices is enormous compared with what we have been dealing with up to now, which was a reasonable number of devices per person. Considering that the population of the world is 8 billion people, the pace of adoption of all these digital infrastructures, especially the new ones, is very, very fast: five times faster than electricity and telephony. Also, as Wout and colleagues already mentioned, most of these technologies were not designed with security in mind from the start. They were designed in a different environment, in a different time, and with different ideas, so it is extremely challenging. I would now like to talk about some public policy we have been implementing in Argentina. Although I am participating here as an academic, I have a public policy role, so I want to tell you what we have been doing in Argentina. Our remit in the national government is the national administration, and there is a resolution that establishes minimum cybersecurity requirements for its bodies. They have to prepare a security plan and share it with us, and we keep a database with all the security plans.
And the most important thing is that they must designate a focal point. That focal point is in contact with us on a permanent basis. We provide training for them every month, and sometimes more frequently, with news about technology, and we also share with them all the vulnerabilities that the national CERT, which depends on our administration, detects. We share this information with them on a daily basis. If they have an incident, they have to share it with us, and the national CERT and our experts can help them. This communication, and the establishment of the security plans, is mandatory for them: there is a binding resolution. It is not voluntary or aspirational; it is mandatory. We have also developed a manual on what to do if they have an incident, describing the different stages they have to go through when they have an incident. I think that fits the question about best practices, as well as the public policy I mentioned. Also, we have published and approved the new cybersecurity strategy for Argentina. This is the second one, and it was produced after a public comment period during January this year. Let me check if I am forgetting something. That would be all I want to share with you. I have a question for Maria, for Wout, and for Nico. What I see now is an increasing and challenging gap for developing countries, especially for small and medium enterprises, in catching up with all these changes in technology. And I see this gap becoming very, very big, not only in understanding the technology but also in buying it: it is extremely expensive, and in some countries we have restrictions on importing some products and hardware. And there is also the lack of human resources, which we all know is a big challenge for all countries, not only developing ones but developed ones too.
But some human resources go away. My son is living in Europe because he was recruited by a company that thought he was very well prepared. He was trained in Argentina, at a public university, and now he is working in another country, which is good for him but maybe not good for developing economies. Just an example of the challenge we are facing. And looking at all these quantum technologies being developed, how do you see small and medium enterprises, or developing countries, catching up with these fast-changing technologies that will be implemented very quickly? Thank you. I did two things: I spoke, and then the question.

Moderator – Carina Birarda:
Thank you very much, Olga. We have only seven minutes for questions. If you want to answer the questions, this is okay. Yes, Olga? Yes, yes, go ahead. Let me see. Mohamed, do you have any questions in the chat? No, no, we don’t have any questions yet.

Nicolas Fiumarelli:
Yes, maybe I could accumulate one question, and the panelists could respond as well, because you all talked about different technologies. We know that the number of IoT devices is increasing, that quantum computing is already being developed, and that ICT is not deploying the best security practices in every service. And as Olga said, it is so expensive to have all of this. So my question is: also in the case of RPKI and DNSSEC, do you think that mandating these technologies by law is a good way to go? What are the threats or the risks, maybe commercial risks, in doing that? Why are DNSSEC and RPKI not mandatory for networks? In the case of the IoT security standards made by the IETF, there are already solutions being standardized for these constrained devices. And the same goes for ICT in general, right? Why are technologies like the quantum-resistant algorithms we are seeing for the core internet not applied to all ICTs by a mandate, by law? Maybe you could each take two minutes to respond, and also pick up the questions we have had from Olga and the rest of the panelists. Thank you. Maybe starting with Carlos.

Carlos Martinez:
Those were a bunch of questions in a single one. I will try to make a couple of points. I personally do not believe that mandating technology is a good idea, and I have seen many examples where that has failed. That said, I think the situation for DNSSEC and RPKI is vastly different from the situation for IoT. IoT has a serious issue with cost per device. There is a race to the bottom in cost per device, because when you have so many millions of devices it makes sense to build the cheapest device you can actually manufacture, and that race to the bottom certainly does not help in developing new technologies. For DNSSEC and RPKI, I think there is a difference. One of the issues the internet has faced over the years in deploying many new technologies, and it happens for IPv6 as well, is externalities: things that you, as part of the internet, have to do at your own cost on behalf of another party, to benefit another party. Sometimes that is commercially a hard sell, and I think that has been one of the barriers to deploying new technologies on the internet. So there are two different phenomena that need to be addressed differently. You asked why we are not seeing post-quantum algorithms being applied. In my opinion, the post-quantum algorithms that have been proposed so far for these protocols are less than satisfactory. They are basically variations of existing algorithms with very, very long keys that are simply not practical. They exist, but they would create huge signatures that are a threat in themselves. Sorry, I think I took more than two minutes.

Nicolas Fiumarelli:
So now going to Maria, two minutes, please, and then Olga.

Maria Luque:
Okay, thank you very much, Olga, for your question. It is very interesting, and I would like to expand on it with you for an hour and a half. Regarding what you say about pymes, that is, small companies, faced with the challenge of trying to keep up with these quantum technologies and all the buzz that comes with them: there is something very interesting here, because in Spain, for example, we have the National Security Scheme, which was updated in October 2022. It does not speak about quantum yet, but the standards it enforces for information security are very high. It talks, for example, about multilevel security schemes, about patching of hardware, et cetera. And I can see this scheme in Spain being updated with PQC requirements and best practices. The thing here, although I do not like it and do not think it is entirely positive, is that a small company, whether it is a tech company or not, normally relies on the infrastructure of the big tech companies, the infrastructure providers with their own proprietary technology architectures. So they rely on AWS, on Microsoft Azure, on Google. And these companies are going to be able to offer the solution that Carlos and I do not like very much, which is PQC algorithms inserted in the cloud, as an option for you to try to make your data safer where it sits. This is going to be the option in the next five to ten years for small companies; although I do not like it, I can see it as a way forward. And regarding national quantum strategies, for developing countries and for any country in general, I can tell you that the tendency is to try to be very specialized and to prioritize the one thing you think you can invest in.
For example, you can see that in the European Union, every country is very ambitious, but what we see is, for example, Spain saying, hey, we’re very good at optics, we have very good mathematicians, so we’re going to go for developing quantum algorithms, and we’re not going to invest so much in quantum computers because maybe we don’t have the resources, right? So different countries are trying to understand which role they can play in the international quantum supply chain. It can be betting on a talented workforce, it can be betting on developing algorithms, or it can be betting on theoretical physicists. It really depends, and it’s a challenge for every country, and I would love to expand on it more with you. Thank you.

Olga Cavalli:
Thank you, Maria. I take your word on expanding this among us; I may get in touch with you. It’s interesting what you said first, that the most important companies in the world will develop technologies that others will then start using, which is true and which is happening now, perhaps, with cloud computing and other technologies. My fear is that developing economies and small and medium enterprises will be just consumers of technologies developed elsewhere, mainly in the States and China, which are the main poles where all these technologies are being developed now. But that’s something that we can change with capacity building and awareness. And I’m always positive about technology, so I think that we have to go in that way. Thank you. Thank you for inviting me, and thanks to Maria and Carlos for their comments.

Nicolas Fiumarelli:
Okay. Thank you so much. So we are ending the session here. Good insights about legal enforcement: maybe it’s not the solution, but capacity building and awareness are there. And we need to stay in the loop on what is happening regarding requirements from national agencies as this entire world of different technologies approaches. So thank you so much to all the panelists, and see you next year, hopefully with news about these technologies. Thank you so much. Have a great day.

Carlos Martinez

Speech speed

143 words per minute

Speech length

1822 words

Speech time

763 secs


Arguments

Importance of DNSSEC and RPKI to secure the core of the internet

Supporting facts:

  • The DNS and routing (the ability of the network to deliver a packet to the proper destination) are necessary functions for a properly functioning internet.
  • DNSSEC and RPKI are security protocols that use digital signatures to verify DNS responses and BGP announcements respectively
  • Both DNSSEC and RPKI have a shared responsibility between signing and validation


Both DNSSEC and RPKI are prepared for a potential post-quantum scenario

Supporting facts:

  • Both DNSSEC and RPKI have algorithm agility built-in
  • When a post-quantum cryptographic algorithm is standardized, it can be applied to both DNSSEC and RPKI


Mandating technology is generally not a good idea

Supporting facts:

  • Mandating technology has failed in past instances
  • Issues with cost and benefit discrepancy in IoT and DNSSEC/RPKI
  • Post-quantum algorithms proposed so far are less than satisfactory


Report

The discussion centres around the vital role of DNSSEC (Domain Name System Security Extensions) and RPKI (Resource Public Key Infrastructure) in securing the fundamental structure of the internet. These security protocols are instrumental in safeguarding the integrity and authenticity of DNS responses and BGP (Border Gateway Protocol) announcements, respectively.

DNSSEC and RPKI operate by utilising digital signatures to verify the legitimacy of DNS responses and BGP announcements. This verification process ensures that the network delivers data packets to the correct destination, maintaining the proper functioning of the internet. The speakers unanimously recognise the crucial importance of DNSSEC and RPKI, highlighting their shared responsibility in both signing and validation processes.
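
The validation side of RPKI described above can be sketched as route-origin validation: a relying party classifies each BGP announcement against the set of Route Origin Authorizations (ROAs) it has already cryptographically validated. The sketch below shows only that classification logic; the ROA entries, prefixes, and AS numbers are illustrative documentation values, and the signed-object fetching and signature verification are omitted.

```python
from ipaddress import ip_network

# Toy stand-in for a validated ROA cache. A real relying party builds this
# from cryptographically signed ROAs fetched from the RPKI repositories.
ROAS = [
    # (authorized prefix, maximum prefix length, authorized origin AS)
    (ip_network("192.0.2.0/24"), 24, 64496),
    (ip_network("2001:db8::/32"), 48, 64496),
]

def validate_announcement(prefix: str, origin_as: int) -> str:
    """Classify a BGP announcement as 'valid', 'invalid', or 'not-found'."""
    net = ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in ROAS:
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # some ROA covers this prefix
            if net.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    # Covered by a ROA but wrong origin or too specific: invalid.
    # Not covered by any ROA: not-found (routing proceeds as before).
    return "invalid" if covered else "not-found"

print(validate_announcement("192.0.2.0/24", 64496))    # valid
print(validate_announcement("192.0.2.0/25", 64496))    # invalid: too specific
print(validate_announcement("192.0.2.0/24", 64511))    # invalid: wrong origin
print(validate_announcement("198.51.100.0/24", 64496)) # not-found
```

The three-state outcome mirrors the shared-responsibility point made by the speakers: resource holders must sign ROAs, but the protection only materialises when network operators validate announcements against them.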

On a related topic, there has been a debate concerning the potential weakening of cryptographic algorithms and the inclusion of backdoors to enable access. However, Carlos, one of the speakers, expresses a negative sentiment towards this notion. He asserts that such actions would be unwise, potentially compromising the security of cryptographic systems.

This viewpoint aligns with SDG 16, which focuses on ensuring peace, justice, and strong institutions. A positive aspect discussed is that both DNSSEC and RPKI have algorithm agility built into their design. This feature ensures that they can adapt to incipient post-quantum cryptographic scenarios.

Consequently, when post-quantum cryptographic algorithms are standardized, they can be effectively incorporated into DNSSEC and RPKI, providing continued security measures against quantum threats. The debate also encompasses the challenge of mandating technology, with the speakers highlighting instances where such endeavors have proven unsuccessful.
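
The algorithm agility mentioned above comes from the algorithm number carried in DNSSEC (and RPKI) signature records: validators dispatch on that number, so rolling out a new algorithm is largely a matter of registering a new code point. The sketch below is a deliberately simplified stand-in: real DNSSEC uses asymmetric signatures, the HMAC-based "signers" here only mimic the dispatch pattern, and algorithm number 17 with its SHA3-based handler is invented for this illustration.

```python
import hmac
import hashlib

# Registry keyed by algorithm number, as in the IANA DNSSEC registry
# (8 = RSASHA256, 13 = ECDSAP256SHA256). The handlers are keyed stand-ins,
# not real asymmetric signature schemes.
ALGORITHMS = {
    8:  ("RSASHA256", lambda key, data: hmac.new(key, data, hashlib.sha256).digest()),
    13: ("ECDSAP256SHA256", lambda key, data: hmac.new(key, data, hashlib.sha256).digest()),
}

def register_algorithm(number, name, sign_fn):
    """Algorithm agility: a new scheme is just a new registry entry."""
    ALGORITHMS[number] = (name, sign_fn)

def verify(alg_number, key, data, signature):
    if alg_number not in ALGORITHMS:
        return False  # unknown algorithm: cannot validate
    _, sign_fn = ALGORITHMS[alg_number]
    return hmac.compare_digest(sign_fn(key, data), signature)

# A hypothetical post-quantum algorithm slots in without protocol changes.
register_algorithm(17, "HYPOTHETICAL-PQ",
                   lambda key, data: hmac.new(key, data, hashlib.sha3_256).digest())

key, rrset = b"zone-key", b"example.org. A 192.0.2.1"
sig = ALGORITHMS[17][1](key, rrset)
print(verify(17, key, rrset, sig))          # True
print(verify(17, key, rrset, b"tampered"))  # False
```

The protocol-level message formats stay unchanged; only the signers and validators need to learn the new code point, which is why a standardized post-quantum algorithm can be adopted without redesigning DNSSEC or RPKI.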

They note the issues surrounding cost and benefit discrepancies, particularly in the context of the Internet of Things (IoT) and DNSSEC/RPKI implementation. Furthermore, while post-quantum algorithms have been proposed, they have not yet achieved a satisfactory level of performance.

In conclusion, the speakers collectively emphasize the importance of DNSSEC and RPKI in securing the core infrastructure of the internet. Their positive sentiment towards the efficacy of these protocols underscores their significance in maintaining a properly functioning internet. Nonetheless, there is a negative sentiment towards weakening cryptographic algorithms, highlighting the potential risks associated with such actions.

The speakers also acknowledge the need for flexibility and tailored approaches when addressing different technologies, rather than enforcing a one-size-fits-all mandate. Ultimately, this discussion highlights the ongoing challenges and complexities associated with internet security and the need for continued research and adaptation to effectively counter emerging threats.

Maria Luque

Speech speed

158 words per minute

Speech length

3182 words

Speech time

1205 secs


Arguments

Quantum technologies, especially quantum computing, pose both significant challenges and opportunities in terms of cybersecurity

Supporting facts:

  • Quantum computing has the potential to break our current cryptographic systems and expose confidential information
  • Technologies like Post-Quantum Cryptography (PQC) and Quantum Key Distribution (QKD) are being developed to combat this threat
  • PQC, while not yet standardized, can be applied today and is software-based, while QKD is hardware-based and might serve as a mid-term solution


Small tech companies rely mainly on the infrastructure of large tech companies when it comes to meeting the challenges of quantum technologies

Supporting facts:

  • Small companies generally use platforms like AWS, Microsoft Azure, Google, etc.


Although not ideal, PQC algorithms inserted in the cloud will be the most practical solution for small companies to secure their data in the next five to ten years


Different countries should focus on their strengths and specialties when it comes to planning their national quantum strategies

Supporting facts:

  • Spain chooses to invest in areas they excel at such as optics and mathematics


Report

Quantum technologies, specifically quantum computing, present challenges and opportunities in terms of cybersecurity. The concern is that quantum computing has the potential to break current cryptographic systems and expose sensitive information. To combat this threat, researchers are developing technologies like Post-Quantum Cryptography (PQC) and Quantum Key Distribution (QKD).

PQC, although not yet standardized, can be applied today as a software-based solution, while QKD requires substantial investment and the creation of new secure communication infrastructures. It is argued that governments and the technology industry need to continuously and significantly invest in quantum technologies to ensure data security in the face of the quantum threat.
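
The urgency argument above is often framed with Mosca's inequality: if data must remain confidential for x years and migrating to post-quantum cryptography takes y years, an organisation is exposed to "harvest now, decrypt later" attacks whenever x + y exceeds z, the estimated time until a cryptographically relevant quantum computer exists. A minimal sketch, with purely illustrative numbers:

```python
def migration_urgent(shelf_life_years: float,
                     migration_years: float,
                     quantum_eta_years: float) -> bool:
    """Mosca's inequality: data is at risk whenever the required secrecy
    lifetime plus the migration time exceeds the estimated time until a
    cryptographically relevant quantum computer."""
    return shelf_life_years + migration_years > quantum_eta_years

# Illustrative numbers only: records that must stay secret 10 years,
# a 5-year migration, and a 12-year quantum-computer estimate.
print(migration_urgent(10, 5, 12))  # True: start migrating now
print(migration_urgent(2, 3, 12))   # False: some slack remains
```

The point of the inequality is that migration must begin long before a quantum computer exists, since traffic recorded today can be decrypted later.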

QKD, in particular, requires high investment and the establishment of entirely new infrastructures for secure communication. On the other hand, tech companies have already started implementing PQC into their solutions, showing their recognition of the need to adapt to quantum technologies.

Organizations also need to assess and adapt their information security structures to prepare for the quantum threat. They should understand their information architectures, level of encryption, and capabilities necessary for transitioning to quantum security. The approach for organizations may vary depending on their size, with smaller ones potentially adopting PQC and larger ones engaging in quantum communication networks.

For small tech companies, the infrastructure provided by large tech companies like AWS, Microsoft Azure, and Google is crucial for addressing the challenges posed by quantum technologies. These platforms serve as a foundation for smaller companies to navigate the complexities of quantum computing.

Deploying PQC algorithms in the cloud is considered a likely solution for securing small companies’ data over the next five to ten years; although not favoured by some, it is argued to be the most practical option available to them.

However, this approach is debated, with some opposing it as a means of maintaining data security. Countries are encouraged to focus on their strengths and specialties when planning their national quantum strategies. For example, Spain has chosen to invest in areas where it excels, such as optics and mathematics, to drive its quantum technology development.

In conclusion, quantum technologies pose both challenges and opportunities in cybersecurity. Addressing the quantum threat requires significant investments in quantum technologies, assessments and adaptations of information security structures, and consideration of alternative solutions like deploying PQC algorithms in the cloud.

Additionally, countries should strategically focus on their strengths and specialities to plan effective national quantum strategies. Ongoing research and discussions are needed in this rapidly evolving field.

Moderator – Carina Birarda

Speech speed

114 words per minute

Speech length

644 words

Speech time

340 secs


Arguments

There has been a significant increase in cybersecurity incidents at the international level.

Supporting facts:

  • Global interconnectivity is a key factor behind this trend.
  • Emergence of sophisticated criminal activities like crime as a service


Adoption of internationally-recognised cybersecurity best practices is a fundamental challenge

Supporting facts:

  • Only a small number of organizations practice these standards
  • The lack of adoption is a global issue


Cybersecurity is a global issue that necessitates international collaboration to combat it

Supporting facts:

  • Cyberattacks do not respect borders or jurisdictions.
  • Information sharing at the international level is imperative.


It is essential to understand the threats we are facing for proper implementation of cybersecurity practices.

Supporting facts:

  • By understanding the threats facing IoT, web, and quantum technologies, best practices can be selected.


Report

This extended summary highlights the main points and arguments presented in the given information on cybersecurity. It also provides more details, evidence, and conclusions drawn from the analysis. The first argument states that there has been a significant increase in cybersecurity incidents at the international level, which is viewed as a negative trend.

This can be attributed to the global connectivity that has become a key factor behind this increase. Additionally, the emergence of sophisticated criminal activities, such as crime as a service, has further contributed to the rise in cybersecurity incidents. The supporting evidence for this argument is the fact that cyberattacks are often conducted by actors in multiple countries, indicating the global nature of the issue.

The second argument emphasizes the fundamental challenge of adopting internationally-recognised cybersecurity best practices. It is highlighted that only a few organisations currently practise these standards, and the lack of adoption is a global issue. The evidence supporting this argument includes the observation that just a small number of organisations implement these best practices, indicating a need for widespread adoption to enhance cybersecurity at both national and international levels.

The third argument stresses that cybersecurity is a global issue that necessitates international collaboration for effective mitigation. The fact that cyberattacks do not respect borders or jurisdictions is put forward as evidence for the need for international cooperation. Additionally, it is stated that information sharing at the international level is imperative for combating cybersecurity threats.

This argument highlights the importance of collaboration between countries to establish a robust global cybersecurity framework. The fourth argument suggests that understanding the threats facing IoT, web, and quantum technologies is essential for implementing proper cybersecurity practices. By gaining a comprehensive understanding of these threats, appropriate best practices can be selected and implemented.

The evidence supporting this argument is the observation that proper implementation of cybersecurity practices can only be achieved by addressing the specific threats posed by emerging technologies. In conclusion, the extended summary highlights the increasing number of cybersecurity incidents on an international scale as a negative trend.

The adoption of internationally-recognised cybersecurity best practices is identified as a fundamental challenge, with only a small number of organisations currently practising these standards. It is established that cybersecurity is a global issue requiring international collaboration for effective mitigation. Understanding the specific threats posed by emerging technologies is emphasised as crucial for implementing proper cybersecurity practices.

Overall, the analysis underscores the need for international cooperation and comprehensive measures to address the growing cybersecurity challenges.

Nicolas Fiumarelli

Speech speed

158 words per minute

Speech length

380 words

Speech time

145 secs


Arguments

Enforcing the adoption of technologies like RPKI, DNSSEC, IoT security standards, and quantum-resistant algorithms via legislation

Supporting facts:

  • Increasing number of IoT devices
  • Development of quantum computing
  • Existence of security standards made by IETF for IoT devices
  • The cost of implementing advanced technologies is high


Report

During the discussion, the speakers emphasised the necessity of implementing security technologies, such as RPKI, DNSSEC, IoT security standards, and quantum-resistant algorithms, through legislation. They pointed out that the rising number of Internet of Things (IoT) devices and the advancements in quantum computing pose significant security risks.

These risks can be mitigated by the adoption of robust security measures. The speakers also highlighted the existence of security standards developed by the Internet Engineering Task Force (IETF) specifically for IoT devices. These standards provide guidelines and best practices to ensure the security of IoT networks and data.

However, one speaker questioned why these security technologies are not universally enforced in all Information and Communication Technology (ICT) systems through legal obligations. It was acknowledged that the implementation of advanced security technologies comes with a high cost. This cost may pose a challenge to widespread adoption.

Nonetheless, the importance of safeguarding critical infrastructure and personal information against cyber threats and data breaches justifies the investment in these technologies. Overall, the sentiment during the discussion was neutral, indicating a balanced examination of the topic. The speakers’ arguments and evidence provided a comprehensive understanding of the urgency to implement security technologies, alongside the challenges associated with their implementation.

The discussion aligned with SDG 9: Industry, Innovation and Infrastructure, as it emphasised the need for secure and resilient ICT systems to support sustainable development. Through this analysis, it becomes evident that the adoption of security technologies through legislation should be encouraged and prioritised.

This will help ensure the protection of IoT devices and networks, while also addressing the growing threat of quantum computing to traditional encryption methods. Additionally, the development and enforcement of security standards can play a crucial role in enhancing cybersecurity practices across various industries.

In conclusion, the discussion underscored the significance of deploying advanced security technologies and standards to safeguard ICT systems. Although challenges such as high implementation costs exist, the speakers highlighted the urgency to address these concerns and apply security measures throughout the industry.

By doing so, they aimed to emphasise the need for a comprehensive approach to cybersecurity, simultaneously addressing both technological advancements and legal enforcement.

Olga Cavalli

Speech speed

151 words per minute

Speech length

1399 words

Speech time

555 secs


Arguments

Latin America faces unique technological and internet infrastructure challenges due to economic and distribution inequalities

Supporting facts:

  • Olga Cavalli is a university teacher at University of Buenos Aires, teaching internet infrastructure and telecommunications infrastructure
  • She works in public policy in Ministry of Foreign Affairs
  • She’s currently the National Director of Cybersecurity


Latin America needs to increase its participation in policy dialogues related to internet, as it’s different from other regions

Supporting facts:

  • Cavalli helped in creation of a training program for professionals to learn the rules of the Internet, understand its challenges and to participate more in policy dialogues


The pace of ICT and IoT adoption is very fast, likely leading to increased vulnerabilities due to lack of initial security designs

Supporting facts:

  • There are estimates that there will be 22,000 to 50,000 million (22 to 50 billion) IoT devices next year


Argentina has implemented several cybersecurity policies for national administration

Supporting facts:

  • In Argentina, there’s a binding resolution that the national administration must prepare a security plan, assign a focal point for contact and share information in case of an incident
  • A manual has been developed for them on what to do in the event of an incident
  • New cybersecurity strategy has also been approved


Expresses concern over developing economies and small to medium enterprises being consumers of technologies developed elsewhere

Supporting facts:

  • The most significant technology companies, such as AWS, Microsoft Azure, and Google, are expected to provide solutions based on technologies like PQC algorithms and cloud computing
  • Countries like Spain specialise in certain aspects of quantum technology development due to limited resources


Report

Latin America faces unique technological and internet infrastructure challenges due to economic and distribution inequalities. These challenges stem from the disparities in wealth and resources within the countries of the region. As a result, access to and the quality of technology and internet infrastructure vary greatly across Latin America.

To address these challenges, there is a need for increased participation in policy dialogues related to the internet in Latin America. Olga Cavalli, a university teacher at the University of Buenos Aires, has played a key role in creating a training program for professionals to learn about the rules of the Internet, understand its challenges, and participate more actively in policy dialogues.

This initiative aims to empower Latin American countries to have a stronger voice in shaping internet policies that are suitable for their specific needs and circumstances. Furthermore, the rapid adoption of Information and Communication Technology (ICT) and Internet of Things (IoT) devices in Latin America has raised concerns about increased vulnerabilities due to the lack of initial security designs.

It is estimated that there will be between 22,000 and 50,000 million (22 to 50 billion) IoT devices worldwide next year. The fast pace of adoption leaves little time for proper security measures to be implemented, which could lead to potential breaches and threats in the future.

Argentina has taken proactive steps in addressing cybersecurity concerns. The national administration has implemented binding resolutions that require the preparation of a security plan, the assignment of a focal point for contact, and information sharing in the event of a cyber incident.

Additionally, a manual has been developed to guide the national administration on how to respond to such incidents. A new cybersecurity strategy has also been approved, showcasing Argentina’s commitment to ensuring security in the digital realm. Developing countries and small to medium enterprises (SMEs) face significant challenges in keeping up with rapid technological changes.

These challenges include restrictions on importing certain products and hardware, as well as a lack of human resources, as trained professionals often migrate to developed countries in search of better opportunities. The combination of limited resources and a lack of technical expertise hampers their ability to understand and afford new technologies, creating a widening technology gap.

Moreover, developing economies and small to medium enterprises are often consumers of technologies developed elsewhere, which raises concerns about the global technology gap. While major technology companies like AWS, Microsoft Azure, and Google are expected to provide solutions based on emerging technologies like Post-Quantum Cryptography (PQC) algorithms and cloud computing, developing economies and SMEs rely on these technologies without actively contributing to their development.

This dependence on technologies developed elsewhere puts them at a disadvantage. To address these challenges, capacity building and awareness are advocated as essential measures. By investing in the development of local technological capabilities and creating awareness about the importance of technology, Latin American countries can reduce their reliance on technologies developed by other countries.

This would help narrow the global technology gap and allow them to actively contribute to technological advancements that suit their specific needs. In conclusion, Latin America faces unique challenges in technological and internet infrastructure due to economic and distribution inequalities.

Increasing participation in policy dialogues, addressing cybersecurity concerns, and bridging the technology gap are crucial steps towards creating a more inclusive and technologically advanced region. Additionally, capacity building and raising awareness about technology will empower Latin American countries to shape their own technological future.

Wout de Natris

Speech speed

158 words per minute

Speech length

1388 words

Speech time

528 secs


Arguments

Lack of deployment of cybersecurity measures in IoT is a major issue

Supporting facts:

  • Cybersecurity was not an issue in the early internet, but has become a problem with worldwide connectivity
  • The technical community has adjusted the code, but most governments and industries do not demand security by design
  • IoT devices are usually insecure by design


Security should be inherent in all forms of ICT and should undergo formal testing before entering the market

Supporting facts:

  • When new technology enters the market, it is usually untested for security
  • ICT cannot be legislatively controlled because of the rate of innovation


Report

The lack of cybersecurity measures in Internet of Things (IoT) devices is a pressing issue that demands attention. While the technical community has made efforts to address this concern, the majority of governments and industries have not yet prioritised security by design in IoT.

This oversight has resulted in widespread vulnerability and the potential for malicious attacks. Initially, cybersecurity was not a concern during the early days of the internet, as worldwide connectivity was limited. However, with the rapid expansion and integration of IoT devices into our daily lives, the need for robust security measures has become increasingly evident.

Unfortunately, IoT devices are often designed without adequate security measures, making them susceptible to cyber threats and potentially compromising users’ personal data. One argument put forth is that governments and large corporations should play a crucial role in setting the standard for security in IoT.

An example of this proactive approach is seen in the Dutch government, which has taken the lead by imposing the deployment of 43 different security standards. This demonstrates the importance of demanding high levels of security in IoT devices. Another concerning aspect is the lack of rigorous security testing before new technology, including ICT, enters the market.

The fast pace of innovation and the urgency to bring products to market often result in inadequate security measures. It is argued that security should be a fundamental consideration and undergo formal testing before any form of ICT is released, minimising risks for users.

On a more positive note, international cooperation and information sharing are emphasised as pivotal factors in staying ahead in terms of cybersecurity. The power of the internet lies in its ability to facilitate global discussions, enabling the sharing of knowledge and experiences across borders.

Governments and larger industries need to be made aware of their role and potential influence in addressing cybersecurity challenges, fostering collaboration and cooperation on a global scale. In conclusion, the lack of cybersecurity measures in IoT devices poses a significant challenge that needs to be addressed urgently.

Efforts from both the technical community and various stakeholders are required to push for security by design and the implementation of robust standards. Governments and large corporations hold the responsibility of leading the way, setting the standards for security in IoT.

In addition, rigorous security testing should become a prerequisite before any form of ICT is introduced to the market. Furthermore, international cooperation and information sharing are critical for staying ahead in the ever-evolving landscape of cybersecurity. Only through collaboration can we tackle the challenges and vulnerabilities inherent in the interconnected world of IoT.

Resilient and Responsible AI | IGF 2023 Town Hall #105


Full session report

Mariam Jobe

The Africa Youth Internet Governance Forum was recently held, highlighting the significant role of young people in shaping the digital future. The forum covered various topics, including cyber security, data privacy, digital inclusion, and the need for comprehensive data laws. One key argument was the lack of knowledge among young people about these important issues, emphasizing the need for educational outreach efforts.

The forum also emphasized the importance of internet access and digital literacy in underserved and rural communities. It recognized that improving internet access and digital literacy is crucial to ensuring equal opportunities and promoting socio-economic development.

Discussions addressed the issue of cyber crimes and the need for safe spaces to report such problems. The importance of an ethical framework surrounding artificial intelligence was also highlighted. It was noted that some countries lack comprehensive data laws, hindering their ability to effectively address cyber crimes.

An intergenerational session between the youth and Members of Parliament (MPs) fostered collaboration and highlighted the importance of government-youth partnerships. Involving young people in policy development and decision-making processes is crucial.

In conclusion, the Africa Youth Internet Governance Forum underscored the pivotal role of young individuals in shaping the continent’s digital future. Increased education and awareness, inclusivity, ethical considerations, and citizen participation were identified as crucial components. Internet access and digital literacy in underserved communities were recognized, along with the need for collaboration between the youth and the government. The forum provided a platform to address pressing issues and generate innovative solutions for Africa’s digital transformation.

Audience

The analysis covered a range of topics discussed by various speakers. The first speaker disagreed with the commonly held belief that advanced tech elements such as AI and blockchain are the keys to innovation and development. Instead, they emphasized the importance of isolating, understanding, and tackling diseases like COVID-19, drawing on their seven years of engagement with afflicted individuals, and considered AI and blockchain distractions when it comes to public health crises.

Another speaker focused on the role of traditional forms of innovation and governance in driving improvement. They highlighted the contributions of African engineers and economists who are actively tackling COVID-19. The speaker emphasized the crucial role played by telecommunications regulators and considered traditional forms of governance, such as the rule of law, essential for improvement.

The role of AI in technological advancement was also discussed by a speaker with 40 years of experience in technology. They cited the example of human genomics and how integration of technology did not eliminate medical jobs but enhanced precision medicine. The speaker viewed AI as just another technology that should be adapted and integrated, rather than feared.

Legislators’ role in adapting technology and their potential to get distracted by job loss fears were highlighted by another speaker, who was both a lawmaker and technologist. They emphasized the importance of focusing on adapting technology quickly to avoid being left behind.

The importance of sharing reports and learnings with the leadership of each respective National Assembly was emphasized by a participant who presented a report in Abuja. They urged not to turn legislative participation into a mere holiday or jamboree but to make meaningful contributions.

Another participant suggested the need for a directory of ongoing initiatives at the continental level to be shared among all parliamentarians. They mentioned learning about several initiatives at the continental level for the first time during the meeting.

The need for international development partners to customize their support based on the priorities of each country was emphasized by a participant. They believed that generic support often does not address the priorities of individual countries and asserted that each country should determine its own priorities and approach development partners accordingly.

Concerns were raised over the limited participation of African countries in hosting the Internet Governance Forum (IGF), with less than 20 out of 54 African nations being active in hosting it. The speaker expressed the need for African nations to be more involved and accountable in terms of hosting IGF.

The establishment of an accountability framework within IGF for multi-stakeholders and countries was advocated by a speaker. They urged the need for a mechanism to hold stakeholders accountable.

The need for a vision and strategic plan for growing and strengthening IGF within Africa was also highlighted by a speaker. They emphasized that having such a plan would be instrumental in achieving the goals of IGF.

The potential contribution of assistive technologies to the GDP was mentioned, highlighting the importance of utilizing these technologies to serve disabled communities.

It was noted that African meetings and conferences often neglect the discussion on disabled communities, indicating a lack of attention and inclusivity in these forums.

The utilization of traditional African communal values to ensure the realization of IGF goals was suggested by a speaker, emphasizing the importance of cultural context in achieving the goals.

Overall, the analysis highlighted the need for innovation, inclusive policies, and partnerships to achieve sustainable development goals. It shed light on the importance of integrating advanced technology responsibly, prioritizing country-specific priorities, and ensuring inclusivity in decision-making processes. The speakers’ perspectives provide valuable insights into various aspects of development, governance, and technology, contributing to the ongoing discourse on achieving sustainable development.

Martin Koyabe

The analysis provided covers several key points related to cyber capacity building and cybersecurity in Africa.

The first point discussed is the AU-GFCE collaboration project, which aimed to build resilience and ensure cyber capacity building within the continent. This project focused on three key areas: assessment of priorities for African countries regarding cyber issues, sustainability through investment in expertise, and establishment of institutional memory. The analysis highlights that significant investment has been made in digital infrastructure due to the increased demand for these services during the COVID-19 pandemic.

The next area of focus is the need to enhance security in Africa through investing in training and developing cyber skills. It is mentioned that protecting infrastructure is a high priority for African countries. The GFCE has established the Africa Cyber Experts Community, which consists of over 80 experts from 37 countries. Additionally, there is a call to facilitate opportunities and development of cyber skills for individuals in marginalized areas. The GFCE and AU have also established the network of African women in cyber.

The importance of political will and funding in boosting cybersecurity is emphasized in the analysis. It is noted that many projects in Africa lack sustainable funding or resources, leading to their discontinuation or inadequate sustainment after the primary funding ends. The analysis argues that countries need to internally invest in cybersecurity to ensure the sustainability of projects. Furthermore, there is a critical need for sensitization at various political and decision-making levels to enhance cybersecurity efforts.

The analysis also mentions an upcoming meeting in Ghana, where cybersecurity experts and capacity building development partners will discuss cyber-related issues. It is highlighted that this meeting is of significant importance as it is the first of its kind.

The situation of AFRINIC, an organization facing challenges and undergoing issues, is also addressed. The analysis mentions that AFRINIC is currently under litigation and requires the resolution of its problems. However, it is recommended to reserve extended comments on this situation and let the process take its due course.

Finally, the importance of sustaining mechanisms for auditing authentic organizations is emphasized. This is seen as crucial in ensuring the effectiveness and credibility of these organizations.

Overall, the analysis focuses on the need for cyber capacity building and cybersecurity in Africa, highlighting the importance of various factors such as collaboration, investment, political will, funding, and sustainability. It also provides insights into specific initiatives and challenges, contributing to a comprehensive understanding of the topic.

Chidi

The African IGF (AIGF) emphasises the importance of a multi-stakeholder approach to ensure its success. This approach involves collaboration from various stakeholders including government, civil society, academia, and the private sector. The AIGF recognises that for effective addressing of the challenges and opportunities of the digital landscape, involvement of all these stakeholders in decision-making is necessary.

Creating an enabling environment is a key factor for the success of the AIGF. This refers to the need for policies and regulations that support the growth of the digital economy and ensure equal access to digital technologies for all. It is also crucial to enforce instant cyber laws to protect individuals and organisations from cyber threats and ensure the security of digital systems.

In addition to an enabling environment and instant cyber laws, political will is essential for shaping the digital landscape. The AIGF highlights the importance of political leaders showing commitment to promoting digital inclusion and embracing technology for development. This includes providing necessary resources and support for digital initiatives.

Another important aspect discussed is the need for inclusivity and ethical AI principles. The AIGF argues that an inclusive digital environment should be created to ensure everyone benefits from technological advancements. This includes addressing the issues of digital divide and ensuring no one is left behind. The AIGF also highlights the importance of a legislative framework to promote ethical AI principles and prioritize inclusivity.

Nigeria is recognised as a country playing a pivotal role in shaping the trajectory of technological advancement. The country has put in place strategic objectives, initiatives, regulatory instruments, and platforms to foster the growth of the digital economy. Nigeria has also taken major steps towards harmonising rights of way, which are crucial for the development of ICT infrastructure.

However, Africa still faces challenges such as inadequate visibility of individual countries’ activities and insufficient collaborations within the African region. It is imperative for African countries to share information in real-time and work together to achieve their technological goals.

Investment in research and development for emerging technologies is seen as a fundamental step towards technological advancement. The AIGF urges stakeholders to seize the opportunity and increase research capacity to drive innovation and stay at the forefront of emerging technologies.

AFRINIC, responsible for managing internet resources in Africa, is mentioned to be in a state of crisis or dysfunctionality. This raises concerns about its impact on internet security and sustainability in Africa. The internet, increasingly treated as a commodity, ultimately rests on the IP networks whose resources AFRINIC allocates.

Another key argument made is the importance of Africa taking charge of its internet infrastructure to maintain cybersecurity. The AIGF highlights the need for African countries to have control over their internet infrastructure to effectively combat cybersecurity issues. This requires strengthening internet governance and building strong institutions to ensure the security and stability of the internet.

In conclusion, the African IGF advocates for a multi-stakeholder approach, an enabling environment, instant cyber laws, and political will to shape the digital landscape. Inclusivity and ethical AI principles, backed by a legislative framework, are also considered essential. Nigeria plays a crucial role in technological advancement, but challenges such as inadequate visibility and insufficient collaborations persist. Investment in research and development is necessary, and concerns are raised about the crisis within AFRINIC and its impact on internet security in Africa. Taking charge of internet infrastructure is crucial for cybersecurity on the continent.

Moctar Seck

Africa is facing significant digital challenges that hinder its progress in the digital age. Chief among them is the connectivity deficit: infrastructure gaps leave roughly 60% of the African population without access to the internet. This lack of connectivity acts as a barrier to economic development and social inclusion. To overcome this challenge, Africa needs to ensure that broadband is accessible to everyone on the continent by 2030, which would require substantial investment from the private sector.

Another crucial challenge is the gender digital divide. Presently, only 45% of females in Africa are connected to the internet, compared to 85% of males. Bridging this divide is essential for achieving gender equality in the digital era. It is worth noting that the internet market in Africa has the potential to reach $180 billion by 2025, further highlighting the economic opportunities that can be unlocked by addressing the gender digital divide.

Furthermore, the lack of legal identity for 500 million Africans poses a significant obstacle to digital transformation. Without legal identification, individuals are unable to fully participate in the digital economy and access essential services. Resolving this issue is crucial to ensure that every African can benefit from the opportunities presented by the digital age.

Cybersecurity challenges are also prevalent in Africa, with the cost of cybersecurity issues amounting to 10% of the continent’s GDP. Additionally, terrorists are increasingly exploiting digital avenues, underscoring the need for robust cybersecurity measures to protect individuals and institutions in Africa.

While artificial intelligence (AI) presents opportunities for growth and innovation, it also brings challenges that require regulation. Africa’s young population, projected to constitute 70% of the continent’s population by 2050, needs to be prepared for advancements in AI. Implementing regulations around AI is necessary to harness its potential benefits while mitigating associated risks.

The Global Digital Compact, which will shape the future of digital development globally, necessitates African input to ensure equitable sharing of digital technology benefits. Active participation from Africa in shaping this compact is essential.

Resolving the AFRINIC issue, considered of utmost importance, requires a meeting between the African Economic Community (AEC), the Economic Commission for Africa (ECA), and Smart Africa. The resolution of this issue is crucial for the development of the continent’s digital infrastructure.

Network access and control are vital for digital transformation, particularly for Africa’s large youth population, accounting for 42% of global youth. Lack of access and control stifles progress, hindering the continent from fully harnessing the potential of digital technologies.

The reliance on AI and its perpetual usage of network data raise concerns about privacy and security. Establishing a regulatory framework is important to address these issues and ensure responsible and ethical use of AI.

Capacity building for regulators is essential to keep up with rapid technological advancements, such as AI, blockchain, IoT, and nanotechnology. Regulators need to stay ahead of these developments and understand their implications to effectively safeguard users’ rights and interests.

The African Internet Governance Forum (IGF) is a growing multi-stakeholder forum where key issues related to digital technology are discussed. It distinctively differs from the World Summit on the Information Society (WSIS) forum, where government decisions are made. Increased participation from government, the private sector, and civil society in the African IGF is necessary for a more inclusive and comprehensive discussion on digital technology.

The organization of the IGF in Africa depends on the renewal of its mandate. The successful hosting of the IGF in Ethiopia highlights the potential for its further expansion in Africa. However, the renewal of its mandate by 2030 is crucial to ensure its continuity and effective contribution to digital governance in the region.

In conclusion, Africa faces significant digital challenges that need to be addressed for the continent to fully participate in the digital age. These challenges include the deficit of connectivity, the gender digital divide, the lack of legal identity, cybersecurity issues, opportunities and challenges posed by AI, and the need for capacity building for regulators. Active participation by Africa in shaping the Global Digital Compact is crucial, while the resolution of the AFRINIC issue is of utmost importance. Furthermore, the African IGF provides a platform for important discussions on digital technology, and its expansion and inclusive participation are necessary for effective digital governance in Africa.

Sam George

The analysis focuses on discussions among speakers regarding various topics related to data policies and digital infrastructure in Africa. One key point highlighted is the important role played by parliamentarians in bridging the gaps between civil society, the technical community, and the government. By attending events such as the African School on Internet Governance (AfriSIG) and the Internet Governance Forum (IGF), parliamentarians gain insights into the challenges and opportunities in the digital realm. They can then initiate or support the government in developing legislation to implement data policies.

Another crucial aspect emphasized is the need for harmonisation of data policies across African countries. The case of the Nigerian company Jumia operating in multiple African nations illustrates how challenges can arise without proper data flow across borders. Without harmonisation, these challenges can hinder the growth and development of businesses operating across countries. Therefore, speakers argue for the adoption of consistent and coordinated data policies across the continent to promote a conducive environment for cross-border data flow.

The importance of prioritising funding for digital infrastructure also emerged as a key point. In upcoming budgeting cycles, it is recommended to improve funding for digital public infrastructure. This infrastructure would serve as a secure space to house data and support the stability and growth of digital services in Africa. Given the increasing importance of digital technology in various sectors, adequate funding for digital infrastructure is seen as crucial for the continent’s socioeconomic development.

Regarding the intersection of state security and digital rights, a neutral stance is taken. While it is recognised that the state has the right to secure data, it should not infringe upon the digital rights of citizens. Striking a balance between these two aspects is necessary to ensure the protection and privacy of individuals’ data while maintaining an environment of national security.

Another noteworthy point is the significance of building the capacity of parliament members through civil society engagement. Deepening knowledge on legislative subjects and engaging with parliamentary portfolio committees are seen as important steps in empowering parliamentarians to effectively address the complex challenges of data policies and digital infrastructure.

Lastly, the analysis also highlights specific stances taken by some speakers. One speaker supports the implementation of the African Union (AU) data policy framework and emphasises the need for legislation to support its implementation. Additionally, the speaker suggests the importance of data policy harmonisation across the African continent.

Another speaker advocates for increased funding towards digital public infrastructure. The Parliamentary Network on Internet Governance aims to improve funding allocation, and it is noted that most parliaments will be resuming work in a few weeks, providing an opportunity to further push for increased funding.

In conclusion, the analysis highlights the key points and arguments made by speakers on various aspects of data policies and digital infrastructure in Africa. These include the vital role of parliamentarians, the need for harmonised data policies, prioritisation of funding for digital infrastructure, and the balance between state security and digital rights. Civil society engagement and capacity building for parliament members are also seen as crucial. The implementation of the AU data policy framework and increased funding towards digital public infrastructure are supported. Overall, the analysis provides valuable insights into the discussions surrounding data policies and digital infrastructure in Africa.

Moses Bayingana

Digital transformation is considered critical for Africa’s development and plays a significant role in achieving Agenda 2063 and the UN Sustainable Development Goals. The African Union Commission has developed strategies to drive digital transformation and boost Africa’s digital economy. Over the past decade, the digital sector’s contribution to Africa’s GDP has increased from 1.5% to over 3%. This growth highlights the potential for further economic development through increased digitisation.

To facilitate the digitisation process, the AU has adopted the AU Data Policy Framework to ensure the smooth flow of data. Moreover, support has been extended to Internet Governance Forum organisations, demonstrating the commitment to fostering a conducive environment for digital transformation.

Investing in Africa’s youth is crucial as they hold the potential to drive Africa’s digital economy. With approximately 60% of the continent’s population below the age of 25, Africa’s youth play a significant role in shaping its future. Additionally, it is projected that Africa’s population will reach 2.5 billion by 2050, further emphasising the importance of youth empowerment to harness their potential in the digital sector.

The need to bridge the digital divide is also addressed. Efforts are being made by the African Union Commission to develop strategies and frameworks to regulate digital transformation and ensure the continent’s digital future. The adoption of the AU Convention on Cyber Security and Personal Data Protection is a notable step in safeguarding Africa from cybercrime, as it is identified as a prime target due to its low awareness rate.

In terms of implementation, an institutional architecture and information framework have been devised to monitor the progress of the digital transformation strategy. Member states have nominated focal points for digital transformation, ensuring a collective and coordinated approach towards achieving the set goals. Engagement with all actors across the continent is planned to foster collaboration and support in the implementation process. Furthermore, a comprehensive evaluation is scheduled for 2025, which will provide insights into the progress made and identify areas that require further attention.

Finally, a consultative approach is being employed to grow the African Internet Governance Forum. Recognising the importance of partnerships, strategies are developed through a consultative process, and collaboration is maintained with the European Commission and other stakeholders.

In conclusion, Africa recognises the significance of digital transformation for driving development and achieving its strategic goals. With a focus on youth empowerment, bridging the digital divide, regulating digital transformation, and monitoring implementation, Africa is positioning itself for a prosperous digital future. The efforts of the African Union Commission, coupled with collaboration from key stakeholders, demonstrate the commitment to harnessing the power of digitisation for the benefit of the continent and its people.

Moderator

The African Internet Governance Forum (IGF) discussed various key topics related to internet governance in Africa, including the role of parliament members as a bridge between civil society, the technical community, and the government. It was emphasized that parliament members play a critical role in initiating or supporting government efforts to implement data policy frameworks. Harmonized data policies across African countries were also identified as necessary for seamless data management and operations of companies like Jumia.

Furthermore, the forum highlighted the need to increase funding for digital public infrastructure, and the importance of civil society’s engagement with parliamentary portfolio committees for effective legislation. The pivotal role of youth in shaping the digital future was emphasized, as well as the need for advocacy for changes in the digital landscape involving all stakeholders. The forum also stressed the importance of improving internet access and digital literacy from grassroots levels, and the need for safe spaces to report cyber crimes and ethical frameworks tailored to the African context.

Investment in Africa’s young generation for driving the digital economy and sustained funding for cybersecurity projects were identified as crucial. The African Network Information Centre (AFRINIC) dysfunctionality was acknowledged, while Nigeria’s readiness to host the global IGF was welcomed. The forum also highlighted the need to address the gap in internet penetration in Africa. Overall, the African IGF provided a platform for valuable discussions and emphasized collaboration, policy development, and investment in various areas of internet governance. The active monitoring of the digital transformation strategy for Africa was also highlighted as a positive step.

Lillian Nalwoga

The African Internet Governance Forum (IGF) has seen increasing interest and participation from African stakeholders. The effectiveness of the multi-stakeholder approach in the African IGF has become evident, with support from parliamentarians, ministers, and the private sector. The presence of the parliamentarian network and African Parliamentarian Symposium highlights the importance of collaboration in shaping internet governance in Africa. Additionally, governments and the private sector at the regional and national levels have shown interest in engaging with the IGF.

One key argument is the need to implement recommendations and discussions from the IGF at the regional and national levels. These recommendations, discussed in Kyoto and Abuja, should be applied to enhance internet governance practices across Africa.

There has been an increase in the interest and participation of African stakeholders in Internet Governance (IG) processes. Statistics from the host country show that 3,100 people registered for the IGF, both onsite and online. The eagerness of countries to host future forums indicates the growing importance of these conversations in Africa. This increasing interest reflects a recognition of the impact that effective internet governance has on achieving Industry, Innovation, and Infrastructure (SDG 9) and Partnerships for the Goals (SDG 17) in Africa.

It is concluded that the IGF should continue to support and partner with African stakeholders, with significant support received from global partners through the UN IGF Secretariat. This support acknowledges Africa’s potential in the digital future and encourages collaboration and learning opportunities for the continent’s development.

In summary, the multi-stakeholder approach in the African IGF is effective and relevant. Implementing recommendations at regional and national levels is vital, considering the increasing interest and participation of African stakeholders in IG processes. Continued support and partnership between the IGF and African stakeholders are essential for the digital future of Africa.

Session transcript

Moderator:
Good afternoon, everyone. We just give ourselves 30 seconds. We should be on. As we wait, I want to confirm that our online panelists are there. Are you there? Yes, I am. Thank you. Perfect. Thanks. Moses, are you there? Moses from African Union Commission, are you there? Yes, I am. I’m connected. Thank you, Moses. So good afternoon again, and welcome to the African Union Open Forum 2023. We are delighted that you are all able to join us, and considering we have already taken a few minutes, we’ll just go into the program. And to start with, I will have Dr. Chidi Diogo, who will be giving us an overview or a highlight of what happened in the Africa IGF that was held in Abuja. And he is the head, New Media and Information Security, Nigeria Communication Commission. Dr. Chidi, the floor is yours.

Chidi:
Thank you very much. This report refers to the African Internet Governance Forum that was held between the 19th and the 21st of September 2023 in Nigeria. To do my presentation, I prepared an outline that is written as follows. Distinguished guests, honorable delegates, ladies and gentlemen, we express our appreciation for the cooperation displayed during the recently concluded African Internet Governance Forum held in Abuja, Nigeria, between the 19th and the 21st of September 2023. Now the details of the forum are as shown on the board. The theme of the forum was transforming Africa’s digital landscape, empowering inclusion, security and innovation, followed by the facilitators of the program, which included the government of Nigeria and specifically the National Assembly, through the efforts of the various committees led by Senator Shuaibu Afolabi-Salusi and Honorable Adedjiji Stanley Olajide, and the Ministry of Communications and Innovations, the Nigerian Communications Commission and other relevant stakeholders. In general, the forum had an impressive turnout, recording some 3,105 participants; about 700 of those attended in person, while about 1,600 participated virtually. Throughout the forum, there were various activities leading up to the main event. First, there was the African School of Internet Governance between the 13th and the 18th of September 2023. The thrust of the school was to build Internet governance capacity in Africa, focusing on the African Union data policy framework. This was closely followed by the African Parliamentary Symposium, and the thrust of that was the contribution of the parliamentarians to shaping digital trust on the African continent. Lastly, there was an African Youth Internet Governance Forum on the 18th of September. The thrust was basically emerging technologies, leveraging innovation for sustainable development and youth empowerment. 
There were various sub-themes, numbering up to 40, but the major ones had to do with cybercrime, human rights and freedom, universal access and meaningful connectivity, cyber security, digital divide and inclusion, artificial intelligence and emerging technologies. So we had a very fruitful time in Nigeria, and very meaningful deliberations were undertaken, the summary of which I’m presenting as follows. A multi-stakeholder approach is key and required for the AIGF. The need for an enabling environment cannot be overemphasized. Enforcement of instant cyber laws is very necessary, and the display of political will to shape the digital landscape is required. Not to mention the legislative framework, which in essence would promote ethical artificial intelligence principles and make inclusivity a data priority. And there’s a need to develop a strong foundation of digital identities across the continent. And lastly, the adoption of an African payment and settlement system. These pillars were entirely agreed upon by the forum. And now, to localize our efforts in Nigeria: the federal government of Nigeria, just as we believe so many other countries are doing, is playing a pivotal role of multi-stakeholderism in shaping the trajectory of technological advancement. And in doing so, it has put in place strategic objectives, initiatives, regulatory instruments and platforms for all stakeholders to come together from time to time to assess where we are, and then, most importantly, to determine where we’re going. Overall, this ensures inclusivity, security, and innovation. Nigeria, like most countries that we have heard about, has also taken major steps towards the harmonization of rights of way across the 36 states of the federation, which means that the barrier to entry to play in our industry has been lowered. 
It also ensures that there is thorough connectivity and fair competition amongst the players, which also translates to ease of licensing for intending operators. And of course, the price regime is regulated, open access and non-discriminatory. And finally, all these have contributed to what you might call the universal access and service obligations of the commission. As a continent, our work is just beginning in a very simple way, and we need to continue to work together to make Africa a shining example of digital progress. Together we’ll overcome the challenges and seize the opportunities that are emerging in our markets. While some countries have made significant progress, other countries cannot be said to have made similar progress. We identify some challenges that we must overcome as a continent, and these include inadequate visibility of individual countries’ activities, which can create problems in terms of sharing information. While our different countries are working tirelessly to ensure that our digital footprints are all over, it’s very important that we come together to share information in a way and a manner that will be real-time and achievable. We also fear that there is insufficient collaboration within the African region. This second point somehow points to the first one, which means there’s a need for a continuous handshake amongst the various continental stakeholders. It did appear that research and development are inadequate across the federating units. We all know how disruptive the emerging technologies can be, and the speed with which they are entering our everyday life, and therefore the need for collaboration and for research efforts to be made cannot be overemphasized. What we don’t want to do is to continue to dwell on crying about the disruption of the OTTs and the other emerging technologies, when we can literally seize the opportunity to increase our research capacity and get funded. 
And lastly, there have been concerns raised about the inadequate platforms for capacity development, especially for the digital grassroots. So with the appreciation that I rendered at the beginning of this, I would like to conclude by saying that the African Internet Governance Forum held in Abuja in September attracted good participation from across the whole of Africa. As the host country, we are grateful to all those that attended. And we are even more grateful to those who took the time to write to us to express their profound gratitude and to tell us how beautiful our country is. And so, having talked about the prospects from which we all stand to benefit, especially in research and collaborations, and having also identified the few challenges, it is very important that, as Africans, we do what is needed for us to be able to compete effectively in the digital world. Thank you very much.

Moderator:
Thank you, Dr. Chidi, for that elaborate report about the Africa IGF. I’m now going to ask Honorable Sam George just to give key highlights from the parliamentary symposium. Thank you.

Sam George:
Thank you very much, Madam Chair, and to the honorable members in the house, Big Mommy, Madam Mary, and everyone gathered here. For us as members of parliament, it was a very long IGF because we started with AfriSIG, the African School on Internet Governance, which was not a school. It was a boot camp. It was a boot camp that really stretched members of parliament. We started at 8 AM, closed at 8 PM, and had to submit our assignments by 5 AM the following day, thanks to Henriette. She’s not a very big friend of members of Parliament. But we put out a very important document from that session that led us into the parliamentary track, and what that did for us was to highlight the opportunities that exist for members of Parliament to begin to act as the bridges between civil society, the technical community, and the executive or government in ensuring that we don’t leave anyone behind and we close the digital gap. One of the key things that we discussed and highlighted in our parliamentary sessions was the role that members of Parliament have to play in ensuring that we either initiate or support the executive to bring legislation that will help with the implementation of the AU data policy framework, because we realize that it’s important that as a continent we need to have harmonization of our data policies across the board. And one of the things that we realized was that data policies do not necessarily just end with data protection legislation, but also require the necessary harmonization and synchronization, and one of the big examples we used was a Nigerian big tech company called Jumia. Jumia works as a Nigerian company but operates across about 16 African countries, including Ghana. So if we do not have proper data flows across the African continent, we’re going to have challenges. 
So the issues of data sovereignty and cross-border data flows came up strongly, and how as members of Parliament we need to ensure that even as we look at protecting critical and sensitive national data under the precepts of data sovereignty, we also need to realize that we are increasingly connected and that we cannot survive on our own without cross-border data flows. Another key thing that we looked at was the need for us to prioritize funding. Funding for digital public infrastructure is a very key thing that we need to look at in our countries. So we’re looking to see how well we can improve the funding that goes to digital public infrastructure, and that’s a key thing that the African Parliamentary Network on Internet Governance is looking to do in terms of the budgeting cycle that’s going to happen in our various countries when parliaments resume in the next few weeks. Most parliaments are resuming in about a week or two. And you cannot talk about the free flow of data if you don’t have the infrastructure in your country in the first place to house the data, and to house it in a secure manner. Another key thing we discussed was the issue of that fine line between state security and the digital rights of citizens. We recognize as members of parliament that the state has a right to have access to data and information. However, the state must do so in a manner that does not infringe on the digital rights of citizens. So these are some of the things that as members of parliament we left Abuja with. And we’re very confident that as a network of members of parliament, we can pat ourselves on the back. Most times they say MPs don’t sit in a room for very long. 
But we did, and we’re more passionate about internet governance than the tech community themselves. And you’ve got champions. You’ve got champions for you. But one of the big takeaways, which I’ll end on, because if you leave me as a politician, we’ll talk until tomorrow, one of the key takeaways we left Abuja with was the fact that civil society and the technical community are very important in helping build the capacity of members of parliament. You can only push legislation based on how deep your knowledge of a subject is. And so civil society needs to engage with parliamentary portfolio committees and the members on those committees. If we were to do a sample here and ask many people from civil society to mention five members of the portfolio committees in their national parliaments, many of them cannot. If you don’t build relationships with the members of these portfolio committees, you can only continue to cry outside, but you won’t have the change that you want to see. And it was refreshing for us that we had members of parliament like Honorable Stanley Adediji from the Nigerian House of Reps and the chairman of the Senate committee. We’re told Nigerian senators don’t like to sit in meetings, but they’ve shown us that, first, they have competence, and secondly, they’re willing to work if they are engaged. So civil society, you have champions of internet governance in parliamentarians; work with us to get what you need from government. Thank you very much.

Moderator:
Thank you, Honorable Sam, for those insights, and we challenge you: next year, same time, we want to see what has been done in those areas. Now I give the floor to Mariam Jobe to give highlights of what happened in the youth session.

Mariam Jobe:
Hi, good afternoon. I’m Mariam Jobe, as she already introduced, and I will just highlight some of the key takeaways that we had from the Africa Youth Internet Governance Forum, held a day before the main Africa IGF. It brought together a very diverse set of new voices from the youth perspective, and we addressed critical issues related to internet governance, youth empowerment and emerging technology. We highly emphasized the pivotal role that young people play in shaping the digital future and their importance in policy development and enforcement in this regard. We urge everyone, you know, policymakers and relevant stakeholders, including members of Parliament, civil society and government, irrespective of their positions honestly, to advocate for changes in the digital landscape. We also addressed a very concerning issue, which is the lack of knowledge among young people about issues around internet governance, particularly cyber security laws, data privacy and digital inclusion, and the need for continuous outreach efforts to educate and empower youth who are unaware of internet governance issues and how they affect them in their daily lives and their daily usage of the internet. Participants called for initiatives to integrate internet governance and technology into the education systems in our various African countries, especially in underserved communities and rural communities. We also highlighted the importance of improving internet access and digital literacy from the grassroots level. Another key highlight of the session was that we delved into discussions around artificial intelligence, the need for safe spaces to report problems and cases such as cyber crimes, the importance of ethical frameworks that are tailored to the African context, and the lack of comprehensive data laws in some countries. 
I know that Nigeria has, you know, made progress in that, but there are many African countries that still lack comprehensive data laws, and that requires a lot of attention. And we also, you know, concluded the event with an intergenerational session between the youth and the MPs, where we had an open dialogue. The MPs heard what the youth want, what we want them to consider, and we talked about how they can support youth and their visions. While specifics were not fully detailed, you know, fostering collaboration between the youth and government representatives emerged as a crucial step in addressing digital challenges. In conclusion, overall, I think the key highlights were that we need increased education and awareness, inclusivity, ethical considerations, and citizen participation in order to build a sustainable digital future for Africa. Yes, thank you.

Moderator:
Thank you, Mariam. We need collaboration, we need innovative ways of engaging our youth, and we need to ensure that we are all holistically moving together in capacity building, safety online, and the like, so we all have to work together. Now, my next speaker is online, Mr. Moses Bayingana, the acting head, Information Society Division, African Union Commission. Moses, can you take the floor? Yeah, thank you, Mumbula.

Moses Bayingana:
Distinguished participants, ladies and gentlemen, on behalf of the African Union Commission, I welcome you all to this AU Open Forum. Let me use this opportunity to thank the government of Nigeria and the African IGF for the successful organization of the 2023 edition in Abuja, Nigeria. Our leaders have recognized digital transformation as a driver for development and critical to the attainment of Agenda 2063 and the UN Sustainable Development Goals, developing the Digital Transformation Strategy for Africa as a master plan that will drive our digital agenda up to 2030. Across Africa, the digital economy is on the rise. In the past decade, its contribution to GDP has grown in many economies from 1.5 percent to more than 3 percent. While there is progress, there is still a lot to be done. Connectivity still lags behind usage in Europe, and Africa’s readiness against cybercrime remains low, making it a prime target for cybercriminals. At the continental level, the AUC has made progress in the digital environment, working with implementing partners across the continent to build on common strategies and frameworks to regulate Africa’s digital transformation. These strategies and frameworks will also facilitate harmonization across the continent. This includes the development and adoption of the Digital Transformation Strategy for Africa, which sets out a vision to build an inclusive digital society and economy in Africa. Sectoral digital strategies in the critical sectors of education, agriculture, health, and e-commerce have also been developed to facilitate and scale up access to smart digital technologies and associated data-driven services across all sectors. Furthermore, the AU Data Policy Framework has been adopted to facilitate the flow of data across sectors and borders. The Interoperability Framework for Digital ID has also been adopted to facilitate the development of digital solutions that are inclusive, trusted, and interoperable. 
The African Union has also developed a child online safety and empowerment policy and conducted a study on cybersecurity in Africa to develop a continental cybersecurity strategy. I am pleased to inform you that the AU Convention on Cyber Security and Personal Data Protection has achieved the ratifications anchoring its entry into force. This gives impetus to our endeavors to promote cybersecurity while advancing the digital agenda. With regards to internet governance, through the first phase of PRIDA, support has been extended to the organization of Internet Governance Forums at the national and regional levels, and together with the European Union, we are working on the second phase of PRIDA. Distinguished participants, ladies and gentlemen, moving forward, the continent’s youthful population is a demographic dividend that could be a game-changer in accelerating access to digital platforms, boosting economic development, creating jobs, and improving lives. Recent statistics show that 60% of Africa’s population is below 25 years old. By 2050, Africa’s population is expected to grow to 2.5 billion people, and a large share of the world’s youth will be in Africa. Africa’s youth are therefore an opportunity to drive Africa’s digital economy, hence the need to invest in them as a key driver of innovation and growth on the continent. In conclusion, I would like to thank everyone who contributed to the organization of the AU Open Forum, and invite all stakeholders to work together to bridge the African digital divide and secure Africa’s digital future.

Moderator:
Thank you, Moses, for those insightful highlights. The documents that Moses mentioned are on the AUC website, and we can consult them. Our next speaker is Dr. Moctar Seck. He’s the Chief of Section, Innovation and Technology, United Nations Economic Commission for Africa. Dr. Seck, can you take the floor?

Moctar Seck:
Thank you. Good afternoon, and good morning, wherever you are. I think we have some people connected online, and it’s still morning there. Just as a beginning, I would like to thank all of you for attending this Open Forum organized by the African Union. I would also like to thank the federal government of Nigeria for the successful organization of the African Internet Governance Forum. The first two presentations highlighted the key findings, which were very interesting. We have a very strong African community, and we had a very interesting forum organized in Africa. But let me try to highlight the outcomes of the work we are doing now at the United Nations Economic Commission for Africa. As you know, one of our key missions is to support African development, and we work across sectors such as health and statistics, and we try, in our sector of digital technology, to see how we can leverage all these sectors through digital technology. This is why it was very important to listen to the presentations by the government of Nigeria and the African Union, and why this is important for Africa. Let me start on the first point. As you know, there is a deficit of connectivity in Africa: around 60 percent of our population is offline. This is due to several problems. The first one is infrastructure. We need to make sure that we have the infrastructure to provide broadband to everybody by 2030; we need to make sure all people will be connected by 2030. And for this, we need to involve the private sector and also have sound regulation to attract investment in the development of infrastructure on the continent. 
We also need to work with our regulators to look at the way we regulate the new systems, because of the advance of digital technology around the world. Second, on the digital divide, I’m going to focus only on the gender digital divide. As you know, there is a significant gap in connectivity between men and women, a gap of around 11 percentage points, and it is very important to involve women and youth in the technology sector. Why? Because it represents an estimated $1.5 trillion opportunity. It is something very important if you want to take the benefits of digital technology, because several studies have estimated that the internet economy in Africa will be worth $180 billion by 2025. It is a very important amount, and it is important to put in place activities and policies to make sure all people are included in this digital era. Third point: we talk a lot about cybersecurity. As you know, cybersecurity is very important, and we have a lot of work to do to make sure our continent is secure. Cybersecurity remains a big challenge because it now costs around 10% of African GDP. With 10% of African GDP, how many schools can you build, how many hospitals can you build, how many people can be moved out of poverty? So we have to be very careful to fight cybercrime, as well as the issue of terrorists using cyberspace to kill people. It is something very key on this continent. The fourth point is the issue of people offline. We have 500 million people on the continent without any legal form of identity. 
These people don’t exist anywhere on record, and we can’t do any planning without them. We need to take into consideration the issue of digital ID, to provide digital ID to all, and to see how we can make our systems interoperate: civil service systems, digital ID systems, health systems, driving license systems, passport systems. We need to work on digital ID to make sure these people have an identity and can participate in this digital transformation. Last but not least, there are emerging technologies. I will focus only on artificial intelligence. It is a big opportunity, but we need to be very careful. We need to build the capacity of our new generation. Why are we building the capacity of our new generation? Because we have this demographic dividend: by 2050, 70% of our population will be under 35 years old. If they are to participate in this digital era, we need to build their capacity to be ready for this fourth industrial revolution. And we also need to look at the regulation of artificial intelligence, because otherwise we can miss out in some sectors; we have to look at those sectors carefully with artificial intelligence. Artificial intelligence can offer a lot of opportunity, but it also brings a lot of challenges, and we need to look at this carefully. And what do we do at ECA to overcome all these challenges and to support African countries? I’m going to highlight some of our key activities. In 2018, we set up a Centre of Excellence on Digital ID, Digital Trade, and Digital Economy to support African countries in using digital technology for their sustainable development. And now we are supporting African countries to implement the African Digital Transformation Strategy developed by the African Union in collaboration with UNECA and other partners. 
And this strategy is a blueprint for the African digital sector from 2020 to 2030. A lot of countries have now benefited from the support of ECA. On cybersecurity, we organized two years ago the first African Summit on Cybersecurity in Togo. One outcome of the cybersecurity summit was the Declaration of Lomé, and since the Declaration of Lomé, we have seen a lot of progress. Now we have 15 countries that have ratified the African Union Convention on Cybersecurity, called the Malabo Convention, and we are also establishing a Centre of Cybersecurity in Togo. On digital ID, we are supporting a lot of countries to develop their digital ID programs; we can give the example of Nigeria in the region of Kaduna, the Gambia, and other countries that have also benefited from this support on digital ID. On capacity building, we said we need to build the capacity of the young generation. We have already established an African Centre on Artificial Intelligence in Brazzaville, Congo. This centre has been functional since last year, and this week their academic year for 2023 is starting. We can build it to be more relevant and more performant in the sector of artificial intelligence, applicable to the sectors of health, environment, climate change, industry, and the economy. Another way we support African countries is by promoting the young generation. We have several initiatives for the young generation: one is an STI forum we organize every year, where a lot of young innovators promote innovative ideas on the continent. We also have a flagship program for girls, the Connected African Girls Coding Camp, which focuses on girls aged from 12 to 25 years and provides them with skills in several areas, such as artificial intelligence and web gaming, and the program has now trained around 35,000 girls across the continent. 
Also for parliamentarians, we have an important program focused on building the capacity of parliamentarians for decision making; it is very important for them to understand the issues of digital technology, because in the end, it is they who adopt the rules and regulations for digital technology. This training also covers FinTech, where we have one program with Alibaba, and cybersecurity, where we work with the Global Forum on Cyber Expertise. We also need to promote the voice of Africa, which is why we have several UN-led forums. One is the WSIS Forum for Africa, the World Summit on the Information Society Forum for Africa, which we organize every year in Africa, focusing on the 11 action lines of the WSIS, to see the progress made by African countries in the implementation of these 11 action lines, covering the role of government, cybersecurity, and infrastructure development issues. We also support the organization of the African Internet Governance Forum every year, and now we are focused on the Global Digital Compact. I’m going to focus for one minute, before I conclude, on this Global Digital Compact, because it will be one of the key frameworks for the world. How can we participate? It is not just to attend the meetings. Now we are in the consultation period, and we need to provide input. In all the sectors we discussed, we can provide input based on the needs of Africa: digital public infrastructure, access and affordability, capacity building, emerging technologies. We need Africa to be a part of this, and this Global Digital Compact is open to everyone. 
Private sector, government, civil society, academia, everybody should be involved, because this Global Digital Compact will define the future we want for the development of technology for the world, and we need Africa to be part of it. We already have one African input, developed at a meeting organized by ECA in July 2023 in Cape Town, South Africa, and the document is available on our website, but you can still provide your input before the final submission. We also need to think about WSIS plus 20. We are starting the reflection on organizing WSIS plus 20 in November, early December, in Victoria, and then continuing across Africa. That is also something to investigate: the proposal for Africa for the continuation of WSIS beyond 2025, and to look at the benefit and the impact that it will have on the world. It is something I would like to highlight and share with you. And thank you very much for involving ECA in this important forum. We also invite you to attend the several side events we have been organizing since Saturday; we have some side events tomorrow and the day after, and I would like to invite you all to attend. Thank you.

Moderator:
Thank you, Dr. Seck, for reminding us that we need to focus on women, among the many things you have said. For us to contextualize the issue, we need to analyze and get the opportunity cost of this issue, and we need to be intentional and innovative to be able to address it. This can only be supported if we have the facts and statistics, and we are able to, again, link it with livelihood opportunities. My next speaker is online as well: Dr. Martin Koyabe, senior manager, African Union-Global Forum on Cyber Expertise Project.

Martin Koyabe:
First of all, thank you so much for inviting me, and for giving the GFCE an opportunity to share some aspects of this intervention. Secondly, I want to pay tribute to the speakers who have come before me for some of the issues that they have articulated. As I said before, my name is Martin Koyabe. I lead the GFCE work on cybersecurity, and I also coordinate the activities between the AU and the GFCE. Some of the issues that I’ll highlight have also been contributed by our partners, and that is the AU development agency, AUDA-NEPAD, and also colleagues within the GFCE ecosystem. If you allow me, let me just give you the context of what we’ve been doing with the AU, and this refers to the AU-GFCE collaboration project. This project has come to an end, and we’re now moving to its next phase, where we are looking at how we build resilience and ensure that African countries have the capacity to sustain what we call cyber capacity building within the continent. There were three areas that we were looking at, and these areas were very pertinent. One was the issue of undertaking an assessment to look carefully at what the priorities of African countries are when it comes to issues of cyber. Remember, COVID really interfered with the plans of many African countries, and therefore there was a shift in priorities away from the areas where they had planned, for example in digital infrastructure and other areas, which actually saw a massive investment since there was a need, and towards other ministries, such as health, which saw a massive increase in terms of funding. So the priorities of African countries really shifted along the path. 
The other aspect was to look at how you sustain capacity within the continent. As I said earlier, it is true that by 2050 the continent will have roughly about 2.5 billion people, and out of that, a good chunk will be young people. So there was a need to make sure that there was an investment, especially in the expertise that exists within the continent. The issues around sustainability through resources, especially governments’ expertise, were very critical. And thirdly, there was the issue of institutional memory. How do we make sure that we capture knowledge so that many institutions, citizens, and participants can learn about cyber in the future? The development of what we call knowledge modules was therefore very critical. These are best practice, or good practice, type platforms that enable people who are in cyber to learn about experiences in different parts of the continent, but more importantly, to share their expertise and new ideas in specific areas. When it comes to the areas of intervention and the lessons learned, Madam Chair, if you allow me, I’ll go very briefly and very quickly. There were several areas that came up and several interventions that we saw. The need to sustain and protect infrastructure became very high in the priorities of many countries. Therefore, the establishment of CERTs and the enhancement of CSIRTs and CERTs became very high on the agenda of many African countries. In this respect, the issue was how to ensure that we have the knowledge and the expertise to beef up the capacities of CERTs in many African countries. 
Because some of the people who get trained move on to other jobs, many African countries struggle to maintain the capacity and the skill sets that are required. So the issue of CERTs and critical national infrastructure was very high on the agenda. Some of the areas proposed as the way forward are to ensure that we have what we call an identification of critical infrastructure in these countries; countries require that the institutions or agencies identify what is critical. It is also important to conduct risk assessments of critical infrastructure so that we know how much investment we need in those particular areas. But more importantly, it was to develop critical infrastructure protection frameworks, so that countries can understand what they need to do going forward in terms of protecting critical infrastructure. This was also exacerbated by the fact that many countries depend on digital infrastructure for most of the services that we see today, and that is why this was very critical. The second dimension was the issue of development of skills, and I really support some of the sentiments that have been expressed by some of the countries. The GFCE, through the project with the AU, has established what we call the Africa Cyber Experts community. This is a community that comprises over 80 experts, and some of them I can see in the room. They are from roughly over 37 countries, they have a lot of experience in the field of cybersecurity, and they have done a lot. 
And we continue to go from strength to strength in order to establish what we call South-to-South expertise, which can help many countries to converge and address some of the issues that they have. So, for example, if you have an expert who is good in CERTs in Malawi, and a country in North Africa requires that expertise, surely there is no need of going to the global North to seek such an expert. If we have experts within the continent who are established and known for their work, it makes a lot of sense to build that particular capacity in order to support future needs in those areas, especially when it comes to cyber capacity building. So the development of skills is important. There is a need to provide opportunities, especially for individuals in marginalized areas; I think this is something that came out within the project. There is also a need to strengthen cyber diplomacy and the understanding of norms and that process, and this is something that we’ve discussed in detail. There is a need for African countries to understand the process for being involved in the discussion of cyber diplomacy, and the tenets and basics that are required within this area. And thirdly, there is a need to promote what we call diversity and inclusivity. I like the presentation that was given earlier by the member of parliament, and also by the Nigerian minister, about making sure that we build this diversity. These are sentiments that were expressed at the Africa IGF, for those of us who were there: that we need to make sure that all communities, especially the diverse communities, are built into this. 
So therefore we need to encourage young people, and both old and other people who are vulnerable within the community, to be involved in these areas. Within the GFCE, and in the collaboration project, we have established the Network of African Women in Cyber. This network has grown from strength to strength. Thank you, Madam Chair. Since you are the co-founder of this particular organization, you have moved it from where it was to the next level. And I think that is one area where we have seen a lot of effort being put in to good effect, and more effort is being put in. As Moctar said, when more than 50% of the population is women, it is obvious that we need to have women and girls in cyber taking their role and supporting the efforts. As I come to wind up, there were areas of concern, especially when it comes to resources and funding. And this is something that is not new. Many of the projects that we have seen on the continent do not necessarily have what we call sustainment built into them. Therefore, after the funding is over, these projects normally either end or are never sustained to the level that is expected. There is a need for African countries to also invest more in terms of funding. So, when you develop cybersecurity strategies, or when you develop these particular interventions, it is important to factor in how these countries can sustain some of these projects. We know there are some good examples on the continent of countries that have been able to sustain their CERTs, or to sustain specific projects internally, without necessarily seeking external funding. So therefore, in terms of budgeting, especially for parliamentarians and other decision makers who are in the room, it is important that we think about funding as a critical component when it comes to issues of cyber capacity building.
And then finally, Madam Chair, the issue of political will cannot be emphasized enough. And I really want to underline what the representative of the parliament of Ghana said just a few minutes ago: political will is important. And the reason is that many of the political leaders, many of the legislators, make decisions that affect you and me, especially when it comes to the continent. So the issue of sensitizing the executive, sensitizing members of parliament, sensitizing people who make decisions that might not have an impact now but might have an impact in the future, is critical. It is very important that we sensitize those echelons of society so that they can understand what the critical issues are when it comes to cyber. As I finalize, Madam Chair, there are some interventions that the GFCE continues to make, and we really want to thank some of the partners in the room that we have worked with. I know we have been able to support some of the IGF regional capacity development, especially when it comes to the school on internet governance and other areas. We have also worked in tandem with some of the organizations in order to push specific areas of cyber capacity building forward. In summary, and as I come to a conclusion, Madam Chair.

Moderator:
Sorry, Dr. Koyabe, you only have 30 seconds.

Martin Koyabe:
Okay. So the last bit here is the upcoming meeting in Ghana, and I know many of you are looking forward to it. For the first time, we shall have cybersecurity experts and cyber capacity building development partners coming together in Ghana, on the 29th to the 30th of November, to talk about the issue of cyber. Thank you very much, Madam Chair.

Moderator:
Thank you, Dr. Koyabe. We are now going to the Q&A session and I want to ask all the participants, if you ask a question, please state who should answer the question so that we are able to align ourselves. We have about 15 minutes for that. Kindly only one question per person and don’t make it as if it’s another presentation so that we save on time. And to start with, I’ll ask Dr. Chidi next to me to start.

Chidi:
Yes. It's actually a question. Thank you, Dr. Martin Koyabe, for your beautiful and intelligent presentation. We talked about the critical resources and infrastructure required to undertake all the massive projects that you mentioned. But for us as regulators in Nigeria, we have received inquiries from a good number of stakeholders that had to do with AFRINIC and the internet in Africa, to which we have not been able to give a substantive answer. And in all your presentation, I have not heard you mention the crisis, the problem, the dysfunctionality within AFRINIC. The reason being that, to sustain the internet and to fight for cybersecurity, it is very important that the continent takes charge of the internet, and AFRINIC is the source of the commodity, the bandwidth, which is the IP networks. Thank you very much.

Moderator:
Thank you, Dr. Chidi. We'll take two, three more questions before the panel starts answering them.

Audience:
Thank you very much, James. I would like to use the opportunity to commend the AU for starting the African IGF in 2011. I was in the room that day and it was quite tough, but the dividend is there for us to see today. And also to appreciate the NCC for the opportunity to host the global IGF, and to say that Nigeria is ripe to host the global IGF. Do you agree? Yes. Yes. Yes. So, now to the question. Dr. Moctar provided us with data which shows that we are really behind the global average with regard to internet penetration. I want to ask: how can we use that data to really reach the underserved? The tools and the technical know-how are available. I hear people say we don't have the technical capability. We have the technical capability; we can deploy a lot of infrastructural tools. My company has data centers. White spaces from the digital dividend are just there for us to use. With a bandwidth of 100 megabits per second, we can reach the underserved. So what is holding us back? Thank you.

Moderator:
Thank you. Any other question? As we wait for more questions, I give the floor to Dr. Koyabe to answer the first question and Dr. Seck for the second one.

Martin Koyabe:
Thank you very much. I don't know whether I'm on the chopping board here, but let me try and be very careful in how I respond to this issue of AFRINIC. More importantly, I think we all agree that the continent requires consistency; it requires organizations that can deliver on some of these aspects that we are discussing if we are to make a difference. It is very unfortunate, from what I understand, and I want to be very careful here on the situation around AFRINIC, because I think the challenge has been the litigation that has been launched around the problems that AFRINIC has. I really don't want to go into the details of that, because I know it is in the public domain. And if you allow me another go, I want to make sure that we really assist where we can, so that the organization comes back to what it is meant to be, because the continent requires that organization. But more importantly, let's build what we call sustainment into how these organizations function in future, so that we have mechanisms for auditing, mechanisms that can create an authentic organization that can serve the people and the continent. So for now, I will reserve my extended comments, if you allow, and let the process take its due course as AFRINIC tries to resolve its issues, as we all know. Thank you very much.

Moctar Seck:
Thank you, Martin. I think I'm going to start with the AFRINIC problem. It is a big issue for the continent now. We saw two days ago the resolution of the court, and we need to take this resolution seriously into consideration. For now we can't say anything more, but we are going to call a meeting between the AUC, ECA and Smart Africa to see how we can sort out this problem. Because when you talk about digital transformation, job creation, fintech opportunities, e-commerce: if you don't have your IP addresses, what are you going to do? Nothing. It is a problem. And the ccTLDs, this is a problem in several African countries. There is no digital sovereignty in several countries. We have our young generation, the digital dividend: 70% of our population is youth, representing 42% of the youth in the world. But if you don't have access, if you don't control your network, anything can happen. It is something we have to take into consideration. Second, regulation. It is a big problem, and it is not easy now. Before, it was easy, when you had only the telecommunication sector, mobile and some value-added services; it was easy to regulate. But now you have artificial intelligence, and we don't know where we are going with it. Even the developed world doesn't know where we are going with artificial intelligence. You want to write a book? You just ask GPT to write the book for you. Everything, you can ask this artificial intelligence. It is something like, what do you call it in French, la vache folle, the mad cow, because the cow was fed on other cows. Now artificial intelligence uses the data from all networks, including the data produced by the services of artificial intelligence itself.
We don't know what will happen now; we are not safe in this. The issue is cybersecurity. Now, you use cryptography and all this software to secure your network, but the issue is the quantum computer: you can't block anything with it, it's clear; it can find any code you put in your system. And with artificial intelligence, it will continue. We need to work closely with African governments, and we have a working group at the AUC on artificial intelligence, to see what we can do in Africa, what kind of framework and what kind of measures we can put in place. The spectrum is also very important to look at, with the development of 5G. Maybe later 6G will come, and we are not ready for 5G. Some operators take what they call 4G and boost it to make it seem like 5G; it's not 5G. That is generally what the operators did. We have to look at the allocation of bandwidth and spectrum. Regulators now have a big role to play, and it is important for all regulators to start building their capacity on these emerging technologies. We have artificial intelligence, we have blockchain, we have the Internet of Things; tomorrow nanotechnology will be there, and we have the quantum computer. We don't know what's happening in the world. I'm going to stop there. Thank you.

Moderator:
(Applause) Thank you, Dr. Seck. I think we have time for one more question.

Audience:
Thank you, madam. My name is Katia Sarajeva. I come from Spider at Stockholm University. I would slightly disagree with the previous speaker; just a comment. Do not get distracted by AI or blockchain. Spider has been working for seven years with regulators on the basics: it is still spectrum, it is still infrastructure sharing, and all of these things are done by African engineers, economists and software engineers who are locally in Africa and are constantly working on this. It is complicated and it is hard, but everybody is doing it. So please remember your regulators, and also your judiciary, because everything rests on the rule of law. It's not AI. There is a lot of interesting and good work being done at the national level, but also in regional harmonization, working together on the basic stuff. Everybody is talking about AI and how blingy it is; it's just a dream. A lot of the work and a lot of the progress being made right now is done by really highly skilled experts on the African continent, and by supporting those people who are working on the nuts and bolts that are not glamorous. That is the everyday work of the telecom regulators. You are actually spreading both connectivity and use and empowering a lot of people as we speak, and a lot of it is done in meetings like this. Everybody is struggling; it's not just Africa. In Sweden, the north of Sweden didn't get connectivity on its own; it's the people who had to make it happen. So the problems are everywhere, and Africa is no different, and you are doing really well, because I work with these people. Sorry.

Moderator:
Thank you for the comment. I will give the floor to Honorable Stanley, then two more questions and answers, and then we finalize. To save time, can you kindly stand behind the mics.

Audience:
Thank you, everybody. Permit me to stand on existing protocol. I am Honorable Adedeji Stanley Olagide, House Committee Chairman for ICT and Cybersecurity, also representing Nigeria. I'll cite an example. When we started this whole world of human genomics, where we have to do a lot of analytics around DNA for precision medicine, a lot of doctors were agitating: will this thing take away medical doctors' roles? Is computer simulation or analytics going to take away all of our jobs? It is not true. I've been around technology for almost 40 years, and I want to say this: let's not get distracted; AI is just another technology. As far as I'm concerned, I'm a technologist and now a lawmaker. Let's focus and keep our eye on the ball. And the ball is how we are going to integrate this into our future. Either we take it and run with it, or it keeps running and we'll be left behind. So as legislators in the House, let's not get distracted. We will skin this cat; there are so many ways we are going to skin this cat. We are going to unravel it. It's just the reality. But the question now is: how quickly are we going to train ourselves to catch up with the rest of the world? Let me stop right there. Thank you.

Moderator:
Thank you. I give you the floor, then Onika, and then the last one.

Audience:
I have three interventions and I'm going to do it in two minutes. One is at the national level, one at the continental level, and one at the global level. At the national level: in Abuja we had a very beautiful report presented by Dr. Chidi, complemented by Honorable Sam and Mariam. The first thing I believe we need to do as legislators and participants, when we go back, is to put this together as a report and share it with the leadership of each respective National Assembly. Otherwise, this report will just circulate among us, and no one else will know that a wonderful job took place in Abuja. Let us not turn this participation into a holiday or a jamboree. The only way we can get meaningful things out of this is to share it. Let's put this report together: let Nigerian senators share it with the President of the Nigerian Senate, let the Ghanaian parliamentarians do the same, and let us also do so with representatives of our various countries in inter-parliamentary unions like the ECOWAS Parliament and the African Union. This is one way to ensure that what we are discussing here gets traction among others. That's number one. Number two: I listened to a number of initiatives being done at the global level and at the continental level. The truth of the matter is that I am hearing some of them for the first time. You cannot be a champion or an advocate of something you are not aware of. Can we have a directory of ongoing initiatives at the continental level, to be shared with all parliamentarians? That is the only way you can mainstream it into your national agenda. If there is something that has taken place in Malawi that needs to be domesticated, it cannot happen until the parliamentarians are aware of it. And they cannot be aware of it unless we have a directory of it. So the gentleman from the GFCE spoke about some initiatives going on; others spoke about things going on. Do we have a directory?
A directory saying: these are the ongoing initiatives at the African level, at the African Union level; let's share it with the parliamentarians, and let us become the evangelists and champions in our various countries. The last one: we must be grateful to international development partners. GIZ, and a number of others, are here; they have been supportive of initiatives in Africa. But there is a caveat. Sometimes those supports do not address our priorities; they come in a generic manner. Malawi requires support for education, and that becomes a template for the rest of Africa. I want to beg of you: let each country determine its own priorities, and let us approach those development partners with our priorities. Let the funding and the support be tailored to the priorities of each country; that is where we make the maximum of it. And lastly, thank you for coming to Nigeria. We are open again.

Moderator:
Thank you for the comments. We have Onika Makwakwa.

Audience:
Good evening, Onika Makwakwa. I am asking a question, more specifically around a concern about the vision and the strategic plan for growing the Africa IGF. I think it's really troubling that we've got, what, 54 nations, and less than 20, if I'm not mistaken, that are actually active and hosting IGFs. If we have a vision of hosting more global IGFs in Africa, in Nigeria or wherever, it is going to take us actually showing up en masse and holding each other accountable. I think what has been missing with the IGF is an accountability framework among the multi-stakeholders that are involved, but also among us as countries within the continent. So I'd like either ECA or the AU to speak to the vision and the strategic plan for growing and strengthening the IGF within the continent. Thank you.

Moderator:
Thank you, Onika.

Audience:
Good afternoon, everyone, and evening or morning to everyone around the world. My name is Zanyu Ntatisiasare, CEO of Digitally Legal. Standing on existing protocols, I also appreciate the questions and comments that were articulated; I'm really excited to hear that. But mine is more around this: a lot of statistics were given today and a lot of vulnerable groups were mentioned, but I think as Africa we need to be honest about one thing. When we have meetings of this kind, we really neglect our disabled communities. And the reason why I'm saying this is, if you look at use cases, and I won't even mention where, I'll actually give you the homework to do that yourselves, effectively using assistive technologies in your country has the immense potential of not just fighting inequality and all the really good stuff that we all say we're here for, but actually injecting into your GDP. So I think, with the leaders that are sitting here, that is one of the questions that I would like you to research for yourselves and ask yourselves in the context of your country, your communities, even your own hometown: what has been done from that perspective? That's the first one. The second one is more of a comment, Honorable, just bootstrapping on what you said. We are Africans, and we have our own norms, our own cultures. There is an English saying that you can take a horse to water, but you can't make it drink. We are Africans: if your brother, your cousin, your sister, your child, whoever, cannot drink that water themselves, you make sure that water goes into their body. And I'd like to leave that as an actionable item for each one of us here: whatever is required for us to achieve our goals from an IGF perspective, as Africans, we make our horses drink in Africa. Thank you very much.

Moderator:
Thank you very much to all of you. Most of these have been comments, and we are going to continue with these discussions going forward. I'll now give the floor to Lillian, who will attempt to answer some of the questions, but also give our vote of thanks. Okay, Moctar, 45 seconds. I'm counting.

Moctar Seck:
I think the African IGF is growing on the continent. You have to know exactly what the IGF is: it is a multi-stakeholder forum. It is different from the WSIS Forum, where governments come and make decisions. Here, it is to discuss the key issues in digital technology around the world. It is a very important forum, one of the outcomes of the WSIS, and now everybody can discuss issues related to Africa and to the world. You have seen several opinions when we talk about artificial intelligence: people coming from the North have their own ideas, but we have our own ideas on what is happening on the continent, because we know very well what happens here. And we will work with the African Union to try to make the African IGF more successful, and to involve more government, more private sector, more civil society. I think we have seen very good participation, and next year we will get more participants at the next African IGF; in the meantime we will discuss among all key actors how we can make the African IGF function better for the benefit of the continent. On the global Internet Governance Forum: we already had one last year in Ethiopia, and it was very successful, but we cannot organise it again soon, because the current IGF mandate ends in 2025, and before 2025 it will not be organised in Africa. Maybe, if the mandate is renewed to 2030, we can see which African country can organise it. But it is a competition to organise the Internet Governance Forum. Thank you.

Moderator:
Thank you, Moctar. I realise we didn't give Moses a chance to answer any questions, so Moses, I give you one minute.

Moses Bayingana:
Yeah, thank you. I just wanted to make a quick comment, and I will be brief. One is on the issue of initiatives on the continent, and I want to thank the distinguished speaker for raising it. Yes, as part of monitoring the implementation of the Digital Transformation Strategy for Africa, we have put in place an institutional architecture and information framework where we have identified who is doing what to implement the strategy. As part of the monitoring and evaluation framework, we requested member states to nominate focal points for digital transformation, and they have done so. So we will be collecting initiatives from member states and from all actors across the continent who are supporting us in implementing the digital transformation strategy. The monitoring will of course start earlier, but in 2025 we will do a comprehensive mid-term evaluation. Regarding the strategy to grow the African IGF, I think my colleague Moctar is right, and I thank him with respect to that. You know, strategies are always a consultative process, and your inputs are welcome; we will continue working with ECA and other stakeholders so that we can continue to grow the African IGF. There is always room for improvement, but it will be a consultative process involving everyone, in the spirit of the multi-stakeholder process. Thank you.

Moderator:
Thank you, Moses. And finally, we have run over time, but I'll ask Lillian to give our final remarks.

Lillian Nalwoga:
Thank you so much, Madam Moderator. It is only a little ironic that I have just one minute to say something, but it is also a wonderful opportunity, as the MAG Chair for the African IGF, to hear all these nice recommendations and deliberations from the actors of the region, very far away from our continent. Listening in, part of what I was seeing coming out of the multi-stakeholder approach is that some of the recommendations and resolutions that we got from Abuja are already happening: the issue of exercising political will to shape the digital economy and the digital future for Africa. We already have the parliamentarian network; we had the Africa Parliamentarian Symposium; we had quite a number of ministers participating in the continental forum. And when we zoom into the sub-regional and national forums, we already see that governments are taking an interest in these conversations, and so is the private sector. So for me, the multi-stakeholder approach is already there, and one of the recommendations that we got from Abuja was to further strengthen that. On the role of the MAG, and for Onika: the vision is there, the plan is there. When the Africa IGF was launched in 2011, we started off with fewer countries participating, but it has grown. And if we go back to the statistics that were given by the host country: over the four days, we had about 3,100 people who registered; on site, we had 1,414 participants, and online we had 1,683. This is interest: people are interested in this, partners are interested in this, and we have some of our key partners who have been with us for the past years and are still continuing to see that we strengthen and grow our continental conversation. So these are good things.
These are good things that we are seeing, and we are hoping, as the Honorable mentioned, that we don't just stop at having conversations in Kyoto or in Abuja. We need to take the recommendations and whatever has been discussed back home, and see how we can implement them at the regional and national level. So the vision we have for Africa is there. Of course, our role as the MAG is to see how we can strengthen the coordination of the forum, but also to increase participation of African stakeholders in the internet governance processes, whether at the national, regional or continental level. This is what we are working on, and listening to all the conversations that have come through, these are things that we are going to take on. You will see that next year is even bigger than Abuja. I'm glad that Nigeria is already expressing interest for us to go back there, but in the spirit of multi-stakeholderism, we need to go to another country; unless, all factors being held constant, there is no other host, then we can come back to Nigeria. Already we have had expressions of interest from South Africa and from Benin to host. So there is already interest, and we are seeing that the community is increasingly interested in hosting and taking these conversations to their countries. So, through you, Moderator, I would like to thank our partners at the global level, at the UN, through the UN IGF Secretariat, who have been able to join us today and bring our conversations all the way from Abuja to Kyoto, to listen to the outcomes. We also encourage you to continue supporting us, to see how we can grow, but also to learn from what you are doing, and to partner with us in making the digital future for Africa more successful and more wonderful for the development of the continent. Thank you.

Moderator:
So thank you very much to our panelists and our participants. We look forward to working with you, and to making our next open forum show the outputs, the outcomes, and how we have improved, as has been said. Many thanks and bye from all of us. Apologies to the next session; let us kindly move out, or join them to listen.

Audience — Speech speed: 167 words per minute; Speech length: 1740 words; Speech time: 625 secs

Chidi — Speech speed: 120 words per minute; Speech length: 1559 words; Speech time: 780 secs

Lillian Nalwoga — Speech speed: 169 words per minute; Speech length: 782 words; Speech time: 278 secs

Mariam Jobe — Speech speed: 157 words per minute; Speech length: 491 words; Speech time: 188 secs

Martin Koyabe — Speech speed: 221 words per minute; Speech length: 2498 words; Speech time: 679 secs

Moctar Seck — Speech speed: 189 words per minute; Speech length: 3247 words; Speech time: 1032 secs

Moderator — Speech speed: 144 words per minute; Speech length: 984 words; Speech time: 409 secs

Moses Bayingana — Speech speed: 102 words per minute; Speech length: 893 words; Speech time: 523 secs

Sam George — Speech speed: 201 words per minute; Speech length: 1140 words; Speech time: 341 secs

Protecting children online with emerging technologies | IGF 2023 Open Forum #15



Full session report

Moderator – Shenrui LI

During the discussion on protecting children online, the speakers placed great emphasis on the importance of safeguarding children in the digital space. Li Shenrui, a Child Protection Officer from the UNICEF China Country Office, highlighted the need for collective responsibility among various stakeholders, including governments, industries, and civil society, in order to effectively protect children from online harms. Shenrui stressed that it is not enough to rely solely on policies; education and awareness are also crucial elements in ensuring children’s safety online.

China is dedicated to leading the way in creating a safe digital environment for children globally. The Chinese government has introduced provisions to protect children’s personal information in the cyberspace. Additionally, the country has organised forums on children’s online protection for consecutive years, demonstrating their commitment to addressing this issue.

Xianliang Ren further contributed to the discussion by highlighting the importance of adaptability in laws and regulations for addressing emerging technologies. Ren recommended regulating these technologies in accordance with the law and suggested that platforms should establish mechanisms such as ‘kid mode’ to protect children from inappropriate content. This highlights the need for clear roles and responsibilities in the digital space.

Improving children’s digital literacy was also identified as a crucial aspect in protecting them online. The importance of education in equipping children with the necessary skills to navigate the digital world effectively was acknowledged.

The discussion also highlighted the significance of international cooperation in addressing the issue of children’s online safety. China has partnered with UNICEF for activities related to children’s online safety, demonstrating their commitment to working together on a global scale to protect children.

In conclusion, the discussion on protecting children online emphasised the need for collective responsibility, adaptable laws and regulations, improved digital literacy, and international cooperation. These recommendations and efforts aim to create a safe and secure digital environment for children, ensuring their well-being in the increasingly connected world.

Patrick Burton

Emerging technologies offer both opportunities and risks for child online protection. These technologies, such as Thorn’s child sexual abuse material classifier, the Finnish and Swedish SomeBuddy initiative, and machine learning-based redirection programs for potential offenders, have proved valuable in combating online child exploitation. However, their implementation also raises concerns about privacy and security. Potential risks include threats to children’s autonomy of consent and the lack of accountability, transparency, and explainability.

To address these concerns, it is crucial to prioritize the collective rights of children in the design, regulation, and legislation of these technologies. Any policies or regulations should ensure the protection and promotion of children’s rights. States have a responsibility to enforce these principles and ensure that businesses comply. This approach aims to create a safe online environment for children while harnessing the benefits of emerging technologies.

The implementation of age verification systems also requires careful consideration. While age verification can play a role in protecting children online, it is essential to ensure that no populations are excluded from accessing online services due to these systems. Legislation should prevent the exacerbation of existing biases or the introduction of new ones. Recent trends indicate an increasing inclination towards the adoption of age verification systems, but fairness and inclusivity should guide their implementation.

Additionally, it is important to question whether certain technologies, particularly AI, should be built at all. Relying solely on AI to solve problems often perpetuated by AI itself raises concerns. The potential consequences and limitations of AI in addressing these issues must be carefully assessed. While AI can offer valuable solutions, alternative approaches may be more effective in some situations.

In summary, emerging technologies present both opportunities and challenges for child online protection. Prioritizing the collective rights of children through thoughtful design, regulation, and legislation is crucial to leverage the benefits of technology while mitigating risks. Age verification systems should be implemented in a way that considers biases and ensures inclusivity. Moreover, a critical evaluation of whether certain technologies should be developed is necessary to effectively address the issues at hand.

Xianliang Ren

There is a global consensus on the need to strengthen online protection for children. Studies have revealed that in China alone, there are almost 200 million minors who have access to the internet, and 52% of minors start using it before the age of 10. This highlights the importance of safeguarding children’s online experiences and ensuring their safety in the digital world.

In response to this concern, the Chinese government has introduced provisions for the cyber protection of children’s personal information. Special rules and user agreements have been put in place, and interim measures have been implemented for the administration of generative artificial intelligence services. These efforts are aimed at protecting the privacy and security of children when they engage with various online platforms and services.

There is a growing belief that platforms should take social responsibility for protecting children online. It is suggested that they should implement features like kid mode, which can help create a safer online environment for young users. By providing child-friendly settings and content filters, platforms can mitigate potential risks and ensure age-appropriate online experiences for children.
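The kid-mode and content-filter mechanism described above can be sketched minimally. This is an illustrative sketch only, not any platform’s actual implementation; the `min_age` rating field and the `filter_feed` function are hypothetical names invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    title: str
    min_age: int  # minimum suitable viewing age per the platform's rating

def filter_feed(feed, kid_mode, user_age):
    """Return the items a viewer may see; kid mode filters by age rating."""
    if kid_mode:
        # In kid mode, only content rated at or below the child's age passes.
        return [item for item in feed if item.min_age <= user_age]
    # Outside kid mode the feed is returned unchanged.
    return list(feed)
```

A real kid mode would combine such rating checks with the anti-addiction time limits and reporting mechanisms the speakers mention.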

Additionally, it is argued that the development and regulation of science and technologies should be done in accordance with the law. This calls for ethical considerations and responsible practices within the industry. By adhering to regulations, technological innovations can be harnessed for the greater good while avoiding potential harm or misuse.

Improving children’s digital literacy through education and awareness is seen as crucial in tackling online risks. Schools, families, and society as a whole need to work together to raise awareness among minors about the internet and equip them with the knowledge and skills to recognize risks and protect themselves. This can be achieved by integrating digital literacy education into school curricula and empowering parents and caregivers to guide children’s online experiences.

Furthermore, it is important for the internet community to strengthen dialogue and cooperation based on mutual respect and trust. By fostering a collaborative approach, stakeholders can work together to address the challenges of online protection for children. This includes engaging in constructive discussions, sharing best practices, and developing collective strategies to create a safer digital environment for children.

In conclusion, there is a consensus that online protection for children needs to be strengthened. The Chinese government has introduced provisions for the cyber protection of children’s personal information, and there is a call for platforms to implement features like kid mode and take social responsibility. It is crucial to develop and regulate science and technologies in accordance with the law, improve children’s digital literacy through education, and promote dialogue and cooperation within the internet community. By taking these steps, we can create a safer and more secure online environment for children worldwide.

Mengyin Wang

Tencent, a prominent technology company, is leveraging technology to ensure the safety of minors and promote education. The company places a strong emphasis on delivering high-quality content and advocating for the well-being of minor internet users. In line with their mission and vision, the company has initiated several key initiatives.

In 2019, Tencent launched the T-mode, a platform that consolidates and promotes high-quality content related to AI, digital learning, and positive content. This initiative aligns with Goal 4 (Quality Education) and Goal 9 (Industry, Innovation, and Infrastructure) of the Sustainable Development Goals (SDGs). The T-mode platform aims to provide a safe and valuable online experience for minors by curating content that meets strict quality standards.

To promote education and inspire learning, Tencent has taken significant steps. They released an AI and programming lesson series, offering a free introductory course to young users. This initiative aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. The course is designed to cater to schools with limited teaching resources and aims to reduce educational inequalities.

Tencent has also partnered with Tsinghua University to organize the Tencent Young Science Fair, an annual popular science event. This event aims to engage and inspire young minds in science and aligns with Goal 4 (Quality Education) and Goal 10 (Reduced Inequalities) of the SDGs. Through interactive exhibits and demonstrations, the fair encourages the next generation to explore the wonders of science and fosters a love for learning.

In addressing the protection and development of minors in the digital age, Tencent has harnessed the power of AI technology. They compiled guidelines for constructing internet applications specifically designed for minors based on AI technology. This shows Tencent’s commitment to creating safe and age-appropriate digital environments for young users. Additionally, Tencent offered the Real Action initiative technology for free to improve the user experience, including children with cochlear implants. This initiative aligns with Goal 3 (Good Health and Well-being) and Goal 9 (Industry, Innovation, and Infrastructure) of the SDGs.

In conclusion, Tencent’s initiatives in ensuring minor safety online and promoting education demonstrate their commitment to making a positive impact. Their focus on providing high-quality content, offering free AI and programming lessons, organizing the Tencent Young Science Fair, compiling guidelines for internet applications, and enhancing accessibility for individuals with cochlear implants showcases their dedication to the protection and development of minors in the digital age. Through these initiatives, Tencent is paving the way for a safer and more inclusive online environment for the younger generation.

DORA GIUSTI

The rapidly evolving digital landscape poses potential risks to children’s safety, with statistics showing that one in three internet users are children. This alarming figure highlights the vulnerability of children in the online world. Additionally, the US-based National Center for Missing and Exploited Children reported 32 million cases of suspected child sexual exploitation and abuse in 2022, further emphasizing the urgent need for action.

To protect child rights in the digital realm, there is a pressing need for increased cooperation and multidisciplinary efforts. The emerging risks presented by immersive digital spaces and AI-facilitated environments necessitate a collective approach to address these challenges. The UN Committee on the Rights of the Child has provided principles to guide efforts in safeguarding child rights in the ever-changing digital environment. By adhering to these principles, stakeholders can ensure the protection of children and the upholding of their rights online.

In addition to cooperation and multistakeholder efforts, raising awareness and promoting digital literacy are crucial in creating a safer digital ecosystem for children. Educating children about the potential risks they may encounter online empowers them to make informed decisions and stay safe. Responsible design principles that prioritize the safety, privacy and inclusion of child users should also be implemented. By adhering to these principles, developers can create platforms and technologies that provide a secure and positive digital experience for children.

The analysis highlights the urgent need for action to address the risks children face in the digital landscape. It underscores the importance of collaboration, guided by the principles set forth by the UN Committee on the Rights of the Child, to protect child rights in the digital world. Furthermore, it emphasizes the significance of raising awareness, promoting digital literacy, and implementing responsible design principles to ensure the safety and well-being of children online. Integrating these strategies will support the creation of a safer and more inclusive digital environment for children.

ZENGRUI LI

The Communication University of China (CUC) has made a significant move by incorporating Artificial Intelligence (AI) as a major, recognizing the transformative potential of this emerging technology. This integration showcases the university’s commitment to preparing students for the future and aligns with the United Nations’ Sustainable Development Goals (SDGs) of Quality Education and Industry, Innovation, and Infrastructure.

In addition to integrating AI into its programs, CUC has also established research centers focused on exploring and advancing emerging technologies. This demonstrates the university’s dedication to technological progress and interdisciplinary construction related to Internet technology.

CUC has also recognized the importance of protecting children online and the need for guidelines to safeguard their well-being in the face of emerging technologies. It is suggested that collaboration among government departments, scientific research institutions, social organizations, and relevant enterprises is crucial in establishing these guidelines. CUC’s scientific research teams have actively participated in the AI for Children project group, playing key roles in formulating guidelines for Internet applications for minors based on AI technology.

The comprehensive integration of AI as a major and the establishment of research centers at CUC reflect the university’s commitment to technological advancement. It highlights the importance of recognizing both the benefits and risks of emerging technologies and equipping students with the necessary skills and knowledge to navigate the digital landscape responsibly.

Overall, CUC’s initiative to integrate AI as a major and its involvement in protecting children online demonstrate a proactive approach towards technology, education, and social responsibility. The university’s collaboration with various stakeholders signifies the importance of interdisciplinary cooperation in addressing complex challenges in the digital age.

Sun Yi

The discussion revolves around concerns and initiatives related to online safety for children in Japan. It is noted that a staggering 98.5% of young people in Japan use the internet, with a high rate of usage starting as early as elementary school. In response, the Ministry of Internal Affairs and Communications has implemented an information security program aimed at educating children on safe internet practices. The program addresses the increasing need for online safety and provides children with the necessary knowledge and skills to navigate the online world securely.

Additionally, the NPO Information Security Forum plays a crucial role in co-hosting internet safety education initiatives with local authorities. These collaborative efforts highlight the significance placed on educating children about online safety and promoting responsible internet usage.

However, the discussions also highlight challenges associated with current online safety measures in Japan. Specifically, concerns arise regarding the need to keep filter application databases up-to-date to effectively protect children from harmful content. Moreover, the ability of children to disable parental controls poses a significant challenge in ensuring their online safety. Efforts must be made to address these issues and develop robust safety measures that effectively protect children from potential online threats.

On a positive note, there is recognition of the potential of artificial intelligence (AI) and big data in ensuring online safety for children. The National Institute of Advanced Industrial Science and Technology (AIST) provides real-time AI analysis for assessing the risk of child abuse. This highlights the use of advanced technology in identifying and preventing potential dangers that children may encounter online.

Furthermore, discussions highlight the use of collected student activity data to understand learning behaviors and identify potential distractions. This demonstrates how big data can be leveraged to create a safer online environment for children by identifying and mitigating potential risks and challenges related to online learning platforms.

To create supportive systems and enhance online safety efforts, collaboration with large platform providers is essential. However, challenges exist in collecting detailed data on student use, particularly on major e-learning platforms such as Google and Microsoft. Addressing these challenges is crucial to developing effective strategies and implementing measures to ensure the safety of children using these platforms.

In summary, the discussions on online safety for children in Japan emphasize the importance of addressing concerns and implementing initiatives to protect children in the digital space. Progress has been made through information security programs and collaborative efforts, but challenges remain in keeping filter applications up-to-date, configuring parental controls, and collecting detailed data from major e-learning platforms. The potential of AI and big data in enhancing online safety is recognized, and future collaborations with platform providers are necessary to create safer online environments for children.

Session transcript

Moderator – Shenrui LI:
Okay, hello everyone, excellencies, ladies and gentlemen, and also young friends, because I saw there are some children joining us for this session. Welcome all to the Internet Governance Forum 2023 Open Forum No. 15, Protecting Children Online with Emerging Technologies. My name is Li Shenrui. I’m from the UNICEF China Country Office as a Child Protection Officer. It’s my honor to welcome you as the moderator of this session, and on behalf of the China Federation of Internet Societies, UNICEF China, and the Communication University of China, to convey warm greetings to all of you at this important forum. And a big thank you for being here today. Today in this session we will discuss the most trendy topics around protecting children with emerging technologies. As many of you may know, two years ago UNICEF released the Policy Guidance on AI for Children 2.0, a global policy guidance for governments and industry. So the conversations have kept going over the last two years on how to protect children online and how to adjust our policy actions and practices, not only on the government side, but also on the industry and civil society sides, to engage and leverage resources to protect our children. So taking this opportunity, we have guest speakers with various backgrounds, and they will share their insights around this topic. So without further ado, let’s welcome our honored guest, Mr. Ren Xianliang, the Secretary General of the World Internet Conference and the President of the China Federation of Internet Societies, to give us opening remarks. Please, welcome.

Xianliang Ren:
Ladies and gentlemen, I am pleased to attend this forum at the UN Internet Governance Forum 2023 on protecting children’s online security with new technologies. On behalf of the organizers, I want to congratulate everyone for putting together an amazing event, and a warm welcome to all our guests. Ladies and gentlemen and friends, it’s great to be here at the IGF 2023 Open Forum, Protecting Children Online with Emerging Technologies. In today’s world, technologies like AI, big data, and the Internet of Things are everywhere. They have a huge impact on our lives and raise new issues for Internet governance, especially when it comes to protecting children online. On one hand, the Internet is an important tool for children to learn and communicate. On the other hand, it brings risks like harmful content, addiction, fraud, and privacy breaches. There is a global consensus that we need to strengthen online protection for children. Studies show that in China alone, there are almost 200 million minors who have access to the Internet. The age of first exposure is getting younger too, with 52% of minors becoming Internet users before the age of 10. That’s why we need to strengthen online protection for children. The Chinese government and society have taken this issue seriously.
The government has introduced the Provisions on the Cyber Protection of Children’s Personal Information, which require operators to set up special rules and user agreements for such protection, and the Interim Measures for the Administration of Generative Artificial Intelligence Services, which ensure that generative AI, including how it works and what data it uses, is regulated. A regulation on the online protection of minors, and a dedicated chapter on cyber protection in the new law on the protection of minors, make sure kids are protected when they are online. Special efforts have been made to clean up the online environment, and platforms have taken social responsibility by implementing features like kid mode. As social organizations, the World Internet Conference and the China Federation of Internet Societies are actively involved in children’s online protection too. The WIC Wuzhen Summit has held forums on children’s online protection for consecutive years, and CFIS has collaborated with UNICEF to host or participate in activities related to children’s online safety at the IGF, collecting cases of AI for children and promoting them globally. These efforts have yielded positive results. To protect children’s online security with emerging technologies, we need to communicate more, build consensus, and take collective action. Here, I’d like to share three suggestions. First, we should regulate emerging technologies in accordance with the law. It is important to establish and improve laws and regulations related to the application and development of emerging technologies, giving the rule of law a leading, regulating, and safeguarding role over new technology applications and new business models. This will ensure that these technologies are used responsibly and in a way that safeguards everyone’s interests.
I recommend that government departments strengthen supervision, continue to carry out various rectification actions to correct online disorder, and build a firewall for children’s online security. Second, we should make sure science and technologies are developed to do good. Website platforms, as providers of all kinds of application services, should strengthen their primary responsibility and establish sound youth modes, anti-addiction mechanisms, and reporting and handling mechanisms, to prevent and crack down on content and behaviors that infringe upon children’s legitimate rights and interests. Enterprises should be encouraged to strengthen research and development of child online protection technologies, using technology against technology to improve the capacity to protect children online. Third, it’s crucial to improve children’s digital literacy. Schools, families, and society as a whole should work together to raise awareness and educate minors about the Internet, equipping them with the knowledge and skills to recognize risks and protect themselves. In addition, schools and parents should be better prepared to guide children through Internet use. Social organizations and research institutions should utilize social and industrial resources and work on the ethical governance of emerging technologies, including establishing mechanisms for ethical review and certification. We should also develop cross-regional and cross-platform cooperation to study and solve the problems of illegal online industries and hidden cyber threats targeting children, and jointly create a network space for children to grow up healthily. Last but not least, I suggest that the Internet community strengthen dialogue and cooperation based on mutual respect and trust.
We cannot tackle difficult issues such as illegal industries targeting children and hidden cyber threats without cooperation across regions and platforms. Together, we can build a community with a shared future in cyberspace that fosters the healthy growth of children. We will continue to make dedicated efforts towards this goal and contribute to a better and safer cyber world for children. I wish this forum great success. Thank you.

Moderator – Shenrui LI:
Okay, thanks to Mr. Ren for the opening remarks. It’s always thrilling to see that China is dedicated to being a pioneer, exploring and leading positive pathways towards an enabling and safe digital environment for children globally, while emphasizing, as Mr. Ren mentioned, the adaptability of laws and regulations, the clear roles and responsibilities of different sectors, including the industry and civil society sectors, and the improvement of children’s digital literacy. We are glad to see that China keeps seeking opportunities for international cooperation on this important topic, and we hope to unpack those suggestions later in our discussion today. Next, let’s welcome Mr. Li Zengrui, the Deputy Director of the Council of the Communication University of China. Let’s welcome.

ZENGRUI LI:
Distinguished Mr. Ren Xianliang, Ms. Dora, ladies and gentlemen from around the world, good afternoon, good evening, good morning. I’m very pleased to participate in this open forum with the theme Protecting Children Online with Emerging Technologies. First of all, please allow me, on behalf of the Communication University of China, or CUC, one of the organizers of this forum, to warmly welcome all experts and scholars in attendance. Thank you for your attention to the topic of children’s online protection. With the rapid development of the Internet, the wave of digital technology and information networking has swept the world. By June 2023, netizens in China had exceeded 1 billion, about 20 percent of whom are adolescents and students, the largest proportion of any group. The popularity of the Internet has given children more access to emerging technologies and more opportunities to use them. Emerging technologies not only bring great convenience to children’s education, health, and entertainment, but also raise concerns about privacy protection and fairness. CUC has always valued the integration of disciplinary construction related to Internet technology, technological progress, and social responsibility, and has deepened its academic accumulation in intelligent media networks. A number of research centers related to emerging technologies have also been established, including the State Key Laboratory of Media Convergence and Communication, the Key Laboratory of Intelligent Media of the Ministry of Education, and the Key Laboratory of Audiovisual Technology and Intelligent Control Systems of the Ministry of Culture and Tourism. In addition, the School of Information and Communication Engineering has set up AI as a major to cultivate senior interdisciplinary talent for AI-related scientific research, design, development, and integrated applications in fields such as information, culture, radio and television, and the media industry.
Building on its accumulated academic and social research and its strengths in Internet technology, and at the invitation of CFIS and UNICEF, one of the scientific research teams from CUC joined the AI for Children project group. As a key member, our team conducted in-depth research on the application of AI for children and participated in the formulation of guidelines for the construction of Internet applications for minors based on AI technology. Different from traditional Internet applications, Internet applications driven by emerging technologies introduce intelligent techniques such as machine learning, deep learning, natural language processing, and knowledge graphs. The use of these technologies helps to provide more well-being for children, such as health monitoring, recommendation of quality content, and companionship for special groups. However, emerging technologies also bring many risks to children, such as unfairness, data privacy and security risks, and challenges for Internet education. Therefore, stakeholders such as government departments, scientific research institutions, social organizations, and relevant enterprises should deepen exchanges, enhance consensus, strengthen cooperation, and formulate guidelines and rules for the common global development of protecting children online with emerging technologies, so as to promote the healthy development of emerging technologies and better benefit people around the world. I hope that through the exchanges of this open forum, we can all draw inspiration from the application of emerging technologies for children, and contribute to their development and application in children-related fields. Finally, I hope this open forum will be a success and will promote global awareness of children’s online protection. Thank you very much. Thank you.

Moderator – Shenrui LI:
Okay, thank you, Mr. Li, for sharing, and also for expressing the commitment of CUC to generating more evidence on child online protection. There was good cooperation between CUC and UNICEF China in working on the documentation of AI for children cases, and we definitely hope to see more such collaboration. Now please let us welcome Mr. Patrick Burton, Child Online Protection Consultant, to share the key considerations in regulating emerging technologies for the protection of children. The floor is yours, Patrick.

Patrick Burton:
Thank you very much. Can I just check that everybody can see my screen? Loud and clear, please. Perfect. Thank you. Sorry, give me a second, I just need to turn translation off, but I’ve got an echo. There we go, hopefully that will be better. So thank you very much, Chairperson, Secretary-General, colleagues, fellow speakers, experts, participants in the room, friends that I know are there. Thank you so much for the opportunity to speak to you and for convening this forum in the first place. So it’s difficult to watch or read the news these days without hearing about AI and the impact of artificial intelligence or digital technology on children’s lives. Often this is phrased in negative terms, for example, the impact of screen time, as problematic as that phrase is, whether it’s the impact on children’s concentration and well-being, or the escalating reports of child sexual abuse material, or children’s exposure to explicit images, or sometimes the tragic results of cyberbullying that children are experiencing. And I think this is only surpassed, perhaps, by the growing attention on the impact of AI and emerging technologies specifically, not least in feeding these risks and in exacerbating and catalyzing harmful outcomes for children. Yet, as the title of this forum suggests, at the same time that same technology can certainly offer a wealth of opportunities, many of which have already been alluded to by the previous speakers, in the right context and with the appropriate oversight, regulation, and design to mitigate some of the potential for harm that the underlying fabric of algorithms and machine learning introduces into children’s everyday use of digital technology.
These range from the use of predictive analytics and behavioral models for prevention, deterrence, and response to cyberbullying, child sexual offending, and other risks, to the use of machine learning and deep neural networks for scanning and hashing of child sexual abuse material. Each of these offers exciting and important guardrails against the emerging adaptations of risk that exponentially and rapidly changing technology introduces into children’s lives. Now, I’ll just touch on a couple of examples of how emerging technologies using AI in different forms are being used to keep children safe online. Many of you, I’m sure, will have heard of some of these. Thorn’s child sexual abuse material classifier is a machine learning based tool that can find new or unknown child sexual abuse material in both images and videos. When potential CSAM is flagged for review and the moderator confirms the decision, the classifier learns from it. It continually improves from those decisions and moderator reviews in a feedback loop, and it is significant in that it uses AI to depart from existing child sexual abuse material mechanisms, which depend on existing reports held in existing databases and on hashing and matching technology. Rather, it detects new, unknown, or unclassified child sexual abuse material. That’s just one example. Another example, which is somewhat different but so important and often overlooked, is the use of AI to support children in responding to and dealing with issues they encounter online. The example I have here is SomeBuddy, a Finnish and Swedish service developed to support children and adolescents who have potentially experienced online harassment. Cases are analyzed through a chatbot, and what it calls a first aid kit offers children step-by-step guidance on how to deal with each situation on a case-by-case basis.
Importantly, it also has a mechanism for review by legal experts, ensuring the safety and child-friendliness of the system through constant human oversight, something I touch on again later. The third example I'd like to give is somewhat different from the previous ones, and something I think we are only starting to pay enough attention to: deterrence and behavior change for potential offenders. The ReDirection program, and there is a similar initiative out of the UK, uses machine learning to offer self-help programs to prevent child sexual offending, specifically by focusing on deterring the use of child sexual abuse material. It constantly and iteratively learns from information and data shared by users, and importantly is transparent in the collection and use of this data. Like the previous example, the SomeBuddy initiative, it is also subject to oversight and training by human operators. Similar initiatives use predictive analytics to promote behavior change and help-seeking among child sexual abuse offenders. Those are just three out of a multitude of practical examples of how emerging technology is being used to keep children safe online. Yet, as much as these technologies offer immense opportunities for keeping children safe, they also introduce risks to children. These are not necessarily new risks, but rather new or exacerbated manifestations of existing risks that digital technologies present in children's lives. These risks pose important questions for how the technology is designed, how it is regulated, and how it is legislated. For example, a couple of key questions need to be taken into account: What data is used for machine learning? How is it collected? What biases might it introduce into operations? How are these biases mitigated? Where is the data stored? Who has access to it, intentionally and unintentionally?
And what is the purpose of that access to the data? Predictive models and machine learning require immense amounts of children's data, the collection and storage of which might introduce new privacy and security risks into children's lives. There are a number of ethical dilemmas around this. To what degree do approaches such as predictive analytics and nudge techniques, when applied using AI, allow for personal freedom of choice and autonomy of decision-making, rather than manipulating users? Particularly if those users, those children, are not aware of or do not fully understand how the technology is being used, how the data is being used, or how an intervention is being applied. Somewhat related to this is the ring-fencing of data that is collected and used to inform these models, for the purposes of purpose limitation and data minimization. Now, the moderator made reference to a couple of documents that UNICEF has produced: both the model legislation and policy guidance for AI, and a number of papers from UNICEF Innocenti that highlight some of these challenges. To carry on: there are risks to children's autonomy of consent. Technology deployed to detect new child sexual abuse material or grooming, for example using classifiers such as those in the earlier example, would not necessarily be able to differentiate between consensual sexual conversations or image sharing between two adolescents of legal age in that jurisdiction, on the one hand, and otherwise unknown and unhashed child sexual abuse material on the other, potentially introducing risks and biases for those children. Related to this, what underlying assumptions underpin those algorithms and that machine learning about what is age-appropriate, contextually appropriate, culturally appropriate, consensual behavior, and how are differences by context, region and location taken into account?
What about the lack of accountability, transparency, and explainability? Machine learning systems are making decisions based on data and algorithmic determination. How and when are these decisions explained to children in a way that they understand, or to their parents? And do they detract from individual decision-making? There are many more issues, such as perceptual hashing's potential for false positives. Some of these risks are more applicable to some forms of emerging technology, and to particular uses, than others, but most are common to some degree across the different forms of technology that use machine learning and deep learning. I don't have five days, so I'm just going to draw attention to some of the key issues around regulation, and particularly around addressing some of the challenges that the use of emerging technologies poses. I say I don't have five days because this is a challenge that countries and regions throughout the world are grappling with, and while we have some really promising examples in legislation relating to some of these challenges, it is an evolving conversation; it is going to take us a while to get the framing and the regulatory and policy environment really sound in order to protect the collective rights of children. And I'm starting with the collective rights of children because underlying any legislation and policy has to be an assurance that all technology and regulation serve the mandate to protect and ensure the collective, equal, indivisible, and inseparable rights of children, rather than prioritizing one right over another. That means anticipating many of the potential unintended consequences that technology might have down the road on children's collective rights.
This ranges from the obligations of due diligence on industry, designing and implementing technology so as to anticipate and address adverse effects on the rights of the child, to the responsibility of states to ensure that businesses adopt and adhere to these principles and are held accountable, and also to ensuring that states themselves respect and adhere to these principles and mandates. Now, these are enshrined in the Convention on the Rights of the Child, and they are certainly contained in General Comment No. 25 and in the emerging global guidance, treaties and instruments designed to protect children's rights. A couple of more recent pieces of legislation and policy frameworks are starting to incorporate these effectively: the Australian Online Safety Act, the UK Online Safety Bill, which addresses this to some degree, and, I say to some degree, the EU DSA and the recent draft directive and regulations that explicitly address the need to anticipate and detect online harms before they occur. What is interesting is that the recent EU directive calls for relevant judicial bodies to ensure that technology companies objectively and diligently assess, identify and weigh, on a case-by-case basis, which is critical, not only the likelihood and seriousness of the potential consequences of services being misused for the types of online child sexual abuse at issue, but also the likelihood and seriousness of any potential negative consequences for other parties affected. One thing I don't have on the slide here that is also critical, and that is contained in EU legislation as well as Australian legislation at least (I'm sure it is in others), is the importance of requiring third-party, independent and public annual audits to assess the impact on child rights as detailed in the CRC and General Comment No. 25. Moving on, some more examples.
If age verification is to be adopted, and most recent pieces of legislation point to it, in various EU documents, in the draft UK Online Safety Bill, and in Australian legislation, and I say "if" because we cannot yet say that age verification is perfect; it is not where it should be in order to function effectively, though it is very likely to get there, then significant steps will need to be taken prior to its implementation: to ensure that the child population is equitably equipped with the identification, or whatever else is required, to verify their age, and that certain populations are not excluded. We need to make sure that age verification does not reinforce existing biases or introduce new biases or exclusionary practices. Okay, Patrick, sorry, I had to interrupt you, but we are running out of time, so could you please wrap up within one minute? I will wrap up within one minute; I'm almost there. I've already spoken about AI oversight bodies, importantly with attached mechanisms for redress. And that is something I have to say: we know from speaking to children throughout the world that one of their major concerns is that when they make reports, or when AI or automated reporting systems are used, there is no response. We need to make sure there is accountability for those responses. We also need to make sure that regulation and policies are designed in a way that is not limited to existing emerging technologies, but rather provides scope for future developments and definitions. The very last point I'd like to make is a quote from a recent paper by Amanda Lennard and Coddy Goins on common myths and evidence, which makes the point that some technology cannot be fixed by more design. We cannot necessarily design our way out of problems.
Sometimes those technologies should not be built at all. And I guess my final comment is: do we, can we, and will we rely on emerging technologies and AI to fix the problems that often result from AI in the first place? Do we rely on AI to create the internet that we want? That is perhaps a question more than an answer. Thank you, and apologies for going over time.

Moderator – Shenrui LI:
Okay, thank you, Patrick, for your thoughtful sharing. We all know this is never an easy question to answer, and we are all devoted to finding the right balance in the trade-offs around child online protection. We definitely want to hear more from you in the future. Next, let's welcome Professor Sun Yi from the Kobe Institute of Computing Graduate School of Information Technology to share his thoughts on this topic. Please.

Sun Yi:
Okay. Good afternoon, everyone. Thank you to UNICEF China, CFS and CFC for giving me the opportunity to share my experience here. My name is Sun Yi. I am Chinese, but I have lived in Japan for more than 20 years, and I am now an Associate Professor at the Graduate School of Information Technology at the Kobe Institute of Computing. Today I want to share some of my personal experience of internet safety technology for children in Japan. Next slide, okay. First, I want to share the internet use rate among young people in Japan, using data published by the Cabinet Office of the Government of Japan in 2022. In this data, 98.5% of young people responded that they use the internet, with the smartphone as the most used device. As you can see in the graph on the right, there is a high rate of internet use starting in elementary school. My daughter in Japan also has a smartphone. Okay. In the digital age, ensuring the safety of children online is a paramount concern, and in Japan constant efforts are underway to address this issue. On the government side, the Ministry of Internal Affairs and Communications runs an information security site for citizens, with a key mission of educating children in safe internet practices. On the NPO side, the NPO Information Security Forum co-hosts internet safety education programs with local authorities and organizations, extending the reach of internet safety education to various communities. These efforts help make the internet safe for our kids, letting them enjoy its benefits while protecting them from its dangers. On the technology side, various tools are also offered. Filtering technology stands as a popular measure for safeguarding children's internet use; it is deployed as smartphone applications or set up in network devices at schools and homes, and some network service providers also provide it as a service.
Moreover, smartphone parental controls help limit usage time and accessible applications. However, there are big challenges. When you use a filter, it is important to keep the filter application's database up to date in order to provide the most effective protection. Moreover, if you are using a network-side filter, it is very simple to bypass: switching to another network will disable the filter. As for parental controls, setting them up on a smartphone is very complicated, even for me, and parents often cannot configure them correctly. And believe me, kids are smarter than we imagine; they can always find a way to disable parental controls. More than once I have heard young boys proudly tell me how they removed the restrictions on their school PCs. Okay, next slide. The use of big data and AI technology to protect students' safe use of the internet is a new technology trend. AIST, a national research institute, provides real-time AI analysis for child abuse risk assessment and decision-making support; using this system, they can assess the severity of abuse and the potential risk of recurrence in order to help the kids. Okay, the next slide. Oh, this one, it's okay. Sorry, okay. At the same time, our research group is working on a study about e-learning. Using an open-source learning management system, we collect all the activity of students while they interact with the system: every click, what they watched, and how long they spent on each page. All the collected data is utilized to patternize students' learning behaviors, enabling real-time personalized feedback and significantly improving the learning experience. Interestingly, we also developed a method to identify why students struggle with learning. Upon investigation, we discovered that the struggle is often not with the learning materials, but with distractions like online games.
Next slide, please. Through this work, our research group realized that the same support system can help ensure that kids use the internet safely, without the need for any external set-up, which makes it much easier to use. But there is a challenge: many schools use learning platforms like Google's and Microsoft's. These platforms make it very easy to create learning materials even without IT skills, but they do not let us collect detailed data on how students use them. So if we want to enhance internet safety this way, teaming up with the big platform providers is very important. In addition, there are many issues related to personal privacy when collecting such data; there is a trade-off between protecting privacy and improving data availability, which will be a big challenge. Okay. That's all for my presentation. Thank you.

Moderator – Shenrui LI:
Thank you, everyone, for joining us today. We have a lot of questions about how to employ what we already have to inform our practices. Next, please join us in welcoming Ms. Wang Mengying, senior director of the culture and content division of Tencent, to share with us.

Mengyin Wang:
Thank you. I'm Wang Mengying, senior director of the culture and content division of Tencent, and I will share with you how to use emerging technologies to keep children safe online. As we are all aware, emerging digital technologies such as AI and large language models are developing rapidly and enabling internet applications to scale and expand substantially, offering children a much richer digital world for learning, living, and engaging with the world. There are nearly 200 million underage netizens in China, with the internet adoption rate among this group reaching almost 100%. Children now go online at ever younger ages, and there is an evident rural-urban information gap as well as a lack of risk awareness when going online, given the large number of children under the age of 12. The digital world is changing rapidly, and protecting children's rights and interests in this world is always at the top of our agenda. Just now, Professor Sun Yi shared with us his research and thoughts on children's online protection in Japan, which was tremendously enlightening, and now I am going to offer an industry perspective. Tencent is firmly committed to its mission and vision, which is value for users and tech for good. We actively explore and improve our online safety solutions for minors, making full use of the company's experience in information and digital technologies and mobilizing resources in society at large. Tencent is committed to providing high-quality content for young users, and this is what Tencent is working on at this moment. First, we bring together quality content to encourage netizens to use the internet positively. In 2019, Tencent kicked off the T-mode in a handful of its products, consolidating high-quality content not just for young users, but also for the general public.
Tencent has also worked with a Chinese foundation on the Master Class for the Young initiative, in which top scientists, experts, and educators were invited to teach our young audience their "lesson one" in various fields, including Nobel Prize-winning physicist Professor Yang Zhenning, the chief designer of China's spacecraft, the president of the Chinese Academy of Sciences, and the president of the Society of Cultural Relics. The master classes were then turned into featured video lectures in 4K resolution for circulation, in the hope that these great materials can truly benefit more children, offer fascinating learning content, and inspire their future professional pursuits. Secondly, Tencent provides professional education to help young people in the digital age. Today's young people need to keep a finger on the pulse of emerging technologies so as to prepare for the future. On September 1st this year, Tencent released AI and Programming Lesson One, a pro bono project offering young users a free introductory course on AI and programming at home through a lightweight package on WeChat, notably for schools in rural areas with low-income children. For schools suffering from limited teaching resources, the course was adapted to blackboards and basic equipment, and it can also take place in a computer-free mode, for example through role-playing, allowing students to learn AI as their urban counterparts do. The program has already debuted at 21 pilot sites in 14 primary schools across four cities: Beijing, Shanghai, Shenzhen, and Guangzhou. Most students found it captivating to let the machine identify objects through simple labeling, and many teachers said such programs are very important for building up children's creative mindset, enabling them to spot potential questions and to troubleshoot using computational thinking.
Thirdly, as an advocate for scientific thinking, Tencent strives to guide minors to understand the internet and their own development in a positive manner. The curiosity of the young mind is very much treasured; young people need diversified channels to explore the real world and proper education to experience the world beyond their screens. Starting in 2019, Tencent and Tsinghua University have jointly carried out an annual popular science event, the Tencent Youth Science Fair. More than 2,000 young scientists and enthusiasts have met face-to-face with top international scientists at the fair, and online audiences, with 40 million views, have been impressed by the charms of science. More and more youngsters in China are now taking scientists as new role models and idols, and scientific exploration is becoming a new fashion. Helping minors grow up healthily is a vision shared by the international community. In 2022, Tencent teamed up with a number of companies and organizations to compile and release guidelines for building internet applications for minors based on AI technology, bringing the synergies of the industry to promote online safety for children while developing digital technologies. Tencent is also exploring AI technology to improve the environment in which minors grow up. For example, the Tianlai Action initiative launched by Tencent in 2020 offered charities, groups, and equipment manufacturers Tencent's Tianlai audio technology for free, improving the user experience for those with cochlear implants, including children. Children are the future and the hope of mankind, and the protection of minors is by all means a common cause, as wonderful as it is daunting. I am pleased to share with you that in September, AI and Programming Lesson One was rolled out in primary schools around the country, sowing the seeds of AI in the hearts of many children in rural areas.
The master classes for the young now total 139 episodes and have already reached 10 million young people, with more than 100 million views so far. Finally, Tencent looks forward to joining hands with you all in building a clean internet and a safe digital world for our children. Thank you all.

Moderator – Shenrui LI:
Okay, thank you for sharing good practices from Tencent. Last but not least, to conclude this session, let's welcome Ms. Dora Giusti, Chief of Child Protection at the UNICEF China country office, to deliver the closing remarks. Please welcome her.

DORA GIUSTI:
Distinguished experts and participants, as we bring this forum to a close, allow me to thank you for your insightful ideas and for your participation in this important forum on emerging technologies and child online protection. We live in an era driven by technologies such as artificial intelligence, blockchain and other newer technologies that are poised to reshape our society. Globally, a child goes online for the first time every half a second, and one in three internet users is a child. We have heard today how this has positive connotations and impact in terms of learning and accessing information, but we have also heard that there are potential risks. Children may be exposed to harms like illegal content, privacy breaches, cyberbullying, and, most seriously, sexual abuse and exploitation through the use of technology. In 2022, the US-based National Center for Missing and Exploited Children received 32 million reports from around the world of suspected child sexual exploitation and abuse cases, an increase of 9% from 2021. Europol identified that this increase has been going on year by year, but that the rise was particularly significant during COVID, due to increased online activity related to the lockdowns. As today we talked about emerging technologies, we need to consider that the use of immersive digital spaces, virtual environments that create a sense of presence or immersion for users and are facilitated by AI, may expose children to environments that are not designed for them, amplifying the risks of sexual grooming and exploitation, for instance through potential abusers' use of virtual rooms or personas to groom them. As technology evolves, immersive digital spaces will become more widespread in all fields, and the risks will also increase. We therefore need to understand in depth the implications and impact of these risks for children.
On a positive note, we have heard today how AI technologies can help address child sexual exploitation and abuse online. For instance, there exists an array of AI-based techniques that can be designed to detect different elements of the spectrum of illegal materials, behaviors, and practices linked to child sexual exploitation and abuse online. In addition to identifying and preventing abuse, AI can also be used to support children who have experienced abuse, as we saw in Patrick's presentation. While this is positive for the prevention, detection, and investigation of cases of child sexual abuse and exploitation online, the use of AI may also impact data protection, safeguards, and users' privacy. Therefore, protecting child rights in the digital world and ensuring safety rely on striking a balance between the right to protection from harm and the right to privacy. This is one of the guiding principles of the UN Committee on the Rights of the Child's General Comment No. 25 on children's rights in relation to the digital environment. This document has provided us with important principles for addressing child rights in a rapidly changing technology environment, with the objective of preventing risks from becoming harms and of ensuring children's right to be informed while they become digital citizens. We know much more today than a decade ago. We heard today, echoing the Secretary-General's words and those of Patrick and all the other speakers, that we need to cooperate. We need to work together and look at different dimensions: coordinating efforts at the legal and policy level, in criminal justice, victim support, society and culture, and the technology industry, and investing in research and data. Before I conclude, allow me to emphasize some key actions to ensure a safe digital environment for children, echoing the words of the Secretary-General and other speakers.
First of all, we need to enhance our understanding of child safety within this evolving landscape: increasing evidence generation on the trends, patterns and risks for children engaging in this evolving digital environment, but also bringing forward solutions that are effective. Secondly, we need to strengthen and develop laws, policies and standards that can evolve as rapidly as the changing environment and that can also assess the critical benefits and risks. We need harmonization of these laws and standards across the globe, because this is a global problem, and we need to involve experts from different disciplines. Third, we need tech companies to embrace responsible design principles and standards, prioritizing the safety, privacy and inclusion of child users and frequently conducting child rights reviews of their products and services; we have heard a few examples during this forum. Fourth, we need to continue raising awareness of safety and digital literacy among children, parents, caregivers, and society as a whole. We rally for collective action by governments, the private sector, civil society organizations, international organizations, academia, families and children themselves. Together, we must ensure emerging technologies create a safer, more accessible digital world for children. Thank you very much.

Moderator – Shenrui LI:
Okay, thank you, Dora, for the very comprehensive and encouraging closing remarks. As you mentioned, these are all essential building blocks for enabling a safe digital environment for all children. We hope today's session has brought some enlightening insights to all of you; thank you for your attention and participation. We look forward to seeing you at our session next year at IGF 2024. Okay, thank you all.

Speaker statistics

DORA GIUSTI — Speech speed: 136 words per minute; Speech length: 898 words; Speech time: 397 secs

Mengyin Wang — Speech speed: 153 words per minute; Speech length: 1071 words; Speech time: 420 secs

Moderator – Shenrui LI — Speech speed: 141 words per minute; Speech length: 797 words; Speech time: 340 secs

Patrick Burton — Speech speed: 167 words per minute; Speech length: 2464 words; Speech time: 884 secs

Sun Yi — Speech speed: 147 words per minute; Speech length: 904 words; Speech time: 369 secs

Xianliang Ren — Speech speed: 82 words per minute; Speech length: 864 words; Speech time: 635 secs

ZENGRUI LI — Speech speed: 97 words per minute; Speech length: 606 words; Speech time: 375 secs

Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225


Full session report

Sophie

The importance of children’s digital rights in the digital world is underscored by the United Nations. These rights encompass provision, protection, and participation, which are essential for children’s empowerment and safety in online spaces. General Comment 25 by the UN specifically emphasises the significance of children’s digital rights. It is crucial to ensure that children have access to digital resources, that they are protected from harm and exploitation, and that they have the opportunity to actively engage and participate in the digital world.

Young children often seek support from their parents and teachers when faced with online risks. They rely on them as safety contact persons for any issues they encounter on the internet. As they grow older, children develop their own coping strategies by employing technical measures to mitigate online risks. This highlights the importance of parental and teacher support in assisting children in navigating the digital landscape and promoting their online safety.

Furthermore, the design of online spaces needs to be tailored to cater to the diverse needs of different age groups. Children, as active users, should have digital platforms that are user-friendly and age-appropriate. Children are critical of long processing times for reports on platforms, advocating for more efficient and responsive mechanisms. It is important to consider children’s perspectives and ensure that their voices are heard when designing and developing online spaces.

Human resources play a significant role in fostering safe interactions online. Children are more likely to use reporting tools that establish a human connection, thereby enhancing their sense of safety and anonymity. The THORN study conducted in the United States supports this viewpoint and suggests that human involvement positively affects children’s willingness to report online incidents.

The introduction of the Digital Services Act in the European Union is seen as a critical tool for protecting children’s data. This legislation is set to come into force next year and aims to enhance data protection measures for individuals, including children, in the digital sphere. The act aims to address issues related to privacy, security, and the responsible use of digital services to safeguard children’s personal information.

Children’s rights by design and their active participation in decision-making processes regarding the digital environment should be prioritised. The United Nations’ General Comment 25 highlights the importance of young people’s participation in decisions about the digital space. The German Children’s Fund has also conducted research that emphasises the need for quality criteria for children’s participation in digital regulations. By involving children in decision-making, their perspectives and experiences can inform policies and ensure that their rights are respected and protected.

Creating safe socio-digital spaces for children and adolescents is of paramount importance. These spaces should not be primarily influenced by product guidelines or market-driven interests but rather should prioritise the well-being and safety of children and young people. Civil society and educational organisations are seen as key stakeholders in shaping and creating these safe social spaces for children to engage in the digital world.

In conclusion, a holistic approach is necessary to advocate for children’s rights in the digital world. This entails promoting children’s digital rights, providing support and guidance from parents and teachers, adapting the design of online spaces to meet the needs of different age groups, harnessing the potential of human resources for safe interactions, and enacting legislation such as the Digital Services Act for protecting children’s data. Children and young people should be actively involved in their rights advocacy and be included in decision-making processes in the digital environment. The involvement of all stakeholders, including governments, organisations, and communities, is essential in advancing and safeguarding children’s rights in the digital world.

Steve Del Bianco

In the United States, the states of Arkansas and California faced legal action after implementing a controversial rule that required consent from a parent or guardian before individuals under the age of 18 could use social media sites. Steve Del Bianco’s organization sued the states, and he deemed the measure aggressive.

The sentiment expressed towards this rule was negative, as it was seen as a potential infringement upon the rights of children and young individuals. The argument presented was that broad child protection laws have the potential to restrict a child’s access to information and their ability to freely express themselves. Judges who presided over the case acknowledged the importance of striking a balance between child rights and the need for protection from harm.

Steve Del Bianco, in the course of the proceedings, emphasized the significance of considering the best interest of the child. He argued that the state’s laws should undergo a test that balances the rights of the child with their protection from potential harm. According to Del Bianco, these laws should not excessively limit a child’s access to information or their ability to express their beliefs.

Moreover, it became evident that lawmakers lacked an understanding of the broader implications of their laws. This led to legal challenges and raised concerns about the effectiveness of these policies. Del Bianco’s organization obtained an injunction that effectively blocked the states from enforcing these laws. It was suggested that lawmakers should be educated and gain a better understanding of the potential consequences of their legislative decisions to avoid such legal challenges.

To summarize, the implementation of a rule requiring verifiable consent for underage individuals to use social media sites in certain US states sparked controversy and legal disputes. The negative sentiment towards this rule arose from concerns about potential limitations on the rights of children to access information and express themselves freely. The need to strike a balance between child rights and protection from harm was highlighted. Additionally, the lack of understanding by lawmakers about the broader implications of their laws was emphasized, underscoring the importance of better education and consideration in the legislative process.

B. Adharsan Baksha

AI adoption among children can pose significant risks, particularly in terms of data privacy. The presence of chatbots such as Synapse and MyAI has raised concerns as these tools have the capability to rapidly extract and process vast amounts of personal information. This raises the potential for exposing children to various cyber threats, targeted advertising, and inappropriate content.

The ability of chatbots to collect personal data is alarming as it puts children at risk of having their sensitive information compromised. Cyber threats, such as hacking or identity theft, can have devastating consequences for individuals, and children are especially vulnerable in this regard. Moreover, the information gathered by chatbots can be used by marketers to target children with ads, leading to potential exploitation and manipulation in the digital realm.

Inappropriate content is another concerning aspect of AI adoption among children. Without proper safeguards, chatbots may inadvertently expose children to age-inappropriate material, which can have a negative impact on their emotional and psychological well-being. Children need a secure and regulated online environment that protects them from exposure to harmful content.

It is crucial to recognise the need to ensure a secure cyberspace for children. This includes focusing on the development and implementation of effective measures related to artificial intelligence, children, and cybersecurity. Governments, organisations, and parents must work together to mitigate the risks associated with AI adoption among children.

In conclusion, AI adoption among children brings forth various risks, with data privacy issues at the forefront. Chatbots that possess the ability to collect personal data may expose children to cyber threats, targeted advertising, and inappropriate content. To safeguard children’s well-being and protect their privacy, it is essential to establish a secure online environment that addresses the potential risks posed by AI technology. The responsibility lies with all stakeholders involved in ensuring a safe and regulated cyberspace for children.

Katz

Child rights are considered fundamental and should be promoted. Katz’s child-focused agency actively advocates for the promotion of child rights. However, conflicts between child rights and freedom of expression can arise. Survey results revealed such conflicts, underscoring the need for balance between these two important aspects.

Misunderstandings or misinterpretations of child rights are common and must be addressed. Some people mistakenly believe that virtual child sexual abuse material (CSAM/SEM) can prevent real crime, indicating a lack of understanding or misinterpretation of child rights. Efforts should be made to educate and provide correct information regarding child rights to combat these misunderstandings.

Regulating AI in the context of child protection is a topic under discussion. Many respondents believe that AI should be regulated to ensure child protection, particularly in relation to CSAM/SEM. However, opinions on this matter are mixed, highlighting the need for further dialogue and research to determine the most appropriate approach.

Public awareness of the risks and opportunities of AI needs to be raised. Approximately 20% of respondents admitted to having limited knowledge about AI matters and associated risks. This signifies the need for increased education and awareness programs to ensure the public understands the potential benefits and dangers of AI technology.

Japan currently lacks regulations and policies concerning AI-generated imagery. Katz’s observation reveals a gap in the legal framework, emphasizing the necessity of establishing guidelines and regulations to effectively address this issue.

There is also a need for greater awareness and information dissemination about AI developments. Katz suggests that the media should take more responsibility in informing the public about advancements and implications of AI. Currently, people in Japan are not adequately informed about ongoing AI developments, highlighting the need for improved communication and awareness campaigns.

Katz recommends that the public should gather information from social networking services (SNS) about AI developments. This highlights the importance of utilizing various platforms to stay updated and informed about the latest developments in the field of AI.

A rights-based approach is crucial in designing regulation policies. It is essential to ensure that the rights of children and humans are protected in the digital world. Advocating for the enhancement of child and human rights in the digital sphere is a vital aspect of creating an inclusive and safe environment.

In conclusion, promoting child rights is essential, although conflicts with freedom of expression may arise. Addressing misunderstandings and misinterpretations of child rights is crucial. The regulation of AI in the context of child protection requires further examination and consideration. Public awareness about the risks and opportunities of AI needs to be improved. Japan lacks regulations for AI-generated imagery, and greater awareness about AI developments is necessary. Gathering information from SNS can help individuals stay informed about AI happenings. A rights-based approach is needed when designing regulation policies, and enhancing child and human rights in the digital world is vital.

Amy Crocker

During the event, the speakers highlighted the significant importance of children’s digital rights in creating a safe and secure online environment. They stressed that children’s rights should be protected online, just as they are in the offline world. General Comment Number 25 to the UN Convention on the Rights of the Child was mentioned as a recognition of the importance of children’s digital rights, with state parties being obligated to protect children from all forms of online exploitation and abuse.

In terms of internet governance, the speakers advocated for a proactive and preventive approach, rather than a reactive one. They argued that governments often find themselves playing catch-up with digital issues, reacting to problems after they have already occurred. A shift towards a preventive model of online safety was deemed necessary, which involves designing for safety before potential issues arise.

Effective implementation was seen as the key to turning digital policies into practice. The speakers emphasized the need to understand how to implement policies in specific local contexts to realize the full benefits. They argued that implementation is crucial in ensuring that children’s rights are protected and upheld online.

The need for public understanding of technology and its risks and opportunities was also highlighted. It was mentioned that improving public understanding is necessary for individuals to make informed decisions about their online activities. Empowering parents to understand technology and facilitate their children’s rights was seen as an important aspect of ensuring a safe online environment for children.

Trust was identified as a crucial element in the digital age, particularly with the growing reliance on technology. The speakers discussed the importance of trust against the backdrop of emerging risks related to data breaches, data privacy problems, and unethical practices. Building and maintaining trust were seen as essential for a secure online environment.

Safeguarding the younger generations online was viewed as a collective responsibility. The speakers stressed that parents and guardians cannot solely shoulder this responsibility and must have a certain level of knowledge of online safety. The importance of all stakeholders, including businesses, industries, and governments, working together to protect children’s rights online was emphasized.

Regulation was seen as an important tool for keeping children safe online. However, it was noted that regulation alone is not a solution for the challenges posed by emerging technologies. The speakers argued that both regulation and prevention through education and awareness are crucial in effectively addressing these challenges.

Differentiated regulation based on context was advocated for. The speakers highlighted that different online services offer different opportunities for children to learn and be creative. They also emphasized that children’s evolving capacities are influenced by various factors, such as their geographical and household contexts. Understanding the link between online and offline contexts was seen as essential in developing effective regulation.

Transparency, a culture of child rights, and collaborative efforts were identified as crucial for the protection of children’s rights online. All stakeholders, including businesses, industries, and governments, were urged to work together and have a shared understanding of child rights. The need for transparency in their commitment to protecting child rights was emphasized.

The challenges faced by developing countries in terms of technology and capacity building were acknowledged. The speakers discussed the specific challenges faced by countries like Bangladesh and Afghanistan in terms of accessing technology and building the necessary capacity. Opportunities for codes of conduct that can be adapted to different contexts were also explored.

Consulting children and young people was highlighted as an important approach to addressing online safety issues. The speakers emphasized the need to understand how children and young people feel about these issues and to learn from approaches to regulation that have been successful.

Amy Crocker, one of the speakers, encouraged people interested in children’s rights issues to join the Dynamic Coalition and continue similar conversations. Flyers and a QR code were mentioned as ways to sign up for the mailing list. The importance of creating more space within the IGF for discussing children’s rights issues was also emphasized.

In conclusion, the event highlighted the significant importance of protecting children’s digital rights and creating a safe and secure online environment for them. It emphasized the need for proactive and preventive internet governance, effective implementation of digital policies, public understanding of technology, empowering parents, trust, collective responsibility, regulation along with education and awareness, differentiated regulation based on context, transparency, and collaborative efforts. The challenges faced by developing countries were acknowledged, and the involvement of children and young people was seen as essential in addressing online safety issues.

Ahmad Karim

In a discussion concerning the design of advancing technology, Ahmad Karim, representing the UN Women Regional Office for Asia and the Pacific, stressed the importance of carefully considering the needs of girls, young adults, females, and marginalized and fragile groups. It was noted that, in such discussions, there is often a tendency to overlook gender-related issues, which indicates a gender-blind approach.

Another argument put forth during the discussion underscored the significance of making the design of the metaverse and technologies more considerate towards marginalized and fragile groups, especially girls and women. The rapid advancements in technology were acknowledged as having disproportionate effects on females and marginalized sectors of society. It was highlighted that national laws frequently do not adequately account for the specific needs and challenges faced by these groups.

The supporting evidence provided includes the fact that girls, young adults, and women are often underrepresented and encounter barriers in accessing and benefiting from technological advancements. Additionally, marginalized and fragile groups, such as those from low-income backgrounds or with disabilities, are particularly vulnerable to exclusion and discrimination in the design and implementation of technology.

The conclusion drawn from the discussion is that there is an urgent need for greater attention and inclusivity in the design of advancing technology. Consideration must be given to the unique needs and challenges faced by girls, young adults, females, and marginalized and fragile groups. It is imperative that national laws and policies reflect these considerations and ensure that these groups are not left behind in the technological progress.

This discussion highlights the significance of addressing gender inequality and reducing inequalities in the design and implementation of technology. It sheds light on the potential pitfalls and repercussions of disregarding the needs of marginalized and fragile groups, and calls for a more inclusive and equitable approach to technological advancements.

Tasneet Choudhury

During the discussion, the speakers highlighted the importance of ensuring the protection and promotion of child rights within AI strategies, policies, and ethical guidelines. They particularly emphasized the significance of these efforts in developing countries, such as Bangladesh. Both speakers stressed the need to include provisions that safeguard child rights in AI policies, especially in nations that are still in the process of development.

The speakers also connected their arguments to the Sustainable Development Goals (SDGs), specifically SDG 4: Quality Education and SDG 16: Peace, Justice, and Strong Institutions. They proposed that by embedding measures to protect child rights in AI strategies and policies, countries can contribute to the achievement of these SDGs. This link between AI development and the attainment of global goals highlights AI’s potential role in promoting inclusive and sustainable development.

Although no specific supporting facts were mentioned during the discussion, the speakers expressed a neutral sentiment towards the topic. This indicates their desire for a balanced and equitable approach to integrating child rights into AI strategies and policies. By addressing this issue neutrally, the speakers emphasized the need for a comprehensive and ethical framework that protects the rights and well-being of children in the context of AI development.

One notable observation from the analysis is the focus on child rights in the discussion of AI policies. This underscores the growing recognition of the potential risks and ethical implications that AI may pose for children, particularly in countries with limited resources and regulations. The emphasis on child rights serves as a reminder that as AI continues to advance, it is crucial to ensure that these technologies are developed with the best interests of children in mind.

In conclusion, the discussion underscored the importance of protecting and upholding child rights within AI strategies, policies, and ethical guidelines. The speakers highlighted the specific significance of this endeavor in developing countries like Bangladesh. The incorporation of child rights in AI policies aligns with the Sustainable Development Goals of Quality Education and Peace, Justice, and Strong Institutions. The neutral sentiment expressed by both speakers indicates the need for a balanced approach to addressing this issue. Overall, the discussion shed light on the need for a comprehensive and ethical framework that safeguards the rights of children amidst the development of AI technologies.

Jenna

Children today are immersed in the online world from a very young age, practically being born with access to the internet and technology. This exposure to the digital age has led to an increased need for trust in this new environment. Trust is seen as a cornerstone of the digital age, particularly as we rely on technology for almost every aspect of our lives. Without trust, our reliance on technology becomes more precarious.

Creating a reliable and ethical digital environment for younger generations requires imparting fundamental digital knowledge and nurturing trust. Building trust and instilling digital literacy are essential steps in safeguarding children online. Parents play a crucial role in this process, but it is also a shared responsibility that extends to all stakeholders. Informed parents are key as they are often the first line of defense for children facing challenges online. However, they cannot do it alone, and it is important for all stakeholders to be aware of their responsibility in protecting younger generations.

The challenges faced by teenagers in the online world today are more multifaceted and harmful than ever before. Cyberbullying has evolved from early internet flaming and email harassment into more advanced forms such as cyberstalking and doxing. The rise of generative AI has made it easier to create hateful image-based abuse, adding to growing concerns about online safety. Addressing these issues effectively and efficiently is essential to ensuring the well-being of young people online.

The approach to online safety varies across jurisdictions, with each adopting its own strategies and measures. For example, Australia has an industry code in place, while Singapore employs a government-driven approach. This diversity highlights the need for clear definitions and standards regarding online safety threats. A cohesive understanding of these threats is imperative to combatting them effectively and ensuring consistency across regions.

Capacity building is essential for addressing the challenges of the digital age. Empowering young people and ensuring their voices are heard can lead to a better understanding of their needs and concerns. Additionally, understanding the technical aspects of internet governance is vital in developing effective solutions to address issues of online safety and security.

Inclusion and diversity are crucial in creating a safe online space. It is important to include the voices of different stakeholders and ensure that everyone has a seat at the table. Language can be a barrier, causing loss in translation, so efforts must be made to overcome this and make conversations more inclusive.

The perspective and insights of young people are valued in discussions on gender and technology. Gaining fresh and unique insights from the younger generation can contribute to the development of more inclusive and gender-responsive approaches. Jenna, a participant in the discussion, highlighted the need to engage young people in discussions related to explicit content and self-expression, as well as providing safe spaces for their voices to be heard.

Modernizing existing legal frameworks is seen as a more effective approach to addressing the impacts of AI and other technological advancements. Rather than a single legislative solution, updating legislation such as the Broadcasting Act, Consumer Protection Act, and Competition Act is seen as crucial in integrating present issues and adapting to the digital age.

Collaboration among stakeholders is essential for success. Capacity building requires research support, and the cooperation of multiple stakeholders is crucial in terms of legislation and regulations. By working together and leveraging each other’s strengths, stakeholders can more effectively address the challenges faced in the digital world.

Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical community possesses valuable knowledge and insights that can contribute to the development of effective policies. However, it is acknowledged that their involvement may not always be the best fit for all policy-making decisions. Striking a balance between technical expertise and broader considerations is key to ensuring policies are robust and comprehensive.

In conclusion, children today are growing up in a digital age where they are exposed to the internet and technology from a young age. Building a reliable and ethical digital environment requires imparting digital knowledge and nurturing trust. Safeguarding younger generations online is a shared responsibility, requiring the involvement of all stakeholders. The challenges faced by teenagers today, such as cyberbullying and hate speech, are advanced and harmful. Different jurisdictions have varying approaches to online safety, emphasizing the need for clear definitions and standards. Capacity building and the inclusion of diverse voices are crucial in creating a safe online space. The perspective and insights of young people are valuable in discussions on gender and technology. Modernizing existing legal frameworks is advocated, and engaging young people in discussions on explicit content and self-expression is important. Collaboration among stakeholders and the inclusion of the technical community in policy-making processes are considered essential for success in addressing the impacts of the digital age.

Larry Magid

In the analysis, the speakers engage in a discussion regarding the delicate balance between protecting children and upholding their rights. Magid argues that protection and children’s rights are sometimes in conflict, citing examples of proposed US laws that could suppress children’s rights under the guise of protection. He also highlights the UN Convention, which guarantees children’s rights to freedom of expression, participation, and more.

On the other side of the debate, another speaker opposes legislation that infringes upon children’s rights. They point out instances where such legislation may limit children’s rights, such as requiring parental permission for individuals under 18 to access the internet. Their sentiment towards these laws is negative.

Lastly, a speaker emphasises the need for a balanced approach to regulation, one that can protect and ensure children’s rights while acknowledging the inherent risks involved in being active in the world. They argue for a fair equilibrium between rights and protection. Their sentiment remains neutral.

Throughout the analysis, the speakers recognize the challenge in finding the proper balance between protecting children and preserving their rights. The discussion highlights the complexities and potential conflicts that arise in this area, and stresses the importance of striking a balance that safeguards children’s well-being while still allowing them to exercise their rights and freedoms.

Katarzyna Staciewa

In a recent discussion focusing on the relationship between the metaverse and various sectors such as criminology and child safety, Katarzyna Staciewa, a representative from the National Research Institute in Poland, shared her insights and emphasized the need for further discussions and research in criminology and other problematic sectors. Staciewa drew upon her experiences in law enforcement and criminology to support her argument.

Staciewa discussed her research on the metaverse, highlighting its significance in guiding development in developing countries. The metaverse, an immersive virtual reality space, has the potential to shape the future of these countries by offering new opportunities and addressing socio-economic challenges. Staciewa’s positive sentiment towards the metaverse underscored its potential as a tool for fostering quality education and promoting peace, justice, and strong institutions, as outlined in the relevant Sustainable Development Goals (SDGs).

However, concerns were raised during the discussion regarding the potential misuse of the metaverse and AI technology, particularly in relation to child safety. Staciewa analyzed the darknet and shed light on groups with a potential sexual interest in children, revealing alarming trends. The risks associated with the metaverse lie in the possibility of AI-generated child sexual abuse material (CSAM) and the potential for existing CSAM to be transformed into virtual reality or metaverse frames. The negative sentiment expressed by Staciewa and others reflected the urgency of addressing these risks and preventing harm to vulnerable children.

The speakers placed strong emphasis on the importance of research in taking appropriate actions to ensure child safety. Staciewa’s research findings highlighted the constant revictimization faced by child victims, further underscoring the need for comprehensive measures to protect them. By conducting further research in the field of child safety and child rights, stakeholders can gain a deeper understanding of the challenges posed by the metaverse and AI technology and develop effective strategies to mitigate these risks.

In conclusion, the discussion on the metaverse and its impact on various sectors, including criminology and child safety, highlighted the need for more research and discussion to harness the potential of the metaverse while safeguarding vulnerable populations. While acknowledging the metaverse’s ability to guide development in developing countries and its positive impact on education and institutions, speakers expressed concern about the possibility of misuse, particularly with regard to child safety. The importance of research in understanding and addressing these risks was strongly emphasized, particularly in the context of the continued revictimization of child victims.

Patrick

During the discussion on child safety and online policies, the speakers emphasised the importance of taking a balanced approach. While regulation was acknowledged as a crucial tool in ensuring child safety, the speakers also highlighted the significance of prevention, education, and awareness.

It was noted that regulation often receives more attention due to its visibility as a commitment to child safety. However, the lack of proportional investment in prevention efforts, such as awareness-raising and education, was identified as a gap.

Addressing the specific needs of children in relation to their evolving capacities and contexts was deemed crucial. A differentiated approach to regulation was recommended, taking into consideration the diverse services and opportunities available for children to learn digital skills. The household environment, geographical context, and access to non-digital services were identified as factors that influence children’s evolving capacities.

A unified understanding and commitment to child rights were highlighted as prerequisites for effective regulation. The speakers pointed out that there is often a significant variation in how child rights are interpreted or emphasised in different regional, cultural, or religious contexts. It was stressed that a transparent commitment and culture of child rights are necessary from industries, businesses, and governments for any successful regulation to be established.

The tendency of developing countries to adopt policies and legislation from key countries without critically analysing the unique challenges they face was criticised. The speakers observed this trend in policy-making from Southern Africa to North Africa and the Asia Pacific region. The need for developing countries to contextualise policies and legislation according to their own specific circumstances was emphasised.

An issue of concern raised during the discussion was the reluctance of countries to update their legislation dealing with sexual violence. The process for legislation update was noted to be lengthy, often taking up to five to ten years. This delay was seen as a significant barrier to effectively addressing the issue and protecting children from sexual violence.

The role of industries and companies in ensuring child safety was also highlighted. It was advocated that industries should act as frontrunners in adopting definitions and staying updated on technologically enhanced crimes, such as AI-generated child sexual abuse material (CSAM). The speakers argued that industries should not wait for national policies to change but should instead take initiative in adhering to certain definitions and guidelines.

The importance of engaging with children and listening to their experiences and voices in different contexts was emphasised. The speakers stressed that children should have a critical say in the internet space, and adults should be open to challenging their own thinking and assumptions. Meaningful engagement with children was seen as essential to understanding their needs and desires in using the internet safely.

In addition, the speakers highlighted the need for cross-sector participation in discussing internet safety. They recommended involving experts from various fields, such as criminologists, educators, social workers, public health specialists, violence prevention experts, and child rights legal experts. A holistic and interdisciplinary approach was deemed necessary to address the complex issue of internet safety effectively.

Overall, the discussion on child safety and online policies emphasised the need for a balanced approach, taking into account regulation, prevention, education, and awareness. The importance of considering the evolving capacities and contexts of children, a unified understanding and commitment to child rights, and the role of industries and companies in taking initiative were also highlighted. Additionally, the speakers stressed the significance of engaging with children and adopting a cross-sector approach to ensure internet safety.

Andrew Campling

The discussions revolve around the significant impact that algorithms have on child safety in the digital realm. One particularly tragic incident occurred in the UK, where a child took their own life after being exposed to suicide-related content recommended by an algorithm. This heartbreaking event highlights the dangerous potential of algorithms to make malicious content more accessible, leading to harmful consequences for children.

One key argument suggests that restrictions should be placed on surveillance capitalism as it applies to children. The aim is to prevent the exposure of children to malicious content by prohibiting the gathering of data from known child users on platforms. These restrictions aim to protect children from potential harms caused by algorithmic recommendations of harmful content.

Another concerning issue raised during these discussions is the use of AI models to generate Child Sexual Abuse Material (CSAM). It is alarming that in some countries, this AI-generated CSAM is not yet considered illegal. The argument is that both the AI models used in generating CSAM and the circulation of prompts to create such content should be made illegal. There is a clear need for legal measures to address this concerning loophole and protect children from the creation and circulation of CSAM.

Furthermore, it is argued that platforms have a responsibility towards their users, particularly in light of the rapid pace of technological change. It is suggested that platforms should impose a duty of care on themselves to ensure the safety and well-being of their users. This duty of care would help manage the risks associated with algorithmic recommendations and the potential harms they could cause to vulnerable individuals, especially children. Importantly, the argument highlights the difficulty regulators face in keeping up with the ever-evolving technology, making it crucial for platforms to step up and take responsibility.

In conclusion, the discussions surrounding the impact of algorithms on child safety in the digital realm reveal significant concerns and arguments. The tragic incident of a child’s suicide underscores the urgency of addressing the issue. Suggestions include imposing restrictions on surveillance capitalism as it applies to children, making AI-generated CSAM illegal, and holding platforms accountable for their users’ safety. These measures aim to protect children and ensure a safer digital environment for their well-being.

Amyana

The analysis addresses several concerns regarding child protection and the legal framework surrounding it. Firstly, there is concern about the unequal application of international standards for child protection, particularly between children from the Global South and the Global North. This suggests that children in developing countries may not receive the same level of protection as those in more developed regions. Factors such as resource distribution, economic disparities, and varying levels of political commitment contribute to this discrepancy in child protection standards.

Another notable concern highlighted in the analysis is the inadequacy of current legislation in dealing with images of child abuse created by artificial intelligence (AI). As technology advances, AI is increasingly being used to generate explicit and harmful content involving children. However, existing laws appear ineffective in addressing the complexities associated with such content, raising questions about the efficacy of the legal framework in the face of rapidly evolving technology.

On a positive note, there is support for taking proactive measures and demanding better protection measures from online platforms. Efforts are being made to provide guidelines and recommendations to agencies working with children and adolescents, aimed at enhancing child protection in the digital space and promoting the well-being of young individuals online. This demonstrates an awareness of the need to keep pace with technological advancements and adapt legal frameworks accordingly.

Overall, the analysis underscores the importance of addressing the unequal application of international standards for child protection and the challenges posed by AI-generated images of child abuse. It emphasises the need for updated legislation that aligns with emerging technologies, while also advocating for proactive measures to enhance protection on online platforms. These insights provide valuable considerations for policymakers, child protection agencies, and stakeholders working towards establishing robust and inclusive frameworks for child protection globally.

Jim

The discussion emphasized the importance of regulating and supporting internet technology in developing countries, as evidenced by the interest and concern of participants from regions such as Bangladesh and Kabul University. This real-world engagement highlights the relevance and urgency of the issue in developing regions.

Jim, during the discussion, summarised and acknowledged the questions raised by participants from developing nations, demonstrating his support for addressing the challenges and needs specific to these countries. He stressed the need to consider these perspectives when dealing with the issues surrounding internet technology in developing countries. This recognition of diverse needs and experiences reflects a commitment to inclusivity and ensuring that solutions are tailored to the circumstances of each country.

The overall sentiment observed in the discussion was neutral to positive. This indicates a recognition of the importance of regulating and supporting internet technology in developing countries, and a willingness to address the challenges and concerns associated with it. The positive sentiment suggests support for efforts to enhance access to, and the effectiveness of, internet technology in these regions, contributing to the United Nations Sustainable Development Goals of Industry, Innovation and Infrastructure (SDG 9) and Reduced Inequalities (SDG 10).

In conclusion, the discussion highlights the crucial role of regulation and support for internet technology in developing countries. The participation and engagement of individuals from these regions further validate the significance and necessity of addressing their specific needs and challenges. By considering the perspectives of those in developing nations and taking appropriate actions to bridge the digital divide, we can work towards achieving a more inclusive and equitable global digital landscape.

Liz

In a recent discussion on online safety, Microsoft emphasised its responsibility in protecting users, particularly children, from harmful content. They acknowledged that tailored safety measures, based on the type of service, are necessary for an effective approach. However, they also highlighted the importance of striking a balance between safety and considerations for privacy and freedom of expression.

One speaker raised an interesting point about the potential risks of a “one size fits all” approach to addressing online safety. They argued that different services, such as gaming or professional social networks, require context-specific interventions. Implementing broad-scoped regulation could inadvertently capture services that have unique safety requirements.

Both legislation and voluntary actions were deemed necessary to address children’s online safety. Microsoft highlighted their focus on building safety and privacy by design. By incorporating safety measures from the very beginning during product development, they aim to create a safer online environment for users.

However, concerns were also raised about the current state of legislation related to online safety and privacy. It was noted that legislative efforts often lack a holistic approach and can sometimes contradict each other. Some safety and privacy legislation contains concepts that may not optimise online safety measures.

Microsoft also recognised the risks posed by AI-generated child sexual abuse material (CSAM) and emphasised the need for responsible AI practices. They are actively considering these risks in their approach to ensure the responsible use of AI technologies.

The discussion strongly advocated for the importance of regulation in addressing online harms. Microsoft believes that effective regulation and a whole society approach are crucial in tackling the various challenges posed by online safety. They emphasised the need for ongoing collaboration with experts and stakeholders to continuously improve online child safety measures and access controls.

Another key aspect discussed was the need for a better understanding of the gendered impacts of technology. It was highlighted that current research lacks a comprehensive understanding of youth experiences, particularly for females and different cultures. Additional research, empowerment, and capacity building were suggested as ways to better understand the gendered implications of technology.

In conclusion, the discussion stressed the importance of collaboration, open-mindedness, and continuous learning in addressing online safety. Microsoft’s commitment to protecting users, especially children, from harmful content was evident in their approach to building safety and privacy by design. The speakers highlighted the complexities of the topic and emphasised the need for context-specific interventions and effective regulation to ensure a safer online environment for all users.

Session transcript

Amy Crocker:
Thank you very much. Sorry for the short delay, but it was a good opportunity to bring more people into the room. So thank you very much for being here for the 2023 session of the Dynamic Coalition on Children’s Rights in the Digital Environment. I know you can go and navigate many paths in the agenda, the impressive agenda of the IGF, and so we’re really happy that you are here. There are also, as we speak, some similar child rights-focused sessions going on, so thank you for choosing this, and I hope that you’ll have the opportunity to perhaps watch online some of the other sessions and engage with the speakers in those sessions as well. So as we all know, the theme for this year’s IGF is the internet we want, empowering all people, and the Dynamic Coalition, which I will explain a little bit and we can talk about throughout this session, has a clear starting point that for us as children’s rights advocates, there can be no empowerment on or through the internet without a foundation of safety, and the internet we want and the internet we need is one where children’s rights are guaranteed, and that includes speaking to them about their views about their digital lives and the online world. And of course that’s not just me or our coalition or my fellow panelists saying this. We can also refer, and for those of you coming from the previous session on digital rights in different regions around the world, we have now something called the General Comment Number 25 to the UN Convention on the Rights of the Child that recognizes children’s rights in relation to the digital environment, and that was adopted two years ago. And this obliges state parties to protect children in digital environments from all forms of exploitation and abuse. So what this means is the rights that children have in the offline world, if we can call it that, are also guaranteed online, and I think this is crucial for the context in which we are meeting today.
So in that context, when we talk about the AI, the metaverse, new technologies, frontier technologies, as we’ve seen at this IGF, it’s clearly at the forefront of discussion. It’s across the agenda very heavily. There are a lot of sessions talking about regulation, frameworks, guidance, opportunities, risks of these kind of new technologies, and we know they are increasingly embedded in the lives of digital platform users worldwide. So we see that legislation, digital policy, safety policy, design practice, digital experiences are at a critical moment of transition, and innovation is not new. It’s core to our human societies. It does actually define us. There is a pace of change, perhaps, that we’re seeing right now that requires us to really stop and pay attention, and consider what these implications may be, and how we can harness the positive opportunities for the next generations. Yes, indeed, I think we all agree, and a starting point for this panel, we will be balancing this conversation about the transformative power of technologies, but also looking at how we mitigate the risks, and address harms, some of which we can talk about very directly and concretely today, some of which we can probably predict, and some which we cannot predict. This is the nature of the evolving environment. We do know that governments often find themselves playing catch-up. There is a huge regulatory debate right now, but in many ways, in too many ways, it’s responsive to the problem after it’s happened. We’ll be talking a little bit about moving to a more preventive upstream model of safety by design. How do we prevent things happening before they take place, and how can we build, at the same time, those environments and communities online for children and everyone to thrive, and be well, and progress. We’ve also seen that some online companies, technology providers, are not equal in that understanding your commitment to design.
I think that’s something that’s crucial for us to address. How can we all work with companies of different sizes to actually scale, and share best practice and knowledge in these areas. The questions I think that we need to ask is how we move from talk to action, how we move from policy to practice. This is also something that has come up in many of the sessions I have attended. We need to act. We need to be smart about the policies and laws we develop, but really the proof is in the implementation. The proof is in how we actually use these for the benefit of society, and how we localize these and make them relevant to the specific context in which we are implementing these policies and practices. We also need to think very seriously about how we assess and mitigate the risks of new technologies, so that we can assure safety, but also champion opportunity that tech provides for millions, billions of people living on this planet. Some of the goals of the session are to identify the main impacts of AI and new technologies on children globally, understand, hear from one young panelist, but also I see some younger participants in the room. I’m really looking forward to hearing your views on this, and to raise awareness for a child rights-based approach to AI-based service development. Perhaps at the end I’ll take the opportunity to talk a little bit about the dynamic coalition on children’s rights as a vehicle within the IGF to really bring together organizations interested in ensuring children’s rights are mainstream within Internet governance policies worldwide, and we would love you to join us. We have some flyers and some QR codes, so you can’t escape. You don’t have to write anything down, and you can consider joining the coalition so we can actually move forward. I’m really pleased to introduce our speakers as well. We have two speakers online and three speakers sitting next to me. 
Perhaps I’ll start with the online participants, since they’ve joined us very early, so they get the special prize. I have Patrick Burton, who is the Executive Director of the Center for Justice and Crime Prevention in South Africa. Patrick, good morning. I have Sophie Poehler. She’s a media education consultant at the German Children’s Fund. Thank you very much for joining us, Sophie. Here in the room I have, to my right, Liz Thomas, who is the Director of Public Policy and Digital Safety at Microsoft. I have Jenna Fung, who is a youth advocate for youth-led initiatives online. She’s representing the Asia-Pacific IGF, Youth IGF, and she’s part of the Youth Track Organizing Team as well. And last, but very much not least, I have Katz Takeda, who is the Executive Director of Child Fund Japan, who can also give us a perspective from the wonderful country in which we are attending this event. So thank you very much. Before we go forward, I wanted to just take a show of hands, because this is a round table. The seating makes it a little bit harder to make it a round table. So, you know, a bit of audience participation. So perhaps you could raise your hand if you’re from civil society in the room. And from government? Raised hand. From private sector? Good to have you. And from any other? And from the different regions? We have some, I think, some colleagues from Asia-Pacific region and European. Yeah. From any colleagues from Middle East? Hello. Thank you for joining us. Latin America? No. And Europe? Some Europe? Well, I’m from Europe, so. Yeah. And the Americas. Yeah. Great. Great to have you. So we have, we can have a global conversation, I think. We are lacking some regions, but it’s really great to have you all here. Thank you. Thank you for being here. I should introduce myself. My name is Amy Crocker. I work for an organization, please, called ECPAT International. We are a global civil society network dedicated to ending the sexual exploitation of children.
And I’m here as moderator today, as the chair or coordinator of the Dynamic Coalition on Children’s Rights in the digital environment here at the IGF. So this is a 90-minute roundtable. You’ve already, I’ve already taken up a lot of the time, so we will go on. We’re going to organize this in terms of three themes. And what we’d like is, you know, within each theme to hear your reflections, take your questions. So we make this as much of a conversation as we can. And the first theme is broad, but crucial. And it’s on safety and children’s rights being a cornerstone of the internet that we want to need. This is our proposition, but it’s also a challenge, I think, to the internet governance community and to governments and companies and society worldwide. So what I’m going to do is perhaps start with you, Sophie, online, if I may. And perhaps you could tell us a little bit about your views on why children’s rights are so, digital rights, are so fundamental to our construction of a safe, equitable and secure online world.

Sophie:
Yes, thanks, Amy. And hello, everybody from Germany. It’s very early here in the morning. But I hope I can give you some insights in the German perspective on children’s rights in the digital world. Maybe just a quick background. I work in the coordination office for children’s rights at the German Children’s Fund, and we accompany the strategies of the German Children’s Union and the Council of Europe on the Rights of the Child in Germany here. And among other things with a strong focus on children’s rights in the digital world. Yes, Amy, you’ve already mentioned it. The General Comment 25 published by the UN Committee on the Rights of the Child in 2021 sums up the importance of children’s digital rights and provides a very comprehensive framework for this context. And yeah, the rights are crucial really to protect children from harm, but also promote their access to information and empower them with digital skills. And also important is that the rights of provision, protection and participation must be given equal consideration and are really of fundamental importance for the digital world. So upholding these rights is not only an ethical imperative, but also an investment in the well-being of future generations and the society as a whole. And maybe a quick German perspective, which is quite concrete. We have, as German Children’s Fund, we have looked into needs and challenges voiced by children when it comes to risks arising from online interaction. We have analyzed the research field on this question on how children deal with interaction risks such as insults, bullying or harassment in online environments. And therefore we’ve conducted a meta-research and compiled an overview of relevant studies with a focus on German children, how they develop coping strategies and how we can promote this, focused on the age group from nine to 13. And we’ve gained some interesting findings from the reviewed studies when it comes to children’s perspectives on online safety.
Just a quick disclaimer, this was not in the context of artificial intelligence, but we still consider the results relevant to our discussion today. And I’d like to pick some important points for the discussion today and later maybe. The younger the children, the more important it is for them to have a social safety net. In case of online risks, they particularly want support from parents, confidants or teachers. And especially, particularly parents are perceived as the most significant and desired safety contact persons for young children. As children grow older, they increasingly resort to technical strategies to deal with online risks, such as blocking, reporting, deleting comments, enabling comment function. And this points to the considerable importance of the safe design in online spaces, which must be adapted to the needs of each age group. The youngsters voiced that platform-related reporting functions are seen critically by them, because the platform side processing of reports takes too long in their eyes and sometimes even fails to occur altogether. They want more information on how to report people, how to block people, and how to protect themselves from uncomfortable or risky interactions, especially sexual interactions. And in any case, they need more education to make more informed decisions when coping. And last but not least, two points from a study from THORN, conducted two years ago in the US, so not from Germany, but they have some interesting findings when it comes to reporting. First, anonymity plays an important role for adolescents, especially for young girls. They report that they would be more likely to use technical tools if they could be sure their report would remain anonymous. And very interesting, at the same time, this study results also show that adolescents would welcome a human connection in the reporting process in addition to anonymity.
So the big majority of the 9 to 12 year olds we’ve looked at said they would be more willing to use reporting tools that connect users with a human than with an automatic system. And yeah, just a quick insight, there are more findings, but those highlight the importance really of human resources, as well as safe design for children in coping with risks online.

Amy Crocker:
Thank you, Sophie. And you’ve touched upon, you know, the second and third thing that we would be talking about, which is on the one side regulation and policy for safety, and that can be, you know, government policy, platform policy, and then also the issue of safe design, and we’ll go into those. And I think, you know, it’s really interesting, you know, obviously drawing on research conducted with children, when we take a rights-based approach, you said, you know, you won’t be talking, this study wasn’t specifically talking about AI, but indeed, if we take a rights-based approach, it is about rights, perhaps about principles and values, and the technology itself should be responding to those needs rather than the other way around. So I think before we go on to sort of some of the other issues around regulation and policy, I also want to turn to you, Katz, if I may, to talk about, I mean, we’ve heard from Germany, to talk about in the Japanese perspective, your experience of doing your work based on children’s rights, and what that means in terms of creating safety nets, meeting the needs of children, understanding their thoughts, so that you can help advocate for them based on their rights.

Katz:
Thank you for inviting me to this IGF, and especially this dynamic coalition session. So let me share some of the facts from Japan. But before then, I have to say, child rights is a fundamental part of this, I would say, work and societies everywhere. And we are, as a child fund, as child-focused agencies, we are promoting child rights everywhere. But we face several challenges so far. So we conducted some kind of omnibus survey recently. It’s in August. So this is age from 15 to 75 years old. It’s a quite long, quite wide range. This is a kind of image of the public opinion. So we have a question about the definition of CSAM, and also including some of the questions about AI. So, let me share some of the challenges here. Then, the results said is, how say, is some of the internal conflict between the human rights, especially the child rights, and also freedom of expression. So, this is maybe never-ending conflict everywhere, maybe not only in Japan, some other countries. I want to know some other countries’ practices or situation later on. But we think is we need to kind of balance between the two conflicts. Otherwise, we cannot continue to never-ending discussion between the child rights and freedom of expression. And secondly, I have to say, I want to share this one, some kind of misunderstanding or misinterpretation of human rights. So, we ask this kind of a virtual CSAM, and also CSEM, to the responders. Is some of the respond and some comments, narrative comments said is a virtual CSAM, CSEM, will prevent real crime. So, this is kind of a misunderstanding or misinterpretation of human rights, especially the child rights. We need more awareness or education to the public for this one. And thirdly, so I want to share this one, is the one of the result of the public opinion is the question about AIs. Is many of them is how say, we should regulate AI under the context of the CSAM, CSEM.
But still is a minority is disagree on this how say, regulation. But interestingly, is 20% of the how say, answer is respond is we don’t know or I don’t know about the AI matters or AI risk. This is quite interesting and also some kind of risk in the future. So probably we should more focus how say, awareness to the public about the risk of the AI and also the opportunity of AI in the future. So that is one of the, some of our results. So, I just how say, share these three points but maybe later on, so I want to hear from you about some other thoughts or insight or some kind of a result or research about the similar work on your country. So, yeah, that’s it. Thank you.

Amy Crocker:
Thank you. And I think you pick up on a really crucial point is how children’s rights are understood and sort of made real within societies, how they’re realized, which often will be dependent on a local context based on principles that we have agreed on globally. But also helping people understand technology and the risks and opportunities. And I think this is a challenge and maybe something Liz, you will speak to later, how people, how you make technology explainable enough that people understand the different sides of it when they’re using it. And indeed, I think we will talk a little bit later about parents and the empowerment of parents. And I think this is something that has come up many times in conversations I’ve been hearing this week. So, speaking about children’s rights, Jenna, I’ll turn to you to tell us that we’re all talking rubbish. No, I’d love to hear your sort of, your perspective on how your experience of how children’s rights can be used to advocate for youth and whether you think we’re doing that in the right way.

Jenna:
Sure, I will try my best. As I work so closely with the youth in my own region in Asia Pacific, most of the people who are involved in this YIGF, they kind of have some sort of knowledge about what we’re doing here. And the youth that is engaged in those conversations, they’re over 18. But then, as we talk about children, they’re very young. And so, today I will add some and bring out some points from those outcome that we have discussed in Asia Pacific, but try to, we’ll try to have some more representation and we definitely not represent like teenager, which I personally see that they are the one that face a lot of challenges online these days. But I would touch on it a little bit later as I prepare some notes here and hope that I won’t disappoint the audience here today. I believe youth, not youth, sorry, correct myself, kids today, they live and breathe the online world. They’re practically born with internet and tech gadgets in the hand, which many of us don’t really get to experience or dream of back in the day, even myself as a Gen Z-er. I don’t get to experience that. I only get to get introduced to a computer or internet when I was in kindergarten. But kids these days, they have their smartphones or iPads in hand. As soon as their parents play the baby shark, they stop crying, right? That’s what they’re dealing with these days. It may be a bit dramatic to frame it this way, but before they’re born, the photos, everything, are filling up the parents’ social media feed. That’s basically how I find out my high school buddies become parents, and probably because their parents are Gen Z and posting on social media a lot. These kids today, they don’t really get to choose because they’re not born yet, but they’re already online. So it complicates our conversation even more. It might not always be the case because there are people who choose to be online, but somehow it is happening a lot more because of how different generations will use internet or technologies.
And I think with all this, we must talk about trust. This is one of the biggest thing we also touch on a lot in the Asia-Pacific Youth IGF and within our own youth community as well because this is basically the bedrock of digital age, we believe. In a world where tech, we rely on technology for almost everything, I guess we don’t have to explain too much after the pandemic. Without the internet or technologies, we can’t really live during that time. So trust is really become the glue that holds everything together. And digital age make trust really crucial against the backdrop of growing reliance on technologies and possible risks related to data breaches, data privacy problems and unethical practices. So building trust and imparting fundamental digital knowledge are essential steps in creating a reliable and ethically responsible digital environment for the younger generations. Our society has evolved a lot to embrace diversity in terms of backgrounds, culture, sexual orientations, and more. With the progress that we have accomplished, potential harms and risks multiply. And the challenges of teenager that face and encounter today probably way more multifaceted than those of the past. And I myself can’t even relate. And I really hope that we will have a mechanism to engage those teenager, technically they’re underage, to be in the conversation so I can hear from them. I can’t speak for them because I am not them. Naming some classic examples from that, cyberbullying. We’re still talking about it. In early age, I mean early stage of the internet, you know, flaming, trolling, harassment through emails. Now it’s different, it’s not more than emails. Where today, where younger generations facing more than just social media bullying, it’s now that they’re encompassing a wider range of challenges like hate speech, doxing, cyberstalking, or one of the most concerning ones, like image-based abuse, especially with the rise of generative AI.
It just makes everything relatively easier to do. So that’s one of the things I think: the challenges they encounter are just more complicated than before. And when underage users face such challenges, it’s very natural for them to turn to their parents. I mean, it’s just natural to talk to someone you trust. Sometimes it may not be their own parent, but someone they trust. But to provide a safety net for the underage, the guardians can’t do it alone, and they must know something. Not all parents or guardians have the same level of knowledge as anyone in this room. And there are nuances in the risks: what young children face and what teenagers face are totally different things. I think it’s a responsibility shared by all stakeholders to safeguard the younger generations on this very topic. I should probably stop here and save the rest of my points for when we move on to themes two and three. I hope that I have already brought some new insights from the younger generations, because as I observe, there are only a few youth interested in these kinds of topics, and I hope that I represent a small portion of them here today. Thank you so much.

Amy Crocker:
No, you absolutely did. And you’ve touched upon some really good points that set us up for the next topic, though of course they’re all interrelated. I really liked that you mentioned the word trust. I think this is a really important word in these times: trust in algorithms when we talk about AI, trust in institutions, trust in companies, trust in parents. You know, many children don’t have a trusted adult that they can rely upon to help them. So I think we have a lot of different issues we need to unpack before we go on to talk about everyone’s favorite topic of regulations and policies, just after lunch, when we’re at risk of everyone falling asleep. I’d love to hear from the room if there are any perspectives on how you’ve found building your work upon a basis of children’s rights: useful, challenging, difficult? I could call out, I think, our colleagues from Brazil who just did a wonderful session with videos of children themselves speaking. I don’t know if you’d like to speak, or anyone else in the room, about how you’ve used children’s rights practically in your work to do the work that you do. Just use the microphone, because we have online participants.

Larry Magid:
Yeah, thank you. I’m Larry Magid from Connect Safely. In previous IGFs, we’ve had some workshops that I would co-lead called children’s rights versus child protection, and the tension between the two. We could protect everyone in this room by wrapping you in bubble wrap and never letting you out of your bed, although you would probably die from some bed-related disease. But the point is that being active in the world automatically creates some risk, and clearly being online creates some risk; everyone knows that. So we want to protect children, but at the same time, we want to protect their rights, and sometimes those are in conflict. Where it becomes particularly critical is in the area of legislation. Even the United States, which as you all know has something we call the First Amendment, if you read the First Amendment in the American Constitution, it says nothing about how old you have to be. It doesn’t say people over 18 have the right to free speech. Everyone has the right to free speech. Well, it doesn’t really say that, but that’s how it’s interpreted. But at the same time, there are laws being proposed in America which would, for example, prohibit children under 18 from going online without parental permission. So that means a 17-year-old exploring their sexuality, their politics, their religion, or whatever, would have to go to their parents for the right to express themselves. As everybody here is surely aware, the UN Convention on the Rights of the Child guarantees children the right to freedom of expression, participation, assembly, et cetera. So these are in conflict. Which is not to say that we should allow five-year-olds to look at hardcore pornography; I’m not arguing that we completely enable and empower all children to do all things. But how do we ensure their rights and protect them at the same time without suppressing their rights? 
And frankly, if you were to ask some legislators, at least in the United States, and I think it’s true in other countries, they would favor protection over rights and would take away children’s rights in the name of protection. And it becomes particularly an issue when marginalized groups are engaged in controversial activities, whether it’s politics or transsexual issues or other issues, where their rights are being suppressed by legislation purportedly meant to protect them. So I just think that’s an important backdrop. And even though that workshop is not on the agenda at this IGF, it’s probably more important today than it was even the last time we had that conversation two or three years ago, because, and I can only speak for my country, there is more and more legislation that would essentially deny children their rights to participation online. Thank you.

Amy Crocker:
Thanks. I don’t know if anyone wants to speak to that, but absolutely, there may not be a session on the agenda, but it’s certainly something that has come up many times in the conversations we’ve all been having and at different sessions. And it is a huge challenge we face. I wish I had the answer. In some ways, I feel like we need to embrace those conflicts, because we’re always going to be navigating them. But when we go on to regulation and policy, we need to really critically assess what we’re trying to gain through different regulations and how those should be shaped. I’ll be speaking to that. So we have two questions online. Bangladesh remote hub, maybe you’d like to ask your question while we’re waiting, yeah.

Steve Del Bianco:
Well, thank you, and it’s a follow-up on what Larry Magid pointed out. I’m Steve Del Bianco with NetChoice, and two of the US states which have aggressively attempted to ostensibly protect children extended those protections all the way up to the age of 18, with a requirement that any user of any social media site, even something like YouTube.com, would have to present two forms of government-issued ID to make sure that the services knew that was an adult. And if they were younger than 18, they would have had to show that a legal guardian or parent had given verifiable consent for them to use a site. It’s fine to protect a 13-year-old or a 12-year-old, but it was a little ridiculous applying that to a 17-year-old. My organization, NetChoice, sued two states that had these laws, the state of Arkansas and the state of California, and last month, just a few weeks ago, we obtained a preliminary injunction blocking those states from enforcing those laws. It looks terrible for the tech industry to be suggesting that a state was wrong to try to protect children, but in fact the judges ruled that the states were wrong to do it the way they were doing it. And in that mix will be an argument about the rights of a 17-year-old to access the kind of content that Larry brought up. And since your question was specifically about the rights of the child, if you dive into the document that’s on every other chair, the best interest of the child is supposed to be a balancing test. Whenever I say that, I get heartburn thinking about GDPR, but it’s a balancing test weighing the rights of the child to access and express against the need to protect the child from harm. So I think you bring up the right framing of the question. And I realize that other nations that run into the same problem, Larry and I being in the United States, may not be able to rely upon a court system, a First Amendment, and a Constitution to block a state from going that way. 
But we need to educate lawmakers, or they will write laws that are mainly messaging bills, where they get to claim they’re trying to protect children, when in fact the mechanisms to do it on age verification just don’t exist. Thank you.

Amy Crocker:
Thank you, yeah. I mean, of course we could have a whole week-long session about these topics. In the interest of time, I’m going to move now to the Bangladesh remote hub. There seem to be many of you. Great, please, go ahead. Tell us your question. “I’m from Bangladesh, but my English is poor. I do apologize.” No need to apologize; please, please give us your question.

Tasneet Choudhury:
Hello, all. I am Tasneet Choudhury, Joint Secretary of the Women’s IGF Bangladesh, and a media personality. Dear moderator and guests of today’s event, greetings to all present. Thank you for giving me this opportunity to ask my question: how do we ensure that AI strategies, policies, and ethical guidelines protect and uphold child rights across the world, especially in developing countries like Bangladesh?

Amy Crocker:
Thank you. Thank you so much for the question.

B. Adharsan Baksha:
We have another question from the Bangladesh remote hub. Can we speak? Yes, please. Okay. Thanks a lot to all of you. I’m B. Adharsan Baksha from the Bangladesh Youth IGF. My question is: AI adoption among children can present many real risks, data privacy being chief among them. Popular chatbots like Synapse and MyAI can quickly extract and process vast amounts of personal data, potentially exposing children to cyber threats, targeted advertising, and inappropriate content. How do we ensure a secure cyberspace for children? Thank you.

Amy Crocker:
Thank you very much for those questions. They are big questions, but they lead us very well into the topic of regulation and policy around some of these really challenging child rights and child protection issues. And I’m going to put a question to you, Liz, from Microsoft: what are the risks? Are there risks in a one-size-fits-all approach to dealing with some of these issues? Because clearly we have a number of different harms and, as our colleagues from Bangladesh have just said, different contexts in which we have to consider these issues.

Liz:
Fantastic, thanks so much, Amy. And thank you for the great questions online. It’s awesome to see the remote hub; I didn’t know folks were gathering in different spaces, but that’s brilliant. So, our starting point as Microsoft: we absolutely recognize that we have a responsibility to protect our users, and particularly our youngest users and children, from illegal and harmful online content and conduct. And part of the way in which we have to do that is through that incredibly necessary balancing of rights. So children’s rights in the round, thinking about it as holistically as possible: advancing safety, but also thinking about privacy, as the questions just raised did, around freedom of expression, around access to information, and everything else. And in partial answer to the question that was just raised, I think the way that happens is going to be a combination of an ongoing need for regulations, but also voluntary activities, as we look to build in safety and privacy by design. But for us as Microsoft to really do that balancing effectively, one of the things we really have to think about is differentiation: thinking about the differences between the wide variety of online services that we have. I suspect most of you in the room will be familiar with one or more of the wide variety of Microsoft’s product suite. What we have to really think about, when we’re thinking about gaming versus a professional social network versus productivity tools, is how we tailor our safety interventions to the nature of that service. That’s really at the heart of our approach: how we think about safety and rights in a way that’s proportionate and really, really tailored to the service and the harms in place. And that’s at the heart of our internal standards and the way we think about safety by design as a company. 
And that includes when we think about what’s appropriate in terms of parental controls and the guardrails that are in place, whether we’re thinking about what the business model and the platform architecture look like, or what’s needed given the culture of the service, what we want to foster in terms of user behavior, and the way that we educate users and parents on those services. And we have seen some challenges start to arise internationally where regulation has been really, really broadly scoped, creating that sort of risk of one-size-fits-all requirements. A really good example we see a lot is a real enthusiasm and desire to address some of the well-known issues arising from some of the social media services, but the definitions that come through may actually inadvertently capture a range of other services, with measures that might not be appropriate or proportionate on those services. And so again, we really want to help think through what the appropriate safety measures are, to really think about rights in a holistic way. And then I think that comes a little bit to the points that have just been made about thinking of privacy and safety in isolation as well. Particularly in legislation, thinking about kids’ privacy and safety, we see some kids’ privacy bills and some safety bills, and again, these are not taking that holistic approach. There are also some laws coming through where concepts from safety legislation and concepts from privacy legislation are combined in ways that may not entirely work together. It’s a challenge for us all, because I don’t think there is a perfect regulatory model for this yet; we are all still learning. One of the things that we are starting to see come through more is a real focus on outcomes-based codes. 
And so really thinking about what the flexibility is for different services, within the scope of those codes, to achieve the safety and privacy outcomes that are desired. That does start to create a bit more of a web of granular and complex secondary regulation, but I think it’s the starting point from which we can evolve our approaches: really think systematically about risks, about rights, about impact on kids, and about what that looks like for the products where children are most vulnerable, but also where the opportunities arise. It enables us to think really holistically about risks and the mitigations for those through design and other choices. And I think we are also still learning about what that looks like for some of those products; I know there are folks here at the IGF who are doing some amazing work in this space. One of the things we’ll talk about as we go on, too, is that there is still a need to grow some of the evidence base, particularly on emerging tech, to think about how we do this best; I’ll come to that in the next part of the conversation. But the other piece I just want to flag, as we think about different legal regimes culturally, is that there is a risk that, globally, existing economic and social disparities and other inequities are really exacerbated if regimes are created where kids are unable to access technology. Thank you.

Amy Crocker:
And that really brings together the importance of elevating children’s rights in how we design and how those rights are reflected within policies. And indeed, Patrick, I’ll go to you now. I think it’s interesting; there’s been some talk of fragmentation of regulatory policies, though I’m also told that we shouldn’t be using the word fragmentation in this context. But it is interesting that in the United States, I know it’s been a challenge that you have state-based laws that may conflict with federal laws, and that will be the case in other countries with those kinds of structures as well. I think there’s richness in diversity, perhaps in testing what goes wrong, but regulations take a long time to develop, so we can’t just pivot in one month and decide we’re going to create something new. And I think this is a challenge. So Patrick, you’ve seen this issue from many perspectives, from South Africa and your region, and of course globally. And I know you and I have also spoken in the past about prevention versus regulatory approaches. So I just wonder what your perspective is on differentiated approaches to regulation in different digital spaces, and also on the balance between these different, not conflicting, but different factors.

Patrick:
Yeah, thanks, Amy. And it’s quite hard to come after these amazing speakers, who have taken all your thoughts and put them far more coherently than you could have. So I’ll start off by reiterating what almost every speaker has said: while we speak quite glibly about child rights and what those mean in different contexts, I’m not sure that we can altogether agree on how child rights, even as they are contained in the CRC and in General Comment No. 25, translate into practice in different cultural, religious, national, and geographic contexts. There’s huge variation in how child rights are interpreted, and in where countries or states choose to place the emphasis. Inevitably we see that emphasis being placed on particular rights rather than an equitable embracing of all child rights, and that really translates into the digital space. I apologize, it’s six o’clock, I’m still not altogether coherent. But I also want to start off by saying that I don’t think we can regulate our way out of the challenges that emerging technologies, immersive spaces, and AI present us with. Regulation, we need to bear in mind, is just one of those tools, one of those arrows in our quiver. We often place so much emphasis on regulation, and states, and when I say states, I mean nation states, or states or provinces within national boundaries, place so much emphasis on regulation because they see it as, not an easy win, but a very visible commitment to making sure that children stay safe online, without putting a proportionate investment into, as you say, the prevention side of things: the education, the awareness raising, building the capacity of parents and of children, and building children’s resilience, the one thing that we haven’t spoken about. And so regulation is critical. 
We can’t do away with regulation, but it really is just one component of what we need in order to make sure that children’s rights are realized online. Now, what does that mean for regulation? Liz mentioned this increasing focus on secondary regulation, which is often quite messy. I think there is a lot to be said for that approach, because ultimately platforms and services operate in different ways. There are some global standards: how data is managed, how data is protected, how data is collected, how data is used, for example, relating to children’s privacy online and the right to protection. Those are standard. But at the same time, different services offer different opportunities for children to learn digital skills and to be creative online. We need to recognize that children have different evolving capacities at different ages and in different contexts, and those evolving capacities are largely influenced by the geographical contexts in which they live, by the households they live in, and by the access they have to non-digital services. We know the link between what happens online and what happens offline. And so I think having a differentiated approach makes sense; it is a logical approach. But we can’t wait for that sort of regulatory environment to concretize. Amy, you just summed it up perfectly: regulations take a long time to implement, and we need to learn from the failures of regulation; we need to see what’s working and what isn’t. The same with legislation. You started off the session talking about the gap between legislation and implementation. Well, from the time we start formulating policy to the implementation and the evaluation of that implementation, you’re talking 10 years, by which point we are in a whole different universe in terms of emerging technology. 
And so we need to look at what individual services and platforms can do. And I can’t think about this without thinking that, in order to achieve that, we need to make sure that we are all singing from the same hymn sheet when it comes to what child rights are, and the transparent commitment to and culture of child rights that any business, industry, or government needs to be working from, and the transparency around that. Am I making sense? I mean, hopefully you can bring all that together. I’m going to stop, otherwise I’ll just keep talking.

Amy Crocker:
No, thank you, and I hope you have some coffee or tea by your side. But absolutely, it does make sense. And indeed, I’ll ask Jenna for her input on this, but I think, and we will talk now about a design approach, a child rights-based design approach, that we can’t wait for regulation. There is a strong role for it to provide a framework, a legal basis on which we can have conversations and decide how to act. But each one of us in this room probably has five or ten stories about the uptake of AI models or AI products; we won’t name any in particular. Some of those are good, some are bad, but it’s happening faster than we have the ability to take action. So we need to think very critically about where we go, and actually build those considerations into decision-making processes earlier on in the design and building of products, and that’s what we will go on to. But Jenna, before we do that, and then we can take any reflections or questions from other participants in the room or online: from engaging with youth and through the Youth IGF perspective, what is your view on regulation, not as the solution, but as a part of the solution to some of the challenges we face? How do young people see that? What are the priorities for building a safe and empowering environment?

Jenna:
I’ve prepared some notes around it, of course. But before I respond to your questions or theme two overall, I want to quickly respond to what Patrick mentioned earlier about how cultural factors, or just culture in general, let’s frame it that way, will be so different. Earlier this year, I partnered with a group of volunteers who do policy research, including people from the Bangladesh local hub, and we worked together on a study of how different jurisdictions in Asia-Pacific deal with online safety. Part of what our study found is that Australia adopted industry codes to mitigate this issue, whereas Singapore uses a more government-driven approach. So it kind of reflects the cultural influence on how we approach things. And I just find that it’s really a fact that we have to admit, because the Asia-Pacific especially is really diverse. Myself, as an East Asian, there are things that I can’t completely understand about those who are from Southeast Asia and South Asia, and sometimes we will be unconsciously biased; people from the Western world sometimes do not think of Indians as Asian as well. I find it quite interesting when I hear that from some people. But anyway, that’s my quick response to it. Now I will try to touch on the question that you asked, with the notes I prepared; most of these are part of the outcomes from the discussion we had last month in Brisbane at our annual meeting. To deal with the very topic that we are trying to address today, the youth think that we need clear definitions and scope for all these online safety threats, because people of different backgrounds will have different definitions, and it’s important to have international standards, of course, but also some localization, so it’s relevant to their environment. The other day I was attending a workshop, 
and they were doing capacity building even at a municipal level, because that might be even more effective. Working so closely with youth as a project manager for the Asia Pacific Youth IGF, I figured out that we have to empower them at many levels in order to get their voices heard, especially when we talk about internet governance and child rights online. If they don’t really know the technical aspects, sometimes they will suggest something that is not really relevant. Putting my other hat on, I actually work for a top-level domain registry as well. Sometimes we think that we understand how the technology of the internet works, but then when I talk to the engineers about all these details I have in my head, they’re like, that’s not exactly what it is, but sure. So we need more stakeholders in the conversation, because there’s no way for everyone to understand everything; we need to put all of them together. And circling back here, I’m going too far: if we are trying to bring in the younger voices, I really want to shout out to Bangladesh, actually. They started way far ahead, because I know that they have had this Kids IGF happening in the past two years, which is very progressive. It’s hard to get a five-year-old into our conversation here, because there are different levels, but at a kid’s level, it’s really a good way to start engaging them early. There’s no way for my mom to understand what we are talking about here; I’ve been here for a long time, and she still has no idea what I’m doing. But what we really want to stress is that we need a multi-stakeholder approach, and in order to achieve that, we must have capacity building alongside, trying to make information accessible and using more accessible language as well. 
So people with different levels of knowledge can understand. And sometimes, myself included, we don’t really speak English as our mother tongue, so there’s loss in translation sometimes; that’s also one of the barriers. So if we really want to regulate, I think we really need to bring different voices into the process, and eventually democratize the process.

Amy Crocker:
Thank you so much. You’ve hit on so many important points. I often think of regulation as being top-down, but I love your point about the bottom-up approach, not only among children themselves but in communities, and actually building solutions through that. And when we go on to the safety dimension that helps support that, I think that will be crucial. I know we have, Jim, some collected comments or questions.

Jim:
Yeah, I’ll just summarize it. Just to pick up on the Bangladesh point, the second question was actually from the vice chair of the Bangladesh Youth IGF, so they’re actively engaged there. And between those questions, we also have a question here from Mohammed, who’s an instructor at Kabul University in Afghanistan. As you’re addressing these issues going forward, what about the perspective of developing countries like Bangladesh and Afghanistan, and what can be done to help them address these problems? I think we all know the history of the challenges that these countries have with technology, access, and capacity building. So as the discussion moves forward, maybe think about that as part of your comments.

Amy Crocker:
Absolutely. Would anyone on the panel like to talk about how we can address some of those issues? Jenna, you spoke a little about looking at different opportunities for codes of conduct that are not copied exactly, but that are based on values, principles, and possibly guidelines that can be translated into one’s own context; that’s relevant for the participant from Afghanistan. There’s also learning from approaches to regulation that can possibly work, while obviously understanding the context there. And back to Jenna’s point: making sure that children and young people are consulted, finding out what they think and how they feel about these issues, and trying to drive that. But again, I don’t know if anyone in the room would like to comment on that. Yeah, otherwise, we’ll take a question.

Andrew Campling:
Okay, thank you. Andrew Campling; I run a public policy and public affairs consultancy, but I’m also a trustee of the Internet Watch Foundation, so I’m probably speaking more with that hat on. It’s a very big topic, so I’m going to make two fairly narrow points that are at least loosely linked to AI. First one: algorithms quite obviously make malicious content much more accessible through their recommendations. For example, in the UK, we’ve seen a child who unfortunately was shown suicide-related content and committed suicide. It’s highly improbable she would have found that content had the algorithm not shown it to her. So, first question: should there be restrictions on the application of surveillance capitalism to children? A blanket prohibition on gathering the data of known child users on platforms in the first place, to try and prevent that from happening. Secondly, AI models are already being used to generate CSAM. So should AI-generated CSAM be illegal? It is in some countries, but it’s a loophole in others. And should the circulation of prompts that are deliberately intended to generate CSAM be made illegal? Because there’s an active trade, if that’s the right phrase, in the best prompts to use to get the images. And then more generally, given the pace of technology change, and you said how difficult it is to create regulation, which has easily been outpaced by changes in the tech: dare I say it, learning from the UK experience, should we try and avoid being caught out by the pace of change simply by imposing a duty of care on platforms towards their users? Because otherwise it’s pretty much impossible for regulators to keep up with the changes. So just give the blanket duty of care and put the problem on the platform operators to handle responsibly. Thank you.

Amy Crocker:
Thank you. Big questions. I know that Patrick wants to come in. Oh, do you wanna quickly speak to that and then we’ll bring Patrick in? Patrick, go ahead.

Patrick:
Thanks, Amy. Just two very quick responses. The first is to the question from Afghanistan, and it’s just a general observation. In so many of the countries in which I work, where governments are trying to catch up on policy and on legislation, they’re looking to key countries for model legislation; they’re desperate to look at best practice. So what tends to happen is there are three or four countries that come to mind, and they look at those countries and try to model their own legislation on them, without recognizing some of the challenges and dilemmas that those pieces of legislation face, or where they haven’t got it right. And so there’s a real danger in developing countries saying, okay, this is what country A has done, we’re going to follow that model, without any critical engagement as to what some of the challenges in implementation might be. That’s just an observation; I think there’s a real danger of doing that, and I do see it a lot in many of the countries that I work in: Southern Africa, North Africa, some of the Asia-Pacific, smaller island countries and territories. And then, if I can just use my position and my mic in response to the question or observation from the IWF colleague: the other thing that I’ve seen in so many of the developing countries where I work is this issue around definitions. You raised the example of AI-generated CSAM. What tends to happen is that countries are loath to update whatever legislation their child sexual abuse and exploitation crimes and offenses are contained in, because it takes so long. And that’s why I think it’s also up to individual industries and companies to say, we are going to adhere to these definitions of CSAM, and that includes AI-generated CSAM, so that industry is actually a step ahead of changing national policy, because it is going to take five to ten years for that policy to update; it’s such a process for legislation to be changed. 
Thanks.

Amy Crocker:
Thanks, Patrick. Go ahead, Liz, and then I’ve got many follow-ups to give to people in the room. Great.

Liz:
Well, I will try to be brief. I mean, I think a couple of great questions from the IWF here in the room and things that are really top of mind for us. And actually, I think this goes to some of the points I was hoping to raise anyway. So excellent segue. I mean, on the topic of AI-generated CSAM, I think certainly for us in industry, thinking about these risks has absolutely been at the core of our responsible AI approach at Microsoft, but also how we’re thinking about applying safety by design across the services where that’s being deployed and the features in that. On the question of legality, I think this really goes to some of the conversation we’ve just had around A, the criticality of regulation, but also B, regulation not being the only tool in the toolkit. And I think it goes to, again, we have to have the whole of society approach to addressing these problems. And part of that will be us taking responsibility to make sure that this particular horrific harm type is not being disseminated or created on our services. But secondly, that need for urgency in some regulation. I know in some jurisdictions, there have already been statements around the legality of CSAM, but I think it speaks to some of the great work by the We Protect Global Alliance and others as well with the model national response to really help support harmonization on legal regimes in this, so there are not spaces where this crime is permitted. On the question of whether children should be able to access some services or not, two quick points in response. And I think part of this goes to the references to safety by design across diverse services that I made before. And part of that is really thinking about where there are recommendation systems or other features, what impact that has on the risks to young people on the service and understanding the potential mitigations for that. 
But more broadly, I think you’ve kind of raised one of the major topics under discussion in child rights and child safety conversations at the moment, which is obviously age assurance and the ability to identify whether users are indeed actually children. And there are multiple strands of work, I think, that are needed here really to A, help us find the right tech solutions, noting that there are a range of trade-offs between sort of getting the right degree of accuracy around the age of a child versus privacy, security, and other factors. But then B, once we do know the age of a child, what are the choices that we make around the safety interventions and indeed access to services on that? And I think this is where we are really, certainly as Microsoft, very keen to continue the conversations with the experts and grow our evidence on these topics.

Amy Crocker:
Thanks. I know we have some questions, but Sophie, I know you're waiting there with us online. Picking up on the point made about the use of children's data, I wonder if you have anything you'd like to say about, for example, the Digital Services Act and what it may mean for protecting children's data within the EU. Is that something you'd like to speak to, or could you speak about the European context?

Sophie:
Yes, I can give a short insight. We have the Digital Services Act in the European Union, which is going to come into effect next year, and we also have regulations following the DSA in Germany. Right now we are discussing it a lot, and from a child rights perspective we consider it a really important step and a good way to protect children's data, especially when it comes to advertising, but also when it comes to the responsibility of very large online platforms to protect children and young people from certain risks. I'd also like to add something to the idea of children's rights by design and children's participation in regulation, because I think this is a crucial aspect if we really want to think about children's rights in a holistic way: not only to focus on protection all the time, but also to look at how we can empower children, how regulation can support the empowerment of children, and how regulation can support the participation of children. How digital media are regulated and designed has a really direct influence on the lives of children and young people, but, if we are honest, they rarely have a say in these issues. The GC25 also addresses this right of young people to participate in questions and decisions about the digital environment. Here in Germany we've already seen some efforts to involve children and young people in the design and implementation of legal youth and media protection. As a German children's fund, we've conducted exploratory research and concluded in this context that we need quality criteria for participation. We've already encountered a wide variety of participation-oriented formats, such as consultations or comment processes where children are included and involved in regulation processes.
We have youth juries, editorial boards, and young people who design products and even design and conduct events on their own, get involved in peer-to-peer networks or consultations. I'd be very interested in experiences from other countries. This also leads me to the point of safety by design and child rights by design. Children and adolescents need social spaces where they can really implement their own ideas without being primarily affected by product guidelines or market-driven interests, allowing them to exercise their right to open creative processes. This likely clashes a bit with a metaverse concept whose hosts also target young audiences. So far we've seen that safe social spaces are more likely to be created by civil society and educational organizations. The approach of children's rights by design offers providers the opportunity to place children's and adolescents' self-realization and participation at the forefront, and to develop ideas on how to involve them as informants and full-fledged design partners. This is also, as Patrick already mentioned, an opportunity to bring in the aspect of evolving capacities, and to really look at how to develop age-appropriate social online spaces.

Amy Crocker:
Thank you so much, Sophie, and I'm sorry to cut you off, but we have a queue of questions in the room. So we'll take some questions. Please go ahead.

Amyana:
Hello, I'm Amyana, from Brazil. Right now in the National Council for Children's Rights we are preparing a document with guidelines and recommendations for prosecutors, the Public Ministry, and all the services that work with children and adolescents, on what these agencies should do and require from platforms to protect children. Because how can platforms manage to remove content from films, for example, and yet not remove violent or dangerous content for children? So how can we focus on protection by design, like you were saying? Because yes, there are international standards, but they are not applied equally. Children, especially from the Global South, have a much lower level of protection than those from the North, and we already have data to affirm that. And another question is about how we can build a legal framework for, for example, images of child abuse created by AI, because we are thinking about this now and our legislation doesn't fit these actions. So how have you been dealing with this in your countries, for instance as apology for crime or incitement? That's it. Thank you.

Amy Crocker:
I'll quickly see if anyone wants to respond, and then we'll take Kasia's question. Kat, I don't know if you would like to respond on this point of how you can think about legislating for this, because this is the point you raised earlier about Japan, and how you can build awareness about the need to criminalize these types of content. Thank you for raising the issues; it's quite important.

Katz:
For Japan, we don't have any regulations or policies so far to regulate that kind of AI-generated image. Quite recently the BBC reported on AI-generated material of this kind, but we couldn't learn about that kind of news from the Japanese media. I think the media in Japan have more responsibility here; they have to inform us about this situation right now. Otherwise ordinary people don't know what's going on with AI. So I think we need to know more about this kind of new information, and maybe not only from the media; we can also collect information from SNS and other sources. Thank you.

Amy Crocker:
I'm going to declare that we'll all stay here for three more hours, so I hope you all have time. Unfortunately, we cannot. So Kasia, please.

Katarzyna Staciewa:
Thank you very much. Hello, everyone. My name is Katarzyna Staciewa, and I represent the National Research Institute in Poland, but I would like to link my intervention to my previous experience in law enforcement and to research based on my education in criminology. This discussion is so lively that it only proves we need more room in the future for these sorts of discussions, and I wanted to thank you, Katsuhiko, if my Japanese pronunciation is right, and Liz, for all the comments related to research and child rights in such a dynamically developing space. I have recently conducted research on the metaverse, and I believe research is key; it can also guide developing countries, because there is a chance to benefit from what has already been found out, and it can guide our future actions. In this research I analysed the darknet, and I analysed the themes of conversations of people who are potentially sexually interested in children, and I found three themes that are absolutely worrying. The first is that it is an environment in which such people can meet a child, or can move a conversation away from publicly available spaces. The second, as has already been said, is that they can create AI-generated CSAM. Imagine that someone uses a picture or a video of a real child and transforms it into that sort of material: it would be constant re-victimization of a child who was absolutely innocent. And the third is even more frightening, because it was about updating and upgrading existing CSAM into a VR or metaverse-oriented frame. For the victims, past and future, that means constant re-victimization, and we should definitely be looking at this perspective; the call for more robust research has never been more valid.
So I would just like to finish this intervention with a focus on research as a potential gateway to more tailor-made actions for the safety of children. Thank you.

Amy Crocker:
Thank you so much. It actually points to a really interesting point, Sophie, that you made about safe spaces being created by civil society organizations, communities, and families offline: what should that look like in the metaverse? What can that look like? And are we really ready for that? We are short on time; we could speak about safety by design for a long time, but these are crucial issues we have to grapple with as we allow children to operate as they want to. Young people want to be engaged in these environments. And also, picking up on the point about what that means in different contexts: a tool or an environment designed by a company in one country or region will not necessarily meet the needs of children in other environments, or of children of diverse identities. So please.

Ahmad Karim:
Hi, thank you so much for all the interventions. My name is Ahmad Karim. I'm from the UN Women Regional Office for Asia and the Pacific, and I come from that angle of the discussion where, whenever we have these kinds of big topics, we tend to be gender-blind in the conversation. I wonder if there are some specificities related to gender in design that would give more attention to girls, young adults and women, and to those who could be affected more by the advancement of technology, where national laws are not considerate and we put all children in one basket, when there are marginalized and fragile groups that deserve more attention, especially in the design of technology itself. Thank you.

Liz:
I can jump in briefly on that. Fundamentally, the lens we're coming at this from is that we want to unlock the economic, social, and educational power of technology, but really find a way for people to use it mindfully and safely, and you can't do that without being alive to the gender element. So, absolutely. Where I think we are still in need of a better understanding: we've done consumer research for a long time now, and there's a lot of good work underway, but I still don't think we necessarily have the right level of understanding of some of those gendered impacts. One of the only ways to get there actually goes back to some of the first conversation we had around youth participation, because as a millennial who got a device in high school rather than in kindergarten, I know that I don't have an understanding of what it looks like for a teenage girl online, let alone across a diverse range of cultures. I'm a New Zealander; I come with that particular lens, and there are a whole range of lenses I don't bring. So we need to find ways to do that research and get those perspectives, and we know that as a company we don't always have the right ways of doing that either: doing it mindfully, asking questions of kids at the right age in the right places, and doing it safely, so that they feel really empowered to share. I think it goes a little bit to some of the capacity building you talked about as well.

Jenna:
Maybe I can jump in quickly to respond to Ahmad's points about gender and youth participation. Actually, my colleagues right here are going to talk about gender tomorrow morning. They are even younger than me, let's be real, and they often bring up points that I don't even touch on. They designed the workshop from that perspective because they think it's very important. Their interpretations of gender are different from what we historically define, and that's really important. I got invited to a panel about how we leverage AI to ensure gender inclusivity, and when I prepared the session I wondered why I was even invited, because I am just an ordinary heterosexual person with a really ordinary point of view. So I feel that by talking to more young people, you will get new insights from how they think. As much as we dedicate time to talk about CSAM, and I do think it's really important to address it, I think that instead of creating one big bill to deal with how AI influences all these matters, government and all stakeholders should modernize the different existing legal frameworks, like the Broadcasting Act, the Consumer Protection Act, and the Competition Act, to make sure all these matters are integrated into them, so that the public interest and the younger generation's ideas are considered while we are creating these policies. While we talk so much about CSAM, last month, when I was in Brisbane talking with Asia-Pacific youth, their workshop addressed explicit content.
They have a totally different approach, because when it comes to CSAM, as adults we care about how we protect them, which is very important, but they actually want to explore how they, and maybe we, use explicit content to express themselves. So they actually talked about platforms like OnlyFans, and how we create a safe space for those who want to express themselves through that content, which we sometimes forget to talk about. This is also their right, to express themselves if they want to. That's one thing that actually surprised me a lot, because I had never thought about it; probably I'm too conservative in some way. But that is why we must bring them in, because we will always find something new. We as adults think they need this, but maybe they actually don't, so we should have them at the table.

Amy Crocker:
We have a few minutes left for final reflections, and that's a perfect place to bring us home, because ultimately this is about creating safe, empowering spaces, where you need regulation to do certain things and you need design to be mindful and informed by child consultation and participation. So in two minutes, though maybe I'll take an extra minute if we can, I'd like to invite all our panellists to give a final reflection on what they've heard today: something that really stands out, your takeaway, or the one thing you would do tomorrow in response to this session. I'll go first online, so Patrick.

Patrick:
Thanks, Amy, and it's really hard to follow Jenna, because as you say, that is the perfect way to wrap it up. I had two notes. The first was to speak and engage with, not speak to, and to hear from children meaningfully, in different contexts: their understanding, their experiences, both positive and negative, and how they want to use the internet. That means we need to be open as adults to challenging our own thinking, because we need to let young people, who are the core focus here, feed into that space. My second point, just to conclude: it was great to hear the speaker from Poland in the audience, who is a criminologist. The other point I wanted to make when I was talking is that we need criminologists, violence prevention, public health, educators, social workers, all of those sectors and specialties, and child rights legal experts in this conversation. It cannot come down to industry, to government, to regulation alone. We need to make sure that all of those pieces fit together in order to make this work. Thank you, Amy, and thanks to the speakers for a great conversation.

Amy Crocker:
Thank you. Sophie, very, very short if possible, just your main reflection.

Sophie:
Yeah, thanks to everyone for your inputs, to the speakers and to the audience. My learning from today is that to advocate for children's rights in the digital world with a holistic approach, we need so many stakeholders, and it's important to bring them all along, and especially to go this way with children and young people themselves, as a really important participant group in this context. Thank you.

Katz:
Thank you so much for this brainstorming session; I really appreciate your input and encouragement. Whatever the design, whatever the regulations and policies, we should always move towards a rights-based approach; that is most important, whether human rights or child rights, a very significant approach. Also, in the past we probably made more of an effort to approach the public, but in the future maybe we also need to approach AI, so the targets will increase in the future, I think.

Amy Crocker:
Yeah, thank you. Thank you. Very briefly, Jenna and then Liz.

Jenna:
I will be really brief, because I think I've taken enough air time. One last takeaway is collaboration, I would say, because as someone who works on capacity building, I need research to back up all the things that I do, and all the stakeholders need to work together. In terms of legislation and regulation, we need government, the private sector, and everyone to work together to provide a safe environment. And of course, don't leave out the technical community, please, because they are very important, they have all the knowledge, and sometimes they are not well involved in the policymaking process. So yeah, those are just my final words. Thank you.

Liz:
I'll be really brief. My takeaway today is to continue to approach this in the spirit of learning: learning from others, and trying to keep the holistic approach in mind. We need to grapple with different harms, but we need to find a way to do that while also thinking about rights. It's a complex area, and we will have to keep learning together.

Amy Crocker:
Hello? Yeah, sorry, I won't summarize, as we are over time, but it's been a really fascinating conversation, and I genuinely wish we had more time. As someone commented, we need to continue this conversation. If anyone is interested in joining the Dynamic Coalition and continuing these types of conversations, we have some flyers with a QR code, you can go to the website, and you can also find us on the IGF website and sign up to the mailing list. We want to help create a bigger, renewed space within the IGF for children's rights issues to be discussed. I will end it now. Thank you so much for being here, thank you to all our speakers, thank you to Jim as our online moderator, and thank you to the Bangladesh Remote Hub, it was so lovely to have you here, and to all participants online. Thank you.

Speaker statistics (speech speed, speech length, speech time)

Ahmad Karim: 172 words per minute, 130 words, 45 secs
Amy Crocker: 178 words per minute, 4308 words, 1452 secs
Amyana: 120 words per minute, 206 words, 103 secs
Andrew Campling: 162 words per minute, 355 words, 131 secs
B. Adharsan Baksha: 173 words per minute, 104 words, 36 secs
Jenna: 169 words per minute, 2471 words, 877 secs
Jim: 202 words per minute, 140 words, 42 secs
Katarzyna Staciewa: 141 words per minute, 375 words, 159 secs
Katz: 123 words per minute, 791 words, 386 secs
Larry Magid: 203 words per minute, 547 words, 162 secs
Liz: 228 words per minute, 2029 words, 534 secs
Patrick: 169 words per minute, 1483 words, 526 secs
Sophie: 137 words per minute, 1442 words, 631 secs
Steve Del Bianco: 216 words per minute, 455 words, 126 secs
Tasneet Choudhury: 159 words per minute, 69 words, 26 secs