The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sarah Myers West

The analysis provides a comprehensive overview of the concerns and criticisms surrounding artificial intelligence (AI). One notable concern is that AI can be misleading and misunderstood, leading to flawed policies. It is argued that in the field, there is a tendency to make claims about AI without proper validation or testing, which undermines trust in the technology.

At present, AI is primarily seen as a computational process that applies statistical methods to large datasets. These datasets are often acquired through commercial surveillance or extensive web scraping. This definition emphasizes the reliance on data-driven approaches to derive insights and make predictions. However, the ethical implications of this reliance on data need to be considered, as biases and inequalities can be perpetuated and amplified by AI systems.

The lack of validation in AI claims is another cause for concern. Many AI systems are said to serve specific purposes without undergoing rigorous testing or validation processes. Discrepancies and problems often go unnoticed until auditing or other retrospective methods are employed. The absence of transparency and accountability in AI claims raises questions about the reliability and effectiveness of AI systems in various domains.

Furthermore, it is evident that AI systems have the potential to mimic and amplify societal inequality. Studies have shown that AI can replicate patterns of discrimination and exacerbate existing inequalities. Discrimination within AI systems can have adverse effects on historically marginalised populations. This highlights the importance of considering the social impact and ethical implications of AI deployment.

In terms of content moderation, AI is often seen as an attractive solution. However, it is acknowledged that it presents challenges that are difficult to overcome. For example, AI-based content moderation systems are imperfect and can lead to violations of privacy as well as false positive identifications. Malicious actors can also manipulate content to bypass these AI systems, raising concerns about the effectiveness of AI in tackling content moderation issues.

To address these concerns, there is a need for more scrutiny and critical evaluation of the use of AI in content moderation. Establishing rigorous standards for independent evaluation and testing is crucial to ensure the effectiveness and ethical use of AI technology. This approach can help mitigate the risks associated with privacy violations, false positives, and content manipulation.

In conclusion, the analysis underscores the importance of addressing the concerns and criticisms related to AI. The potential for misrepresentation and flawed policies, the lack of validation and transparency in AI claims, the amplification of societal inequality, and the challenges in content moderation highlight the need for thoughtful and responsible development and deployment of AI technologies. Ethical considerations, rigorous testing, and ongoing evaluation should be central to AI research and implementation to ensure that the benefits of AI can be realized while mitigating potential harms.

Audience

During the discussion on child safety in online environments, several speakers emphasised the necessity of prioritising the protection of children from harm. They stressed the importance of distinguishing between general scanning or monitoring and the specific detection of harmful content, particularly child sexual abuse material (CSAM). This distinction highlighted the need for targeted approaches and solutions to address this critical issue.

The use of artificial intelligence (AI) and curated algorithms to identify CSAM content received support from some participants. They mentioned successful implementations in various projects, underlining the potential effectiveness of these advanced technologies in detecting and combating such harmful material. Specific examples were provided, including the use of hashing techniques for verification processes, the valuable experience of hotlines, and the use of AI in projects undertaken by the organisation InHope.

However, concerns were raised regarding the potential misuse of child safety regulations. There was apprehension that such regulations might extend beyond the intended scope, impeding on other important areas, such as encryption and combating counterterrorism. It was stressed that policymakers should be wary of unintended consequences and not let child safety regulations become a slippery slope for encroaching on other narratives or compromising important tools like encryption.

The participants also emphasised the significance of online safety for everyone, including children, and the need to prioritise this aspect when developing online solutions. Privacy concerns and the protection of personal data were seen as vital considerations, and transparency in online platforms and services was highlighted as a crucial element in building trust and safeguarding users, particularly children.

The existing protection systems were acknowledged as generally effective but in need of improvement. Participants called for greater transparency in these systems, expansion to other regions, and better differentiation between various types of technology. They stressed that a comprehensive approach was required, involving not only the use of targeted technology but also education, safety measures, and addressing the root causes by dealing with perpetrators.

There were also concerns voiced about law enforcement’s use of surveillance tools in relation to child safety. Instances of misuse or overuse of these tools in the past created a lack of trust among some speakers. An example was provided where a censorship tool in Finland resulted in the hacker compiling about 90% of the secret list of censored websites, revealing that less than 1% contained actual child sex abuse material.

In conclusion, the discussion on child safety in online environments highlighted the need to differentiate between general scanning and scanning for specific harmful content. It emphasised the importance of targeted approaches, such as the use of AI and curated algorithms, to detect child sexual abuse material. However, concerns were raised about the potential misuse of regulations, particularly in the context of encryption and other narratives like counterterrorism. The protection of online safety for everyone, the improvement of existing systems, and a comprehensive approach involving technology, education, and safety measures were identified as crucial elements in effectively protecting children online.

Namrata Maheshwari

The discussion revolves around the crucial topic of online safety and privacy, with a specific emphasis on protecting children. While there may be various stakeholders with different perspectives, they all share a common goal of ensuring online safety for everyone. The conversation acknowledges the challenges and complexities associated with this issue, aiming to find effective solutions that work for all parties involved.

In line with SDG 16.2, which aims to end abuse, exploitation, trafficking, and violence against children, the discussion highlights the urgency and importance of addressing online safety concerns. It acknowledges that protecting children from online threats is not only a moral imperative but also a fundamental human right. The inclusion of this SDG demonstrates the global significance of this issue and the need for collective efforts to tackle it.

One notable aspect of the conversation is the recognition and respect given to the role of Artificial Intelligence (AI) in detecting child sexual abuse material (CSAM). Namrata Maheshwari expresses appreciation for the interventions and advancements being made in this area. The use of AI in detecting CSAM is a critical tool in combating child exploitation and safeguarding children from harm.

The conversation highlights the need for collaboration and cooperation among various stakeholders, including government authorities, tech companies, educators, and parents, to effectively address online safety concerns. It emphasizes the shared responsibility in creating a safe online environment for children, where their privacy and security are protected.

Overall, this discussion underscores the significance of online safety and privacy, particularly for children. It highlights the importance of aligning efforts with global goals, such as SDG 16.2, and recognizes the positive impact that technology, specifically AI, can have in combating online threats. By working together and adopting comprehensive strategies, we can create a safer and more secure digital space for children.

Udbhav Tiwari

The analysis conducted on content scanning and online safety highlights several significant points. One of the main findings is that while it is technically possible to develop tools for scanning certain types of content, ensuring their reliability and trustworthiness is a difficult task. Platforms already perform certain forms of scanning for unencrypted content. However, Mozilla’s experience suggests that verifying the reliability and trustworthiness of such systems poses challenges. Currently, no system has undergone the level of independent testing and rigorous analysis required to ensure their effectiveness.

Another concerning aspect of content scanning is the involvement of governments. The analysis reveals that once technological capabilities exist, governments are likely to leverage them to detect content deemed worthy of attention. This raises concerns about the potential misuse of content scanning technology for surveillance purposes. Over time, the ability of companies to resist requests or directives from governments has diminished. An example of this is seen in the implementation of separate technical infrastructures for iCloud due to government requests. Therefore, the law and policy aspect of content scanning can be more worrying than the technical feasibility itself.

The importance of balancing the removal of harmful content with privacy concerns is emphasized. Mozilla’s decision not to proceed with scanning content on Firefox Send due to privacy concerns demonstrates the need to find a middle ground. The risk of constant content scanning on individual devices and the potential scanning of all content is a significant concern. Different trust and safety measures exist for various use cases of end-to-end encryption.

The analysis brings attention to client-side scanning, which already exists in China through software like Green Dam. It highlights the fact that the conversation surrounding client-side scanning worldwide is more nuanced than commonly acknowledged. Government measures and regulations pertaining to client-side scanning often go unnoticed on an international scale.

Platforms also need to invest more in understanding local contexts to improve enforcement. The study revealed that identifying secret Child Sexual Abuse Material (CSAM) keywords in different languages takes platforms years, suggesting a gap in their ability to effectively address the issue. Platforms have shown a better record of enforcement in English than in the global majority, indicating a need for more investment and understanding of local contexts.

The issue of child sexual abuse material is highlighted from different perspectives. The extent to which child sexual abuse materials are pervasive depends on the vantage point. The analysis reveals that actors involved in producing or consuming such content often employ encrypted communication or non-online methods, making it difficult to fully grasp the magnitude of the problem. Further research is needed to understand the vectors of communication related to child sexual abuse material.

Finally, the analysis stresses that users have the ability to take action to address objectionable content. They can report such content on platforms, directly involve law enforcement, or intervene at a social level by reaching out to the individuals involved. Seeking professional psychiatric help for individuals connected to objectionable content is also important.

In conclusion, the analysis of content scanning and online safety identifies various issues and concerns. It emphasizes the need to balance the removal of harmful content with privacy considerations while cautioning against potential government surveillance practices. Furthermore, the study underscores the importance of understanding local contexts for effective enforcement. The issue of child sexual abuse material is found to be complex, requiring further research. Finally, users are encouraged to take an active role in addressing objectionable content through reporting, involving law enforcement, and social intervention.

Eliska Pirkova

The analysis of the arguments reveals several important points regarding the use of technology in different contexts. One argument highlights the potential consequences of using AI tools or content scanning in encrypted environments, particularly in crisis-hit regions. The increasing use of such technologies, even in democracies, is a cause for concern as they can only identify known illegal content, leading to inaccuracies.

Another argument raises concerns about risk-driven regulations, suggesting that they might weaken the rule of law and accountability. The vague definition of ‘significant risks’ in legislative proposals is seen as providing justification for deploying certain technologies. The need for independent judicial bodies to support detection orders is emphasized to ensure proper safeguards.

Digital platforms are seen as having a significant role and responsibilities, particularly in crisis contexts where the state is failing. They act as the last resort for protection and access to remedies. It is crucial for digital platforms to consider the operational environment and the consequences of complying with government pressures.

The pending proposal by the European Union (EU) on child sexual abuse material is seen as problematic from a rights perspective. It disproportionately imposes measures on private actors that can only be implemented through technologies like client-side scanning. This raises concerns about potential violations of the prohibition of general monitoring.

Similar concerns are expressed regarding the impact of the EU’s ongoing, still-negotiated proposal in relation to the existing digital services act. If the proposal remains in its current form, there could be direct violation issues. The argument also suggests that the EU’s legitimization of certain tools could lead to their misuse by other governments.

The global implications of the EU’s regulatory approach, known as the Brussels effect, are also discussed. Many jurisdictions worldwide have followed the EU’s approach, which means that well-intentioned measures may be significantly abused if they end up in inappropriate systems.

The importance of children’s rights is acknowledged, with a recognition that the protection of children is a shared goal. However, differing means, policy approaches, and regulatory solutions may generate counterproductive debates when critical views towards technical solutions are dismissed.

In conclusion, the analysis highlights the complexities and potential implications of technology use in various contexts, particularly concerning online security, accountability, and rights protection. Dialogue and negotiations among stakeholders are crucial to understand different perspectives and reach compromises. Inclusive and representative decision-making processes are essential for addressing the challenges posed by technology.

Riana Pfefferkorn

The analysis explores various arguments and stances on the contentious issue of scanning encrypted content. One argument put forth is that scanning encrypted content, while protecting privacy and security, is currently not technically feasible. Researchers have been working on this problem, but no solution has been found. The UK government has also acknowledged this limitation. This argument highlights the challenges of striking a balance between enforcing online safety regulations and maintaining the privacy and security of encrypted content.

Another argument cautions against forced scanning of encrypted content by governments. This argument emphasizes that such scanning could potentially be expanded to include a wide range of prohibited content, jeopardizing the privacy and safety of individuals and groups such as journalists, dissidents, and human rights workers. It is argued that any law mandating scanning could be used to search for any type of prohibited content, not just child sex abuse material. The risk extends to anyone who relies on secure and confidential communication. This argument underscores the potential negative consequences of forced scanning on privacy and the free flow of information.

However, evidence suggests that content-oblivious techniques can be as effective as content-dependent ones in detecting harmful content online. Survey results support this notion, indicating that a content-oblivious technique was considered equal to or more useful than a content-dependent one in almost every category of abuse. User reporting, in particular, emerged as a prevalent method across many abuse categories. This argument highlights the effectiveness of content-oblivious techniques and user reporting in identifying and mitigating harmful online content.

Furthermore, it is argued that end-to-end encrypted services should invest in robust user reporting flows. User reporting has been found to be the most effective detection method for multiple types of abusive content. It is also seen as a privacy-preserving option for combating online abuse. This argument emphasizes the importance of empowering users to report abusive content and creating a supportive environment for reporting.

On the topic of metadata analysis, it is noted that while effective, this approach comes with significant privacy trade-offs. Metadata analysis requires services to collect and analyze substantial data about their users, which can intrude on user privacy. Some services, such as Signal, purposely collect minimal data to protect user privacy. This argument highlights the need to consider privacy concerns when implementing metadata analysis for online content moderation.

The analysis concludes by emphasizing the need for both advocates for civil liberties and governments or vendors to recognize and acknowledge the trade-offs inherent in any abuse detection mechanism. There is no abuse detection mechanism that is entirely beneficial without drawbacks. It is crucial to acknowledge and address the potential negative consequences of any proposed solution. This conclusion underscores the importance of finding a balanced approach that respects both privacy and online safety.

The analysis also discusses the challenging practical implementation of co-equal fundamental rights. It asserts that fundamental rights, including privacy and child safety, should be considered co-equal, with no single right taking precedence over another. The difficulty lies in effectively implementing this principle in practice, particularly in contentious areas like child safety.

Furthermore, the analysis highlights the importance of holding governments accountable for maintaining trustworthiness. It is argued that unrestricted government access to data under the guise of child safety can exceed the necessity and proportionality required in a human rights-respecting framework. Trustworthiness of institutions hinges on the principle of government accountability.

In summary, the analysis provides insights into the complications surrounding the scanning of encrypted content and the trade-offs associated with different approaches. It emphasizes the need for a balanced approach that considers privacy, online safety, and fundamental rights. Acknowledging the limitations and potential risks associated with each proposed solution is crucial for finding effective and ethical methods of content moderation.

Session transcript

Namrata Maheshwari:
Next, the organizers to help us see the speakers on the screen. The two speakers joining online. Oh, there we go. Hi. Just to check, Rihanna, Sarah, can you hear us? Yes. Great. Do you want to try saying something so we can check if your audio is working? Hi. Can you hear me? Yes. I can hear you. Great. Can you hear me? Yes, we can. Thank you. All right. So, Udbhav will be joining us shortly, but maybe we can start just to make the most of time. My name is Namrata Maheshwari. I’m from Access Now, an international digital rights organization. I lead our global work on encryption, and I also lead our policy work in South Asia. I have the relatively easy task of moderating this really great panel, so I’m very excited about it, and I hope we’re able to make it as interactive as possible, which is why this is a roundtable. So, we’ll open it up, hopefully halfway through, but definitely for the last 20 minutes. So, if you have any questions, please do note them down. Well, quick introduction, and then maybe I’ll do some context setting. I’ll start with Ilishka Pirkova on my left, who is also my colleague from Access Now. She is Senior Policy Analyst and Global Freedom of Expression Lead, and as a member of the European team, she leads our work on freedom of expression, content governance, and platform accountability. Thank you so much for being here. I will introduce Udbhav anyway while we wait for him to come here. He is the Head of Global Product Policy at Mozilla, where he focuses on cybersecurity, AI, and connectivity. He was previously at the Public Policy Team at Google, and Non-Resident Scholar with Carnegie Endowment. And online, we have Rihanna and Sarah. Rihanna Fafikorn is a Research Scholar at the Stanford Internet Observatory. A lawyer by training, her work focuses on encryption policy in the US and other countries, and related fields such as privacy, surveillance, and cybersecurity. Sarah Myers West is the Managing Director of AI Now Institute, and recently served a term as a Senior Advisor on AI at the Federal Trade Commission. She holds a decade of experience in the field of the political economy of technology, and her forthcoming book, Tracing Code, examines the origins of commercial surveillance. Thank you so very much. These are people who I believe have played a very important role in shaping the discourse around encryption and AI in recent times. So thank you so much for lending your insights and expertise to this panel, and thank you all for sharing your time with us here today. Well, we’re seeing a lot of proposals across the world in different regions on AI and encryption. So this session really is an effort to shed some light on the intersections between the two, which we think lie within the content scanning proposals that we’re seeing in different countries, US, UK, EU, India, and Australia, a lot of others. These proposals mainly suggest scanning content of messages on encrypted platforms, and proponents say that there is a way to do this in a way that would not undermine privacy and help eliminate harmful material. And opponents say that there is an over-reliance on AI, because the tools that would need to be developed to scan this content are AI tools, automated scanning tools, which are prone to biases, prone to, well, false outputs, and also that it would undermine privacy and erode into an encryption as we know it. So I’m hoping that the speakers on this panel can tell us more about it. With that, I’ll get us started. Just some housekeeping. Online, we have my colleague, Reits, moderating. So whoever is joining online, if you have questions, drop them in chat, and Reits will make sure we address those. Rihanna, if I could start with you. Proposals in many countries to moderate content on encrypted platforms are premised on the idea that it is possible to do this without undermining privacy. Could you tell us a little bit more about what the merits of this are, what the real impact is on encryption, and on the user groups that use these platforms, including the groups that these proposals seek to protect?

Riana Pfefferkorn:
Sure. So there’s a saying in English, which is that you want to have your cake and eat it, too. And that’s what this idea boils down to, the idea that you can scan encrypted content to look for bad stuff, but without breaking or weakening into an encryption, or otherwise undermining the underlying privacy and security guarantees intended for the user. We just don’t know how to do this yet, and that’s not for lack of trying. Computer security researchers have been working on this problem, but they haven’t yet figured out a way to do this. So the tools don’t exist yet, and it’s doubtful that they will, at least in a reasonable timeframe. You can’t roll back encryption to scan for just one particular type of content, such as child sex abuse material, which is usually what governments want end-to-end encrypted apps to scan for. If you’re trying to scan for one type of content, you have to undermine the encryption for all of the content, even perfectly innocent content. And that defeats the entire purpose of using end-to-end encryption, which is making it so that nobody but the sender and the intended recipient can make sense of the encrypted message. This has been in the news lately because, perhaps most prominently, the United Kingdom government has been pretending that it’s possible to square this particular circle. Basically, the UK has been one of the biggest enemies of strong encryption for years now, at least among democracies. It’s been trying to incentivize the invention of tools that can safely scan encrypted content through government-sponsored tech challenges, and it just passed a law, the Online Safety Bill, that engages in the same magical thinking that this is possible. The issue here is that, like I said, there isn’t any known way to scan encrypted content without undermining privacy and security. And nevertheless, this new law in the UK gives their regulator for the internet and telecommunications the power to serve compulsory notices on encrypted app companies, forcing them to try and do just that. The regulator has now said, actually, OK, we won’t use this power because they’ve basically admitted that there just isn’t a way to do this yet. They say, we won’t use that power until it becomes technically feasible to do so, which might effectively be quite a while because we don’t have a way of making this technically feasible. And part of the danger of having this power in the law is that it’s premised upon the need to scan for child sex abuse material. But there isn’t really any reason that you couldn’t expand that to whatever other type of prohibited content the government might want to be able to find on the service, which might be anything that’s critical of the government. It might be les majestés. It might be content coming from a religious minority, et cetera. And so requiring companies to scan by undermining their own encryption for whatever content the government says they have to look for could put journalists at risk, dissidents, human rights workers, anybody who desperately needs their communications to stay confidential and impervious to outside snooping by malicious actors, which might be your own government, might be somebody else who has it in for you, even in cases of domestic violence, for example, or child abuse situations within the home. So we’ve seen some at least positive moves in this area in terms of a lot of public pushback and outcry over this. Several of the major makers of encrypted apps, including Signal, WhatsApp, which is owned by Meta, and iMessage, which is owned by Apple, have threatened to just walk away from the UK market entirely rather than comply with any compulsory notice telling them that they have to scan encrypted material for child sex abuse content. So I take that as a positive sign that not only some of the major makers of these apps are saying that isn’t something that we could do, and that they’re saying we would rather just walk away rather than undermine what our users have come to expect from us, which is the level of privacy and security that end-to-end encryption can guarantee.

Namrata Maheshwari:
Thank you, Rihanna. Sarah, if you could just zoom out a bit for a second. And there have been a lot of thoughts about how artificial intelligence is a misleading term, and it could lead to flawed policies based on a misrepresentation of the kind of capabilities that the technology has. Do you think there is a better term for it? And if so, well, what would it be? And the second limb of the question was, again, there have been a lot of studies and debates around the inherent biases and flaws of AI systems. So if these were to be implemented within encrypted environments, which one of these characteristics, or if that’s true, if that is something that would happen, would these be transferred to encrypted platforms in a way that would lead to, well, unique consequences?

Sarah Myers West:
Sure, it’s a great question. I think it is worth taking a step back and really pinning down what it is that we mean by artificial intelligence, because that’s a term that has meant many different things over an almost 70-year history. And it’s one that’s been particularly value-laden in recent policy conversations. In the current state of affairs… No worries, maybe we can come back to Sarah once she rejoins. Ritz, could you let me know when Sarah’s back online? Oh, she’s back. Okay. Hi, Sarah. I’m back. Yes, sorry about that. What I was about to say was, what we sort of mean by artificial intelligence in the present-day moment is, you know, there are a lot of people who think that, you know, what we sort of mean by artificial intelligence in the present-day moment is the application of statistical methods to very large data sets, data sets that are often produced through commercial surveillance or through, you know, massive amounts of web scraping and sort of mining for patterns within that massive amount of data. So, it’s essentially, you know, a foundationally computational process. But really, you know, what Rihanna was talking about here was sort of surveillance by another means. And I think a lot of ideals get imbued onto what AI is capable of that don’t necessarily bear out in practice. You know, the FTC has recently described artificial intelligence as, you know, largely a marketing term. And there’s a frequent tendency in the field to see claims about AI being able to, you know, serve certain purposes that lack any underlying validation or testing where, you know, within the field, you know, benchmarking standards may vary widely. And very often companies are able to make claims about the capabilities of the systems that don’t end up bearing out in practice. We sort of discover them through auditing and other methods after the fact. And to that point, you know, given that AI is essentially grounded in pattern matching, there is a, you know, very well documented phenomenon in which artificial intelligence is going to mimic patterns of societal inequality and amplifying them at scale. So, we see widespread patterns of, you know, discrimination within artificial intelligence systems in which, you know, the harms accrue to populations that have historically been discriminated against and the benefits accrue to those who have experienced privilege. And that AI is sort of broadly being claimed to be some magical solution, but not necessarily with, you know, robust independent checks that it will actually work as claimed.

Namrata Maheshwari:
Thank you. Ilishka, given your expertise on content governance and the recent paper you led on content governance in times of crisis, could you tell us a bit about the impact of introducing AI tools or content scanning in encrypted environments in regions that are going through crisis?

Eliska Pirkova:
Sure. Thank you very much. And maybe I also would like to start from a sort of a content governance perspective and what we mean by the danger when it comes to client-side scanning and weakening encryption, which is the main precondition for security and safety within the online environment, which of course becomes even more relevant when we speak about the regions impacted by crisis. So, but unfortunately, the internet is not the only place where we have to think about but unfortunately, these technologies are spreading also in democracies across the world and legislators and regulators increasingly sell that idea that they will provide these magical solutions to ongoing very serious crimes such as child sexual abuse materials. And I will get to that. This also concerns other types of illegal content such as terrorist content or potentially even misinformation and disinformation that is spreading online on encrypted spaces such as WhatsApp or other private messaging apps. So, of course, there are a number of questions that must be raised when we discuss content moderation and content moderation has several phases. It starts with the detection of the content, evaluation, assessment of the content, and then consequently, ideally, there should be also provided some effective access to remedy once there is the outcome of this process. And when we speak about end-to-end encryption violation and client-side scanning, the most kind of worrisome state is precisely detection of the content where these technologies are being used. And one very important, and this is usually done through different using hash technologies, different types of these technologies, photo DNA is quite known. And, of course, these technologies, and I very much like what Sarah mentioned, it’s quite questionable whether we can even label them as artificial intelligence. I would rather go for machine learning systems in that regard. And what is very essential to recognize here is that these technologies simply scan the content and they are usually used for identifying the content that was already previously identified as illegal content, depending on the category they are supposed to identify. So then they trace either identical or similar enough content to that one that was already captured. And the machine learning system as such cannot particularly distinguish whether this is a harmful content, whether this is illegal content, whether this is the piece of disinformation. Because this content, of course, doesn’t have any technical features per se that would precisely provide this sort of information, which ultimately results into a number of false positives and negatives and errors of these technologies that impose serious consequences on fundamental rights protection. So what I find particularly worrisome in this debate increasingly, and that’s also very much relevant regarding the regions impacted by crisis, is the impact on significant risk and justifications that these type of technologies can be deployed if there is a significant risk to safety or to other significant risks that are usually very vaguely defined in these legislative proposals that are popping across the world. And if we have these risk-driven, I don’t want to call it ideology, but trend behind the regulation, then of course what will be significantly decreased is precisely the requirements for the rule of law and accountability, such as that, for instance, these detection orders should be fully backed up by the independent judicial bodies, and they should be the ones who should actually decide whether something like that is necessary and conduct that initial assessment. And when we finally put it in the context of crisis, of course in times when the rule of law is weakened, either by authoritarian regime in power that seeks to use these technologies to crack down on decent human rights activists and defenders, and we as AXIS now, being a global organization, we see that all over again, that this is primary goal of these type of regulations and legislations, or also these regimes being inspired by the democratic Western world where these regulations are also probably profiling more and more, then of course consequences can be fatal. sensitive information can be obtained about human rights activists as a part of the broader surveillance campaign and it also means that under such contexts where the state is failing and the state is the main perpetrator of violence in times of crisis it is the digital platform and private companies who often act as a last resort of protection and access to any sort of remedy and under those circumstances not only that their importance increases but so do their obligations and their responsibility to actually get it right and that of course contains the due diligence obligation so understanding what kind of environment they operate in and what they what is technically feasible and what is actually consequence if they for instance comply with the pressure and wishes of the government in power which we often see especially when it comes to informal cooperation between the government and platform that was a lot so I’ll stop here thank you.

Namrata Maheshwari:
Thank you. Our fourth speaker Udbhav Tiwari is having some trouble with his badge at the entrance I don’t know if the organizers can help with that at all but he’s at the venue but just having trouble getting a copy so just a request in case you are able to help no worries if not he’ll be here shortly but in the meantime we can keep the session going Rihanna I’d like to come back to you a lot of the conversations and debates on this subject revolve around the very important question of well what are the alternatives there are very real challenges in terms of online safety harmful material online and very real concerns around privacy and security so the question is if not content scanning then what in that context could you tell us more about your research on content oblivious trust and safety techniques and whether you think there are any existing or potential privacy preserving alternatives.

Riana Pfefferkorn:
Sure so I published research in Stanford’s own Journal of Online Trust and Safety in early 2022 there’s a categorization that I did in this research which is content dependent versus content oblivious techniques for detecting harmful content online content dependent means that that technique is requires at will access by the platform to the contents of user data so some examples would be automated scanning for the DNA as an example or human moderators who go to look for content that violates the platform’s policies against abusive uses I would also include content client-side scanning as at least I was describing as a content dependent technique because it’s looking at the contents of messages before it gets encrypted and transmitted to the recipient content oblivious by contrast means that the trust and safety technique doesn’t need at will access to message contents or file contents in order to work so examples would be analyzing data about a message rather than the contents of a message so metadata analysis as well as analysis of behavioral signals how is this user behaving even if you can’t see the contents of their messages another example would be user reporting of abusive content because they’re the reason that the platform gets access to the contents of something isn’t because they had the ability to go and look for it’s because the user chose to report it to the to the platform itself so I conducted a survey in 2021 of online service providers which included both with end-to-end encrypted apps as well as other non EDE types of online services and I asked them what types of trust and safety techniques they use across 12 different categories of abusive content from child safety crimes to hate speech to spam to missing disinformation and so on and I asked them which of three techniques I made content scanning which is content dependent and metadata analysis and user reporting which are content oblivious did they find most useful for detecting each of those 12 different types of abusive content and what I found was that for almost every category a content oblivious technique was deemed to be as or more useful than a content dependent one specifically user reports in particular prevailed across many categories of abuse I asked about the only exception was child sex abuse material where automated scanning was deemed to be the most useful so meaning things like for the DNA these findings indicate that and then encrypted services ought to be investing in making robust user reporting flows ideally ones that expose as little information about the conversation as possible apart from the abusive incident I find user reporting to be the most privacy preserving option for fighting online abuse plus once you have a database of user reports you could apply machine learning techniques to users or groups across your service if you want to look for some trends without necessarily searching across the entire database of all content on the platform another option metadata analysis in my survey that didn’t fare as well as user reporting in terms of the usefulness as perceived by the providers but that was a couple of years ago and even then the use of AI and ML were already helping to detect abusive content so those tools surely have room to improve I do want to mention though like it’s important to recognize that there are trade-offs to any of the proposals that we might come up with metadata analysis has major privacy trade-offs compared to user reporting because the service has to collect and analyze enough data about its users to be able to do that kind of approach there are some services like signal that choose to collect extremely minimal data about their users as part of their commitment to user privacy so when we’re talking about trade-offs trade-offs might be inaccuracy there might be false positive rates or false negative rates associated with a particular option privacy intrusiveness what have you there’s no abuse detection mechanism that is all upside and no downside we can’t let governors or vendors pretend otherwise and especially when it comes to pretending that you’re going to have all of the upside without any kind of trade-offs whatsoever which is what I see commonly used like oh yeah it’s worth these privacy trade-offs or these security trade-offs because we’re going to realize this upside well that’s not necessarily guarantee but at the same time I think that as advocates for civil liberties for human rights for strong encryption it’s important for us not to pretend that the things that we advocate as alternatives don’t also have their own trade-offs there’s a great report that CDT published in 2021 that looked at a bunch of different approaches called from the outside looking in it’s also a great resource for looking at the different sorts of options in the end and encrypted versus versus you know have the tension between doing trust and safety and how to continue respecting strong encryption

Namrata Maheshwari:
great above will come to you now a lot of proposals again on content scanning are premised on the admittedly well-intentioned goal of wanting to eliminate harmful material online from a product development perspective do you think it is possible to develop tools that are limited to scanning certain types of content and looking at the cross-border implications as well from a platform that provides services in various regions what do you think the impact of implementing such abilities in one region beyond other regions with different kinds of governments and contexts thanks number that I think that

Udbhav Tiwari:
the first kind of angle to with which to look at it is whether it’s technically feasible or not and the second is whether it’s feasible in law and policy and I think the both of them are two different answers purely on the technical feasibility perspective it depends on how one decides to define client-side scanning and and what constitutes client-side scanning or not but there are different ways in which platforms already do certain kinds of scanning for unencrypted content that some of them claim can be done for encrypted content in a way that is reliable but personally speaking and also from Mozilla’s own experiences as we’ve evaluated them it’s quite difficult to take any of those claims on face value because almost none of these systems when they do claim to only detect a particular piece of content well have gone I think undergone the level of independent testing and rigorous analysis that is required for those claims to actively be verified by the like by the rest of either the security community or the community that generally works in trust and safety like Rihanna was talking about and the second aspect which is the law and policy aspect is I think the more worrying concern because it’s very difficult to imagine a world in which we deploy these technologies for a particular kind of content presuming it meets the really high bar of being reliable and trustworthy and also somehow privacy-preserving the legal challenges that it creates don’t end with it existing but of how other governments may be inspired by these technological capabilities existing in the first place and that’s because once these technological capabilities exist various governments would want to utilize it for whatever content they may deem worth detecting at a given point in time and that means that what may be CSAM material in one country may be CSAM material and terrorist content in another country and in a third country it may be CSAM material, terrorist content and content that maybe is critical say of the political ruling class in that particular country as well and as I think if there’s one thing that we’ve seen with the way that the internet has developed over the last 20 to 25 years it’s that the ability of companies and especially large companies to be able to resist requests or directives from governments has only reduced over time. The incentives against them standing up against governments are like very very aligned towards them just complying with governments because it’s simply much easier for you from a business perspective to be able to just if a government places pressure upon you over an extended period of time to just give in to certain requests and we’ve already seen examples of that happen with other services that are ostensibly parts of which are end-to-end encrypted such as iCloud where in certain jurisdictions they have separate technical infrastructures that they’ve set up because of requests from governments as well so if it has started happening there I see it I think it’s very difficult to see a world in which we won’t see it happening for client-side scanning and these kinds of content as well. One other thing that I will say that and that’s especially from a product development perspective is Mozilla has also actually had some experience with this and the challenges that come with deploying end-to-end encrypted services and deciding to do them in a privacy-preserving manner but not having and not collecting metadata which was this service called Firefox Send which Firefox and Mozilla had originally created a couple of years ago to be able to allow users to share files easily and anonymously so you went on to a portal it had a very low limit you could upload a file onto the portal and then once you uploaded a file onto a portal you got a link and then the individual could click on the link and then you could download it and this the service worked reasonably well for a couple of years but what we realized I think towards the end of its lifespan was that there were also some use cases in which it was being used by malicious actors to actively deploy harmful content in some cases malware in some cases like materials that otherwise would be implicated in investigations and once we evaluated whether we could deploy mechanisms that would either scan for such content on devices which in our case was the browser which has even less of a possibility of doing such actions we decided that it was better for that piece of software to not exist rather than for it to create the risks that it did for users without the trust and safety options that were available to us because it was end-to-end encrypted so that’s also I think a nod towards the fact that there are different streams and levels of use cases to which end-to-end encryption can be deployed and different kinds of trust and safety measures that could be deployed to like account for different like threat vectors if you would like to call them that way and the ones that we’re specifically talking about which is client-side scanning most is most popular right now for messaging but the way that it’s actually been deployed in the past or almost been deployed in the past by say a company like Apple was the closest was actually scanning in from all information that would be present on a device before it would be backed up so there is and that’s the final point that I’m making that there’s also this implication that we are presuming this is a technology that will only scan your content after you hit the send button or when you’re hitting the send button but in most of the ways in which it’s actually been deployed it’s been deployed in a way where it proactively would scan individuals devices to detect content before it before it is backed up or uploaded onto a server in some form and that’s a very very thin line to walk between doing that and just keeping it and just like scanning content all the time in order to detect whether there’s something that shouldn’t exist on the device and that’s a very scary possibility

Namrata Maheshwari:
thanks above is Sarah online rates I I believe she’s had an emergency but that’s fine I will come back to her if she’s able to join again Alishka in many ways the EU has been at the center of this discourse or what is also known as the Brussels effect we see a lot of policy proposals debates and discourses on internet governance and privacy and free expression traveling from the EU to other parts of the world also true for it also happens horizontally across other countries but still in a disproportionate way from EU to elsewhere more recently there have also been proposals around looking at the question of content moderation on encrypted platforms what would you say are the signals from the EU for the rest of the world from a privacy free expression and safety perspective on what to do and what not to do thank you

Eliska Pirkova:
indeed the EU regulatory race has been quite intense in past few years also in the area of content governance and part from accountability specifically in the context of client-side scanning I’m sure many of you are aware of the still pending a proposal on child sexual abuse material it’s the EU regulation which from the fundamental rights perspective is extremely problematic as a part of a dream network there was a tree position paper that contains the main critical points around the legislation and couple of them I’ve already summarized during my first intervention and the whole entire regulation is problematic due to the disproportionate measure it imposes on the private actors from detection order and other measures that can be only implemented through technical solutions such as client-side scanning very short-sighted justifications for the use of these technology very much based on that risk approach that I’ve already explained at the beginning but also ultimately not recognizing and acknowledging that the use of such technology will also violate the prohibition of general monitoring because of course these technology will have to scan the content indiscriminately and I’m mentioning the ban on the general monitoring because if you ask me about the impact of the EU regulation of course another very decisive law in this area was or is digital services act even though digital services act regulates user-generated content disseminated for public if we speak about platforms but to some minimum extent we could say there are some minimum basic requirements for private messaging apps too even though it’s not the main scope of the digital services act but DSA has still a lot to say in terms of accountability transparency criteria and other due diligence measures that these regulation contains and we are really worried about the interplay between these horizontal legislative framework within the EU and the ongoing still negotiated proposal the proposed regulation the child sexual abuse material and if it would stay in its current form and we are really not there yet and of course there would be number of issues that would be in direct violation with the existing digital services act especially those measures that stand on that intersection between these two regulations and of course this sends a very dangerous signal to other governments outside the European Union governments that will definitely abuse these kind of tools especially if a democratic governments within the EU will legitimize the use of such a technology which would ultimately happen and we hope it won’t and there is a significant effort to prevent these regulation from ideally not being adopted at all which is probably at this stage way too late but at least do as much damage control as possible So we have to see how this goes, but of course the emphasis on the regulation within the European Union around digital platforms in general is very strong. There was a number of other laws adopted in recent years and it will definitely trigger this Brussels effect that we saw in the case of the GDPR, but also in case of other member states within the EU, especially in the context of content governance, for instance infamous NetzDG in Germany where Justitia is running the report every year where they clearly show how many different jurisdictions around the world follow this regulatory approach. And if it’s coming directly from the European Union the situation will only get, you know, as much as I believe in some of those laws and regulations, what they try to achieve, everything in the realm of content governance and freedom of expression can be significantly abused if it ends up in the wrong hands and in the system that doesn’t take their constitutional values and rule of law seriously.

Namrata Maheshwari:
Thank you. Udba, my question for you is actually the flip side of my question for Ilishka. Given that so much of this debate is still dominated by regions in the global north, mostly the US, UK and EU, how can we ensure that the global majority plays an active role in shaping the policies and the contexts that are taken into account when these policies are framed? And what do you think tech platforms can do better in that regard?

Udbhav Tiwari:
Thanks, Namita. I think that generally speaking if we were to look at, just for the first maybe minute, in the context of end-to-end encrypted messaging, I would say that probably the only country that already has a law on the books that nobody’s, at least the government doesn’t seem to have made a direct connection between possibilities like client-side scanning and regulatory outcomes is the Indian government. Because India currently has a law in place that gives the government the power to demand the traceability of pieces of content in a manner that still preserves security and privacy. So we’re not very, I don’t think it’s too much of a stretch for, say, a government stakeholder in India to say, why don’t we develop a system where there’s a model running on every device or a hash or a system that scans for certain messages on every device and the government provides hashes and then you need to be able to, like, essentially scan a message before it gets encrypted and reported to us for whether it, and if it is a match, it means that that individual is spreading, you know, messages that are either leading to, like, public order issues or other kinds of misinformation that’s spreading that they want to clamp down on. So, and the reason I kind of raised that, even though traceability is not necessarily a client-side scanning issue, is that I actually think that the conversation is both a lot less nascent in the vast majority of the global majority conversations, but it also has a lot more potential to cause much more harm. And that’s because a lot of these proposals both float under the radar, don’t get as much attention internationally, and ultimately the only thing that holds or protects the individuals in these jurisdictions is the willingness of platforms to comply with those regulations or not. Because so far we, apart from the notable exception of China, where in general there have been systems where the amount of control that the state has had on the internet has been quite different for long enough that there are alternative systems, to the point at which I think that the only known system that I’ve read of that actually has this capability is the Green Dam filter, I think it’s called in China, which does have the ability to, was originally installed on, and I think it’s almost mandatory for it to be present on personal computers, which was originally recommended as a filter for pornographic websites and adult content, but there have been reports that since then it had, that may have reported to governments when people have either searched for certain keywords or gone after or looked for content that may not necessarily be approved at that point as well. And I think that that shows, that showcases that in some places the idea that client-side scanning may not be this hypothetical reality that will exist in the future but may already exist for some time, and that given the fact that we are only relying, for better or for worse, on the will of the platforms to resist such requests before they end up being deployed, I think that the conversation that we need to then start having is, one, what are the ways in which people outside these jurisdictions are actually holding platforms to account for when these measures get passed? So if they do get passed, saying, do you intend to comply with it? If you don’t intend to comply with it, what is your plan for if the government escalates, like, its enforcement actions against you? And as we’ve seen in many countries in the past, like, they can get pretty severe, and ultimately I think this is something that will need to be dealt with at a country-to-country level, not necessarily platform-to-country level, because I think that ultimately, if, depending on the value of the market for the business or for the strength of the market as a geopolitical power, the ability of a platform to resist demands from a government is ultimately limited, and they can try, and some of them do, and many of them don’t, but ultimately it’s something that only international attention and international pressure can reasonably move the needle into. The final point that I’ll make there is, I do think that even when it comes to the development of these technologies, these are still very much, very, like, Western-centric technologies, where a lot of the models that they are trained on, a lot of the information these things are designed on, come from a very different, like, realm of information that may not really match up two pieces in the global majority. I have, like, read of numerous examples outside the end-to-end encrypted context, where, for example, something that a lot of platforms do is that they block certain keywords that are known to be secret keywords for CSAM, which are not very well known, and they vary radically in different jurisdictions, so in order to find it, it may seem like an innocuous word that means something completely different in a local language, but if you search for that, you will find users and profiles that actually CSAM already exists, and just finding out what those keywords are in various local languages, in individual jurisdictions, is something that, like, many platforms take years to do, be able to do well, and that’s not even an end-to-end encrypted or client-side scanning problem, it’s a how much are you investing in understanding local context, how much are you investing in understanding local realities problem, and if that happens there, I think that, like, it’s because those measures fail, it’s because when it comes to unencrypted content, that platforms don’t act quick enough or don’t account for local context enough, that governments also end up resorting to measures like recommending client-side scanning. That’s by no means to say that it’s the fault of these platforms that these measures or these ideas exist, but there’s definitely a lot more that they could do in the global majority to actually deal with the problem on open systems, where they actually have a much better record of enforcement in English and in countries outside the global majority than within the global majority.

Namrata Maheshwari:
Thank you. I have one last question for Sarah, and then we’ll open it up to everybody here, and if anybody is attending online, so please feel free to jump in after that. Sarah, as our AI expert on the panel, what would your response be to government proposals that treat AI as a sort of silver bullet that will solve problems of content moderation on encrypted platforms?

Sarah Myers West:
So, I think one thing that’s become particularly clear over the years is that content moderation is, in many respects, an almost intractable problem. And though AI may present as though a very attractive solution, it’s in many ways, it’s not a straightforward one. And in fact, it’s one that introduces new and likewise troubling problems. AI, for all of its many benefits, it remains imperfect. And there’s a need for considerably more scrutiny on claims that are being made by vendors, particularly given the current state of affairs, where you know, quite few models are being going through any sort of very rigorous independent verification or adversarial testing. I think there are concerns about harms to privacy. There are concerns about false positives that could sort of paint innocent people as culprits and lead to unjust consequences. And lastly, you know, there’s been research that has shown that malicious actors can manipulate content in order to bypass these automated systems. And this is an issue that’s endemic across AI. And underscoring, you know, even further the need for much more rigorous standards for independent evaluation and testing. So before, you know, we put all of our eggs in one basket, so to speak, I think it’s really important to one, evaluate whether AI, broadly speaking, is up for the task, and then two, to really look under the hood and get a much better picture of what kinds of evaluation and testing are needed to, you know, verify that in fact, these AI systems are working as intended, because by and large, the evidence is indicating that they’re very much not.

Namrata Maheshwari:
Thank you, Sarah. And thank you all so much on the panel. I’ll open it up to all the participants, because I’m sure you have great insights and questions to share as well. Do we have anybody who wants to go first? Great, sure. Could we, before you make your intervention, could you just, in a line, maybe share who you are?

Audience:
Oh, no. Is it? Okay, it’s better now. Good morning, everyone. Or, yeah, still good morning. My name is Katarzyna Staciwa, and I represent the National Research Institute in Poland, although my background is in law enforcement and criminology, and soon also clinical sexology. So, I really want voices of children to be present in this debate, because there were already mentioned in the context of CSAM, which is child sexual abuse material, scanned, and, yeah, on some other occasions. But I think there is a need to make a difference between general monitoring or general scanning, and scanning for this particular type of content. It is such a big difference, because it helps to reduce this horrendous crime. And there are already techniques that can be reliable, like hashes. And by hashes, I also mean experience of hotlines, in-hope hotlines present all over the world. And it’s already experience of, I believe, more than 20 years of this sort of cooperation. So, hashes, they are gathered in a reliable way. There is 3-I verification in the process of stating if a particular photo or video is CSAM. So, it’s not like a general scanning, it’s scanning for something what has been corroborated before by an expert. And then on AI, I’m lucky enough, because my institute is actually working on AI project. And we train our algorithms to detect CSAM in a big bunch of photos or videos. And I can tell you that this is being very successful so far. So, we use also current project by InHope that follows on ontology, specific ontology. So, we train algorithms in a very detailed way to pick up only these materials that are clearly defined in advance. So, and it’s again, it’s an experience of years of cooperation, international cooperation. And I can tell you that general monitoring is something very much different than scanning for photo or video of a six-month-old baby that is being raped. So, please take it into consideration in any future discussions that while we have obligation to take care of privacy and online safety, we first have an obligation to protect children from being harmed. And this is also deeply rooted in all the EU conventions and all the UN conventions and the EU law. So, we have to make a different, we have to make a decision because for some of these children, it will be too late. And I will leave you with this dilemma. Thank you.

Namrata Maheshwari:
Thank you. Thank you so much for that intervention and respect all the work you’re doing. Thank you for sharing that experience. I think one thing that I can say for everybody on the panel and in the room is that all of us are working towards online safety. And I know we’re at a point where we’re identifying similar issues, but looking at the solution from different lenses. So, I do hope that conversations like this lead us to solutions that work for safety and privacy for everybody, including children. So, thank you so much for sharing that. I really value it. Anybody else?

Audience:
Thank you for the great presentation. I’m Arlena Wozniak from European Center for Nonprofit Law. And thank you for your intervention. I’m following up on that. I’d love to hear from you, Alishka. You mentioned the potential misuse of EU regulation. Then, more broadly, how can this kind of child safety narrative can also be used as a slippery slope for other narratives like counterterrorism or fighting human trafficking, which are all laudable goals, which as human rights advocates, we all fight for. And thank you for your mention about child protection. Indeed, online safety applies to all, especially marginalized groups. But I’d love to hear from you how it’s not as easy, it’s not a black or white kind of picture, and how these narratives can often be abused and weaponized to actually prevent encryption.

Eliska Pirkova:
Thank you so much. Great question. And thank you very much for your contribution. From a position of digital rights organization, we, of course, advocate for the online safety and protection of fundamental rights of all. And, of course, children also have their right to be safe and they also have equally right to privacy. And we can go into nitty gritty details on general monitoring, how these technologies work and whether there is any way how general monitoring would not occur. And I think that maybe we would even disagree to some extent. But the point is that the goal is definitely the same for all of us. And especially when it comes to marginalized groups, as Marlena rightly pointed out, it’s a major priority for us, too. But I definitely find it difficult, mainly as an observer, because we truly rely fully on the ADRI network in Brussels, European Digital Rights Network, who leads our work on child sexual abuse material. And I often see that precisely the question of children’s rights is being, to some extent, I would say, I’m trying to find the right term. But the emphasis on that, even though it’s a number one priority for all of us, it can be used in the debate to maybe counter argue against opinions that are slightly more critical towards some technical solutions, while no one ever disputes the need to protect children and that they come first. And that often complicates and maybe becomes, to some extent, almost counterproductive. Because I don’t think that we have any differences in terms of goals that we are trying to achieve. We all are aiming at the same. outcome in the process but perhaps the means and ways and the policy solutions and regulatory solutions that we are aiming at might differ and that’s of course subject to debate and to ongoing negotiations what is that solution and none of them will be ever perfect and there will have to be some compromises made in that regard but I do find this you know dichotomy and more kind of like very straightforward black-and-white differences in terms of when we are doing a rare advocacy and almost occasionally trying to put it in a way that we should choose the side incredibly problematic because there is no need for that I think we all as I said have the same outcome in mind so I don’t know whether I answer your question but indeed this is a very complex and complicated topic and we need to continue having this dialogue as we have today and inform each other position and try to see each other perspective in order to achieve that successful act outcome that we are striving for and that’s the highest level of protection.

Audience:
Thank you, Vagesha and then Hioranth. Hi, thank you. Is this working? Okay. Hi, I’m Vagesha, I’m a PhD scholar at Georgia Tech. Kudos to all of you to condense all of that material into 40 minutes. I mean it’s a vast thing that you’ve covered here. I think I have a comment that will lead to a question so I’ll be quick. Alishka, you mentioned about significant risk in the beginning and I was thinking about how significant risk on any sort of online platforms are often masked by this concerns of national security when it comes to the governments directly right and national security risk could be often subjective and could be related to the context of which government and how they interact with it. So and I think Udbhav also mentioned about how harmful content is a big problem in the space. I mean all of us agree about that. I think my question would be largely alluding to one when you were talking about and this is to everybody when you’re talking about scraping of the content to be used further on how much of the apps that are available online actually store the data in an encrypted format so how big is that problem of you know scraping of that data and it’s encrypted it being an encrypted format and two how do we think about it from a user’s perspective so what can a user do directly to either not solve this problem but intervene in this problem and present their perspectives ahead. Thank you. Thank you.

Namrata Maheshwari:
Actually do you want Udbhav to take the question given the platform reference and could I request everyone to just keep the questions brief so that more people can participate. We have only a few more minutes to go. Thank you. Sure so

Udbhav Tiwari:
on the platform front I think how big is the problem and how pervasive is the problem is an interesting one because on one angle it depends on whose perspective you’re looking at it from. If you’re looking at this from the perspective of an actor that either produces or consumes child sexual abuse materials then it’s arguably a lot of them because this is how the one would argue they communicate with each other and share information with each other which is measures that either aren’t online at all or are ways in which they are encrypted but I think that’s definitely a space that needs a lot more study especially especially going into like what are the vectors in which these like pieces of information are shared and communicated with each other because there has been some research on how much of it is online how much of it is offline how much of it is natural discovery how much of it is you have to seek out the fact that it exists kind of discovery but a lot of that information is both very jurisdiction specific and I think overall has not been answered to the degree that it should be. On what are the things that users themselves can do I mean broadly into three categories right like one is there’s reporting itself because then even on other systems the ability for a user to say I have received or seen content that is like this and I want to tell the platform that this is happening is one route. The second route and this applies to more limited systems is the content exists in this form and I would like to directly take it to the police or to law enforcement agency saying I have it from this user in this way and this is the problem that it’s creating and ultimately the third is for the user and this is like something that a lot there’s been a lot of research on is like intervening at the social level where you talk to your like if it’s somebody you know for why this is kind of problematic and you ask them to get professional psychiatric help and then like they essentially get treated like they have a disease. Platforms can or cannot play a role in this some of them can proactively prompt you to seek help some of them can tell you it’s a crime there are some countries where courts have mandated that these warnings proactively be surfaced and laws too like in India but ultimately I think it’s a area that needs a lot more study which just hasn’t happened so far. Just to very quickly add to that and I’ll pass it on is that all

Namrata Maheshwari:
of this is by no means to say that you know platforms play a role in keeping people safer in a way that governments don’t. By all means we need measures to make platforms more accountable including the ones that are end-to-end encrypted absolutely but the question is just how to do it in a way that most respects fundamental rights. I’ll pass it to you and then to the lady in the back. Online Rianna, Sarah, if there’s anything that you want to add to any question please just raise your hand and we’ll make sure you can come in.

Audience:
Yeah my name is Rao Palme from Electronic Frontier Finland and I don’t have a question but I just like to respond as well to the law enforcement representative here that I’ve lost a lot of credibility in law enforcement to use their tools for what they actually say they use them for. For example in Finland they tried to introduce a censorship tool, well they did in 2008 and in the end it took a hacker to scan the secret list of the police that censorships or censors the websites and we found out like the rationale for the tool went that it has to be used like that because the CSUN material is hosted in countries that our law enforcement doesn’t have access to or even any cooperation and there was this hacker who scanned the internet to find out the secret list he was able to compile about 90% of that list and we actually went through the material and had a look what’s in there. First of all it was like less than 1% was actual child sex abuse material and the second point which I think is even stronger is that guess the biggest country that hosted the material? It was US. After that Netherlands. After that UK. In fact the first 10 countries were Western countries where all you need to do is pick up the phone, call them to take it down. That’s it. Why do they need a censorship tool? The same goes for this kind of client-side scanning. I feel it’s going to be abused, it’s going to be used for different purposes after it goes to gambling and so on. So it’s a real slippery slope and it’s been proven before that that’s how it goes. Thank you and thank you very much for your comment. So I work for iPad International which some of you will know we’re at the forefront of actually advocating for the use of targeted technology to protect children in online environments and I absolutely I mean what’s interesting just even about the people in this room is I think certainly what we’re seeing is an example and we speak for both sides of the conversations being divided and I I’m very happy I’m here and I’m really enjoying this conversation because I absolutely believe in critically challenging our own perspectives and views on different issues and it’s been really interesting to hear particularly the point about Global South and different jurisdictions and it’s absolutely I think we have a system that is working it’s not perfect and there are examples where there have been problems but in general the system is working very well and we could give many other examples of why that is but we need to build on the existing system to expand out into other regions. One of the things I think is interesting and this has been a key theme of the IJF I think for me is this issue of trust in institutions and trust in tech and they’re very difficult to achieve, they’re easy to lose, they’re hard to gain, it’s trust in general. I think on this issue it’s at the forefront of the problem. I think one of the things I always regret is that there isn’t more discussion of why we do agree because there are areas we agree and again one thing that comes up when we deal with issues of trust are issues of transparency whether that’s in processes, algorithmic transparency, oversight, reporting, they’re not perfect but as civil society we can call for accountability so I think that those are areas where we agree and I do wish we were speaking a little bit more about that. In terms of the legislation and general monitoring you’re right we’re not going to go into the details of the processes in the EU but I do think there’s a sometimes a convenient conflation of technology in general and specific technologies that are used for certain things and I think if we talk about targeted CSAM detection tools and spyware they’re not the same thing and I think sometimes there’s a convenient conflation of different texts that are used for different means. The other thing and this is very much to your point about data sets upon which these tools are trained, it’s true that we need to be doing much better at understanding and having data that will avoid any kind of bias in the identification of children but just to this final point one of the reasons for differentiating between hosting of content and which is very much related to internet infrastructure but it is shifting is also that we need to talk about victim identification that one of the reasons to take down and refer child sexual abuse material is that it gets to into processes where children can be identified and we have decades now of experience of very successful processes whereby law enforcement are actually identifying children and disclosing on their behalf because we have to remember that child sexual abuse material is often the only way a child will disclose because children do not disclose and one of the fallacies, I’m sorry I will finish here, one of the fallacies in the debate about the child rights argument is often that we we’re calling for technical solutions as a silver bullet, absolutely not. I think one of the things we all agree on is this is a very complex puzzle and prevention means technology, prevention means education, prevention means safety measures, prevention means you know working with perpetrators, it’s everything that we need to be doing and we’re absolutely calling for that. So I suppose it’s not a question but I wanted to sort of make that point and maybe it’s a question or call to action is that we really need to be around the table together because I think there are areas where we absolutely are agreeing.

Namrata Maheshwari:
Absolutely agree with that and I do hope we’ll have more opportunities to kind of talk on the issues that we all care about. Unfortunately we’re over time already but I know that Rihanna has had her hand raised for a bit so Rihanna do you want to just close us up in one minute?

Riana Pfefferkorn:
Sure so to close I guess I’ll just note that I’ll emphasize something that Alishka said which is that we know that all fundamental rights are meant to be co-equal with no one right taking precedence over any other and how to actually implement that in practice is extremely difficult but it applies to things like child safety as well that are these contentious you know horses that we can get stuck on and that’s the topic of report that I helped author including with an emphasis on child rights as part of the DFR labs recent scaling trust on the web report that goes into more depth into all the different ways that we need to be forward-looking with regard to finding equitable solutions for the various problems of online harms. I also just want to make sure to mention that when it comes to trustworthiness of institutions we do need everybody to be holding governments accountable as well. There was recent reporting that Europol had in some of the closed-door negotiations over the child sex abuse regulation in the EU demanded unlimited access to all data that would be collected and have that be passed on to law enforcement so that they could look for evidence of other crimes not just child safety crimes. So in addition to looking to platforms to do more we also need everybody child safety organizations included to be holding governments to account and ensuring that if they are demanding these powers that they cannot be going beyond those and using that as the tip of the spear with one particular topic to be demanding unfettered access for all sorts of crime investigations because that goes beyond this sort of necessity and proportionality that is the hallmark of a human rights respecting framework. Thanks.

Namrata Maheshwari:
Thank you. A big thank you to all the panelists Sarah, Rihanna, Udbhav, Elishka, Reitz. Thank you for moderating online and thank you all so very much for being here for sharing your thoughts and we hope all of us are able to push our boundaries a little bit and arrive at like a common ground that works for the best of all users online. Thank you so much. Have a great IGF. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.

Audience

Speech speed

168 words per minute

Speech length

1948 words

Speech time

695 secs

Eliska Pirkova

Speech speed

171 words per minute

Speech length

2097 words

Speech time

734 secs

Namrata Maheshwari

Speech speed

177 words per minute

Speech length

2092 words

Speech time

709 secs

Riana Pfefferkorn

Speech speed

189 words per minute

Speech length

2000 words

Speech time

636 secs

Sarah Myers West

Speech speed

156 words per minute

Speech length

762 words

Speech time

294 secs

Udbhav Tiwari

Speech speed

194 words per minute

Speech length

2785 words

Speech time

863 secs

The Internet in 20 Years Time: Avoiding Fragmentation | IGF 2023 WS #109

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Henri Verdier

Henri Verdier, a pioneering internet entrepreneur, took a leap of faith in 1995 by starting his first internet company during a time when there were only 15,000 web surfers in France. Initially, Verdier harboured doubts about the potential of crowd-sourced knowledge bases like Wikipedia. However, he has since come to embrace the transformative power of the internet.

One of Verdier’s concerns is the fragmentation and privatization of the internet. It is disconcerting to see certain big states and tech companies disregarding the importance of a free, open, and decentralized internet. This issue raises questions about the future of an internet that is accessible and available to all.

Cyberspace has become intertwined with various aspects of life, including education, health, business, and even matters of war and peace. This highlights the enormity of the impact of the digital revolution in recent times. Furthermore, there has been a rise in digital diplomats as part of this revolution.

Understanding the distinction between technical fragmentation and legal fragmentation of the internet is crucial. Technical fragmentation leads to a higher temptation to disconnect from each other, while the legal aspect of internet governance empowers individuals to shape their own future.

In advocating for a free, open, and decentralized internet, Verdier acknowledges the importance of respecting each country’s right to establish its own legal framework. He believes in the right of the people to make decisions about their own future and is a strong proponent of an open and neutral internet. He opposes the idea of a unified market for tech giants such as Mr. Zuckerberg.

Another vital aspect is the need for network standards and legal standards to be interoperable. This ensures seamless connectivity and compatibility between different systems.

Verdier highlights the distinction between private online spaces, such as social networks, and the internet itself. He sees entering a social network as akin to leaving the internet, emphasising that social networks are “private places” built on top of the internet’s infrastructure. Additionally, Verdier expresses a preference for European rules over private rules from platforms like Elon Musk’s.

The golden age of the internet has ushered in an unprecedented openness of access to information, knowledge, and culture. This has been a monumental shift, allowing people from various backgrounds to engage with a vast array of resources. Furthermore, this period has uniquely empowered communities and individuals, enabling them to have a greater say in shaping their own futures. The permissionless innovation that characterizes this era has also spurred remarkable progress.

Verdier cautions that threats to individuals’ autonomy, empowerment, and creativity can stem from many sources, not solely rogue states. He has expressed concerns about the role of the private sector in potentially impeding these freedoms.

In conclusion, Henri Verdier, a respected internet entrepreneur, has witnessed and experienced the incredible evolution of the internet. While initially doubtful, he now recognises its transformative potential. However, he remains watchful of the dangers of fragmentation, privatization, and the potential threats to people’s autonomy and creativity. By advocating for a free, open, and decentralized internet, he strives to strike a balance between global connectivity and respecting the sovereignty of individual nations. Overall, his insights and observations shed light on the complex challenges and opportunities presented by the internet in the modern world.

Izumi Aizu

Predicting the future is a challenging task, especially when it comes to disasters and conflicts. These events are often unpredictable in nature, as exemplified by the earthquake and tsunami that occurred 12 years ago, which was not foreseen. Recent conflicts in Gaza and Ukraine were also unexpected. Despite advancements in technology, such as the Internet, smartphones, and AI, natural calamities and conflicts continue to impact the world unexpectedly. This suggests that while optimism about the future is important due to technological advancements, reality often brings unexpected events.

The future is multi-faceted, consisting of both positive and negative aspects. It is composed of different elements, including both dark and bright aspects. While there may be positive advancements, there are also dark and challenging aspects to consider. It is important to have a holistic understanding of the future, considering its multi-dimensional nature.

One perspective on the future of the Internet is presented by Izumi Aizu. He believes that the future scenarios of the Internet will be characterised by mixed networks co-existing with the traditional Internet, fragmented Internet with national bloc politics, and a globally unified strength Internet. This suggests that the future of the Internet may be chaotic and fragmented.

However, Aizu also believes in the Internet as a tool for global communication and knowledge sharing. Despite the potential fragmentation due to political and economic reasons, he emphasizes that the underlying ethos of the Internet as a communication tool is likely to persist. Aizu challenges the view that achieving a ‘better internet’ alone should be the ultimate goal. Instead, he emphasizes the importance of focusing on creating better societies and better people.

Aizu agrees with Sheetal Kumar’s statements about the need to harmonize legal frameworks to international human rights standards and make governance bodies more inclusive, particularly in the context of internet governance. He suggests that the future of technology, including the Internet, should be informed by current politics and environmental changes. This includes considering potential regulations on servers, data centers, and artificial intelligence (AI) due to environmental factors.

Furthermore, Aizu emphasizes that the focus should not solely be on the future of the Internet but on the future of humanity as a whole. He argues that it is essential to address global goals like good health and well-being, quality education, and sustainable cities and communities alongside technological advancements.

The current discourse on artificial intelligence (AI) is criticized for its lack of inclusivity. Aizu points out that important countries like China and India were not adequately represented in the discussion. This highlights the need for broader participation and diverse perspectives in shaping the future of AI.

The present state of the Internet Governance Forum (IGF) is perceived as peaceful but unremarkable. Although the IGF has evolved from being tense and fearful in the past, it is now considered to be less impactful and engaging.

In conclusion, predicting the future is a challenging task, particularly regarding disasters and conflicts. Advancements in technology do not eliminate the unpredictability of these events. The future is multi-faceted, composed of both positive and negative aspects. The future of the Internet may be chaotic, but it also holds potential as a tool for global communication and knowledge sharing. The focus should be on creating better societies and better people, rather than solely improving the Internet. Harmonizing legal frameworks and governance bodies to international human rights standards is crucial for responsible internet governance. Considering current politics and environmental changes is important when shaping future technology. Inclusivity is key when discussing topics like AI, and broader participation is needed. The present state of the IGF is perceived as peaceful but unremarkable, highlighting the need for more impact and engagement. It is essential for IP fundamentalists to expand their perspectives and engage with other global issues. By doing so, they can learn from and contribute to discussions on topics like climate change.

Olaf Kolkman

Predicting the future of the internet is a challenging task due to the complexities and rapid advancements in technology. However, there are differing viewpoints on what the future may hold.

One perspective is that openness is a key feature that should define the future of the internet. This notion is supported by the belief that the scientific method of sharing knowledge, criticizing each other, and making knowledge readily available has been instrumental in driving innovation and progress. Openness allows for collaboration and the exchange of ideas, thereby fostering continuous development and improvement. Furthermore, empowering communities through bottom-up methods, such as building Internet Exchange Points (IXPs) and providing cookbooks for community networks, helps ensure that everyone has equal access to the benefits of the internet.

However, there is another argument that proposes a future scenario where the internet becomes closed and proprietary. This model envisions a world where services are primarily developed to generate profits, prioritising monetary gain over network connectivity. Under this system, the concept of openness may be overshadowed by profit-driven motives, potentially hampering innovation and limiting access for certain groups of people.

Additionally, the lack of infrastructure is identified as a significant challenge that leads to fragmentation. Without adequate infrastructure, internet services may be limited or nonexistent in certain regions, impeding connectivity and hindering progress.

One area of concern is the influence of industry politics on standardisation bodies. It is recognised that choices made by these bodies can be influenced by industry interests and agendas, potentially impacting the open and transparent nature of internet standards.

The notion of consolidation is another topic of discussion. Even with open technologies, companies may seek to extract profits and monopolise the market, leading to consolidation and reducing diversity. This trend raises concerns about fair competition and innovation within the internet ecosystem.

On the other hand, innovation does not always require strict standards. For example, the development of blockchain technology by Satoshi Nakamoto, where an innovative approach was taken without relying on a predefined standard, showcases the possibility of permissionless, open, and individual-driven innovation.

Open architecture, open-source, open standards, and transparency are highlighted as essential components for a positive future of the internet. Open architecture allows people to build upon existing technologies, while open-source encourages collaboration and reuse of building blocks. Open standards and transparency promote inclusivity and foster trust among users.

Internet regulation and governance are acknowledged as crucial aspects for the future of the internet. A principle-based approach that considers factors such as individualism, autonomy, and societal values is suggested as a means of organising the internet. However, achieving global consensus on these matters is expected to be challenging given the diverse perspectives and interests of various stakeholders.

In conclusion, predicting the future of the internet is a complex task, given the rapid pace of technological advancements. While there are differing opinions on what the future may hold, the importance of openness, infrastructure development, community empowerment, and fair governance are recurrent themes in shaping a positive future for the internet.

Lorrayne Porciuncula

The analysis explores different perspectives on the impact and governance of the internet and technology. It begins by highlighting the initial optimism surrounding these tools, with the belief that they would serve as liberating and empowering forces. Lorrayne Porciuncula grew up closely involved in the evolution of the internet through her father’s local ISP in Brazil. She conducted a survey that revealed widespread optimism about the benefits that technology would bring to society. However, it is noted that the reality of technology’s impact is more nuanced than early optimistic predictions.

Porciuncula acknowledges that while the internet and technology have brought some positive changes, they have not fully lived up to the idealistic visions many had held. The argument presented is that the future concern lies more in the legal and regulatory aspect of technology rather than the technical layer. It is stressed that there is a need to consider how to build alignment across different national legal and regulatory frameworks to avoid fragmentation.

Furthermore, it is suggested that coordination and collaboration are essential in creating a more agile perspective towards internet infrastructure. This includes having a multi-stakeholder approach and addressing the challenges of cross-border coordination. Porciuncula emphasizes the importance of finding institutions and processes that are capable of considering various perspectives and adapting to the ever-evolving nature of technology.

The analysis also highlights the complexity of the internet and the need for international cooperation in its governance. It is recognized that the internet is difficult for one government to regulate and comprehensive governance requires collaboration on an international scale. The argument is made that the focus should be on governing the complex adaptive system of the internet through international cooperation.

Narratives are identified as playing a crucial role in discussions about the internet and digital society. Porciuncula emphasizes the importance of addressing issues such as walled gardens with competition tools and identifying the requirements society has for the internet. The analysis also notes that there is a lack of clarity about what society wants from the internet.

The need for remedies that allow users to switch between internet platforms is highlighted, drawing parallels with the example of telecoms where users have the right to switch. This is seen as a means to promote competition and reduce inequalities.

Addressing the complexity of internet governance requires a clear objective, an incremental and iterative approach, and multi-stakeholder inclusion. The analysis stresses the importance of considering the perspectives of underrepresented communities and incorporating them into the decision-making process. It is argued that multi-stakeholderism is not about relinquishing government decision-making power but rather about creating a more inclusive and democratic approach.

Lastly, the analysis suggests that sandboxes can serve as a valuable tool for testing new policies and understanding potential issues. By allowing for real-world testing of regulations, sandboxes can provide insights into the effectiveness of policies and help address any unintended consequences.

In conclusion, the analysis highlights the need for a more nuanced understanding of the impact and governance of the internet and technology. While there was initial optimism about their liberating and empowering potential, it is recognized that their impact is more complex. The focus should shift towards the legal and regulatory aspects and finding alignment across national frameworks to avoid fragmentation. Additionally, a more agile perspective, international cooperation, and multi-stakeholder inclusion are crucial in addressing the challenges of internet governance. Clear objectives, an iterative approach, and multi-stakeholder involvement are necessary to tackle the complexity of the system.

Emily Taylor

In the discussions surrounding the future of the Internet, Emily Taylor raises the need to explore potential risks and scenarios. Taylor outlines three possible scenarios for the Internet’s future: muddling along as it currently is, fragmentation due to various factors, or a more positive collective future created by society.

Taylor also reflects on the optimism once associated with the Internet, expressing a desire to rediscover that sense of potential for liberation and empowerment. This highlights the importance of not losing sight of the positive aspects of the Internet’s evolution.

The discussions emphasize viewing technology as an integral part of society rather than something separate. Izumi’s views on the chaotic nature of the Internet and the need for focus on better societies and individuals support this argument. The concept of a better future should encompass technological advancements as well as advancements in society and individuals.

In conclusion, the future of the Internet requires consideration of potential risks, a renewed sense of optimism, and recognition of the integration between technology and society. This comprehensive analysis offers insights into the discussions surrounding the future of the Internet and the need to align technological advancements with societal progress for a more inclusive and beneficial future.

Note: There were no UK spelling and grammar errors to correct in this text.

Sheetal Kumar

The future of the internet is predicted to become increasingly intertwined with our daily lives and more challenging to separate from our activities, according to multiple speakers. They assert that advancements in technology have resulted in devices becoming smaller and faster, leading to the omnipresence of cameras through mobile phones. This development has made capturing and sharing images an effortless part of our routine.

Furthermore, the speakers emphasize the accuracy of past predictions regarding technological advancements. This observation highlights the potential for future visions and creations to shape the evolution of the internet. It implies that our anticipation and actions today can play a crucial role in determining the trajectory of technological progress.

Sheetal Kumar, one of the speakers, underlines the significance of actively shaping the future of technology. She stresses that technology should feel liberating for all individuals, especially those who lack positions of power. Kumar emphasizes the need to address and overcome current social inequalities in shaping the future of the internet. This call for inclusivity is accompanied by an appeal for engagement and cooperation among technology builders and standard-setters.

Moreover, the speakers stress the importance of harmonizing legal frameworks with human rights standards and making decision-making bodies more inclusive. This notion is grounded in the existence of international human rights law and standards. The speakers argue that aligning legal systems with human rights principles leads to more equitable and just outcomes. They advocate for increased transparency and a reinstated sense of user control in internet data, as recent trends have demonstrated a shift in control from users to corporate actors and governments.

Protecting the openness of the internet is seen as paramount. The speakers highlight the value of open access, enabling people to go online, build new applications, and develop technologies. They argue that maintaining openness fosters innovation, collaboration, and an inclusive digital environment.

In conclusion, the future of the internet is expected to be tightly integrated into our lives, making it difficult to disassociate from our activities. Promoting a future where technology feels liberating and inclusive is a shared goal among the speakers. They advocate for engagement, cooperation, and the alignment of legal frameworks with human rights standards. Reinstating user control and transparency while protecting the openness of the internet is also considered essential. Ultimately, the future world should be built upon the principles of liberation and the safeguarding of human rights.

Raul Echeverria

During the discussion, the speakers covered various important topics, including the challenge of internet fragmentation and its negative impact. They acknowledged the already existing fragmentation in the internet and expressed the mission to minimize it as much as possible, promoting a more unified and accessible internet for everyone.

Another significant aspect discussed was the need for gradual objectives and commitments in policy making. The speakers emphasized the importance of starting with simple agreements and progressively improving upon them. This approach encourages collaboration and partnerships among different stakeholders, in line with the goal of achieving the Sustainable Development Goal (SDG) 17 of “Partnerships for the Goals.”

To improve messaging and policymaking, the speakers emphasized the importance of clear and concise messages regarding internet fragmentation and its implications. Simplifying these messages would enhance policymakers’ understanding and enable them to make informed decisions. This approach aligns with SDG 16, which aims for “Peace, Justice, and Strong Institutions” and underscores the need for effective communication in policy-making processes.

Additionally, the discussion shed light on the impact of fear on shaping future policies, particularly in relation to artificial intelligence (AI). The speakers observed that discussions at a global conference primarily focused on fears and concerns about AI, with a negative bias. They argued against formulating policies solely based on fear, advocating for a balanced and rational approach rooted in evidence-based decision-making.

The speakers also emphasized the importance of involving youth in policy discussions. They believed that regardless of their level of expertise, young individuals should have a voice in shaping the future. This recognition aligns with SDG 16 and highlights the value of diverse perspectives in the policy-making process.

In summary, the speakers stressed the need for collaboration, clear messaging, and gradual improvement in policy-making processes, while cautioning against the negative influence of fear. By involving various stakeholders, particularly youth, in discussions, they aimed for a comprehensive and inclusive approach to envision and shape the future of the internet.

Audience

The future of the internet is heavily influenced by innovation in use cases and applications. Younger engineers are seen as key drivers of this innovation, as they come up with new ideas that shape the development of internet protocols and technology. However, there are concerns about the current state of the internet. It has shifted from being a force for good to being driven by aspects such as surveillance, capitalism, malware, and misinformation. This observation highlights the need for measures to address these negative aspects and ensure that the internet continues to serve as a positive force in society.

Diversity and inclusion also emerge as crucial factors in the development of internet standards. The lack of female participation and end-user representation in standards bodies is seen as a problem that needs to be addressed. Having more diversity and inclusivity in these bodies allows for a wider range of perspectives, leading to more comprehensive and effective standards.

Predicting future advancements in technology should focus on understanding user demands rather than solely relying on technological capabilities and government regulations. The speaker suggests that the best way to anticipate future developments is by understanding what individual users want technology to do. This user-centric approach ensures that technological advancements align with the needs and desires of the people.

While there is technological optimism, challenges arise from governmental regulation fragmentation and enforcement contradictions. The existence of contradictory laws and regulations related to privacy and online content does not seem to inhibit governments from enforcing them, raising concerns about the effectiveness and coherence of regulation in the internet landscape.

Incentives, particularly money, play a significant role in driving internet development, especially in the context of web 3 crypto. However, it is acknowledged that money may not be the sole incentive driving technology development. Other factors such as societal impact, innovation, and user satisfaction should also be considered.

The influx of cryptocurrencies is expected to make the future of the internet more complex and fragmented. This observation raises concerns about the possibility of increased fragmentation and the need for regulation to address these complexities effectively. Government regulation fragmentation is seen as a major risk that could hinder the development of a cohesive and secure internet.

There is also a focus on the need for more inclusive regulation, particularly in the context of AI. The lack of consensus and the competition surrounding AI regulation are seen as challenges. It is suggested that businesses, civil societies, and the engineering sector should document the consequences of fragmented regulation to increase awareness and promote more balanced and inclusive approaches.

Inclusivity and engagement of users from the global south and countries with geopolitical differences are highlighted as essential for the future of the internet. By incorporating diverse perspectives, the development and governance of the internet can be more representative and inclusive.

There are concerns about the negative aspects of the internet, such as internet shutdowns and the exploitation of ICT by bad actors. These issues call for regulation and measures to ensure the proper and ethical use of technology.

The importance of aligning government regulations with human rights norms and standards is emphasized. Both governments and companies have responsibilities to uphold human rights through their actions and policies.

Inclusive governance and the involvement of diverse stakeholders, particularly users, are seen as crucial. By including different voices and perspectives, decisions about the internet’s future can be more comprehensive and representative.

In conclusion, the future of the internet is shaped by innovation in use cases and applications driven by younger engineers. However, challenges exist in terms of the internet’s trajectory towards negative aspects such as surveillance and misinformation. Ensuring diversity and inclusion in internet standards bodies is key, and predicting future technology advancements should focus on understanding user demands. Regulation, especially with regards to cryptocurrency and AI, needs to be comprehensive and inclusive. Inclusivity, human rights, and the prevention of negative impacts on society should be at the forefront of decision-making.

Session transcript

Emily Taylor:
There’s an expectant silence, so I’m going to fill it. Good morning, good afternoon, good evening to those who are joining us online, and welcome to this workshop organised by the DNS Research Federation entitled The Internet in 20 Years Time. So this is organised around the theme of avoiding fragmentation, and what we decided to do was to imagine ourselves into a future in 2043, and we will be reflecting on the internet as it has become in our prediction, how we got there, what good would look like in 2043, and what action we might need to take now to fulfil the hoped-for future that we want. So my name is Emily Taylor, I’m a founder of the DNS Research Federation, I’m also CEO of Oxford Information Labs, and an Associate Fellow at the international affairs think tank Chatham House. I’m joined today by a wonderful panel of experts who are going to indulge this act of imagination, but I also hope that we can involve you, the audience in the room, and also online. Please feel free to ask for the floor at any stage, we’re not doing the sort of opening remarks, we’re going to just travel through those themes of imagining the future and how we got there. So if at any point you would like to join the conversation, please, you’re more than welcome to do so. So I’m joined today on the panel, I’m going to kind of run through from end to end, we have Olaf Kolkman, who I think is your current job title, Principal Internet Technology Policy and Advocacy at the Internet Society? Thank you. But we’re very, very fortunate to have Olaf, those of you who know him will know how deeply his understanding and communication of the technical layers of the internet and his ability to communicate that to non-technical people is much appreciated on this panel, and I hope we’ll be hearing that and the reach across into standards as well. We have Lorraine Porciancula, who’s the Executive Director of the Datasphere Initiative, and I hope I haven’t pronounced your name completely wrongly. We have Ambassador Henri Verdier from France, who is joining us today as well. We’re very delighted to welcome you to the panel, Ambassador. We have, you’ll see an empty seat beside me, that is for Raoul Echevarria, who’s joining us at about the hour mark. He’s the Executive Director of the Latin American Internet Association. We then have Izumi Aizu, Senior Research Fellow at the Institute for Info-Socionomics at the … Something like that. Something like that. Did I say that right? At Tama University here in Japan, and then Sheetal Kumar, who’s Head of Advocacy at Global Partners Digital. So welcome to all of you on the panel. So my first question to you all is, if we imagine ourselves in 2043, what does the internet look like? Let’s try the sort of, you know, your best guess. Before we get started on that, as we’re in your hands as futurologists today, how good are you at predicting the future? Would you say, does anybody want to share any anecdotes about their prowess at predicting the future? Olaf, have you got anything for us?

Olaf Kolkman:
I knew this was coming. I told this story to Emily once. In the second half of the 90s, sort of 95, 96 or so, I was making webpages in the university, studying astronomy. At some point, a PhD student that I was working with came to me and said, let’s bail out. Let’s start a company making webpages. And I told him in his face, no, I will not do this. This whole web thing will not go beyond academic libraries and preprints. So that is how good I am at predicting the future.

Emily Taylor:
Great to have you on this future gazing panel, Olaf. With that, Sheetal, have you got anything for us?

Sheetal Kumar:
I’m not sure if I have a mic here. Sorry. I think we might have to share. So I think the danger with these questions is it also forces you to reveal your age to some extent, which is part of the game, perhaps. But I do remember, perhaps, it’s not me, but I remember my parents saying that they, well, one of them, that they imagined 20 years ago that in 20 years we would have cameras on us all the time, which is true. We have our phones. And we would be able to access, you know, what we want to see on smaller phones because they kind of imagined that the devices, the pagers and then the big block phones that we had and we were carrying around would just become smaller and smaller and faster and faster. And that’s what happened. They should be on the panel. But I think that that lends us to, you know, the question of, like, how does that then evolve and what does the internet look like? I think it’s more perhaps a question for people, what does the internet feel like? And it’s, I think, going to be along the lines of, of course, what we create and what we envision and how we build that. But an internet that is more, just more in our lives, more embedded, more difficult to disassociate from everything that we live in and inhabit. So that would be my prediction.

Emily Taylor:
And we’ll come back to the vision of the future in a second, Shetal. Thanks for sharing that. Ambassador Verdi, I think if we do that one.

Henri Verdier:
Hello and thank you for the invitation. I started my first internet company in 1995 in France, so we were 15,000 web surfers and I didn’t miss internet. So here I was right. And I remember when, for example, do you remember when Bill Gates said that internet would never work and Microsoft.net would be better? So here I didn’t miss the story. But my company was a subsidiary of a publisher and I remember the birth of Wikipedia and I thought and I said that it was impossible to conceive an encyclopedia without a genius like Diderot or d’Alembert. So I said, this is impossible. And that was my first mistake in this story. Yeah, well, you know, what do they say?

Emily Taylor:
Predictions are difficult, especially about the future, right? But you got you got a lot of it right, like Shetal’s parents. So Izumi, how about you?

Izumi Aizu:
Thank you. I think many of you know the very famous term. The best way to predict the future is to invent it by Alan Kay. But 12 years ago, there was a big earthquake and tsunami happened. We have never predicted it, right, and we didn’t want to invent it either at all. If it’s positive, you can invent, you can make the future bright. But how many of us have imagined that Gaza thing just started to fire and what’s going to happen? How many of you, Ukraine, not to mention, but also 28 years ago, there was a big earthquake in Kyoto. I mean, Kobe hit us, many killed. So yes, while we are very much optimistic about the future with all the great things like technology, Internet, smartphone, AI, the reality may be composed by many different colors, dark, void, vacuum, green and white. So I don’t know how really to respond to your nice question, Emily, but I will try to come up later.

Emily Taylor:
Thank you very much. Lorraine.

Lorrayne Porciuncula:
Thank you so much, Emily, and thank you for an invitation of this panel. I keep thinking that, well, I grew up with Internet, right? It was very, it’s hard for you to predict something that it was part of the air, something that’s already a given, right? And so I remember when I was very young and my father had one of, had a local ISP in Brazil. It was one of the first in Brazil, actually. And I remember playing around servers in the cool rooms and all that. That was very much part of sort of my life, right? I’ve been part of that. And so trying to retrace when I started thinking about the Internet as something separate was probably when I was around my undergrad studying international relations and economics. And I was very much into Amartya Sen. And as a Nobel Prize winner, thinking about capabilities and how that is development, right? Rather than just thinking about how much money you’re going to make or GDP you’re going to grow in a country, it’s about capabilities of individuals or communities. And it struck to me particularly because that was around where, well, the green wave and the movements in Middle East were happening. There was the Arab Spring. And I remember the sense of excitement of what technology was going to bring. And the kind of empowerment and expansion of capabilities was going to bring. And this realization very naively at the point, and I think was shared by many people, that there was no way to fight this because it was going to come in terms of liberating people and populations. And technology was going to empower everyone. And it was interesting because I did do a small survey asking people about their predictions. And there were a number of questions, answers to that question. And a lot of people saying, well, there’s just a lot of positivity and optimism in terms of what it was going to bring to society. So when I think back to that, I think, well, maybe, I mean, we certainly are not there in terms of just being the solution to so many of the problems that we already have inherently as societies. But somehow it has brought good things. So the answer is way more nuanced, as with any prediction. It never comes in an extremist kind of scenario.

Emily Taylor:
Yeah. So we’re not going to get it right. But I think that that view from you, Lorraine, as somebody who never remembers not having the Internet, that the future is difficult to predict. It will have good and bad aspects, as Izumi has said. But it’s also, you know, one of the things I hope we can rediscover on this panel in a small way is that sense of optimism that you describe from your earlier time. And so we set out in an accompanying paper to this session, three possible scenarios for the future, which we published on our blog. I don’t know if you’ve seen it. I think it’s on the on the page for this workshop. But in in the sort of TLDR aspect, it’s we muddle along more or less as we’re going. And it’s a little bit worse, probably, you know, but somehow all holds together somehow, a bit like what we’ve got today. But in 20 years time, there’s a fully fragmented future, which is either divided at the technical layers, at ideological layers, at at regulatory layers or all three. And then there is the bright future, the where we all collectively get our act together and almost sort of deliberately work to create the Internet in that optimistic frame that you described so beautifully, Lorraine. So if I could just get a sense from our panel and and also please do, you know, raise your hand if you would like to to to join in this conversation. I’d like to hear from you, you know, according to your expertise or your area of interest, you don’t have to cover everything. What do you think is the most likely future that we will have for the Internet and why and at which layer do you see the most risk? Shall I start with you, Olaf, as you started by sharing so honestly your prediction about there being no future?

Olaf Kolkman:
Yeah, I again, predicting the future is incredibly hard. And what you normally do with scenario thinking is you go into the extremes. Now, when I read this paper for the first time, what I what I sort of noticed is that the future is already here. What you’ve taken are points that we already see start happening to starting happening and that can explode and find their find their way into that future and become more prevalent. And if that happens, the world will look different. When I was thinking of the story of hope, and this is a way to sort of classify those futures, when I was young, again, what I liked about the Internet, what drove me towards the Internet and what made me the the professional that I am now is the openness, the openness, the really the scientific method of sharing knowledge. Criticize each other, having knowledge available. Everything I learned about the Internet, I learned on the Internet, and I shared my learnings and I contributed to that Internet as well. And that that feature of openness, I think, is is one that sort of classifies is another way to classify the scenarios that you have. A scenario that first scenario that you have a mixed scenario with a net closed networks or mixed networks coexisting with the traditional Internet is about being closed, about being proprietary, about developing services for which the services make the money and people pay for the services. And that’s the way that they connect. I connect to this service. And the network connectivity itself is not important anymore. And that’s different from the third evolution, which is more open and treats the Internet as a way to connect to the rest of the world and choose your services. So I leave it at this for the moment. We can go in deeper.

Emily Taylor:
Thank you. I’ve got three people waiting to join the conversation, and I’m so pleased to see that at such an early phase. And also, I would encourage any women in the room who would like to ask a question to either raise their hands. And I personally find the the mic in the in the aisle quite a big step. But if any if any women would like to join the conversation from the floor, please do. And some younger people. And Asians. Thank you. But let’s can we just run through some some very brief injections from you to the conversation if you’re ready to do so? Thank you very much. Sure.

Audience:
This is Barry Lieber. What I’m gonna say is going to follow on very nicely from what Olaf just said. I have a talk that I’ve given in a few places about Internet architecture, how we built the Internet. We collectively how we how we got where it where it is and where it’s going and in the how it got where it is. There’s a lot of what are the innovations that drove the architecture? And how did we add to the protocols, the suite of protocols that make up the Internet with things like media streaming and teleconferencing that we’re working? We’re now working on protocols for autonomous cars to talk to each other in a little pocket that. So there’s the where we’re going is a realization that the that what has driven the Internet is innovation in in use cases and applications and things that we that we can do with the Internet have built up the the suite of protocols and the technology that makes the Internet. So as we look for the future, where it’s going, it’s going to be I can’t predict specifics, but what I can predict is that it’s going to be some brilliant engineer who’s a third my age who has the next great idea for an application on the Internet that’s going to drive another set of standards and technology that builds the Internet 20 years from now.

Emily Taylor:
Thank you. And thank you for also highlighting the role of standards in shaping the way that we experience technology. So I hope that we’ll come back to that on the panel. Andrew, do you want to just give us a quick injection from you? And then after we’ve heard from Mike, I’m going to resume our panel discussion with Izumi.

Audience:
This might actually work quite well, follows quite nicely on Barry’s point. So I’m going to come at it from a different point of view. I think when we look back in 20 years, we’re probably at or close to an inflection point. Up until now, the Internet’s largely been a force for good. And I would observe that when we consider things like surveillance, capitalism, malware. disinformation and misinformation and CSAM, we’re at the point where the balance is shifting to it no longer being a force for good and actually being a force for harm rather than good, when you net out the various effects. If I look at, so this is where we get to Barry’s point, those sort of standards bodies, I literally this morning received an email telling me that from a survey of the ITF membership it’s around 10% female. I’ll leave that out without comment. There are no end-users or virtually no end-users present in the standards bodies. The ITF is not unusual in that regard. None of them are really very good in terms of multi-stakeholderism in any meaningful way. So I would suggest that when we look back in 20 years, I think the reason it’s an inflection point is we either change the SDOs to be multi-stakeholder and diverse in all sorts of different axes or potentially this will fail under the weight of the harms because we need to design this as a internet for the users not by the engineers. So we’ve got two very contrasting views already for the future.

Emily Taylor:
We’ve got from Barry the idea that you know there’s going to be some really unexpected piece of innovation that just comes out of nowhere and that sort of picks up a point from Izumi about you know when you look back at things we failed to to predict even in the last week these are unexpected things and a somewhat more pessimistic view from Andrew about you know and highlighting some aspects if you like of the standards development world that are not currently as inclusive as they should be and even the internet becoming a force for harm rather than good which I saw you know caused ripples in the room. Mike can you help us out with another vision and then we’ll resume the panel.

Audience:
Well my comments are going to feed in very nicely to both previous speakers. I’m at the Carnegie Endowment for International Peace but I’ve had eight dream jobs. A couple back where I was at Georgetown teaching about communications culture and technology. My most popular class was called how to predict the futures parentheses s and I taught the students that the best way to understand what’s coming isn’t to look at what the technology can do it’s to look at and not to look at what governments think the technology should not do. The best thing to do is to look at what individual users will want the technology to do and that’s whether it’s digital technology whether it’s biotech whether it’s cars and I don’t think your panel is structured in a way to do that. So I want to rewrite your project description your program description to spend at least a few minutes thinking about what is it that are driving the companies and the governments to make the internet better because I’m a technological optimist and a political pessimist. In those countries that have policies that allow a lot of innovation and competition I think we’re going to solve most of the problems that were just mentioned. But I don’t think we’re gonna understand the future if we don’t understand what’s driving it and I’m just challenging you to ask that question.

Emily Taylor:
That’s a really welcome challenge and I think Izumi I’d like to to turn to you on that you know we we often talk about fragmentation in very particular ways like we’re going we’re in the layers we’re talking about it at a technical level but as Mike has has challenged us you know there are lots of different issue as well. ways that fragmentation might emerge and there’s different ways of framing the

Izumi Aizu:
Thank you very much Professor Nelson. I’m a student of your class okay 20 years ago. Well it’s a pity that we don’t have anybody in the teens and 20s on the panel. Not to mention that many in the room. I had an interesting discussion yesterday with the 12 years old kid and five and talked about the war and peace and the internet. But let’s put aside. With the scenario you prepared three ones right? Mixed networks coexisting with the traditional internet and the second scenario is fragmented internet with national block politics internet and the third one is the globally unified strength internet. I would say our first one and to mix and I would call it chaos. I don’t see any globally coherence ranks the internet. I asked Mr. GPT and Mr. Bard a few minutes ago. I’ve got more than I can read in five minutes. But interestingly overall nature of the internet while fragmentation due to political and economic reasons might be predominant the underlying ethos of the internet is as a tool for global communication knowledge will likely persist. That’s Mr. GPT and Mr. Bard said personally I believe that the internet is likely to become more unified in the coming years. Very noisy pictures and these are AI not me. So I would perhaps later explain a little bit why I would call it chaos. As Mike may have already mentioned but you said for the better internet I challenge that. I would say for better society better people not internet. That’s a very different views of the world and the internet. So that’s my second contribution.

Emily Taylor:
Thank you very much Izumi and and I think it sort of comes back to where you started us Lorraine is the the and and Chital this sort of maybe it’s it’s more and more false to think of technology as something that is separate from ourselves. That it’s it’s integrated that the future of the technology is very much about our own future as societies and as people. Lorraine what do you think is the most likely of any scenarios but the three may have helped or not but you know we want to remember Mike’s challenge on that you know you can just frame it however you want.

Lorrayne Porciuncula:
Thank you so much. I actually I really love the flow of the conversation and how organically we’re integrating the different arguments. I’m going to try to build on all of that. I think that the question around fragmentation needs to be you know needs to take a step back in terms of in which layer are we talking about when we’re talking about fragmentation. Often people tend to confuse the issues in terms of where it’s actually what kind of fragmentation we’re talking about. And also I mean there’s a difference between fragmentation on the technical layer and a fragmentation on the legal and regulatory layer. And so for me it’s more useful to think about I mean if I’m talking about scenarios in 20 years I don’t think we’re gonna have such a big issue with the technical layer in itself. I think the real big hairy challenge will going is going to be around the legal and regulatory space. And that’s not a potential if it’s a reality right now. And so I’d rather focus on that scenario which is very much true around fragmentation around in regulatory aspects than on what could happen if a number of things happen in the domain name system etc right. So that taken aside I also think that ultimately it’s not only about fragmentation of the internet as Izumi said. It’s about what’s happening with our digital society. And so it’s the question that I’d like to pose then with the also instigated by Mike is what how do we how are we going to do to get along really. And what are the incentives because ultimately policymakers are going to design regulatory and legal responses to what they are afraid of, to what they want to control. And talking to Mike just before the panel he was saying he wrote a paper just 25 years ago around what are the different incentives of what governments would like to control which is basically taxes. Basically you know content that is online. How do you actually ensure national security, democratic processes. All of this are incentives in terms of how do you build the tools from a national angle to and see them reflected in the technologies that we have today. So the questions are around how the national concerns are going to are going to be really then reflected in that technology and the fragmentation happens in that in that angle really which is legal regulatory and about how do we actually find convergence or how do we find find interoperability around those different spaces of legal regulatory regimes. And how do we find the institutions and the processes that are able to to take this all in from a more agile perspective from a way that coordinates across borders and across multiple stakeholders.

Emily Taylor:
Thank you very much. So I hope we can expand on your final point about you know what do we need to do how do we equip ourselves from the future for the future we want. But we’ve we’ve heard from from the audience and the panel that there there might be fragmentation risks at a technical level and also from you from a legal and regulatory level. Ambassador Verdier I’d like to turn to you now for your prediction about where we are likely to end up and thank you very much I can see already five people including two women thank you very much for that. I’m going to I’m going to go to Ambassador Verdier and then to Chantal to articulate your predicted or preferred visions and and then I’d like to hear about your visions as well for the future and we can we can join forces in that way.

Henri Verdier:
Okay we’ll try to be brief and make four small remarks. First I don’t know what will happen but I know what we should fight for because you you say that you don’t remember the world before Internet. I can imagine the world without Internet and the world after Internet and I’m not sure that my daughters will know the world where we are living in today because of course probably there will always be a technical standard and the possibility to build interaction between computers but you all know that some big states don’t really like this free open decentralized Internet and some and most big tech don’t care about it and you did observe like me that for example now 80 percent of the submarine cables are private and we can observe a tendency of privatization of something that was a common and that’s so there will always be an Internet like there is a darknet for example but maybe we won’t live within this Internet and that’s a main concern so I wanted to share this with you and we have to fight for this again open neutral free decentralized Internet based on the open standards that we can share. My second and quite so maybe that’s normal I remember the declaration of independence of cyberspace most of you remember John Perry Barlow at this time the cyberspace could be seen as a foreign place somewhere else now it did invade and transform everything education, health, business, war, peace so this is the normal life there is not anymore a digital life this is a life so every problems we have as governments and as citizens as democracies are as a digital aspect that’s why you will observe more and more digital diplomats because now half of the diplomacy depends on the digital revolution so we have to think about this and this is new six years ago you didn’t have any digital diplomats now we quite all have digital diplomats so we’ll have to engage everything we did conceive as states as democracies for centuries and just to finish and to launch a conversation I’ll share your views that we should pay attention we should separate the possibility of a technical fragmentation which would be a very very bad scenario I don’t know if you did think about the fact that so far we are all interdependent even the less digitalized countries they rely on the internet the same as we do if you could imagine two or three technical internet the temptation to disconnect the other would be very high and the war will become a war but the ground infrastructure itself so so far we are we have cyber war we have attacks we do observe lots of things but no one did in turn to disconnect internet itself because it will hurt even the attack or so there is a technical layer and there is a political layer the legal layer from my perspective I will fight always for the unique open neutral decentralized internet the legal aspect is something different so of course it would be better for the business for everything to to converge in one direction but if you believe in democracy you believe in the right of the people to take decision regarding his own future so we cannot ask for what one legal framework for all the world because we want to agree with some countries in France you cannot say publicly anti-semitic or homophobic worlds because the French people want this so we don’t have to comply with other regulation if we want this as a democratic country and here we okay I will make a world and I finish with this I will fight for one unique internet I I’m not there to build one unique market for mr. Zuckerberg that’s another issue that’s not mine

Emily Taylor:
thank you very much and I found that very moving actually thinking about you know of the same generation who remember not having an internet and thinking to our children’s future and that that that might be the same more similar to our past and than we would like but thank you for for you know policy people do like to be miserable and to have to have you articulating so strongly the the intention to to really fight for a better future is something that we often we feel very passive sometimes with technology that it happens to us and we don’t get to design our future and from the panel I’d like to hear last but not least from you should tell about you know what scenario you think is either most likely or what you would like to see.

Sheetal Kumar:
Well quickly as I would love to hear from everyone else I can only speak to what I would like to see because I think we build we build and we do design our future unfortunately some of us of course don’t have as much power to do so as others and that’s something we have to be aware of and I think where the internet should should act as a tool for changing that but going back to my point of instead of thinking about how the internet might look like how it should feel it should feel in 20 years time liberating and it should feel liberating to people who perhaps now don’t occupy those positions of power and don’t have them and and that I think is an is an opportunity for us to to ensure that the internet now doesn’t or in the future doesn’t reflect the inequalities of our society and and the structures of them and to point that was made earlier what’s really important in that is ensuring that those who build it the technologies build the standards are engaging with each other and opening up these spaces to all those affected and I know we’re going to come on to what can we do what should we do I think a lot of thinking has been done both here within the Internet Governance Forum and outside about that so really happy to reflect on that because I think there’s a lot of positive recommendations and concrete recommendations I think we also frankly we know what we need to do but we often don’t do it so the more we enumerate I think and the more we vocalize and the more we commit collectively to what we need to do the better so I’m glad we’re here to do that and yeah happy to pick up on the points of what we need to do to get to that third scenario yeah

Emily Taylor:
thank you very much Chantal and that really teases up very nicely for the next stage in our conversation I’d like to thank you very much for waiting patiently I I’m going to go to Georgia first and then I’m going to come to to you Steve and then we’ll sort of zigzag across our audience members thank you very much

Audience:
thank you very much to the panel for all your comments my name is Georgia Osborne I’m senior research analyst at the DNS Research Federation and so you mentioned incentives and I think one thing I think about when I think about the internet in 20 years times are what are the incentives money is a massive incentive we hear a lot about what people are talking about with web 3 crypto and those kind of different fragmentations that you have on a technical layer and I would say that money is currently driving the incentive to build a web 3 through crypto and perhaps that this feature will be much more complex with that coming into play and perhaps it will be more than money that will drive that incentive as the technology develops I was wondering what whether the panel could comment on this type of fragmentation so you have the Ukraine war which is funded mainly by crypto and you know you can call it the metaverse the Fediverse or whatever you want to call it but I’d be curious to know what type of fragmentation you might see in that kind of area, whether you see it being integrated or more fragmented. Thank you very much.

Emily Taylor:
Thank you very much for that. I’m going to take the comments, and then we can invite the panel to make some comments or reactions to that challenge on Web3 and crypto. Thank you very much for your question. Steve.

Audience:
Thank you. Steve DelBianco with NetChoice. Fragmentation by regulation is not only the largest risk, but it is the reality, as you indicated. So for the short term, for the front end of the next 20 years, that is what we will confront. There are scarcely any inhibitions for a government to legislate in any way it wishes, in a populist fashion, in particular today, because the consequences are nonexistent. The governments that try to control what their citizens see and say and contradictorily impose privacy at the same time they’re trying to enforce the laws against bad actors, those contradictions are not enough to stop governments from doing so. Seventy-five percent of this conference has been about AI, and it really isn’t about a drive to consolidate and cooperate on AI regulation. No, it’s been a competition. The speakers have competed with their vision of how they believe AI should be regulated, and that will continue. In the case of NetChoice in the United States, we try to push back on that fragmentation through lawsuits based on unconstitutional approaches. We’re having some success there, but that is not going to work in a cross-border fashion. I’m calling on business and civil society, particularly the engineering sector, to begin to document the consequences of fragmentation regulation at all the layers, document not only the costs, because costs become a barrier to entry, and costs become costs that are passed on to the consumers, the voters of the countries that have embraced unilateral action by their governments. So I believe we need to raise the pain level so that governments believe that there’s some cost to enacting unique legislation that imposes cross-border jurisdictional impacts and raises the cost of everything we’re trying to do.

Emily Taylor:
Thank you very much. And I think that I want to hear from everyone that this is – I knew this would happen, but it’s great to have such an interactive conversation. But I want to come to you, Ambassador Verdier, on that point, and any others that want to join in, but how do we maintain what you were talking about, which is democratic choice and that diversity, while also maintaining the unity? So let’s hold that thought and hear from the others from the audience. Thank you very much.

Audience:
Hi, my name is Nikki Colasso. I run global public policy at Roblox, which is a metaverse company. I think that comment was really well taken, because as I’ve been sitting here, I’ve been thinking about the difference between IGF last year and IGF this year. And for those of you that were at the conference last year or attended online, a lot of the conversation was around how we do approach technology from a perspective of inclusivity. So Sheetal, I really appreciate your thoughts on inclusivity. We talked a lot about incorporating the global south into the decisions that were being made. And so my question for the panel is, as we move to this third phase of the conversation, I think we understand at a high level what the issues are. Very crisply, what do you see, or what are the specific steps that companies, civil societies and others can take to engage users, others in parts of the world that may not get representation and in countries that have geopolitical differences? How do we actually go about having those conversations? What is the way to do that? If we know we need to do it, that’s agreed, how does that happen?

Emily Taylor:
Thank you very much for that. So far, we’ve got Web 3.0 and money, interoperable laws, and involvement of global south, and those who have, you know, where there is disagreement on the basic ideology. Sir?

Audience:
I was really wondering if you knew my name. That was going to be impressive. Hi, I’m Jarell James, and I do have a question on, similarly, with regards to money. I don’t really know how to predict the future of the internet, but I do know how to look at history and see that money is the overwhelming factor for how power is flexed on certain communities. My question is with regards to both regulation and with regards to policy around sanctionable actions. So as we see regulation for monopolies exist in traditional, our traditional finance system, do we see a development where we stop what’s essentially digital colonization from happening from large actors like Facebook into regions that are massively underdeveloped in the communications infrastructure sector? And do we create and enact actual policies to prevent those communities from only taking solutions from outside of their regions? Instead of doing what we do, which is bring tech to these communities, how do we foster this development from within these communities so that in 20 years, we don’t sit on the same problem that we have now, which is 21st century colonialism and resource extraction? And the further part of that is just with these shutdowns, is there sanctionable actions that can start to inform this direction?

Emily Taylor:
Thank you very much for that. And it’s Jarell, right? Thank you. So we’re adding digital colonialism, and how do we foster sort of indigenous development, if you like, if I can put it that way, and also sanction bad actors? Thank you.

Audience:
Good morning. I’m Jennifer Bramlett. I’m the ICT coordinator for the Counterterrorism Executive Directorate with the United Nations Security Council. The issue of money is very interesting. I was amazed by the remarks by the representative of Saudi Arabia yesterday when he was quoting the internet that we deserve and the billions of dollars that would be lost if we didn’t solve issues of fragmentation. Hundreds of thousands of jobs. So I thought it was interesting that he put it right in front of us so bluntly. And when you look at gaming and all of these other industries, multibillion dollar industries, mobile gaming, regular gaming, I mean, it’s an amazing amount of money being generated. Amazing amount of money being generated by bad actors, which is the space that I look at, is how terrorists and violent extremists are exploiting ICT for criminal purposes. One of the main areas that we’re looking at is counter-narratives, and so how language is being proliferated across various systems to recruit, to radicalize, and to keep this criminal enterprise going. And that’s one of the issues that we’re looking at with regard to regulation, is language across jurisdictions and what’s considered harmful language, what’s considered unlawful language, and how authorities in various jurisdictions are going to deal with that in these borderless zones for what’s out on the internet. Also, one of the spaces that we’re looking at in terms of futuristics is the concept of reality. I do remember life before the internet. And yet, looking at the kids growing up, especially those who are playing in the games and in the metaverse areas, we had some really good talks with Naver Z recently, and the concept of reality, what I consider to be real, I go into a game, I play, and I leave, and then I go do my life. Whereas for other people, it’s becoming less and less of a division. And when we’re looking at legislating crimes, and bringing crimes into domestic frameworks, we already have a problem where we don’t have the legislative frameworks and the capacities in member states to be able to even deal with the internet as it is now. Many states don’t have laws that say, terrorism recruitment online is illegal. They have laws against terrorist recruitment, but it doesn’t apply to cyberspace. And so as we move into metaverse, fediverse type realities, what if something happens in the metaverse? If you, for example, detonate the White House in a metaverse world, it could be very real to some people. How do you even deal with that? And so these are things that we’re starting to think about from our side of the house.

Emily Taylor:
Thank you very much, Jennifer, and that’s a fascinating addition as well. So a huge amount to unpack in that, but the recruitment of terrorism online, the establishment of counter-narratives, and also I think the key point at the end, and among legislators, the legislative frameworks in all countries. Let’s have a very brief moment with Vittorio and Bertrand, because I feel like I’m neglecting our panel. I can see they’re like, hang on. But I want to have a reflection on what you’ve heard, and then in the final stage of our workshop, to really think about action, and think about how do we really start to articulate the vision of the future we want and what we need to do now. So let’s go to Vittorio, and then Bertrand.

Audience:
Thank you. Actually, I’ll get to my question. I’ll make my question first, but then I want to make a comment. The question was, don’t we think that we need the regulation to preserve the interoperability, the openness? Because I think that the reason why, at least in Europe, we need to regulate the interoperability, the openness of the internet, and not to break it. But I was prompted to make a comment by the previous round of comments about the future of the internet. If the future of the internet is decided by what the people want from it, what do the people want from an internet? If you take the average internet users today, what do they want? I’m going to Kyoto and not even looking at the places, but taking selfies and posting them to social media and saying, hey, I’m here, give me attention, I need someone to tell me I’m beautiful, interesting. So we’ve been building someone which is growing the bad parts of the internet, the worst parts of the internet personality of the people. So the problem is, what’s the social purpose of the internet today? Because we see the purpose in terms of new technology, we want autonomous driving and whatever, AI and bionic arms, and why are we doing them? Well, because we can, and because someone will make money out of it. But what’s the social purpose of this? I think we’re missing that. We had one 30 years ago, but we don’t have it now.

Emily Taylor:
Yeah, I think there’s a real thread running through a lot of the comments on incentives and what people want, and going back to Mike’s earlier challenge about the users,

Audience:
I want to piggyback on the distinction that Lorraine introduced between technical fragmentation and legal fragmentation. What is interesting is a lot of the technical fragmentation, if it happens, is not driven by a technical objective, mostly. It’s driven by the political environment and the objective of preserving spaces that would become separated because they would correspond to the spaces that are politically separated today. So the fact is that the legal fragmentation is a fact of the international system because of the national sovereignty, and that’s a reality that goes, as Henri was mentioning, to the notion of territorially based national sovereignties. However, to go back to what Jennifer was saying, one of the challenges that we have is that even without interoperability of the legal framework, the separation and the legal fragmentation is what prevents us from addressing the abuses in many cases. When you have a criminal investigation, the framework for access to electronic evidence is nonexistent at the moment and completely insufficient. And in most cases, it’s a whack-a-mole game to avoid a certain number of contents. And so I would just want to finish by saying, I completely agree with Henri that there is the democratic process for each country to do what they think is best for their citizens. That being said, interoperability doesn’t mean alignment completely. Just like the architecture of the Internet allows autonomous networks to function through protocols in an interoperable manner, I think the big challenge that we have if we want to preserve the open Internet is to reduce friction at the legal level by building a governance protocol that allows heterogeneous governance frameworks, including the governments, the companies, and all other human organizations, to be interoperable yet autonomous.

Emily Taylor:
Thank you very much. And of course, Bertrand, maybe there’s somebody in the room who doesn’t know Bertrand, but I think the way we promote that legal interoperability is very much leading in this. Thank you for your question. What I’m going to do, if I may, with your permission, is come back to the panel to react, and then I’m going to come back to you first in the queue. So we have had seven interventions so far, and it’s not even half past 12 yet. So what I’d like to do is to go through our panel, and please choose one question that you would like to respond to that is clearly in your frame of expertise or that just stimulated some thoughts. Who would like to go first? Can I start with you, Ambassador Verdier? Because there was quite a lot that was sort of building on your points or maybe challenging your points about interoperability, and how do we reconcile that autonomy with still being in a network?

Henri Verdier:
I think that quite everything was said. As I said, first we have to fight to protect a common infrastructure that can be interoperable. Internet is a network of networks. Internet is not just one standard everywhere. And there is a second and different question about legal fragmentation. I cherish your approach, Bertrand, that we should learn and try to build interoperable legislation. But so far we don’t really do this, and there is no world government. So I will say something more. So let’s try to progress in this direction, but we are not there. I have one observation from my perspective. You know, I come from a very libertarian Internet. I did cherish Jean-Pierre Barlow 30 years ago. So most of my friends ask me, why did you join the government, the bad actor? And why are you now fighting for regulation, for European regulation? And I don’t feel that I did change, because from my perspective, that’s personal. When you enter within a social network, you leave Internet. If you are within Facebook, within Twitter, within YouTube, you are not in Internet anymore. You are within a private place built on Internet. And I feel that there is a certain level of confusion in this conversation this morning, because we use Internet to speak about TCPIP and to speak about Facebook or Twitter or TikTok or whatever you want. And just to mention it, as a European citizen, I prefer the European rules than Elon Musk’s rules. And I prefer to discuss with other countries and other citizens, and to decide something and to impose this decision to these big platforms. If we don’t put this in the conversation, and we can still protect the free and open and decentralized Internet, but people don’t leave. My daughters, they are never on a blog, for example, or on Internet. They are always within something.

Emily Taylor:
Thank you very much. Chital and then Izumi. Okay, here we go.

Sheetal Kumar:
So there was a lot there to react to. I said earlier that I think we know what we need to do, and we are not doing it. And perhaps to elaborate a bit on that, what I would say is that on the legal fragmentation point and the fact that there is a need for harmonization of frameworks, we do have international human rights law and standards. And of course, there is a lack of agreement perhaps around how that is being effectively implemented. But it is there. We have the rule of law. We have our institutions. And we need to use what we have, which includes interpretations of international human rights law that already exist. And we need to commit to those. And I think global norms and standards, which include those that are discussed here at the IGF and are evolving, including in various UN institutions, requires referring to and, again, committing to constantly and adapting those to the digital age. Because ultimately, I think, we need to build on that. So there are a couple of areas. And I know we’re going to come to the question of what do we do. But I just wanted to highlight that protecting what is essentially the openness, the ability to get online, to perhaps build new apps or technologies and shape the future, I think is so key. We have the internet that Olaf discussed. We understand the need to protect that. We understand the need to align and to harmonize our legal frameworks to human rights standards. We know that we need to make these bodies more inclusive. And there are ways to do that. And I think that what’s really important is that we refer to these commitments and that we are all taking that home as well in various democratic institutions and in global forums that we are vocalizing these values and ensuring that there are mechanisms for implementing and instituting them. Perhaps I can say this at the end, but I’ll quickly just throw it in here. I know that we don’t have many people in their teens and perhaps younger people here. But I think it’s important for us, perhaps the older people, to not be so self-absorbed. nostalgic about the world that wasn’t that perhaps not you know isn’t that great or wasn’t that great either and to look forward to building a world that is ultimately about you know about liberation about ensuring human rights are protected and that when it comes to the internet I think is what is really key is ensuring user control so ensuring that the kind of control that has just been discussed about corporate actors or governments controlling and deciding what the internet is that doesn’t happen so we need to rest that back and how we do that I know we’re having that conversation I said we already have many many ideas and ways to do that but that’s key because it’s that that’s shifting and that’s what people are worried about and that’s when I was a child I remember thinking it was so cool that could just get online and I knew it was transparent knew how to you know where information was and and now I think what is happening is that the journey is becoming more in transparent we don’t know why certain things are happening and that needs to shift thank you very much

Izumi Aizu:
Izumi being a little bit older than you I’d like to go say a 70 percent what you said when I was 40 I was really excited to see all the new things online and stuff but to respond to maybe the Vittorio’s and Bertrand’s questions come comments I’d like to respond these it’s yes it’s the people’s will as Vittorio said or the jurisdiction legal framework etc or international politics I’d like to add a little more if you think that we’re in now in 2043 at the world let’s say conference or governance forum world governance forum not internet forum or internet governance forum with the new United Nations and the new United Communities in 20 years from now it could happen right after which ever will win at that war or this war so and then coming the war is the climate change war there might be a regulation that the one shouldn’t be too many servers at the data center not too many AI’s or crypto or these these environmental factors you may think it’s external factors to the Internet but I would suggest you to make it upside down back to the history in the 40s we had fought a lot they killed each other a lot with a reflection coming the some of the use of the information and then the ballistic calculators sending bombs and then coming the Internet during the Cold War phase but then in the 80s to 90s the world has changed the Cold War was over we hoped that East and West come closer that’s why the Eastern side wanted to be united using the Internet perhaps the China wanted to have technology and science and economic growth that’s why they accepted Internet we tried hard to talk with them so these wills of the people entities allowed some technology to be picked up and made global how about now China reached their India reached there and do they need real you know globally united and science sources from the West yes and no right so my first take of the mixture of the fragmented one as well as some you know chaotic one we all need both but we don’t know what’s the reality of the politics and environmental changes and stuff like that in the near future not to mention the far so I think we should go out of the box I’ll go out of the ivory tower or IP tower Internet centric thing the future of the Internet who cares the future of us we care so then how technologies including the current and the future Internet that you guys we guys will work may make something better that’s kind of my suggestion thank you thank you very much Lorraine thank you and I think that’s a perfect segue to to my point

Lorrayne Porciuncula:
as well I completely agree with Izumi and I think oh I I do and I think it’s a I mean we can think about the sort of golden age of the Internet and where it was a lot of sort of personal logs and websites and all that I think that the and of course it wasn’t just that but I’m just trying to think about the average person who ultimately doesn’t care about that and what they want is to be able to access content and and sometimes I think that we create I mean narratives really matter here and the words that we’re using really matter and I think that’s a common thread in this discussion we’re talking about the Internet and we’re talking about fragmentation so in questioning those terms also comes into play it’s important and so ultimately I don’t think it’s about the Internet in itself it is about a digital society right as Izumi was saying because I mean I do think that we need to think about what we want as a society first and as Chantal was saying what what does it do people people want the Internet to feel like or their lives to be on the Internet and I feel we actually don’t have the answer very clearly for that yet and that’s why we’re struggling because if we do know we are then not just creating those big enemies out of entities that are not entirely bad certainly not entirely good but not entirely bad because they offer services that are useful and at the same time I think that we then focus and what are the harms and what are the policy objectives that we are trying to achieve if the issue is with walled gardens there are tools that we can use in competition tools to use to address that if the issue is around a competition it we we can look into portability and interoperability as tools that have been used in different markets in telecoms for example on the right a user to actually move from one platform to the other that’s not possible right now and if we see this and if identify as a society that doesn’t that’s an issue we need to try to find a remedy regulatory and legal to address that the thing with with with with the internet is that it’s not one government that can do it by itself and so the question comes back to the point that is zooming made and to the point that become made it’s about governance it’s about how we cooperate in the international setting and so for me instead of using 75% or 18 workshops of the IGF to talk about AI mostly very sort of in the high level I would rather that we use that time to talk about how do we cooperate and how do we govern this complex adaptive system that is it’s really difficult to address and how do we actually identify what are our objectives economic and social and what are the best remedies to to get there

Emily Taylor:
thank you very much and that tees up the final part of our conversation today which is how do we get there but Olaf unfortunately you’re the last to go and there are several questions that have not been addressed by the panel so far web3 money incentives Global South geopolitical aspects digital colonialization counter narratives and capacities and legislative frameworks we’ve talked a bit about that but or anything you like yeah I that my first response would be stack overflow but

Olaf Kolkman:
I had indeed made a bunch of notes and I actually don’t know how to start where to start but let me start with not having infrastructure at all is the ultimate fragmentation not having it at all is the ultimate fragmentation and I I so associate with Joelle empower communities what we do at the Internet Society is is is empower communities by building IXPs by giving cookbooks for building community networks and that’s not the only way I know that Joelle is working on his own stuff and it’s very smart about it but that’s that’s really empowering and in the end that’s bottom-up other parts we talked a lot about economics and government rules and when you talk about economics and standardization I think we have to be honest standardization is also also to a large extent industry politics the standardization body you choose to do your work has to do with industry politics and we need to put that on the table and understand it economics of course consolidation happens even if you have open technologies companies will try to extract money out of the employing that open technology consolidation happens and with that you have you know accumulation of power to a point where you know he may say this is too much or at least government might say this is too much and I I associate with that to a large degree what I was also thinking is when we talked about standards you don’t need standards for every innovation and the name Satoshi Nakamoto came up and I hope you all know who that was yes blockchain inventor and that’s permissionless innovation that has changed the world for the good for the bad I don’t I don’t I have a sort of opinion about that but this is somebody who wrote a paper published the paper online for everybody to read that’s open innovation and it happens all the day we’re in this room but on the internet people are sharing code fragments and open building blocks all the time innovation happens today and it doesn’t happen only in standards organization no it happens by individuals in chats it happens in companies it happens everywhere if you ask me where you want to go with this internet in the future from a sort of technical perspective then I would say open open and open open architecture so people build against pieces of others open source so people can reuse those building blocks and open standards and with a lot of transparency around that I think that the building blocks of the few of a positive future have are are basically open I think that was my I’m my I drained my stack

Emily Taylor:
brilliant thank you very much we’ve got a little bit of time left and normally by the time we unpack all of the problems we’ve run out of time I’ve got a question from the audience and then I’m going to introduce our latest speaker Raul I’m not going to make him summarize the conversation so far what I’m going to do is to start with you Raul to think about the how not about the what not about all the problems we’ve rehearsed that not about the different visions but how do we get to a better place to the 20 years time where we’ve sussed it all out what do we need to do how do we get there so first of all thank you very much for your patience and thank you for your

Audience:
question hi I’m I’m not society youth ambassador I am probably too young to be conservative but I have my doubts on how much internet can fundamentally change but my fear is that with the pace of innovation and these bunch of emerging technologies coming in we’ll have applications built on the base of the internet and it will get really complex to the extent that we won’t have enough expertise understanding and knowledge to make sure that all these different parts work together one of the panelists mentioned complex adaptive systems I’m not sure how we are going to adapt to this so 20 years from now I think we will have fragmentation not by design but by default because we just don’t know how things work and standardization is being mentioned quite a lot and that is possibly one of the solutions but standardization is also an extremely slow process and with this pace of innovation I am not sure standardization can keep up also the interests that go into standardization as Olaf was mentioning it’s largely industry driven so I want to press the panel and the audience to think on how we can include more stakeholders especially the users in in standardization process and make it accessible to them thank you thank

Emily Taylor:
you very much so that’s a good how question as well how do we make our processes more inclusive but I think let’s let’s run through the panel but start with you Raul thank you very much for joining us and welcome to the panel Raul Echeberria how do we get to what do we need to do to get to that brighter future that that we’ve articulated thank you very much

Raul Echeverria:
I I’m sorry for the delay in joining I feel like an imposter here because I was in another session now I’m running to get here unfortunately I don’t have the answer that’s that’s the first point that the if I would have that answer probably I would get a wooden a good position a good job here in the in one of the companies that or governments or institution or institutions that but and so I just we can just reflect and think about that and as you describe it in the in the possible scenarios I think that we already we already have certain level of fragmentation in internet and we have to live with that and so I don’t think that is a feasible scenario where the that’s we have the ideal internet and perfect internet that that all of us want to have the the theory of the rational human that people act according the rationality and so there what is best for everybody is something that died in the 80s and so the incentive of policymakers are diverse many times even knowing that the decisions that are being taken are not the best one for for the the for one objective like keep the internet not fragmented so many times the decisions will go in a different direction so what our mission is to keep the internet as less fragmented as possible one thing I think that’s a I could say that’s two things is that we need to to have gradual objectives and commitments instead of for going for the whole packet just let’s try to get agreements on on simple things and try to gradually improve that and and we have to really is make our messages much more simple I I heard Olaf today in a in a in a session before this this morning earlier this morning and he was excellent you in that session and so is it is usual on him but but I thought when I was listening to him saying if I have to explain this to policymakers in Latin America I have to bring Olaf with me so the this is very it’s very complex so we need to make our our messages much more simple about what the government should not do what the policies should not produce and so to to give them more tangible tools for taking better decisions

Emily Taylor:
thank you very much Olaf I was thinking do I include mark now do I go back to the panel mark do you have a quick point or do you want to can we continue with them let’s hear two more from the panel come back to you so Lorraine and then Ambassador Verdier okay so what do we do right we’re gonna do let’s be action-orientated lectionary focused on that even it’s just one thing so from Raul we had just an incremental approach try to do small things well you

Lorrayne Porciuncula:
know yeah so so maybe we do that game where we just add on to what the other said so incremental I mean being clear on the objectives is the first be incremental is another and being iterative I think it is important as well and I think a lot of the issues with the processes that are being built to address the challenges that we have there are many is that we are under the impression that it’s suffice for us to design and develop the ultimate regulation and that’s going to solve all of our problems and and there’s a whole lot of questions that get unanswered once you do that because one we are the fact that we are in a complex that’s the system means that it’s very hard for us to predict it because the system moves in a way what one just one element in that system can have really implications on on the whole in ways that it’s very hard to predict so what we do with those systems is that we observe and we try to adapt to it in order to do that we need to be able to have the processes and the institutions that are much more agile than the ones that we have now so instead of looking to linear approaches of developing regulation we need to think of it like almost like software development where we have versioning of policies and regulation where we’re able to actually identify a bug and then be able to correct it by having inclusive multi-stakeholder consultations and the problem is that and and that’s an issue that I was trying to unpack with Bertrand the other day and it was the fact on how we think about multi-stakeholderism is that it’s trying to get away from government’s decision-making power so that everyone is deciding in a sort of a global happy assembly, right? And that’s not what multi-stakeholderism is supposed to be. And certainly it’s not what it’s sometimes and often applied by government which is, oh I’ll give you 30 days for you to participate in this consultation and I’ve done a multi-stakeholder process. That’s not it either. It’s actually being intentional about including people and different stakeholders. Inclusive, iterative, in a way that is actually particularly not only national levels in the international level that we are including the global South, youth, different communities that are underrepresented as well. And in the way that it’s actually intentional and looking towards a process that it’s not simply aiming to produce the ultimate legal text to rule them all, but it’s actually a process. What we’re trying to learn and we are assuming our just inability to predict accurately the future and the fact that we are adapting along the way and trying to be better at it. And we do not have the processes and we do not have the institutions. One of the things that we are doing in building is sandboxes as a possible avenue to testing out those issues that we don’t know about. And a lot more needs to be shared on how they best work, how they don’t work. We almost need to do a sandbox out of sandboxes in itself, but I won’t get into it so far.

Emily Taylor:
Sort of a playful approach, an iterative approach, building on where we started with Raul about incremental approaches, simpler messaging and this sort of sense of agility. Ambassador Verdier.

Henri Verdier:
So I don’t know what we have to do, but maybe we could agree on a compass. I was surprised when you did mention the nostalgia for the Golden Age, because usually I’m not a kind of nostalgic guy. But I think that we should start for at least three aspects of this Golden Age. The first one is the unprecedented openness of access to information, knowledge and culture. This was a big shift and this is not finished. We still have the digital divide and half of the humanities that doesn’t access to this. The second one was the unprecedented empowerment of communities and people. And the last one was the permissionless innovation. And from my perspective, this could be a kind of compass for future decisions. Are we increasing people’s autonomy and creative capacity? And that’s why I did mention some concern, for example, with sometimes a private sector, because if you think in terms of autonomy, empowerment and creativity, the threat can come from everywhere, not just from rogue states.

Emily Taylor:
Thank you. I like the idea of a compass and that can be very good organising. I want to come to Mark quickly and then I’ve got, I was going to come to you in a bit, or are you reacting very, yeah go ahead. If I may, with a very, what I

Olaf Kolkman:
wrote down is actually what Henri said, principle-based as the add-on. And I like the principles you pose. The note I made was principle-based and then there’s a differentiation, I think, of the approach of the Internet, of the regulator or evolution of the Internet and evolution on the Internet. I think that we can get to those shared principles much easier if we talk about the evolution of the Internet and that when we talk about empowerment, individualism, autonomy, getting a global consensus about that, on the Internet might be where the trouble is going to be in setting joint principles as a guide.

Emily Taylor:
Thank you very much. I’m going to pause in our final round-up to just have these two questions from the audience. We’ve got Mark and then Lucien and we are in the last five minutes, so thank you very much for that

Audience:
reminder. So, Revati. Thank you very much. Mark Derisgald from Brazil. I’m an Internet governance consultant for small media organizations. I found comments by Lohane and Olaf to be quite stimulating in a sense of something that I have been talking a lot about, which is the AI forum, as it has become now, hasn’t been focusing a lot on one of the questions that I find more key about it, which is open source versus closed source. So, why are we in this AI space we are in right now? Because there were very early open source developments on this that just let the technology lose. A lot of the debate goes around mid-journey and this proprietary technologies, but very few people are looking at things like stable diffusion, which are basically being iterated upon on the basis of papers, right? Just like I said, it’s paper after paper and that gets incorporated into the technology and that’s how it’s expanding. And then the private companies need to port that back into their proprietary code. So, this reality seems very feasible because we’re watching it happen right now, how open source and papers are actually starting to drive. There’s no AI consortium forum or anything. It’s literally being driven by research that’s published in an open space. So, this is something that we should be looking towards, not as, hey this is about AI, but rather about, is this the new paradigm of how different protocols and standards and different approaches will be developed? So, just to kind of complement that point, thank you very much.

Emily Taylor:
And a very, very valuable point about the role of research in acting as that sort of snowball effect.

Audience:
And I think, it’s Lucian Taylor from the DNS Research Federation, and I think my point follows on from Mark’s about protocols. I’ve been an internet engineer for over 20 years with my team and I’ve just had the privilege of meeting Vint Cerf with Emily and talking to him about protocols, an absolutely wonderful thing. And what he did, I think there’s a gap between the IETF, which is not frankly a safe space for women and people to develop standards, and how you develop protocols. Vint Cerf got together with a few other universities and they developed a way of doing things, and that was TCP IP, and they then invented the internet. And then we bake it in through standards bodies like the IETF. I think standards are a very good place to test these new ideas, and we are at an inflection point. We’ve got regulation hammering down on us, and that regulation needs to be tested. Things like know your customer, putting that into a free and open internet is really challenging. And my question to the panel is, is the IETF the place to develop in a free and open way those protocols that we need next?

Emily Taylor:
Thank you. So we’ve got probably less than three minutes left, and three panelists and questions.

Audience:
Okay, I’ll quickly say that I think, Vittoria, you posed some philosophical questions earlier, and I’m not going to judge what people do in terms of, you know, the question of what young people do, and selfies, and all of that. But what I am going to judge, and I think what we should all judge, is what governments do when they regulate. Do they align the regulation with human rights norms and standards? Do companies who also have obligations to do so, do that? Are they transparent? Are they accountable? And are our governance forums and standards bodies inclusive? No, clearly not. But there are recommendations for how to change that. The Office of the Human Rights Commissioner, OHCHR, has released a report with many recommendations on how to change and how to improve inclusiveness. And I can say that we’re doing a session on Thursday where we’ll be exploring that report. So I think protecting critical properties as they evolve, having that principles-based approach, aligning with what we have, building with on the human rights framework, and creating more inclusive spaces is really key.

Emily Taylor:
Thank you. Raul, and then Izumi. I endorse everything she said. Good. But beside that, another point.

Raul Echeverria:
A few weeks ago, I participated in a global conference of parliamentarians speaking about the future. But the average age was over 50, and so all the discussion was about the fears, about the future, and the fears about AI. And so we have to be very careful that the policies are not developed based on fear. So I say, of course, it’s normal that they have fears. I have fears. I’m terrified about the future. I’m scared. But don’t let my fears stop the evolution. This is why we have to involve youth in the discussion. And probably if we bring people like 18 years old, that they don’t have the old expertise in architecture and internet architecture and other things to speak, but they can say how is the internet they want. And that would help very much.

Emily Taylor:
Thank you very much. Give the microphone to Izumi. Our last thoughts.

Izumi Aizu:
I saw no China nor India in the last session, I mean the main session yesterday, while they are talking about AI. To me, it’s fragmented. The IGF wasn’t like that 18 years ago. We have tensions, we have fears, we have battles. Now we are peaceful and boring. Go out to the chaos or make the chaos, please. Fear, fine. We don’t know the future. Be bold. And to the IPers and IP fundamentalists, I would say, go outside the box. Go to the World Governance Forum. Go to the climate change thing and talk with them. Learn from them. Eat their foods. Don’t give them the food. Okay. So otherwise, all these complex things in 20 years would happen. The internet wouldn’t work in the youth. Thank you. Thank you very much.

Emily Taylor:
So that brings our session to an end. Thank you very much for all the interaction and to our brilliant panel. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.

Audience

Speech speed

175 words per minute

Speech length

3836 words

Speech time

1317 secs

Emily Taylor

Speech speed

160 words per minute

Speech length

3229 words

Speech time

1209 secs

Henri Verdier

Speech speed

152 words per minute

Speech length

1386 words

Speech time

549 secs

Izumi Aizu

Speech speed

156 words per minute

Speech length

1132 words

Speech time

436 secs

Lorrayne Porciuncula

Speech speed

173 words per minute

Speech length

2092 words

Speech time

727 secs

Olaf Kolkman

Speech speed

136 words per minute

Speech length

1041 words

Speech time

460 secs

Raul Echeverria

Speech speed

146 words per minute

Speech length

586 words

Speech time

241 secs

Sheetal Kumar

Speech speed

174 words per minute

Speech length

1269 words

Speech time

437 secs

Successes & challenges: cyber capacity building coordination | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Claire Stoffels

The analysis reveals several key points about cyber capacity building coordination. Firstly, there is a lack of coordination among stakeholders, leading to diverging objectives, different approaches, and duplication of actions. This lack of coordination hinders the overall effectiveness of cyber capacity building efforts.

On the other hand, successful coordination requires a inclusive, demand-driven, and context-specific approach. Cyber security transcends many communities of practice, necessitating regional collaboration and a shared understanding of the specific needs and challenges faced by different regions.

Trust is identified as a crucial component for effective cooperation in capacity building. However, building trust is challenging due to the presence of different policy fields and institutions. Luxembourg, perceived as neutral and trustworthy, has played a role in relationship building by fostering trust among stakeholders.

Another challenge is the development of scalable models for coordination. Coordinating capacity building efforts sustainably is a significant concern. Establishing mechanisms that allow for the efficient coordination of efforts while adapting to different contexts and needs remains a challenge.

Furthermore, the analysis highlights the risks posed by a lack of coordination in cyber capacity building, namely duplication of efforts and the lack of coherence. Coordinating actions and sharing information across stakeholders is vital to avoid these risks and ensure a cohesive and efficient approach to capacity building.

The importance of multi-stakeholder approaches and partnerships is emphasized. Bringing together stakeholders from diverse sectors and actively engaging them in capacity building efforts can lead to more comprehensive and effective outcomes. Luxembourg has been successful in fostering multi-stakeholder approaches and partnerships, collaborating with the national cybersecurity agency and coordinating efforts across sectors.

The analysis also points out the benefit of using coordination platforms and practitioner groups in cyber capacity building. Luxembourg has joined various coordination platforms and practitioner groups, such as the GFCE and EU Cybernet, finding them beneficial in facilitating coordination and collaboration.

The D4D Hub is highlighted as a valuable platform for exchanging information, sharing best practices, lessons learned, and improving projects. Despite the challenges in gathering information, the hub serves as an important element in project inception and formulation.

Lastly, the analysis underscores the role of donors and implementers in promoting awareness, enhancing communication, and facilitating cooperation and knowledge sharing. Claire Stoffels endorses the idea that donors and implementers have a responsibility to play a larger role in capacity building efforts.

In conclusion, the analysis identifies the need for enhanced coordination in cyber capacity building. It emphasizes the importance of inclusive, demand-driven, and context-specific approaches, building trust among stakeholders, developing scalable models for coordination, and fostering multi-stakeholder approaches and partnerships. Using coordination platforms and practitioner groups, such as the D4D Hub, can also support information exchange and project improvement. Additionally, donors and implementers should take an active role in promoting awareness and facilitating cooperation among stakeholders.

Donia

The discussion revolves around the concept of capacity-building in the context of community development and technological solutions. Both participants agree that capacity-building should be seen as a comprehensive approach encompassing various aspects, such as community awareness, legal frameworks, and governmental policies. They argue that solely focusing on technology solutions is insufficient.

The speakers emphasize the importance of adopting a holistic approach to capacity-building. This approach should involve not only technological advancements but also community awareness, including educating individuals about the benefits and implications of technology solutions. They also stress the significance of developing legislative frameworks and government policies that encourage capacity-building, as these are crucial for creating an enabling environment for sustainable development.

The participants provide supporting facts, including questions posed by online participants, which demonstrate a concern for broader aspects of capacity-building beyond technology. They also suggest that capacity-building extends beyond the requirements of SDG 9 (Industry, Innovation, and Infrastructure) and SDG 11 (Sustainable Cities and Communities) to include SDG 16 (Peace, Justice, and Strong Institutions). This highlights the extensive scope and potential impact of capacity-building beyond immediate development goals.

Throughout the discussion, the sentiment of both speakers remains neutral. They present their arguments in a balanced manner, without expressing a strong positive or negative stance on the topic. This neutral sentiment indicates a willingness to engage in an open and constructive dialogue on the subject of capacity-building and its multifaceted nature.

In conclusion, the discussion underscores the importance of considering capacity-building as an end-to-end process that encompasses technological solutions, community awareness, legal frameworks, and governmental policies. The participants argue that capacity-building should not be limited to technological advancements alone. By addressing these diverse aspects, capacity-building can foster sustainable development, promote social progress, and contribute to the achievement of various SDGs.

Anatolie Golovco

During the discussion on cybersecurity, speakers emphasised the significance of the human element in protecting computers against cyber threats. They stressed the need for individuals with the right values, ethics, and technical skills to be involved in the field. Cybersecurity is ultimately about good people safeguarding computers from bad actors.

Insufficient coordination and a lack of clarity in project objectives were identified as challenges in implementing cybersecurity initiatives. When beneficiaries lose sight of project goals midway, misalignment in project delivery occurs. This issue can be compounded by competition among donors and a lack of clarity in defining project needs. To address this, speakers advocated for improved planning and better coordination among states. States should clearly articulate project needs and roles to donors, facilitating better alignment of objectives and successful project implementation.

One proposed solution involved a three-layer mechanism for effective coordination in cybersecurity efforts. This mechanism consists of a cybersecurity council, smaller groups for peer review, and the Ministry of Economy and Digital Development, each with defined roles. This approach was regarded as efficient and conducive to better coordination, ensuring project objectives are met. The role of clear policies formulated by the Ministry of Economy and Digital Development, which help translate plans into action, was also highlighted.

Another crucial aspect discussed was the need for a people-centric approach and a re-evaluation of the cybersecurity architecture. Reducing the complexity of tools and rethinking the overall architecture are necessary steps. Speakers emphasised the importance of focusing efforts on strategy rather than merely adding layers of security to a faulty system. There should be a substantial effort invested in rethinking the ecosystem to ensure effective cybersecurity.

Throughout the discussion, it was noted that adapting project timelines to accommodate the speed of learning and the dynamic nature of cyber threats is often challenging. Donors may face difficulties synchronising their contributions with the rapidly evolving needs of the field, resulting in a focus on acquiring tools rather than developing the individuals involved. Therefore, speakers called for a greater focus on the people in the cybersecurity process, prioritising their training and education alongside procurement of tools.

In conclusion, the discussion underscored the vital role of the human element in cybersecurity. It stressed the need for individuals with the right values, ethics, and skills, alongside improved coordination and clear project objectives. A three-layer mechanism, supported by coordinated policies, can enhance coordination, and a people-centric approach, along with a reassessment of the cybersecurity architecture, may lead to more effective protection against cyber threats. Speakers called for greater attention to be given to the development of individuals in the field, emphasising their training and education as essential components of cybersecurity initiatives.

Louise Hurel Marie

The analysis emphasises the importance of better understanding and coordination among countries when it comes to supporting capacity building in specific regions. It argues that in order to avoid duplication of efforts and overloading of recipient countries, a more coordinated approach is needed. The analysis also highlights the crucial role of political buy-in for the success and sustainability of cyber capacity building initiatives. It states that without the government seeing capacity building as a priority, it becomes challenging to gain traction and achieve desired outcomes.

Another key point raised is the need to break down cyber capacity building into more specific categories. The analysis suggests that traditional cyber capacity building, capacity building for crisis response, and capacity building for conflict or post-conflict recovery can be considered as subcategories. By doing so, it becomes easier to define and address the specific needs and challenges in each area.

Insufficiencies in coordination of capacity building efforts can lead to poor sustainability measurement, according to the analysis. It argues that donor countries and recipient countries may lack effective measurements for longer-term sustainability efforts. This can result in one-off efforts or effects, with impact measurement focused on specific projects rather than holistic outcomes.

In contrast, the analysis also highlights the positive impact of longer-term programs and sustainable recommendations in capacity building. It suggests that building a longer-term capacity building program in a region could enhance sustainability. Additionally, both donors and implementers could benefit from developing and adopting broader measurements of impact beyond individual projects.

Insufficient domestic coordination is identified as a potential challenge in capacity building efforts. The analysis points out that multiple departments within a single government may conduct different types of capacity building efforts, potentially complicating coordination. Recipients might also be overwhelmed by multiple offers and struggle to designate the appropriate point of contact. This lack of coordination can lead to complications and inefficiencies in capacity building.

The analysis recommends that coordination and trust-building between countries prior to crisis assistance can enhance the effectiveness of capacity building efforts. It states that countries that have provided assistance in a crisis often had a previous relationship, highlighting the importance of trust and prior coordination. Mechanisms such as Memorandums of Understanding (MOUs) and institutionalized responses, such as the European Union’s Permanent Structured Cooperation (PESCO) framework, are cited as examples that can increase the effectiveness of coordinated responses.

Crisis response is seen as an opportunity for countries to gain political visibility and set up new coordination mechanisms to enhance sustainability. The analysis mentions the establishment of the Center for Cybersecurity Capacity Building in the Western Balkans as an example of leveraging crisis response to create new mechanisms. It suggests that the crisis response capacity building type and the broader cybersecurity capacity building can complement each other depending on the context.

Progress is reported in international-level discussions on addressing cybersecurity issues. The analysis highlights the existence of working groups on incident response and cyber diplomacy as part of the Global Foreign and Cyber Expertise (GFC) platform. It also notes that different communities meet and discuss in informal settings at the international level, indicating ongoing efforts in addressing cybersecurity challenges.

Challenges still exist at the domestic level depending on the country and culture, states the analysis. It points out that different departments in the government may have varying understandings of cybersecurity. Additionally, community engagement varies depending on the maturity of a particular stakeholder group. This suggests the importance of considering context-specific challenges and cultural nuances when designing and implementing capacity building initiatives.

Civil society organizations and think tanks are highlighted as crucial actors in bridging different communities. The analysis emphasizes their role in involving as many stakeholders as possible during the planning and designing of specific projects. Their involvement can help ensure a more inclusive and comprehensive approach to capacity building.

The analysis also suggests including recipients in the design phase of projects. Providing a bigger inception phase, where stakeholders can engage and provide input, can help create ownership and increase the chances of successful implementation.

Lastly, the analysis calls for designing a typology that accounts for contextual considerations in cyber capacity building. It argues that the evolving landscape in terms of agencies, stakeholders, crises, and conflict or post-conflict situations should be taken into account. This would enable a more nuanced and tailored approach to address the diverse needs and challenges in different contexts.

In conclusion, the analysis underscores the importance of better coordination, political buy-in, and sustainability measurement in cyber capacity building efforts. It also highlights the need for longer-term programs, domestic coordination, and trust-building between countries. The analysis recognizes the progress in international-level discussions and acknowledges the challenges at the domestic level. Additionally, it emphasizes the role of civil society organizations and think tanks, as well as the involvement of recipients in project design. Overall, the analysis provides valuable insights for policymakers and stakeholders involved in enhancing cyber capacity building efforts.

Rita Maduo

The rapidly evolving and complex cyber landscape presents challenges in coordinating cyber capacity building projects. The difficulty lies in the constant need to update strategies and priorities in response to new technologies and their associated threats and vulnerabilities. This negative sentiment arises from the fast-paced nature of the cyber landscape, which makes coordination increasingly challenging.

Emerging economies like Botswana face additional obstacles due to limited resources. Adapting to the changing cyber environment is expensive, requiring substantial funding that may not be readily available. This limitation hinders the training of cybersecurity experts and the management of complex vulnerabilities, further exacerbating the challenges faced by these countries.

Insufficient coordination in cybersecurity efforts has negative consequences. It creates weaknesses in a country’s overall cybersecurity posture, making it exploitable by cybercriminals. Ineffectual coordination also leads to gaps and vulnerabilities, hindering the effectiveness of cybersecurity programs. Additionally, inefficient resource allocation is a direct result of insufficient coordination, leading to wasted resources and misplaced priorities. Overall, insufficient coordination limits the effectiveness of cybersecurity initiatives.

Effective information sharing is crucial for cybersecurity. Insufficient coordination hampers the sharing of threat intelligence between entities, making it more challenging to detect and mitigate cyber threats. Timely and accurate information sharing is essential for robust cybersecurity measures, underscoring the importance of coordination in this area.

A positive stance is taken, emphasizing the need for proper coordination among stakeholders for effective cybersecurity. Timely and accurate information sharing between stakeholders strengthens cybersecurity efforts and can only be achieved through coordination and collaboration. This positive sentiment highlights the significance of coordination in establishing robust cybersecurity measures.

Successful cyber capacity building requires a multifaceted approach and sustained commitment from all parties involved. Donors, implementers, and recipients must demonstrate ongoing commitment to achieve long-term success. The multifaceted approach includes embracing diverse perspectives and voices in cyber capacity building initiatives. By avoiding a stagnant approach, the positive sentiment emphasizes the importance of involving different stakeholders in cyber capacity building.

In conclusion, the summary highlights the challenges faced in coordinating cyber capacity building projects in the rapidly evolving and complex cyber landscape. Limited resources, insufficient coordination, and a lack of information sharing hinder progress in strengthening cybersecurity measures. However, the positive outlook emphasizes the importance of proper coordination, sustained commitment, and the inclusion of diverse voices in cyber capacity building initiatives. Addressing these challenges is crucial for enhancing cybersecurity globally.

Hiroto Yamazaki

The discussion on cybersecurity coordination explores the challenges that arise when multiple stakeholders are involved. One key issue is the presence of too many organizational stakeholders in cybersecurity, which hinders full coordination. This fragmentation of stakeholders is observed in various layers, including divisions between private and government entities, technical and policy experts, and different countries or regions. The lack of a unified approach and participation from all relevant organizations impedes effective coordination.

Another challenge is the difficulty in achieving full coordination due to the focus on bilateral cooperation. The Japan International Cooperation Agency (JICA), a key player in cybersecurity cooperation, bases its efforts on bilateral agreements between Japan and recipient countries. This approach requires JICA to align its initiatives with the recipient country’s own cybersecurity approach, strategy, and specific needs. While bilateral cooperation is important, it poses challenges in achieving comprehensive coordination across multiple countries and stakeholders.

However, it is stressed that respecting the recipient country’s ownership in bilateral agreements is crucial. JICA adheres to the policy of recognizing the recipient country’s authority and strives to follow their approach and strategy in cybersecurity cooperation. By acknowledging and respecting the recipient country’s ownership, JICA aims to foster a collaborative environment and ensure its efforts align with the recipient country’s priorities.

Inadequate coordination within JICA’s cybersecurity capacity building initiatives is identified as a problem, leading to negative effects such as reduced efficiency, failure to maximize development impact, and a lack of sustainability. The challenges stem from duplication of assistance, limited resources, an excessive number of resources, and isolated approaches to assistance. These factors contribute to suboptimal results and negative implications in JICA’s cybersecurity capacity building projects.

To address the lack of coordination, JICA employs two strategies: bilateral efforts and multi-stakeholder efforts. In bilateral efforts, interactions with Cambodian partners and organizations such as Cyber for Development are used to reduce duplication and enhance coordination. Additionally, JICA recognizes the importance of engaging multiple stakeholders, as evidenced by their technical cooperation project in Thailand, where they collaborate with ASEAN member states, the ASEAN Secretariat, and other donors. By incorporating multiple stakeholders in their initiatives, JICA aims to foster a more coordinated and comprehensive approach to cybersecurity capacity building.

A noteworthy success is JICA’s technical cooperation project in Thailand. With the collaboration of ASEAN member states, the ASEAN Japan Cyber Security Capacity Building Center conducts training and contests, contributing to the overall improvement of cybersecurity in the region. This success story highlights the positive outcomes that can be achieved through effective coordination and collaboration.

Furthermore, the discussion emphasizes the importance of coordinating with multiple stakeholders or through bilateral interactions to maximize development impact. It highlights the need to reduce duplication and harmonize efforts through coordination. The significance of creating sustainable outcomes, such as establishing guidelines and training materials, is also recognized in the cybersecurity field.

While some sentiment expresses negativity towards the one-time training or meeting approach, suggesting it is not an effective means of achieving coordination, there is positive sentiment towards delayed or time-difference coordination. This approach allows for longer periods of interaction and enables donors to engage with recipient countries even after initial engagement has taken place.

In conclusion, the discussion on cybersecurity coordination sheds light on the challenges faced by various stakeholders in the field. These challenges include the presence of numerous organizational stakeholders, difficulties in achieving full coordination due to the focus on bilateral cooperation, and inadequate coordination within JICA’s initiatives. Strategies such as bilateral efforts and multi-stakeholder engagement are identified as potential solutions. The importance of respecting recipient country ownership, creating sustainable outcomes, real-time coordination, and employing more long-term approaches is also emphasized. By addressing these challenges and implementing effective coordination strategies, collaboration and impact in cybersecurity capacity building can be improved.

Calandro Enrico

The proliferation of cyber capacity-building efforts has resulted in challenges in aligning strategies, priorities, and activities among donors, recipients, and implementers. These efforts aim to improve cyber resilience and skills in the face of increasing cyber incidents, state-sponsored attacks, and cybercrime. However, the sheer number of initiatives has created difficulties in coordinating and harmonising these efforts.

To address these challenges, a roundtable discussion is being organised, involving representatives from various sectors, such as the internet governance forum, government officials, civil society, technical community, recipients, donors, and implementers of cyber policy. The objective of this discussion is to assess the achievements and difficulties in coordinating cyber policy activities. The outcomes of this discussion will be formulated into a policy brief, which will serve as a guideline for stakeholders involved in the field of cyber capacity building.

In the realm of cybersecurity, it is crucial for project deadlines to adapt to the learning speed of the individuals involved. Human learning speed often falls behind the strict timelines set for cybersecurity projects. Thus, the focus should shift towards prioritising people and knowledge over rigid deadlines. This approach will ensure proper skill development and overall project success.

Political willingness and transparency are essential aspects of cyber capacity-building projects. Governments are investing substantial financial resources in these endeavours; however, political will from donors is necessary to secure funding. Additionally, transparency in the use of funds is crucial, as it provides stakeholders with an understanding of how the financial resources are being utilised.

Cyber capacity building not only serves as a means to enhance technical capabilities but also as a diplomatic tool to strengthen partnerships. It can be utilised to foster collaborations and build relationships between nations. This perspective highlights the multifaceted nature of cyber capacity building, extending beyond technical aspects.

The Global Forum on Cyber Expertise offers numerous mechanisms for improving coordination in cyber capacity building. These mechanisms include the Clearing House Mechanisms, regional donor meetings, and the publicly available Cyber Portal, which collects data and information related to cyber capacity building projects over the years. Despite these resources, there is a need for increased awareness and effort to enhance global coordination in cyber capacity building.

Inefficiencies and duplication of assistance can be avoided through effective communication and coordination. Examples from Cambodia demonstrate the importance of proper coordination in cybersecurity capacity building. The ASEAN Japan Cyber Security Capacity Building Centre (AJCCBC) serves as a coordination mechanism, hosting training sessions and facilitating collaboration among different organisations. Encouragingly, there is a desire for other donors to explore potential collaborations through the AJCCBC to improve coordination within the ASEAN region.

In conclusion, the influx of cyber capacity building efforts has led to challenges in aligning strategies and activities across various stakeholders. Coordinating these initiatives requires political willingness, transparency in the use of funds, and the use of available resources. Furthermore, there is a need for increased global coordination and effective communication to avoid duplication and enhance efficiency. The examples from Cambodia and the establishment of the AJCCBC exemplify the importance of coordination and collaboration in cybersecurity capacity building.

Tereza Horejsova

The coordination and effectiveness of cyber capacity building efforts face significant challenges due to a competitive environment and a lack of sharing. The competitive nature of the field makes coordination difficult, hindering cooperation and collaboration among actors involved in cyber capacity building. This leads to a lack of project continuity and a decrease in overall impact. Insufficient sharing of information and collaboration among stakeholders also contributes to problems, particularly with duplication of projects that overwhelm recipients and waste resources. Improvement is needed in the needs assessment process, which is currently time-consuming for individual projects.

The issue of projects being supply-driven rather than demand-driven is also prevalent in cyber capacity building. This approach fails to consider the specific needs and challenges faced by recipients, resulting in projects that may not fully meet their requirements. To address this, it is important to listen attentively to the needs of recipient countries and take their unique circumstances into consideration.

Various approaches and platforms have been suggested to enhance coordination and effectiveness in cyber capacity building. The Global Forum on Cyber Expertise (GFC) serves as a valuable platform for dialogue, information exchange, and networking among actors involved in cyber capacity building. The GFC’s Clearinghouse mechanism matches government needs with the right implementers and donors, while the Sybil portal aids in project mapping, improving coordination and resource utilization.

A sustainability outlook is crucial for lasting and effective impact in cyber capacity building. Projects lacking sustainability may provide quick fixes but not long-term impact. It is necessary to consider the goals of sustainable development and ensure projects contribute to them.

Connecting the development community with the cyber community is also important for improved efficiency and better solutions in the future. Learning from the development community’s expertise enhances cyber capacity building efforts and outcomes.

Promoting openness and increasing communication among stakeholders plays a vital role in enhancing coordination. Transparency, sharing best practices, and facilitating information exchange allow stakeholders to work together effectively.

To overcome these challenges, it is crucial to improve coordination through platforms like the GFC, listen attentively to the needs of recipient countries, promote dialogue and exchange between the development and cyber communities, and foster openness and increased communication. These measures will contribute to more efficient and sustainable outcomes in cyber capacity building.

Regine Grienberger

The analysis examines various aspects of cyber capacity building and explores the challenges and opportunities associated with it. Germany acknowledges that cyber capacity building is a relatively new topic within its foreign office. They recognize its significance as a diplomatic tool to strengthen partnerships and ensure stability in cyberspace. However, one of the primary obstacles is the difficulty in securing funding for such projects. This is largely due to budget restraints and the need for political willingness, which in turn depends on risk awareness.

While funding is crucial, it is not the sole factor in implementing cyber capacity building measures. The analysis highlights the need for human resources with expertise in cybersecurity. Simply having financial resources is not enough; experts are necessary for effective implementation. The establishment of platforms, such as the EU cybernet, is essential for facilitating the identification and development of train-the-trainer programs, ensuring a skilled workforce capable of implementing capacity building initiatives.

Transparency in the investment of trust funds is lacking within the field of cyber capacity building. It is important to understand how the allocated funds are being utilized and what outcomes are being achieved. This transparency ensures accountability and can help in identifying areas for improvement and learning from past experiences.

Understanding the needs of recipients is crucial for a successful cyber capacity building project. This understanding often begins with the development of cybersecurity strategies. Expressing and admitting these needs becomes a starting point for effective collaboration and assistance.

Coordination plays a significant role in the implementation of cyber capacity building initiatives. However, it is important to note that coordination should not favour certain recipients. In development cooperation, there are instances where some recipients are given preferential treatment, while others may be overlooked. Overcoming this bias is essential to ensure fair and equal distribution of assistance.

The analysis also emphasizes the importance of regional cooperation in addition to global cooperation. Mechanisms should be developed that foster collaboration among neighbouring countries, enabling them to assist each other in addressing common challenges in cyberspace.

The field of cyber capacity building should be viewed as a two-way street. It should not only focus on the traditional donor-recipient relationship seen in development cooperation. Instead, it should encourage mutual learning and knowledge sharing between all parties involved to create a more comprehensive and conducive cybersecurity environment.

Digital development cooperation should include cyber capacity building as it is integral to digital transformation. Enhancing the skills and capabilities of public administrations as they transition into the digital realm requires a strong focus on cybersecurity. This includes providing the necessary hardware and software to ensure robust cybersecurity measures are in place.

In conclusion, the analysis highlights various aspects of cyber capacity building, including the challenges of funding, the importance of human resources, the need for transparency, understanding recipients’ needs, the role of coordination, the significance of regional cooperation, and the integration of cyber capacity building into digital development cooperation. These insights provide valuable considerations for policymakers, funders, and implementers in their efforts to build strong and secure cyber capabilities.

Audience

The Budapest Convention plays a crucial role in cybersecurity by providing a legal basis for capacity building programs. These programs aim to ensure consistency and sustainability in equipping countries with the necessary knowledge and skills to combat cyber threats. A key feature of these programs is their emphasis on localized training, where trainees become trainers themselves, cascading the knowledge to others. This localized approach expands the reach and impact of capacity building efforts.

The Budapest Convention also highlights the importance of South-South Cooperation, where individuals from different regions participate in the capacity building program. For example, an African judge in Ghana may train judges in Kenya, fostering collaboration and knowledge sharing. This approach strengthens partnerships and promotes a collective response to cybersecurity challenges.

Regional cooperation plays a vital role in capacity building as well, facilitated by the Budapest Convention. Countries, such as Albania and Montenegro, collaborate to collectively address common cybersecurity challenges, sharing resources and expertise. This regional approach enhances collaboration, stability, and the effectiveness of capacity building initiatives.

The establishment of a point of contact in each country, compliant with international law, is strongly advocated. The 24-7 network provided by the Budapest Convention ensures a stable point of contact, enabling effective coordination and communication during cybersecurity incidents. This promotes international standards and legal obligations.

While a separate legal basis for capacity building programs is not immediately necessary, better utilization of existing legal frameworks is recommended. Utilizing existing treaties that already have capacity building programs ensures sustainable and coordinated efforts.

Donor countries have a significant role in supporting capacity building. Drawing lessons from past development experiences can enhance demand-driven capacity building in cybersecurity at the national level. By leveraging these experiences and knowledge, countries can improve their capacities and contribute to international cybersecurity goals.

Overall, the Budapest Convention serves as a foundation for capacity building programs in cybersecurity, promoting localized training, South-South Cooperation, and regional cooperation. It emphasizes the establishment of stable points of contact and the utilization of existing legal frameworks. Donor countries can improve capacity building efforts by learning from past experiences and improving capacity at the national level, ultimately contributing to global cybersecurity goals.

Session transcript

Calandro Enrico:
Thank you. Good morning, ladies and gentlemen. Welcome to a new session, a roundtable on the successes and challenges of cyber capacity building coordination. So today we’ll tackle these issues of cyber capacity building, and we’ll focus on the key areas of cyber capacity building, which is cyber resilience, as well as cyber skills and competencies, which is the coordination of efforts aimed at enhancing cyber capacity building. So in a world where, as you know, state-sponsored cyber attacks, cyber crime, and cyber incidents are proliferating, governments, governments, and governments are allocating substantial resources and funding to bolster cyber capacity building. Developing nations are receiving vital support to fortify their cyber defense capabilities, encompassing the ability to detect cyber threats, promptly report on cyber incidents, and respond effectively to cyber attacks. However, with the proliferation of cyber capacity building efforts has emerged as a challenge. The task of aligning strategies, priorities, and supported activities among donors, recipients, and implementers in the realm of cyber capacity building has grown increasingly intricate, and we’ll try to discuss all these things today. So our session aims to explore both the achievements and difficulties associated with coordination in the cyber policy area. Today we are privileged to be joined by a distinguished panel representing not only the We have a number of speakers from different regions of the world. We have a number of speakers from different regions of the world. We have various actors from the Internet governance forum space or government representatives, civil society, technical community, but also actors from the cybercapacity building community that are defined somehow in a slightly different way, because we have got recipients, donors, and implementers. And our speaker also from different regions of the world. So, we will start with the first part of the presentation. We will explore what are the repercussions of inadequate coordination in the field of cybercapacity building, and we will share what are the existing mechanisms designed to enhance coordination in this sphere. And then we will identify what actions can donors, implementers and recipients take to improve the coordination of cyber capacity building efforts. And of course, most importantly, we hope that many of you in the room will participate in the discussion, and we hope you will share also your own experiences and recommendations for enhancing coordination mechanisms in this cyberpolicy area. We also would like to prepare at the end of this session a policy brief, which will be shared with many stakeholders in this area. The Global Forum on Cyber Expertise, the German Agency for International Cooperation, other agencies dealing with international cooperation and cybercapacity building, the European Commission, recipient nations, implementers, and ongoing projects and initiatives. So, without any further ado, we can start the conversation. So, I will let all panelists to briefly introduce themselves when answering the first question, and I will simply go in the order of the table. So, let’s start with Rita. Thank you for joining us. So, can you tell us why it is difficult to coordinate cybercapacity building projects from your perspective?

Rita Maduo:
Thank you so very much for the question, and it’s really a pleasure to be in this forum today to share my views pertaining to capacity-building projects. First of all, I’d like to introduce myself. My name is Rita Madjoba-Dumiling, and I work for the Botswana National CSIRT. I’m actually a CSIRT respondent. Before I can jump into your answer, I would also like to emphasize what really cybercapacity-building encompasses, right? Cybercapacity-building, it actually encompasses all initiatives that actually drive towards the development of necessary skills, necessary capabilities, as well as the infrastructure that will ultimately or effectively address any cyber security challenges. Now, to go back to your question, why is it difficult to coordinate cybercapacity-building projects? One pressing issue that ultimately affects both developing countries and as well as developed countries is actually the rapidly evolving cyber landscape and its complexity, right? We are living in a tremendously evolving technological era whereby we are seeing the emergence of new technologies, and these technologies are, however, taken advantage of by threat actors, so in such cases, we are seeing emergence of sophisticated threats, sophisticated vulnerabilities. Now, therefore, in this dynamic… So, I think it’s important for us to understand that, you know, when we are talking about this dynamic environment, it rather becomes a challenge in coordinating, especially in reference to, like, strategies and priorities. There are strategies and priorities that are implemented to address issues that come with this, that are actually a challenge, but it’s also a challenge in terms of, like, how we are going to implement these strategies or policies rather is a challenge because of the dynamical, the dynamic environment, and this, however, especially for us developing countries, for example, Botswana, it’s rather expensive in the sense that in order to, like, be agile or keep up to speed with addressing these issues, it requires a lot of training, a lot of, like, training in terms of, like, how we are going to manage these complicated vulnerabilities. It requires a lot of funding. It requires a lot of, like, training cyber security experts in order to, like, try and keep up with this emerging challenges. So, it requires a lot of training in terms of, like, how we are going to manage these issues, and it requires a lot of training. So, we lack capacity in the sense that we do not, there is no, like, tailored training that is actually intended for different aspects of cyber security. So, those are one of the pressing challenges. So, in essence, complexity and rapidly evolving landscape, as well as resource constraints, are a challenge, especially for emerging, for developing countries such as South Spotswana. Thank you.

Calandro Enrico:
Thank you very much, Rita. I think it’s clear that there is a need of support, so there’s no doubt of that. And because of these complexities and evolving cyber threats, and some organizations are also trying to improve the complexity around the coordination of these efforts for supporting countries like Botswana. So, for instance, Teresa, one of the main goals of the GFC is somehow to support coordination. So, what’s your take on that, difficulties and everything else and challenges?

Tereza Horejsova:
Yeah. Thank you, Enrico. Thank you also, Rita, for setting us up with some really excellent points. Good to be here. So, my name is Teresa Horejsova, and I’m from the Global Forum on Cyber Expertise, the GFC, working mostly on our regional hubs and regional efforts. And building on what you said already, I will try to provide a little bit more, yeah, maybe frank assessment why it is difficult. Frankly speaking, this is the cyber capacity building efforts. It’s quite a tough and very competitive environment. That’s why, you know, for kind of all the actors involved, be it the donors, be it the implementers, or being the recipient countries, kind of the intuitive answer is that less sharing will mean more projects, will mean maybe more control about what type of projects are delivered, and so on. And that’s… a problem, as we will get later to why it is a problem, because when we are in a situation that there is not enough sharing, that also means that one project does not build on another project. We do not connect the dots as we should. So that’s why it is difficult to coordinate. We also, and you pointed to it a little bit, Rita, that in many cases it’s very supply-driven, the capacity building support that is being provided, rather than demand-driven. So I still think there is a lot of room for maneuver in listening to the recipients of cyber capacity building support on what their needs actually are, rather than presuming that we, on the other side, know what the needs are. Thank you.

Calandro Enrico:
Thank you very much, Teresa. I think it’s very interesting to highlight this issue of the competitive environment. There is a lot of competition, actually, and it’s clear, really, also working from an implementer point of view, because I’ve been working for a project delivering cyber capacity building, and it’s clear to see that there is competition, because the funds, even if they are there, available from a number of sources, are also somehow limited, because these projects also require a substantial amount of funds to really deliver on the promises. That’s also the reality. So Claire, then, from your perspective as a government representative, what do you think about the complexities and challenges of cyber capacity building coordination?

Claire Stoffels:
Thank you, Enrico. Hello, everyone. My name is Claire Stoffels. I’m the Digital for Development focal point at the Luxembourgish MFA, and within the Directorate for Development, Cooperation and Humanitarian Action. So first of all, I wanted to thank you very much for inviting me to participate to this panel on this really relevant topic, on which I hope I can share some useful insights with you from a donor perspective. So from my experience I can definitely say that cyber capacity building coordination is lacking amongst stakeholders, that we face a lot of challenges when attempting to coordinate notably diverging objectives, approaches, duplication of actions. There are however a number of positive efforts that have been undertaken which I will get to a little bit later. But first of all cyber capacity building coordination needs to be driven by several parties from within, meaning it requires really an inclusive, demand-driven and context-specific approach by which ownership is fostered among stakeholders at both national and regional levels in order to create sustainable change. I think this encapsulates really a key challenge in cyber capacity building coordination efforts. So as I said, it requires a regional approach and because it transcends so many communities of practice from technical incident responders to cybercrime police to civil society educators, it’s really challenging to gather all relevant parties around the same table. But beyond getting everybody to sit at the same table and to actually discuss, one needs to also recognize that the success of cyber capacity building coordination processes is contingent upon operationalizing the consensus at international level and reflecting that in national policies and practices in a way that aligns with national and regional socioeconomic and security priorities. Then another essential component to cyber capacity building coordination is trust. So it sounds very basic but trust is definitely a necessary component for practical cooperation between stakeholders. However, trust can be challenging to establish when working across so many different policy fields and and institutions. And trust can be built through transparency and accountability. And I think Luxembourg has historically been perceived as neutral and trustworthy. And this has definitely had a positive effect on relationship building and developing different initiatives in the cyberspace. And finally, one of the biggest challenges that I’ve encountered in the past year has been the development also of scalable models to establish mechanisms to coordinate capacity building efforts. And this is basically how it comes down to how a project can be developed sustainably in the future.

Calandro Enrico:
Thank you. Thank you very much, Claire, for alighting the requirement on an inclusive approach. Success contingent also to consensus at an international level that that needs to be translated in national policy, also on possibly coordination of cyber capacity building projects. And then trust. Trust is so important, right? Trust in cyber security is a world recurring theme across so many issues and also on cyber capacity building. So thank you for that. Anatoly, what’s your take on the challenges on cyber capacity building coordination?

Anatolie Golovco:
Hello, everyone. I’ll try to oversimplify things. So cyber security, from my perspective, it’s about good people who are protecting computers against bad people. So the main goal is to teach that good people to have the right value, the right ethics, and to be able to have the right skill to do the engineering of the process. So the fundamental problem is to deal to people. It’s difficult. It’s hard to plan. It’s not like building a construction or a road. You have to adapt to the speed of learning of people. of the people that you have in charge to cybersecurity process. So what’s happening very often, the donors have the timeline for the project, and they can’t adapt to the speed of learning, to the speed of the humankind. So they’re starting to buy tools. They’re buying more and more sophisticated cybersecurity tools. It’s easier to manage the project in this way, but you miss the main purpose. So you miss the humankinds who are fighting against, let’s say, the defects in the cyberspace, in the engineering of the cyberspace. So paying more attention to the people in the process, it’s the main thing that can help with this complex puzzle. Thank you.

Calandro Enrico:
Yeah. Thank you. Another key issue, so this issue of the timing of the project. Sometimes you don’t give enough time, actually, to a project to allow to reach, in terms of improving skills, because the learning curve might be slower. But there are specific requirements, especially from a donor perspective. I think the recommendation of actually focusing more on the people and how much they can learn in how much time would be probably a better way of approaching a project, rather than having very specific and strong deadlines. So probably from a donor’s perspective, unfortunately, that doesn’t always work. But we can try to discuss that a little bit more. Hiroto, from JICA, another donor organization and dealing also with international development, what do you think about the challenges?

Hiroto Yamazaki:
Thank you very much, Eniko. I’m Hiroto Yamazaki, Senior Advisor on ICT and Cybersecurity at Japan International Corporation Agency, JICA. So JICA is an official development assistant agency under Ministry of Foreign Affairs. So in the last five years, JICA has been involved in bilateral technical cooperation related to cybersecurity, mainly in Asian region. So over the past five years, actually, the technical cooperation has been implemented in Vietnam, Indonesia, Cambodia, Philippines and Thailand and so on. So today I would like to share our experience from the JICA’s activity. So I have three points on the difficulties on the JICA. So first one is that there are simply too many organisational stakeholders to coordinate. Some of them may not be globally identified, which makes coordination difficult. So cybersecurity has many communities divided into several layers, such as private versus government, technical and policy, and country or region. So in some cases, discussions among development partners do not include the communities of specialized security organisations, such as the first or IP third and so on. So in addition, not all organisations participate every time. So even when a group or organisation coordinates something, there will always be the organisations that are not included, so making it impossible to fully coordination. So I have some examples. So JICA attends the regional coordination meeting. So since 2009, Japan and ASEAN have established a framework of the cybersecurity policy meeting and working group. So this meeting is held four times a year. So at the meeting, capacity building session is held to share what kind of capacity building each organisation is implementing and to exchange our opinions. So this works well, but generally we cover the cooperation for government agencies, but does not include the support from civil organisations, private companies and international organisations, such as the first or the IP third. that, except for the JPSAT Coordination Centre. Sorry, I have a lot of example, but time is almost up, so I have one more reason. So we are the bilateral cooperation agency, so our cooperation is based on the bilateral agreement between Japan and the recipient country. So even if we could coordinate something with other development partners or donors, we still basically have to try to follow the recipient country’s approach, strategy, and their needs by respecting recipient country’s ownership. So sometimes it makes it difficult to coordinate. Thank you very much.

Calandro Enrico:
Thank you. Thanks a lot, Hiroto, for highlighting the number of stakeholders that actually a donor organisation is supposed to coordinate. But I think it’s a very interesting mechanism, what you described, the regional coordination meeting, I think that could help. Also, as you said, including all stakeholders might still be challenging. Louise, from your perspective, that I believe it’s primarily academic, what’s your experience and your take on the challenges on cyber capacity building coordination?

Louise Hurel Marie:
Thank you very much, Enrico. My name is Louise Marie Urell. I am a research fellow in the cyber programme at the Royal United Services Institute. So for those of you who don’t know RUCI, shorthand, it’s a security and defence think tank based in London, but we work globally across different regions. And my own background, I worked in think tanks and mostly focusing on Latin America and the Caribbean. So hopefully, I’ll be talking from that regional perspective, but maybe also talk a little bit more, as Enrico mentioned, from a more scholarly academic perspective. So as a person that has been in the position of being an implementer in many ways of different capacity building initiatives, thinking that implementation is not something that’s only conducted by different governments, but all different stakeholders have a place of implementation when it comes to cyber capacity building initiatives. I think what I’ve observed in the past couple of years… years is that, you know, we use the term cyber capacity building, but in fact we’re talking about evolving mechanisms, right? So there are MOUs that should be in place so that governments can then activate it and build an agenda bilaterally. We’re talking about multi-sided kind of multi-donor funds that are being established. We’re talking about, you know, coordination among civil society organizations that are working in conducting cyber capacity building and academia, the private sector, and other colleagues at the international level also kind of developing agendas on that. So I think I have three key points. What I have observed as well in terms of the context is that, you know, many donor countries are in the second or third wave of developing programs for capacity building, so they are also restructuring the way in which they’re doing and establishing, let’s say, funds within the government, so which departments you need to bring together. So I think that’s something quite interesting, but while you see like, for example, the GFC, the Global Forum on Cyber Expertise, has a civil portal where you map all of the different, let’s say, capacity building initiatives that are publicly kind of recognized, you know, while we see lots of programs, coordination doesn’t necessarily mean that, you know, that’s something that’s necessarily there. But the first point that I would say is that I think there needs to be a better understanding of, like, how coordination happens amongst countries that are willing to support. So from a supporter perspective, one big challenge of coordinating investments in a particular region, right, so I think there’s some countries from a donor perspective that would be more interested in some regions and some more in others, and whenever there’s, you know, no coordination among them, it’s very hard to see, you know, you have one country that’s receiving from multiple other countries, so how do you actually make sense that you’re not overloading the recipient country, because they also have to coordinate amongst themselves. So I think, you know, duplication is something that we really need to think about. The second point is really domestic buy-in, as a person that has… as I said, worked in Latin America for many years. Political buy-in is something that’s quite fundamental. So if you don’t have political visibility over these capacity building programs, it’s very hard to ensure sustainability of implementation, right? You might have a civil society organization or a think tank, as I used to work, trying to implement and bring visibility to cybersecurity capacity building, but then if the government doesn’t see that as a priority, sometimes it’s very hard to gain traction. So I think that is a very real challenge to thinking about coordination and sustainability. And the final point is really, I think we need to break down the term cyber capacity building a bit for us to have a better conversation and more focused conversation. So maybe we might be challenged on the coordination element because we need to break that down. So I would break it down into at least three different sub categories as a good academic that I am. So there’s the traditional cyber capacity building, we’re talking about skills, we’re talking about longer term projects or short term projects that are looking at, let’s say more whole of society approaches. A second element could be CCB for crisis response. So for example, Costa Rica having to respond to a large scale incident, right? This is a very different context of thinking about capacity building and investment in a particular recovery scenario. And the third one is capacity building for conflict or post-conflict recovery, which as we’ve been seeing like in Ukraine, for example, it’s a whole different landscape of investment and also capacity building efforts. So I think we need to break down the discussion around capacity building into the context in which it is applied. It’s very different when we’re talking about peacetime and maybe conflict or crises triggered by a particular incident. Second, domestic buy-in, we need to ensure that there’s political buy-in. And third, coordinations amongst countries that are willing to support given regional priorities for each of them so that we don’t duplicate efforts. So these are my three.

Calandro Enrico:
points. Thank you. Thank you very much, Luis. Many interesting and thought-provoking points, and especially I think also on having a little bit more of granularity right on the term of sub-capacity building, because all these efforts of course have got different goals, and I think it’s a good categorization between skills, sub-capacity building for trice response, and for conflict or post-conflict recovering. Regina, based on your experience as a cyber diplomat, what do you think about the challenges on cyber capacity building coordination?

Regine Grienberger:
Thank you, Enrico, and yeah, I’m Regine Grimberger, the German cyber ambassador. I would subscribe to almost all the elements that have been mapped out here as parts of the difficulties that we meet when coordinating cyber capacity building, and I would like to add perhaps four more. So the first one, also from a donor’s perspective, this is spoken from a donor’s perspective, it’s actually difficult to fund cyber capacity building projects. For us in Germany, this is, I mean, what is this, the 18th IGF, so I think there is quite a lot of time that has passed that we meet the needs, but in the foreign office it’s still a new topic, and it’s for us a new experience to really go into the details of cyber capacity building, but we realize that it is not only capacity building for the sake of increasing cyber security, but cyber capacity building is also a diplomatic tool to strengthen our partnerships, to strengthen the stability of cyberspace, and by thus also the security of us all. So it’s difficult to fund CCB projects because it needs a political willingness to do so, and the political willingness depends on the risk awareness and you know many people in the decision-making level of the Foreign Office for example are not as risk-aware as people you know in the basis like Rita described it from CSIRT. The second reason is of course budget restraints and then the third reason is that we have a very short-term planning, cameralistic planning for one year only and we have mid-term needs and even long-term strategies as you described when you broke down what is actually capacity building so our planning period is a little bit contradicting the recipients horizon. And then the last element that makes it difficult to fund is the deckability of the expenses. So because this is not always within the definition of what is then ODA, so Development Assistance in the definition of the OECD, we have to take it from other funds, other titles. So another element that I would like to add to the difficulties is we need to free human resources to do that. It’s not about money only, it’s also we need resources experts. We have established an EU cybernet which should be a platform to find an expert and also to develop train the trainers programs but this has really come out of the experience that we don’t have enough people to implement cyber capacity building measures. The third element is transparency. It’s what we need is a transparency and what we experienced is that for example when we invest in trust funds to fund cyber capacity building measures like with the World Bank we are really missing that kind of transparency. of transparency that we understand what happens with the money and what do the recipients do with it. And the last element that I would like to mention here is what we are also need needing to really coordinate effectively is we have to know the needs of the recipients which requires them expressing and admitting and expressing their needs and often it starts with the cybersecurity strategy that is missing that would give a structure to what are the needs in this particular place. Thank you.

Calandro Enrico:
Thank you, thank you very much Regina for sharing really from a donor’s perspective the difficulties to fund cyber capacity building projects. I think you touch upon the political willingness from a donor’s perspective and Louise said the political willingness from a recipient perspective. Sometimes there is a mismatch there and issues of transparency also very important because there is a need of understanding what’s happening with the money so probably you know having monitoring and evaluation mechanisms while the project is implemented could help that. And also a very interesting point that let’s not forget the cyber capacity building are also a tool, a diplomatic tool to strengthen partnership and working on security issues with other countries. So then let’s move on to try now to understand what can be the consequences of this insufficient coordination in cyber capacity building. For instance Rita from your perspective your organization the National T-SERT is primarily a recipient so if coordination is actually insufficient what are the repercussions for

Rita Maduo:
your own organization? Okay thank you Enrico for that question. As a member of the National CSAT, I have had firsthand experience and exposure to the repercussions of insufficient coordination, coordinating cyber capacity building. So what we have identified or what we have come across is insufficient coordination ultimately leads to disjoint cyber security efforts, thus leading to gaps and vulnerabilities in the country’s overall cyber security posture, making it easier for cyber criminals and malicious actors to ultimately exploit such weaknesses within the cyber posture. And then another consequence could be there could be insufficient resource allocation. For example, if different or multi-stakeholders within a nation do not come together, coordinate towards cyber security capacity buildings, resources may be wasted or there could be duplicated efforts or there could be misallocated areas with less priority, and this inefficiency could ultimately limit the overall effectiveness of cyber security programs. And then with insufficient coordination, there is certainly limited information sharing, because effective cyber security entirely relies on timely and accurate information sharing between different entities, that is, between different stakeholders such as government agencies, civil sectors, or private sectors, or even international partners. So, if there is insufficient coordination, this can, like, hinder sharing of threat intelligence and then making it harder to detect and as well as respond to cyber incidents. Thank you.

Calandro Enrico:
Okay, thank you very much, Rita. Those are really, yeah, worrisome points because somehow it seems that lack of coordination could result in deteriorating the cyber posture of a country because it really touched upon issues of limited information sharing, increasing cyber vulnerability, so then the effect of cyber capacity building could be completely contrary to its final goals. Teresa, from your perspective, there are also some mechanisms at the GFC, like the clearinghouse and others, so what are the repercussions of insufficient coordination between all these actors?

Tereza Horejsova:
Well, thank you, and I know I feel in an answer to this question we will be kind of reconfirming a lot what you have said, so yes, what are the consequences? Less impact than we could have and definitely insufficient use of the limited resources as several donors on this panel have already stressed, so there is a lot of duplication going on and if you’re a recipient, you might have got into situations that, yeah, the kind of same or very similar project was offered to you by various implementers, in some cases not knowing that this has been delivered, or donors trying to also support a project that might have been delivered in that given country too. Another maybe point worth mentioning is that we are often overwhelming the recipients, because if there is not sufficient coordination, imagine that for every single project an implementer would come and for instance would want to do a needs assessment for their particular project and needs assessment is so time-consuming and in this sense it’s very unfair to overwhelm already limited capacities for instance in a given country. So yes it’s a bit maybe utopian to expect that if needs assessment is deliverable in your project that you would be sharing it with other implementers. I understand that this is like probably not a typical situation but I feel it is a topic that we really really need to talk about yes and do you want me to go to the mechanisms now or later? Yes okay perfect so yes I mean now sorry that I will be talking a little bit about the GFC, the organization I represent because you know for those of you not familiar we are kind of a platform for actors involved and interested in cyber capacity building over 200 members from all stakeholder groups and the main idea actually is exactly providing a platform that hopefully naturally will lead to more exchanges, more networking, more conversations about about cyber capacity building. A few concrete mechanisms that we have experimented with have for instance and you’ve named it Enrico been the clearinghouse mechanism which in practice means that let’s say a government would express a concrete demand or need that they have and we would try to kind of clear it through the richness of the network that we have at the GFC connecting to the right implementers in an ideal case scenario connecting to the to the right donors. It’s not straightforward, for sure. I mean, the idea is probably good. The practice can be, of course, very complicated, but we feel this is an experiment that is worth playing with further. We also try to organize various donor alignment meetings in various regions, where we also try to kind of provide space for donors to come together and talk and kind of exchange notes. Again, it’s also delicate and tricky, and it is unrealistic to expect that also donors would come and share all their intentions and plans and strategies. But we feel that there is maybe some progress that is being done in this regard, and if the GFC can have a minimal role in helping to facilitate the discussion, we would be very happy to continue doing that. Another mechanism that we have available for the community is the Sybil portal, which is available free of charge at sybilportal.org. And this is a space where we kind of try to provide mapping of what are the projects available for a specific topic in the field of cybersecurity, for a specific region of cybersecurity, for a specific country as well. So it’s possible to kind of filter and play around. The resource will be more valuable the more comprehensive it is. So, of course, I cannot say that we have 100% of projects everywhere covered. It also relies a little bit on the implementers and donors sharing with us this information so that we can feed it in the platform. And of course, it also relies on us and our agility to keep the portal as up-to-date as it can be. So this can be kind of a basic resource to see, okay, let me see what projects in the field. field of, I don’t know, cybersecurity skills have been implemented in Botswana by whom? What was the angle? So it’s Sibylportal.org. I also, of course, have to stress a little bit our regional efforts. So I talked about it a little bit, you know, that we really think there should be more of this demand-driven capacity building, and then it’s not something that should be happening kind of like from the headquarters somewhere, but being really much closer to the situation on the ground is essential, which we are trying to tackle through our regional hubs. And maybe to conclude, you know, a general mechanism, and also building on what others have said, Regina, you’ve been very frank in your response, is the kind of short-term, long-term planning, yes? If we have a project, you know, that is a quick fix on something, but we don’t have the sustainability outlook, you know, in a way it’s also not as impactful as it could be in the long term, which I feel is the common goal of all of us. So sorry, I took more time now.

Calandro Enrico:
Thank you. Thank you, Teresa, for sharing. So there are many mechanisms available from the Global Forum on Cyber Expertise, the Cleaning House Mechanisms, the regional donors meeting, the Cyber Portal, which is a great resource, because all the information is publicly available, you’ve been collecting data and information for a number of years, so you’ve got actually an historical perspective, and that’s really available for not only donors, but also for implementers, for recipients, and I think it’s a great way of increasing transparency for the broader global community dealing with cyber capacity building. So information is there, it is available, sometimes it’s a matter of making an effort. Interestingly enough, I believe that there are still organizations … organizations and donors that do not know, unfortunately, these tools, but I believe that those are somehow also the foundations to try to improve these mechanisms globally. So I really invite everybody in this room, if you are involved in cybercapacity building, to have a look at these tools before embarking in your next project, because that could really help you improve the coordination efforts. Claire, so from your perspective, what are the consequences of this insufficient coordination? And then I don’t know if you would like to talk about some of the mechanisms to improve the coordination.

Claire Stoffels:
Yes, I don’t want to repeat anything that’s been said already, the excellent points that were made. From a donor perspective, there’s definitely obviously a risk for duplication and lack of coherence because of the proliferation of actions in the cybercapacity building space. So therefore, coordination is essential to increase situational awareness and allows to learn if some of the needs identified in a country or that will be identified by the project have already been addressed by other CCB projects. So that’s why also platforms like the ones from the GFC are essential, especially for countries like Luxembourg, where we don’t have as many resources to identify what’s being done, what’s being carried out by other stakeholders in the field. So I want to address some mechanisms. So I just said, as a small country, again, with limited resources, Luxembourg really has to foster multi-stakeholder approaches across sectors, not just in development cooperation. In digital for development, cybercapacity building is one of our main intervention sectors, and it’s a key priority really at national level, and it’s reflected in our policies, in our administrations, and in our private sector, and it really has trickled down. into development cooperation and we therefore together with our implementing agency LuxDev as well as other actors We fostered a lot of partnerships to carry out CCB sorry capacity building interventions We’ve coordinated efforts with also our national cybersecurity agency in the framework of our projects really because we try to identify which needs and gaps can be filled by different partners that we work with At European level Luxembourg is a is a founding member state of the digital for development hub, which you might have heard of So it’s a global platform that was launched by the European Commission in 2021 Which aims to foster a digital cooperation amongst EU member states to promote digital transformation in our in partner countries So the digital for development hub works on different Thematics among and has basically different working groups dedicated to those thematics Which aim at fostering discussions and initiatives among which cyber security? so Luxembourg shares the co-lead of this thematic working group with France and the European Commission and the purpose of these working groups really is to provide a forum for information and best practices sharing an Experience sharing between member states and we try to involve as much as we can different External actors as well on a regular basis and it has been more or less Successful so successful in the sense that it has created an informal forum for technical levels to exchange and share practices Less successful in the sense that I would say that most European member states still have limited resources Dedicated towards digital for development and therefore I don’t think it has reached its full potential yet in terms of how much Information knowledge sharing and coordination capacity it could carry out Allow me to share maybe just an example of how the cyber thematic working group can actually work in practice So, the European Commission is currently formulating a new project focusing on Sub-Saharan Africa with one component on cybersecurity and one on e-governance. I’m happy to get into the details a little bit later. And in parallel, Luxembourg, we’re formulating a project at bilateral level with our implementing agency LuxDev and the African Union Commission with similar complementary actions. So our respective formulation teams, they are in contact now also with the African Union Commission and other stakeholders on the ground to ensure that both projects are actually complementary and make an efficient use of resources, that basically we don’t also duplicate needs assessments, that we can actually base ourselves on one single needs assessment, that we ensure that we have the same contact points within the African Union Commission so that we don’t, basically that they also feel that we are coordinated on our side and that it’s not going in every direction. So we were able, basically as I said, to share this information and our respective objectives through this thematic working group, which is, I think, quite a good example of how that can actually work in practice. And then Luxembourg has also joined other coordination platforms or practitioner groups such as the GFCE, the EU Cybernet as well, which is a great platform as well, and those have proven to be very beneficial, again, for a country where we have limited resources dedicated to D4D and small administrations to carry out these initiatives.

Calandro Enrico:
Thank you. Thank you very much, Claire, for highlighting some of additional efforts from the national and European perspective. Maybe not everybody is familiar with this Digital4Development Hub, which is this European union mechanism between member states, where, as Claire said, they can share what are their priorities in terms of digital development and assistance to various countries. Of course, as you said, it’s got its own challenges, but it just started also in 2021. So I think it’s good and significant to see that there is an effort towards trying to improve coordination of these efforts. So thank you for providing concrete examples from that point of view. And of course, also the issue of national coordination and the problem with lack of enough resources, as also Regina highlighted before. So Anatoly, from your perspective then, what are the consequences of this insufficient coordination and can you highlight some of other mechanisms in place to improve coordination that maybe we do not know?

Anatolie Golovco:
Thank you. Yeah, I’d like to start with elaborating a little bit on what Teresa just said about the competition of donors. What I’ve seen in the last year since I’m serving my prime minister is sometimes the beneficiaries are losing the point, they are losing the scope of the project. So we see the project as a process, but we miss to remember why we started this project. Because you know, when the scope of the project is just to buy some hardware or software, it’s very clear. You have a shopping list, you split the shopping list between donors, you buy it, you put it in place and expect people to use it. It’s more complicated when you have people involved, because sometimes you don’t, you have for example training programme, but you don’t have enough brains to put that knowledge in, so you have a shortage of people to train. I can give you an example why it’s happening that sometimes the people don’t fit the projects in cyber we have. We had the discussion with the European Commission, especially on the topic of nationwide approach when we have to improve the cyber security in the region. in the local authorities and I discovered that the same problem is not just in Moldova, the same problem is for example in Estonia because of this big decentralization of power, you have local authorities which have a big autonomy but in the same time with this autonomy they have to serve themself and what we discovered that the cyber or the IT in general in the regions is handled by private companies so the Commission say we can’t invite non-public servant in trainings for cyber and we had now to find the mechanism to fit that employees of private companies who serve the infrastructure of the states in the regions and it’s not easy because you have to find the legal mechanism and to rebound the project scopes. So as I just told in my previous blog, to work with people is hard and when you have technical people is more hard because they are special. The solution to that is obvious, you need a better planning but it’s not the planning that the donors have to do, it’s the planning that the state actors have to do in terms of making this wish list to the donors and yeah, that’s the solution. Regarding the mechanism, we developed in the last let’s say half year or maybe eight months, we developed the following mechanism. So since 2015 we have the discussion that the Prime Minister need one single point of contact person in his office to be the window to the state. So they found that person, it’s myself and I organized the Council of Cyber Security in the Prime Minister’s office. And the vision of the Prime Minister, we are spreading across the institutions. We are spreading it to give the understanding what we wish to achieve and to give this clarity that sometime is missing in all cyber security projects. Another mechanism that we have, it’s small groups. Because this council, usually it’s between 30, 50 people, and when you have in the same room 50 people all together talking, it’s not efficient at all. So that’s why we have this, the council usually it’s meeting every month. We have every two month meetings with a shift from different donors. Usually it’s happening in small groups. It’s around, I don’t know, five, seven people. And these small groups are delivering the peer review between projects and between activities of the donors. And it’s very, very efficient because they can adjust their steps in delivering the project. So after having all this, let’s say coordination and identifying what has to be done, it’s already our Ministry of Economy and Digital Development, or Economical Development and Digitalization. So this ministry, it’s putting all that in the policy and it became a law or a government decision to have it on paper, let’s say. So this three-layer mechanism is very efficient for now. We’ll see how we’ll move with the critical infrastructure, because critical infrastructure is beyond the cyber security. And we’ll need extra coordination, because it’s a different profile when you have to take care about cables and other physical stuff. But yes. Thank you.

Calandro Enrico:
Thank you very much, Anatoly, for I think highlighting some very important issues. So I like the point on the need for a better planning, you know, not only from the donor’s perspective but also from a state actor’s perspective, right? Because sometimes, as we have highlighted also before, there might be some lack of transparency at that level. And I think some of the mechanisms that you have identified at the national level are really great and probably could be replicated also in other contexts. I like the small working groups because, of course, I believe that we have all been part of several working groups. So probably working with less people might be easier and more efficient, right? And then, of course, probably in your role trying to coordinate all these smaller groups, I think it’s something very concrete that other countries or mechanisms could actually try to replicate. So yeah, thank you. Thank you very much for that. And also I think the importance of another point of the recipients that are not only or always government officers but might be also private sector representatives. And that might create a problem of formality from the European Commission perspective because actually they cannot really directly support them. So the difficulties then from a recipient perspective of trying to find legal ways to let them understand that actually those are somehow acting as public sector employees because those are the people dealing with cyber capacity issues at the national and government level. So thank you. Thank you for that. Hiroto, once again, from JICA perspective, in terms of consequences of insufficient coordination from your experience, and what mechanisms did JICA put in place to improve that?

Hiroto Yamazaki:
Thank you very much. From the development agency’s perspective, so inadequate coordination in JICA is a problem in cyber security capacity building will lead negative effects, such as the reduced efficiency, non-maximized of development impact, and lack of sustainability. So looking at the negative effects in more detail, we can see previous speakers already mentioned the duplication assistance, lack of the resources, or the too much resources, or the siloed approaches to assistance, and so on. And I skipped some other parts. But conversely, by promoting coordination and cooperation, it is possible to eliminate these negative effects. So I have two examples. One is a kind of the bilateral effort. The other is a more multi-stakeholder effort. So regarding the duplication of assistance, I have an example from Cambodia. So our project started in May this year. And this project included assessment activities for the national CSIRT. But it was discovered that the cyber for development had conducted already a few years ago. So in this case, since the project had not yet started, we had a chance to talk with our Cambodian partner and cyber for development. So then we decided to use the result of the cyber for development instead of conducting the same assessment so that we could reduce the duplication of the assistance. The other example is more about multi-stakeholder. So Japan, I mean the JICA, is conducting the technical cooperation project in Thailand. So there is a training center in Thailand that is called ASEAN Japan Cyber Security Capacity Building Center called AJCCBC, established in 2008 or something. So in this training center, we have we are coordinating with ASEAN member state countries and ASEAN Secretariat so that we conduct the training at least six times a year and also CTF contest to meet their needs. So in addition to providing, in addition to training, provided Japanese training company or training institute, we also discussed with other donors or other partners. So this AJCCBC framework provided more training with, for example, the CISA of United States. So they provided some of the open source training evaluation, sorry, cyber security evaluation framework and also we have, now we are planning to provide more training in coordination with the FCD of United Kingdom and ITU and more other organisation. So through this AJCCBC programme, so this is kind of a training centre for ASEAN region, but also this AJCCBC has a kind of the coordination function to meet the needs of the ASEAN and also to reduce the duplication or something like that. So okay, that’s all, thank you very much.

Calandro Enrico:
Thank you. Thank you very much for sharing one concrete example of two projects collaborating in order to avoid duplication and also for sharing the existence of this centre, which I believe probably also other donors beyond ASEAN and those that you have mentioned could actually collaborate with in order to improve coordination within the ASEAN region. So I would invite everybody who’s working in that region actually then probably to get in touch with you and try to understand better how to work together in that, through that centre, because it’s not only a physical centre, but it’s actually a coordination mechanism. Yeah, Luis, same question for you. Insufficient coordination. and some of the mechanisms that you might have identified to improve coordination?

Louise Hurel Marie:
Absolutely, and we’re right at the last mile right now of the panel. Yes, I also don’t want to kind of say the same. I think we’re all biased over here. We understand very, very nicely the landscape and the challenges. But I think in terms of fragmentation of efforts, I would say that one of the consequences is really leading to poor sustainability measurement. And that goes from an individual, let’s say, donor or recipient country, but also from different donor countries trying to assess the landscape. I think we don’t have necessarily good measurements for longer-term sustainability efforts. I think we’re very good in measuring KPIs for an immediate project that you’re implementing. But once you’ve implemented the project, I think that longer-term element, because of so many other layers, I think Regine alluded to the bureaucracy of government and how sometimes it’s very hard to keep track of things beyond the financial year. So I think we need to be very realistic of how do we then build effective measurements in terms of sustainability over a longer period of time. Is that something that should be discussed in other forum internationally? Is that something that we should talk more about, be it at the ITU, be it at the UN, whatever works, of making those two communities meet, the CCB and the development community. So that also leads to a higher propensity of a one-off efforts or effects. So you only measure impact in, like, I implemented a project, then you measure impact based on that particular project. So you don’t have a holistic view. And I don’t think that’s something that’s just applicable to donor countries or recipient countries, I think implementers as well, like civil society organizations, think tanks that are working on this. It’s having a bigger measurement of how do we actually… see this as more than a one-off. Is there something on the recommendations for you as an organization, and I can say from a think-tank perspective, right, having conducted CCB in different countries and also helped in the implementation, I think it’s like, how do you actually provide recommendations that are sustainable? Is it about developing a longer-term, you know, capacity building program in a region, right? So I think we need to think more about that. In terms of also other consequences from the domestic, domestically from the donor side, already alluded to this, so there are multiple departments dealing with different types of capacity building efforts within different governments. So one example, and it’s not to put the U.S. on the spotlight, but like for example with Costa Rica and the Conti kind of ransomware incident, you had USAID kind of providing some some support, and then you had now the Foreign Military Fund providing other types of support to Costa Rica, right, which shows that there are different parts of government that are doing different types of CCB. Whether they’re coordinating, and I have no insight into the particularities of the the U.S. government definitely, but I think from a person that has been observing from the outside and the public information that we see, I think this is something to consider, right, in terms of let’s say the consequences of insufficient coordination from a donor perspective internally. And then domestically from the recipient side, this can lead to really like overwhelming effect when recipient countries really have lots of offers. That is assuming a very good scenario where you have like a country that’s receiving, but from a crisis response CCB perspective, and going back to what I said previously, when you look at countries such as Montenegro, right, so we hosted RUSI, we hosted a discussion over at the open-ended working group at the sidelines on thinking about ransomware and requests for assistance, right, and we discussed with the colleague from Montenegro was talking about how when they were attacked it was very hard for them to actually designate one POC to respond to the multiple countries that we’re trying to support at that moment. So that is a good scenario where you actually have lots of countries wanting to support, but then you actually need to be realistic internally of whether you do have a POC, do you have a national cybersecurity agency that has the authority to do that. So I think there are lots of different nuances to thinking about coordination domestically. Domestically from the recipient side, from the donor side, and also from organizations that are trying to measure sustainability more broadly. Very shortly on mechanisms. So still sticking to my breaking down the discussion and the language and the terminology around CCB. I think more broadly as I discussed, you know, the mechanisms that we have for broader CCB are already out there. I think they’re MOUs. For example, you know, Australia has done a very interesting work throughout the past couple of years of tying, you know, the BOA declaration, you know, looking at the Pacific Island countries, then tying kind of like particular parts of, let’s say, funds to do training on certs within the region. So I think there’s like a sequencing of actions and there are MOUs that come before that that are renovated after a while. So I think, you know, these are things that have been working quite well in different regions. There are different experiences. When it comes to crisis response mechanisms, I think there’s still a lot of experimentation on how government seeks to institutionalize this from, let’s say, Costa Rica, Vanuatu, PNG, and others. What, you know, just from a research perspective, what we’ve observed is that all of them have been preceded by MOUs, almost all of them. So countries that provided assistance in the context of a crisis already had a previous relationship. So we go back to the point on trust and how you actually have to build that trust before you actually conduct any kind of assistance. But there are other also evolving mechanisms, such as in the EU there’s the PESCO. framework and there’s the the cyber rapid response teams which is a way of getting particular types of countries within the EU to respond to a particular crises like cyber crises. So that’s still an experimental stage even though the PESCO framework has been there for a couple of years. I think this is a one way of thinking about how mechanisms can be explored within a group of countries. And finally when we’re talking about you know crises CCB I think we need we see some some mechanisms that kind of go back to the broader lens of CCB. So one other example that we’ve identified is that Slovenia, France, Montenegro they set up a center for cybersecurity capacity building in the Western Balkans in 2022. And that is an example of like Montenegro faced a very large-scale cyber incident, it was a ransomware, then they have received crisis response assistance and now they’re going back to like what are the mechanisms that we can build together to have a longer tail sustainable impact and how different countries can come together to actually institutionalize that. So I think we need to see those different types of CCB as complement to each other depending on the context and I think that last example just shows that you can go from crisis response back to like a broader CCB lens and use the crisis response as an opportunity for that political visibility to set up new mechanisms. So hopefully you know that provides a bit of a an idea of the landscape based on this typology let’s say.

Calandro Enrico:
Absolutely, thank you. You touched upon so many important points, the issue of sustainability, trying to find also long-term sustainability mechanisms. Also from our perspective I think try to have a broader understanding of the impact so beyond the single project I think that would be great and I don’t think it exists unfortunately. I’m sure that all European funded projects somehow have to to demonstrate the impact but that really happens at the So I think, you know, I think that there’s a lot of work to be done on the project level but what about then globally, right, observing the impact of all these projects? What will be the result? Do they look at sustainability? So thanks a lot for that. And also some of the mechanisms and how, as you said, from cyber crisis we can actually then identify other more long-term kind of mechanisms on coordination. I think those are great examples, but I would like to ask you a couple of questions. I think we have the last one in our table, so is there anything else that you would like to add on in sufficient coordination and some of the mechanisms that maybe Germany put in place to improve coordination in cyber capacity building?

Regine Grienberger:
Yeah, not to be repetitive, I would like to just throw some, you know, some ideas around and we see if we can follow up then in the discussion. So, I think, you know, I think that there’s a lot of work to be done on coordination, and I think it’s important to flag that there’s often a misunderstanding what coordination actually is. And I have to explain it also in my own system, that coordination does not mean telling other people what they should do. I mean, we have, for example, a very, I mean, a situation that is perhaps not very easy to explain, but I think that, you know, we have to, you know, try to, you know, try to, you know, so come up with a project, so this is a way of establishing a cyber capacity building project that is not, that cannot be coordinated in the same way that’s something that, you know, when you start with a white sheet of paper and just, you know, map it out according to the needs of the recipients. So, you know, I just go to the coordination meeting and tell them, okay, that’s what I’m going to do, because there is no leeway on my side to do something different. to choose a different recipient. A second element, of course, like in all development cooperation, there are darlings, darling recipients and others and be aware of that, but a coordination mechanism should also help us to overcome this bias that we always lean towards certain recipients who are well prepared to receive our assistance. Then you mentioned the Montenegro Center. What I was very, not surprised, but what became very visible in, for example, the Albania case of when the ransomware attack hit them, was that they turned towards help, also emergency support, far, far away from the US, from France, from others. They didn’t ask their neighbors, although there might be some familiarity with the structures, you know, arguments to ask the neighbors. So I would say a mechanism or mechanisms should also look for, you know, regional cooperation, not only global cooperation or, you know, I mean, in cyberspace there are no borders so neighborhood doesn’t mean the same thing as in the analog world, but nevertheless also in cyberspace there are reasons to ask your neighbors for help. Mechanisms, I mean, what is really necessary, Claire mentioned the D4D hubs. What I also find very promising is the regional tables that the European External Action Service is setting up with regard, for example, to the Western Balkans, now also for Moldova, to integrate foreign policy and security policy considerations and development or assistance considerations. So to bring these two aspects of the two perspectives together, because it is often, you know, development cooperation seeks to be not so political, more technical, but of course there are many reasons to integrate these technical perspective with a more foreign policy perspective. And then something I would also like to add is we have, I think there is a good case for on the donor side, to have also a top-down approach with regard to coordination. So starting with the language and with the paragraphs that we have in the OEWG reports and GGE report from 2021, because there is a good outline of what cybercapacity building is like, there is this very good idea and expression and notion of a two-way street. And we haven’t talked about this yet, but I think cybercapacity building should in principle be a two-way street, so that there is also, you know, north-south, south-north, south-south, north-north cooperation included and not only, you know, a donor-recipient relationship as in development cooperation. So the reports on the cyber norms provide us with a very good general concept, and what we have now in the discussion, the program of action will even give us more opportunities to also have it in a sustainable way, so not reopen the case every five years and renegotiate the same document, but have a more long-term perspective of where we would like to arrive one day.

Calandro Enrico:
Thank you. Thank you very much, Regina, for highlighting some of the, you know, how regional cooperation could actually improve these coordination mechanisms and the fact that not always that is actually the preferred way to ask for support. Thank you for highlighting the example of Albania having the Montenegro Center next to the UN. And also, it’s very interesting on how these new discussions on linking policy consideration and development considerations are actually not growing, so that the political aspects and the technical one somehow are trying also to find a way to have a better dialogue. And then, of course, for those that are not familiar, I would invite all of you to read the Open-Ended Working Group final report on Responsible State Behavior in Cyberspace, which highlights some of the principles of cybercapacity building, and one of those is these two-way streets, right? And the fact that the relationships between donor and recipient, south and north, needs to be revised and understand that they’re actually on a better, equal base also on cybercapacity building. So, I would like to open up now the floor to some questions or additional comments before we wrap up and conclude. Okay, there is somebody, yeah, you can start.

Audience:
Thank you. Hi, I’m an attorney, I’m a lawyer, so I like to consider legal basis as being a sustainable way of going forward, and it occurred to me, of course, there are different things that we talked about, and OEWG is also more of just resolutions that have basically been passed, but the Budapest Convention provides a legal basis, and I was thinking about the work that at least I’ve done with them and the things that it helps doing. They have sustainability because of the legal basis. I’ll mention three things, the first is the legal basis, the second is that their capacity building programs make sure that they don’t do what we call drive-by trainings. If they’re training judges, they’ll make sure that the training is done, it is localized, and then after that, those trainers are going to train others, and not only that, but the point you made about south-south cooperation, an African judge in Ghana, I mean, I was part of this just a few weeks ago, trained judges in Kenya, as an example. So that’s helpful, and someone from the Philippines is training someone in Morocco, for instance, this happens under their auspices. And also, the TCY, which is the committee for the Council of Europe, allows a measurable mechanism which is global for all countries to check what you’re doing when it comes to capacity building with these things, and they do this consistently, maybe not be too public, but it’s something that they do. something to look into. And finally, of course, they do regional cooperation constantly for Albania, Montenegro. I’ve been part of many of these exercises where they had many of these countries get together. But the most important one, and I’ll leave you with this, is the point of contact, the 24-7 network, which allows you basically to have a stable, legally, international law compliant, domestic law compliant, sustainable point of contact, where everybody can get in touch and talk to each other. And it’s not something we’re saying, well, which country does what? No. Every country has one point of contact, designated, known who they are. If there’s a problem, they know exactly who to contact. So this might be just one alternative. Thank you.

Calandro Enrico:
Thank you very much for your comments, for adding some other mechanisms for coordination, more from a legal perspective. I think that was great. So I’m wondering if somebody would like to address the question on the legal basis for cyber capacity building, because it’s something, actually, we didn’t touch upon, for sustainability and better coordination. Is there anybody who would like to? OK. Yeah. I think it’s an interesting question, because actually, there aren’t. And probably, that could be also a way of creating more clarity and somehow to address some of these issues. I think it’s a good question.

Audience:
At the risk of repeating, I’m sorry. My apologies. It wasn’t really a question for the panel. I’m not expecting we should have legal basis for capacity building, not at all. I’m just saying that there are legal basis in treaties, which have capacity building programs, that bring that sustainability. And maybe using them more is something to think about. The donors should be looking at it and saying, hey, that’s something stable. That provides structure. It’s legally sustainable on an international basis, as well as domestically. So that’s OK. Thank you.

Regine Grienberger:
Actually, also, understanding it as a tasking for both sides, both donors and recipients, if you want, or partners in crime here.

Calandro Enrico:
Thank you. Thank you very much for that. There was another question.

Audience:
Thank you so much. Thank you so much, and thank you so much again to the panelists and this very comprehensive points of view from different regions and different expertises, so thank you for that. So my question slash comment, so first of all I’m Yasmin, I work in the ITU in actually cyber capacity building project implementation. So many of the things that were mentioned really resonated, and my question is related to this challenge of demand-driven capacity building, because, you know, donors obviously have different political priorities, different budgets and everything that was mentioned as well, but we need to remember that this is not only a cyber security issue, there are a lot of lessons learned that can be learned from the development world, I mean there’s decades of development work being done, there are mechanisms in place like, you know, donor pledging conferences, donor coordination conferences, and so something that on the donor side that can be done is also at national level to look into what is being done in other, you know, government agencies and other divisions to see if there’s any lessons learned there, and of course at national level this can be looked into so that it’s improved at international level. So in short, my question is basically, you know, how can donor countries look into inside and their national lessons learned when it comes to development, so that we can improve on demand-driven cyber capacity building. Thank you.

Calandro Enrico:
Thank you, thank you very much. One of our donors country would like to answer to the question, thank you.

Claire Stoffels:
Thank you for your question. I think the D4D Hub actually is, that I mentioned earlier, gives a very good platform to do that, and what we aim, at least what we aim to do within the cyber thematic working group is to do exactly that, so to have the technical level people to share what good practices, lessons learned, what went wrong in your project, and how to do it better to really provide an informal platform to exchange. on that basis, sometimes it’s a bit difficult, actually, to gather that information that you mentioned, and that is actually the important elements that we need in the inception and the formulation of a project. So, yeah, I think the G4D Hub provides a very good stage for that, but, yeah.

Calandro Enrico:
Teresa, would you like? Teresa, please.

Tereza Horejsova:
Is it on now? Yes, okay. Thank you, Yasmin, also for your question. I think it’s an important point of connecting, actually, learning from the development community, but also connecting the development community with the cyber community, which is also something of quite an interest for the GFC, and we are together with our partners organizing a conference on global cyber capacity building in Ghana at the end of November, and this is exactly one of the objectives, to bring these communities together a little bit more for more efficiency in the future, hopefully.

Calandro Enrico:
Thank you, Teresa. Regina?

Regine Grienberger:
Yeah, I would like to add one aspect, and that is that it is also true in the other way around. If you do digital development cooperation, so development cooperation for digital transformation or for digital transition of public administration or so, you should include the cyber capacity building part, because it doesn’t make sense to help an administration transition to a digital system without including enhancing their skills, their capabilities, also the hardware and software part that is necessary for good cybersecurity. So I would say it’s also the other way around, yeah.

Calandro Enrico:
Thank you. Donia, probably there is an online question.

Donia:
Yes, we do have a question from online participants. First off, a response to Claire’s very early intervention, just support. And then the question is, can we see capacity-building as end-to-end across the whole community, rather than just technology solutions? And what are the panels we use on thinking of capacity-building as more than just technology solutions, but also building community awareness, legislation framework, or government policies, et cetera?

Calandro Enrico:
Thank you. Thank you very much for that. Is there anybody who would like to answer to that? I think my understanding is trying to see it beyond the technical issues, but rather on building awareness, legal basis, and so on and so forth. I think if I understood correctly the question.

Louise Hurel Marie:
Yes, sure. And thank you for the question. I think maybe I’m biased, because I feel that we probably bursted that bubble a lot. And I think a huge part of that effort, I take my hat off for, the GFC, the Global Foreign and Cyber Expertise, which tries to provide that platform. So they’re working groups on incident response, and they’re groups on cyber diplomacy and norms. And those kinds of communities in the coffee break kind of meet up. And I think it’s about those small efforts in the sense of mixing those communities and having the space to do that. So I think from my perspective, those kinds of efforts The CCB discussion across communities is gradually at the international level, through these mechanisms, kind of progressed. But for sure, when you look domestically, depending on the country, depending on the culture, depending on where in the development spectrum they’re in, it’s still very challenging. I remember working in my previous job in a think tank, and I was working in a think tank, so working in my previous job in a think tank based in Brazil, sometimes speaking to different departments across the government, or people that were working more on CCB for, let’s say, certs. It’s a very different kind of community. So I think there’s also this effort of us, at least, from civil society organizations or think tanks to try to do that as much as possible when we’re planning and designing a particular project. I think that’s part of how do we design that engagement with those communities as much as possible in our own, let’s say, stakeholder group.

Calandro Enrico:
Thank you. Thank you very much, Louise, for that. We only have four minutes left, so very quickly before we conclude, I would like to invite the panelists to provide, I would say, maximum two takeaways or recommendations and to improve cyber capacity building coordination.

Rita Maduo:
Okay, thank you, Enrico, for that. Brief, sorry. Oh, okay, sorry, sorry, sorry. So effective cyber capacity building actually requires multifaceted approach and a very sustained commitment from donors, from implementers, and as well as recipients. So drivers and coordinators of such efforts should be intentional about onboarding new strategic members such that the voices of all parties are diversified and initiatives do not remain stale due to having the same players at the table at all times. So by working together and following these actions, then we can effectively enhance a national cyber security resilience and capabilities.

Calandro Enrico:
Thank you. Thank you, Rita. Teresa.

Tereza Horejsova:
So yes, let’s consider this openness that we want to build to the extent possible as a win-win-win situation. And without wanting to sound pathetic, let’s really try to talk more and exchange more. Yes, thank you.

Calandro Enrico:
Thank you. Claire.

Claire Stoffels:
Thank you. I just have one major point. So I think I really believe that it’s donors as well as implementers’ role to really promote and carry out awareness-raising measures, enhance communication, and facilitate cooperation amongst actors and bring relevant parties. together and facilitate knowledge and information sharing as much as possible.

Anatolie Golovco:
I think that’s my main point. Thanks. Anatoli? I believe my main point is we need a people-centric approach in all the cyber security. So we need to reduce the complexity of tools to protect and to rethink from scratch the entire cyber security architecture of states. So having a good strategy is what we need and probably we need to put more effort in the strategy and rethinking the existing ecosystem instead of putting layers of security on top of badly

Hiroto Yamazaki:
designed things. Thank you. Hiroto? Okay, thank you very much. So through this discussion, of course the coordination is very important with the multi-stakeholder or the bilateral to reduce the duplication or to maximize the development impact, but also we must think about the sustainability from the coordination perspective. So we should not, of course we can reduce some duplication, but we should not provide one training or one awareness or one meeting or something like that. That’s not a good way. So we should create something like a kind of sustainable outcomes or sustainable output so that we can coordinate, real-time coordinate with the stakeholders that real-time online, but also I think through that kind of a sustainable outcome such as the guidelines or the training materials or trained trainer and so on so that we can and also we need we can refer to such a sustainable outcomes or sustainable output so that we can kind of the delayed or time difference the coordination. So not the real-time, but maybe we leave one country, but maybe later one donor will join that country, so then that donor can see our outcomes so that we can, we don’t need to talk, but we can. their coordinator later, after we leave.

Calandro Enrico:
Thank you very much, Hiroto. Louise.

Louise Hurel Marie:
Wonderful. Three key points. Definitely, first, I’d say, both from us, like think tanks, implementers, and others, and also like recipients, include recipients in the design phase. I think we can do that. That is a very practical element that we can do. Be it providing a bigger inception phase for projects where we actually get to engage with stakeholders then. So I think from an implementer’s perspective, that is great. From a donor perspective, I think it’s also about thinking what you need to embed that while, you know, to embed that in the timeline. Second, we need to really, and that’s my point since the start, really break down the typology. We need to design a typology, actually, to think about CCB that accounts for the different contexts as we see the landscape evolving, in terms of agencies, stakeholders, and as I said, crises up to conflict and post-conflict situations. And finally, on the South-South point, I would also say that, you know, there’s a next step for developing countries to also kind of empower their development agencies or to also kind of bring more of the cyber into the development agencies, right? Sometimes it’s a different part of the government that’s doing that, but I think there is a remodeling thing. There’s an element of bringing cyber into that. So I would say these are my three key points.

Calandro Enrico:
Thank you very much, Louise. And Regine.

Regine Grienberger:
I think I have nothing to add, so amen.

Calandro Enrico:
It was a lot. No, no, no problem at all. So thank you. Thank you very much for all our panelists for sharing your experience and insight. And of course, for all participants, yeah, we conclude our session. Thanks a lot. But the conversation, of course, doesn’t end here. Thank you all.

Audience:
Thank you. Thank you. Thank you. Thank you all. Thank you. Thank you. Thank you all. Next time I’ll play the devil’s advocate, y’all be like, no, I don’t agree, I don’t agree. We all agree pretty much.

Louise Hurel Marie

Speech speed

185 words per minute

Speech length

2928 words

Speech time

948 secs

Anatolie Golovco

Speech speed

148 words per minute

Speech length

1111 words

Speech time

450 secs

Audience

Speech speed

207 words per minute

Speech length

838 words

Speech time

243 secs

Calandro Enrico

Speech speed

172 words per minute

Speech length

3207 words

Speech time

1116 secs

Claire Stoffels

Speech speed

168 words per minute

Speech length

1487 words

Speech time

531 secs

Donia

Speech speed

204 words per minute

Speech length

77 words

Speech time

23 secs

Hiroto Yamazaki

Speech speed

149 words per minute

Speech length

1164 words

Speech time

468 secs

Regine Grienberger

Speech speed

150 words per minute

Speech length

1487 words

Speech time

594 secs

Rita Maduo

Speech speed

128 words per minute

Speech length

863 words

Speech time

404 secs

Tereza Horejsova

Speech speed

150 words per minute

Speech length

1412 words

Speech time

563 secs

Technology and Human Rights Due Diligence at the UN | IGF 2023 Open Forum #163

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Nicholas Oakeschott

The United Nations High Commissioner for Refugees (UNHCR) has developed a comprehensive digital transformation strategy that will be implemented from 2022 to 2026. This strategy aims to leverage technology and innovation to improve efficiency and effectiveness in providing support and assistance to forcibly displaced and stateless individuals. Additionally, the UNHCR is in the process of developing a formal policy framework on human rights due diligence. This framework will ensure that the organisation’s use of digital technology is aligned with international human rights and ethical standards.

In managing the risks associated with digital technology, the UNHCR has a wide range of policies and guidance in place. These policies cover areas such as privacy, data protection, and procurement. By implementing these policies, the organisation aims to mitigate any potential negative impacts and ensure the responsible use of digital technology.

The UNHCR is actively exploring the implementation of guidance on human rights due diligence. This includes considering the use of artificial intelligence (AI) and developing a user-friendly digital tool for implementation. The organisation takes a hands-on approach in reviewing existing policies and guidance on human rights due diligence to ensure the practical implementation of these measures.

The engagement with the UN Human Rights team has been crucial for the UNHCR in foreseeing and addressing potential challenges. The guidance on human rights due diligence has been increasingly seen as implementable at the field level, indicating its effectiveness in practice.

The protection of forcibly displaced and stateless people is integral to the mission of the UNHCR. They prioritise the provision of support, assistance, and protection for these vulnerable populations. This commitment is underpinned by the organisation’s concern for reducing inequalities and fostering peace, justice, and strong institutions, which are reflected in the relevant Sustainable Development Goals.

The UNHCR recognises the importance of engaging with states and the private sector based on the guidance they provide. By applying the guidance, the organisation believes it can establish a stronger basis for increased engagements and partnerships in pursuing its mission.

The UNHCR’s digital transformation strategy is a significant step forward in serving people digitally. Prioritising digital protection, digital inclusion, and providing more digital services are key goals of this strategy. It demonstrates the organisation’s commitment to embracing innovative approaches to enhance service delivery and access to information.

Engaging civil society stakeholders in the implementation of human rights due diligence is seen as a major opportunity and challenge for the UNHCR. The organisation values the input and collaboration of civil society in ensuring the responsible and ethical use of digital technology. Ongoing dialogue and discussions with civil society are recognised as necessary for continuous learning and improvement.

While the UNHCR acknowledges the importance of independent assessments and auditing, they are still evaluating the capability of independent entities to undertake audits beyond the existing system. Currently, the organisation has an independent auditing function that reviews the work of UN agencies. Expert suppliers also assist with data protection impact assessments at the global and field level.

In conclusion, the UNHCR is committed to the responsible use of digital technology and the preservation of human rights in its operations. The development of a digital transformation strategy, policy framework on human rights due diligence, and active engagement with stakeholders reflect the organisation’s dedication to continuous improvement and the pursuit of its mission. By leveraging digital innovation and technology, the UNHCR aims to provide better support and protection for forcibly displaced and stateless individuals while upholding international human rights and ethical standards.

Quintin Chou-Lambert

The United Nations (UN) recognises the importance of utilising digital technology across various aspects, such as peace and security, sustainable development, and human rights. This commitment extends to the UN’s internal operations, encompassing recruitment, procurement, and IT services. The UN aims to ‘walk the talk’ when it comes to its own use of digital technology.

A rights-based approach is considered paramount in the UN’s utilisation of digital technology. This is emphasised in the UN’s Roadmap for Digital Cooperation, which was initiated in 2020. The roadmap places great significance on incorporating a framework that upholds human rights in the UN’s digital endeavours. It serves as a guide and reference point in decision-making processes related to digital technology, aiding individuals in making informed judgments on a day-to-day basis.

Non-binding guidance plays a vital role in promoting horizontal alignment among UN agencies. This guidance assists in addressing the challenges of harmonisation and facilitating effective translation into local contexts. It encourages each entity within the UN to integrate these principles into their own local procedures. By doing so, entities can ensure optimal outcomes, particularly in areas such as the tendering process. Non-binding guidance acts as a beacon, guiding UN entities in hard-coding these principles into their operational procedures.

Each entity within the UN possesses distinct governing bodies and housekeeping rules. The Secretariat follows housekeeping rules prescribed by the General Assembly, while other agencies, such as UNHCR, enjoy more flexibility in certain cases. This diversity highlights the unique nature of each entity’s governance structure within the UN.

The COVID-19 pandemic served as a test for the UN’s application of the human rights due diligence framework. This framework assists in decision-making processes and helped the Secretariat make appropriate judgments during the pandemic. An example of this is when the UN considered the introduction of a contact tracing system. The human rights due diligence framework offered a guiding principle in ensuring that human rights were upheld throughout the decision-making process.

In conclusion, the UN is committed to employing digital technology in various areas, both externally and internally. A rights-based approach is fundamental to the UN’s use of digital technology, as highlighted in the Roadmap for Digital Cooperation. Non-binding guidance aids in maintaining alignment among UN agencies, while entities are encouraged to incorporate principles into their operational procedures to achieve optimal outcomes. Each entity within the UN has its own governing bodies and housekeeping rules. The human rights due diligence framework serves as a guide in decision-making processes, ensuring human rights are upheld.

Scott Campbell

The process of developing guidance on human rights due diligence for digital technology has been lengthy but valuable. It has involved multiple rounds of bilateral consultations, open forums, and heavy consultation with UN entities as well as external partners. Scott Campbell, in emphasising the importance of policy alignment and coherence, notes that the UN has considered these factors at the forefront of its thought process regarding digital technology and human rights due diligence.

The process has not only facilitated the development of guidance on human rights due diligence for digital technology but has also helped in mainstreaming these efforts across the UN. It has fostered a better understanding and alignment across the system, which is crucial for ensuring effective implementation.

The UN is now nearing the completion of this process. Despite its lengthy duration, the value that it has generated cannot be undermined. The involvement of various stakeholders has ensured a comprehensive and inclusive approach to developing this guidance.

In addition to the development of guidance for digital technology, there has been a parallel process to study the implications of expanding the scope of the current human rights due diligence policy. The guidance on digital technology intersects with this broader expansion, and there has been a broad agreement that it should be grounded in the parameters for this expansion. Recently, at an Executive Committee meeting, the parameters for the human rights due diligence framework policy were agreed upon.

Overall, the process of developing guidance on human rights due diligence for digital technology has been challenging but rewarding. It has facilitated the mainstreaming of these efforts across the UN and cultivated a better understanding and alignment within the system. The emphasis on policy alignment and coherence highlights the UN’s commitment to ensuring that digital technology is used responsibly and in a manner that protects and upholds human rights.

Audience

The analysis explores various concerns and issues related to the decision-making processes within the United Nations (UN) system. One of the key concerns raised is the lack of transparency in the procurement of technologies within the UN system. It argues for the consideration of the influences and priorities of specific actors who fund the UN system. This suggests that external factors may influence the decision-making process, potentially not aligning with the UN’s best interests.

Additionally, the analysis acknowledges the current lack of clarity within the internal mechanism of the UN system. This lack of transparency and clarity can impede effective decision-making and hinder the efficiency and effectiveness of the UN system.

Furthermore, the analysis questions the selection of technologies within the UN system. It suggests that the selection process should consider more than just efficiency. An example is given, citing the use of biometric technology by the United Nations High Commissioner for Refugees (UNHCR). While biometric technology has shown some efficiency gains in reducing fraud, the potential risks to the populations exposed to these technologies outweigh the minimal benefits. This highlights the importance of prioritising not only efficiency but also the potential risks associated with implementing certain technologies.

Another topic discussed in the analysis is the independence of assessments within UN agencies. Concerns are raised about who should conduct these assessments and ensuring their independence. Ana Cristina Ruelas from UNESCO specifically questions the independence of assessments and how to navigate diverse member states’ views when conducting evaluations. This raises considerations about maintaining unbiased assessments and managing potentially conflicting perspectives within UN agencies.

Furthermore, the analysis questions the absence of a due diligence process in the current development of UNESCO guidelines. An anonymous individual raises concerns about the lack of due diligence in the guidelines. This highlights a potential gap that could lead to oversight and negative impacts.

To conclude, the analysis highlights several key concerns within the decision-making processes of the UN system. These include the need for transparency and consideration of external influences, the importance of weighing potential risks when selecting technologies, ensuring the independence of assessments within UN agencies, and incorporating a due diligence process in guidelines, such as those being developed by UNESCO. These concerns highlight areas for improvement in the UN system and can contribute to more effective and accountable decision-making processes.

Peggy Hicks

The United Nations (UN) is nearing completion of a guidance document on human rights due diligence, a milestone in its efforts to protect human rights. The directive aims to standardize and harmonize approaches across the entire UN system, emphasizing the commitment and dedication of the UN and its partners to safeguarding human rights.

Partners within the UN have already started applying human rights due diligence as they introduce new technologies. This proactive approach ensures careful assessment and mitigation of potential human rights impacts. The directive seeks to harmonize these approaches and ensure consistency across different sectors of the UN system, preventing any gaps or inconsistencies in human rights protection.

The UN recognizes the importance of aligning actions across various sectors, viewing the directive as necessary to facilitate collaboration and achieve common goals. By implementing this directive, the UN demonstrates its commitment to SDG 16: peace, justice, and strong institutions.

Despite the complexity of some issues, UN agencies are genuinely committed to implementing the human rights due diligence framework. This dedication reflects the UN’s determination to protect human rights and uphold its institutional values.

However, the UN faces challenges in establishing public-private partnerships and engaging with corporations. Adequate funding is essential for the UN’s functioning and the protection of human rights and refugees. Insufficient funding impedes the UN’s work, making partnerships with the private sector crucial in bridging this gap.

Transparency, assessments, and enforcement are crucial aspects that the UN interagency working group will consider. These elements ensure accountability, identify areas for improvement, and enforce the human rights due diligence framework.

The UN’s digital transformation is heavily reliant on funding and partnerships. Many UN agencies recognize the lack of funding available for digital transformation initiatives, except within the context of important partnerships. The UN acknowledges the importance of advancing its digital capabilities to improve efficiency and effectiveness in addressing human rights issues.

In conclusion, the UN’s progress in completing the human rights due diligence guidance document reflects its commitment to promoting and protecting human rights. The directive aims to standardize approaches and ensure consistency within the UN system. Addressing challenges related to public-private partnerships, funding, and digital transformation is necessary to support the UN’s work effectively. Transparency, assessments, and enforcement are critical components that further strengthen the UN’s commitment to human rights.

Marwa Fatafta

The analysis presents several significant points related to human rights evaluation, transparency in decision-making, enforcement of human rights due diligence tools, technology implementations, public-private partnerships, and the need for evidence-based solutions.

One key argument highlighted is the necessity of incorporating independent assessment in human rights evaluation. The analysis argues that internal assessments lack oversight and accountability, and having an independent third party would mitigate bias and ensure proper scrutiny. This approach is perceived as essential in ensuring fair and accurate evaluations.

Transparency in practice and decision-making is emphasized as another crucial aspect. The analysis suggests that transparency in decisions allows affected communities to evaluate the decisions and assess how well they serve their needs and protect their rights. By providing transparency, decision-makers can be held accountable for their actions, leading to better outcomes.

Furthermore, the analysis advocates for the effective enforcement of human rights due diligence tools. It is argued that the tools themselves are only as good as their enforcement. If not implemented properly, sensitive personal data can be exposed to risks. Therefore, strong enforcement mechanisms are necessary to protect individuals’ rights and ensure the effective functioning of these tools.

The potential fallout of not conducting due diligence on technology implementations is also cautioned against. It is highlighted that when technology is used without due diligence, it can result in exposing sensitive personal data and create challenges in safeguarding the collected data. Therefore, it is crucial for organizations to thoroughly assess and evaluate the risks associated with technology implementation before deploying it.

The analysis also underscores the importance of transparency in public-private partnerships, particularly in regards to the selection of specific companies for partnership with United Nations (UN) agencies. It notes a lack of available information on why certain companies are chosen, advocating for greater transparency in these partnerships to ensure fairness and accountability.

Additionally, the need for evidence-based solutions is addressed. The analysis suggests that technologies are sometimes deployed or used without proper evidence, which can have negative consequences. It cautions against relying on “snake oil” solutions that potentially harm human rights. Instead, the focus should be on implementing solutions that have been thoroughly researched and proven effective.

Notably, the analysis raises the significance of scrutinizing certain technologies, such as artificial intelligence (AI). It highlights the importance of examining the narratives behind these technologies to ensure they align with ethical principles and human rights standards.

Moreover, the analysis supports the initiative by the Office of the High Commissioner for Human Rights (OHCHR) to incorporate human rights due diligence in UN bodies. This step is regarded as important and much-needed in the effort to protect and uphold human rights globally.

Lastly, the analysis acknowledges the role of civil society in building bridges and reaching out to stakeholders. Access Now, for instance, is mentioned as an organization willing to connect and provide consultation when needed. This highlights the potential for civil society to contribute to promoting human rights and fostering collaboration among different stakeholders.

In conclusion, the analysis sheds light on various aspects related to human rights evaluation, transparency in decision-making, enforcement of human rights due diligence tools, technology implementations, public-private partnerships, and evidence-based solutions. It emphasizes the importance of independent assessment, transparency, and effective enforcement in safeguarding human rights. The analysis also advocates for responsible technology implementation, transparency in public-private partnerships, and the need for evidence-based solutions. The support for the OHCHR’s initiative and the role of civil society in building bridges further strengthen the call for greater human rights protection and collaboration among stakeholders.

Catie Shavin

The guidance for human rights due diligence has undergone significant enhancements as a result of thorough consultations with both UN and external stakeholders. Four drafts of the guidance document have been circulated, and valuable feedback has been gathered during this process. The consultation process involved engagement with various actors, both within and outside the UN system. This inclusive approach ensured that a wide range of perspectives were considered, resulting in a more robust and comprehensive guidance.

One key aspect that emerged from the feedback received was the need for gender and intersectionality sensitivity in human rights due diligence. Entities highlighted the importance of considering the diverse impacts on girls, women, and gender non-conforming individuals. As a response, the new approach to the guidance explicitly incorporates this inclusion, addressing the concerns raised during the consultation process. By incorporating gender and intersectionality sensitivity, the aim is to ensure that the guidance is applicable and effective in promoting equality and reducing inequalities.

Furthermore, the consultation process resulted in insightful learnings that will support UN entities in effectively implementing the guidance. Entities with significant experience in human rights due diligence have been identified, and their insights have been gathered to understand the best practices and challenges associated with the implementation. Additionally, the consultation process helped identify the language and approaches that resonate with colleagues across the UN. These valuable learnings will aid in supporting UN entities and enhancing their capacity to implement the guidance.

The consultation process also shed light on the areas of alignment and divergence between UN entities and business enterprises when it comes to implementing human rights due diligence for digital technology use. The process initiated conversations regarding the adaptation of approaches such as the UN Guiding Principles for Business and Human Rights. By identifying areas of alignment and divergence, the consultations have contributed to a better understanding of the challenges and opportunities in implementing human rights due diligence in the digital technology sector.

In conclusion, the guidance for human rights due diligence has significantly evolved through extensive consultations with UN and external stakeholders. The process has led to the inclusion of gender and intersectionality sensitivity, generating valuable insights that will aid in supporting UN entities in effectively implementing the guidance. Moreover, the consultation process has provided a clearer understanding of the alignment and divergence between UN entities and business enterprises in implementing human rights due diligence for digital technology use. These findings contribute to a more comprehensive and adaptable framework for promoting human rights and ensuring accountability in various contexts.

David Satola

The World Bank’s operational work differs from other organizations in that it provides member states with financing for projects, known as recipient-executed activities, instead of directly conducting the work themselves. This approach allows for a distribution of resources and responsibilities, as member states are responsible for implementing and managing the projects. It promotes economic growth and development within member states by leveraging the World Bank’s financial support.

However, the World Bank’s approach faces challenges due to a lack of clarity and synthesis of rules for member states to apply. The nature of the World Bank’s business model presents member states with difficulties in understanding and navigating the set of rules required for project implementation. The absence of a unified framework can result in confusion and discrepancies in the application of rules across member states. To address this issue, an improved clarity and synthesis of rules are needed to provide a more uniform approach to project implementation.

In the realm of human rights due diligence, the use of a principles-based approach is seen as commendable and beneficial. This approach allows for more flexibility in interpreting and applying human rights standards. By considering the evolving nature of standards and rules, the principles-based approach seeks to overcome the limitations of a rigid ‘one-size-fits-all’ approach. It recognizes that different countries have varying levels of development and economic maturity, making it challenging to impose the same model on all member states. Adopting a principles-based approach enables the World Bank to acknowledge and address these differences, promoting a more inclusive and adaptable framework for human rights due diligence.

While accountability is considered crucial within the World Bank, there is currently no specific mechanism in place to address human rights issues. However, the World Bank possesses multiple tools, such as the grievance redress mechanism, fraud and corruption guidelines, the Inspection Panel, and the Independent Evaluation Department. These tools have the potential to be adapted and expanded to include human rights considerations. Incorporating human rights issues into these existing mechanisms enhances the World Bank’s accountability measures and ensures that human rights violations or concerns are properly addressed.

The World Bank has also taken measures to protect personal data within its projects, particularly in the context of the COVID-19 pandemic. Recognizing the significance of data protection, the World Bank has worked with borrowers to embed data protection frameworks in sovereign-to-sovereign agreements. This incorporation of data protection into projects serves as a powerful legal tool to safeguard personal data in countries without existing data protection laws.

In conclusion, the World Bank’s operational work provides member states with financing for projects, promoting economic growth and development. However, challenges arise from a lack of clarity and synthesis of rules for member states to apply. To address these challenges, a principles-based approach is lauded for its flexibility and adaptability. While accountability mechanisms within the World Bank currently lack specificity regarding human rights, existing tools can be tailored to include human rights considerations. Additionally, the World Bank has implemented measures to protect personal data in projects with countries lacking data protection laws. It is suggested that the World Bank enforce specific mechanisms for human rights as part of its accountability measures, enhancing its commitment to promoting peace, justice, and strong institutions.

Session transcript

Peggy Hicks:
Hello, everyone. Welcome. Very glad that you’ve decided to join us competing against receptions and other responsibilities for this session on technology and human rights due diligence at the UN, from guidance to practice. My name is Peggy Hicks. I’m the director of the thematic division at the Office of the High Commissioner for Human Rights in Geneva. And I’d like to very much welcome all of you and thank our co-sponsors, the Office of the Tech Envoy and the European Union, for their help in this session. We thought it’d be good to reconvene around these issues. Many of you have been involved in this process for some time. We are working on the human rights due diligence guidance for the UN, and we’re nearing the finish line is the phrase that I’ve been told to use. Scott Campbell from our office will tell us more about what that means in practice. But I do want to emphasize that while we’ve been working on this document, it hasn’t stopped our partners within the UN from applying human rights due diligence on an ongoing basis as they have rolled out and used new technologies. And through that process, they’re also sort of seeing some of the challenges they face in implementing the need to harmonize approaches across the UN system. So it’s sort of reinforced our desire to move forward on this process. And we’ll talk about that more as we go forward. Before we do that, I’d like to give the floor to Quentin Lambert from the Tech Envoy’s office for some opening words.

Quintin Chou-Lambert:
Thank you very much, Peggy. Yes, and thank you all for being here. My job is very simple and short, just to give a couple of welcoming remarks and to frame this discussion. So this human rights due diligence for digital technology is really about the United Nations walking the talk when it comes to its own use of digital technology. Grounded in the roadmap for digital cooperation back in 2020, this was already on the agenda for the UN as technology was coming into the UN’s work. And since then, the UN’s been grappling with this issue internally like many other organizations. As you can imagine, the different areas of the UN’s work across the peace and security pillar, sustainable development, and kind of human rights work itself, but also in its internal operations. So the use of digital technology in things like UN recruitment or recruiting UN personnel, UN procurement, IT services, and that kind of thing. So obviously there are challenges which all organizations are grappling with, and this is a really good opportunity to take a rights-based approach and walk the talk when it comes to digital technology in the UN. So back over to you, Peggy.

Scott Campbell:
Great, thanks very much. We’re very glad to partner with the office on this important area of work. As I said, I’m now going to turn the floor to Scott Campbell, our senior human rights officer with UN Human Rights who leads on this process, and he’ll give us an update on where the process currently stands. Over to you, Scott. Thank you very much, Peggy, and just a quick sound check to make sure you’re hearing me okay in the room. All great. Fantastic. Very pleased that we’re here today, and as Quintin, I think, very rightly put, seeing us all moving forward at the United Nations on walking the talk, and also pleased, as Peggy mentioned, to be nearing the finish line on this process. The drafting and consultation process for the guidance has been quite lengthy. It’s involved multiple rounds of bilateral consultations of open forums like this one, other public events, and we’ve consulted heavily with UN entities as well as with external partners, including member states, tech companies, and diverse members among our civil society partners. The process internally, while it has been lengthy, I should underscore it’s been very useful in giving us an opportunity to engage on human rights with a large number of entities across the full UN family, and some of these entities are very familiar with human rights mainstreaming, human rights due diligence, we’ll hear from a couple of them today, other entities far less familiar with human rights. So the process has really reinforced a broader mainstreaming of human rights due diligence efforts across the UN, and has assisted us in building more understanding and aligning approaches across the system. The process externally, the mandate given to us by the Secretary General to develop the guidance called specifically for consultations with external partners, and in particular, those most affected by digital tech, and I think this has really added a lot of value to where we’ve landed in terms of the content of the guidance. And I want to give a shout out to Access Now for having facilitated a number of public events and consultations with civil society partners. Just quickly on the timing, a fourth draft of the guidance was circulated back in July to the Secretary General’s Call to Action Interagency Working Group, which is a UN body. Comments were received, we’ve done some consultations in August and September, and we are, as mentioned, nearing the finish line. I just want to mention one note before handing it back over. As the process has evolved, alignment and policy coherence across the UN system has really been forefront in our thinking. And this guidance on digital tech intersects with another parallel and very much related process, which is a study to examine the implications of expanding the scope of the current human rights due diligence policy of the United Nations. And as many of you may be aware, this is a policy that’s been in effect since 2011, but has a narrow focus on UN support to non-UN security forces. So this study on expanding the existing policy, which was also mandated by the Secretary General’s Executive Committee, was begun before we began our work on this non-binding guidance for our use of digital tech at the UN. And in discussion with many actors along the way throughout the process, there was broad agreement that in drafting the human rights due diligence guidance, we needed to first be grounded in the parameters for the broader expansion of the existing human rights due diligence policy, which is a binding policy. And that that first, that groundwork, that foundation first needed to be set. We’re very, and of course the guidance that we would develop, which is non-binding guidance should of course align with that broader policy. So we were very pleased to see back in June at the Executive Committee agreement on the parameters of the human rights due diligence framework policy agreements on the next step to draft that policy and to develop an implementation plan and to seek resources. So with that set that we’re now, we’re now have the space to move forward on finalizing the draft guidance for tech, ironing out any remaining details and preparing for consideration of the guidance by the Secretary General’s Executive Committee on which you’re now trying to get on the calendar for that, that committee’s meeting. Following consideration by the Executive Committee and we, we trust with their endorsement, the Secretary General may decide to share the guidance with the Chief Executives Board for their consideration and potential use across the full UN system. So I’ll leave it at that on, on the process and hand it back over to you Peggy. Thank you.

Peggy Hicks:
Great. Scott will stay online for, for interpretation of, of all of that, which some who are maybe not as deep in the UN system may need at some point. But before we do, to move to that, I’d like to beg your indulgence for one more member of our team who will give us a substantive update on, on the issues that have arisen through the consultations and, and where we stand and on the guidance itself. So Katie Chavin, who’s our Senior Project Advisor on the project will come in now. Over to you, Katie.

Catie Shavin:
Thanks very much, Peggy and everyone. It’s wonderful to be here with you. As Peggy mentioned, I’m a Business and Human Rights Specialist and I’ve been working with Scott and his team to support the development of this guidance. I thought I would very briefly just offer a description of how the guidance has evolved through these rounds of consultation and share a few lessons and insights that we’ve gained throughout the process. Sorry, my computer’s a little slow today. So in terms of the process itself, as Scott mentioned, we’ve now circulated four drafts of this guidance for feedback with both UN and external stakeholders. And we’ve been very grateful for the time reviewers have given to this process. We’ve received an enormous amount of very constructive, thoughtful and helpful input, which has really supported us to strengthen the guidance, but also to ensure that it responds to the needs in the different contexts of UN entities. And it’s also given us some insight to inform early planning to support the guidance’s implementation, if indeed it goes forwards. For those who haven’t been following this process, earlier rounds of feedback really focused in on seeking clarity about the status of the guidance. Is it a guidance document? Is it UN policy? Will it be mandatory? Is it there to support entities? What is the guidance about? What is it trying to achieve? We also gained some insight into the different levels of familiarity that UN entities have with human rights due diligence processes, which has really helped us tailor the language and the approach to the guidance, particularly for those users who are newer to working with these types of concepts. Some of the earlier feedback provided an opportunity also for us to reflect on and to discuss with entities the appropriate scope of the guidance, what’s practicable, what best helps us steer towards a strong longer-term approach to managing the human rights risks of digital technology use. We received some requests for more concrete examples to help bring the material to life and give people a sense of what it would actually look like to implement in practice. And some of the UN entities actually worked with us to develop some examples that are hypothetical but also realistic of the types of situations that they face or anticipate facing as the digital technology use grows. We’ve also had some opportunities to explore how the guidance can be applied in sensitive contexts, for example by entities involved in the provision of emergency or humanitarian support, so we could tailor it to enable those entities to apply the guidance while navigating often very challenging and complex contexts and considerations. And we’ve also been able, through the process of consultation, to really explore how best to apply concepts and language around human rights due diligence that were originally developed for the private sector to UN entities, you know, recognising that there are some differences in the mandate, in the purpose, in the everyday language that is used across the UN family. When it comes to the most recent round of feedback, we generally heard very strong support for the approach that the guidance now takes, which was encouraging, and also some very targeted and very helpful feedback to support us to further hone and strengthen it. So, for example, a number of entities provided some helpful suggestions as to where we could more closely align the guidance with other agendas that are important across the UN system. So, for example, more prominently highlighting where digital technology use and human rights due diligence need to be sensitive to the different impacts on girls, women and gender non-conforming people, and to support an approach that is based on the principles of inclusion and intersectionality, so we’ve made that much more explicit. Some of the reviewers also helped us to identify other relevant principles, guidance documents and other resources on human rights and technology that are already in use across the UN, which has supported us to really promote an approach to the guidance that aligns with, rather than duplicates, those existing processes. The most recent round of review also offered us a chance to test some of the hypothetical examples that were now included in the guidance with other stakeholders, and we received some input on how we could further refine those to reflect issues that arise across different entities, not just the entities that helped us develop these examples, and to ensure that the language that we’re using resonates with users across different parts of the UN family. Finally, we heard that our efforts to clarify the relationship between this guidance and the process to develop a framework policy on human rights due diligence hadn’t quite hit the mark, and we received some helpful suggestions on how we could make this clearer for readers earlier in the guidance. As Scott mentioned, we’re currently working on the fifth and hopefully final or near final draft, and I think it’s likely to look very similar to the fourth draft, for those of you who have seen that. As I mentioned, the most recent round of feedback has really yielded input that’s helped us strengthen the guidance by tweaking the language in subtle but, I think, important ways, and to include more explicit connections to existing processes and resources, so they’re not major changes. Stepping back to reflect on the process as a whole, it’s generated some interesting learnings that are supporting us to start to think through how best we might support UN entities to implement the guidance when finalised. The consultation process, it’s not just helped us to hone the guidance, it’s provided us with some time and opportunities to learn more about where different entities are at when it comes to human rights due diligence, meaning that we’ve got a better sense of what might be needed to support capacity building in a more targeted and hopefully helpful way going forwards. We’ve learned a lot about what language resonates with colleagues across the UN. We’ve also been able to identify entities across the UN that already have significant experience working with human rights due diligence and have practical approaches and insights that they could potentially share with others to support that capacity building process. Our engagement across the UN has also generated a lot of food for thought on what risk management for the UN looks like in a world where a proactive approach to human rights is increasingly expected. Perhaps especially as we enter into an era in which increasing use of digital technology paves the way for a new world of human rights risks as well as potential human rights benefits. We’re very mindful that expectations not just of business but also of other organisations, including UN entities, when it comes to managing human rights risks and issues are becoming stronger and are also becoming increasingly connected to discussions about how to address environmental issues including the climate and biodiversity crises. Related to that, the process overall has provided a great opportunity to reflect ourselves and with both UN and external stakeholders who’ve been involved in the various rounds of consultation on the similarities and differences between UN entities and business enterprises when it comes to implementing human rights due diligence for digital technology use. We went into this process, I think it’s fair to say, with a general sense that it made sense to leverage and build on standards such as the UN guiding principles for business and human rights which were developed for business and we’ve been able to start the work of initiating conversation with those involved in the consultations on the nuances of adapting that approach. I might leave my comments there, though like Scott I will stay on the line in case there are any questions later.

Peggy Hicks:
Great, thanks very much Katie for that overview. We’re going to turn now to a discussion that looks at the practical realities and challenges of applying human rights due diligence for the use of technology within the UN system. As noted, while the work on the due diligence guidance has been underway, a number of UN entities have already dived into the space and we’re going to hear from two of them right now, UNHCR and the World Bank, to share some of their experience. For this section, we’re very fortunate to have with us David Sotola, the Office of the Legal Counsel at the World Bank, Legal Vice Presidency, and I do need to note that David is joining us at a miserably early hour in Washington, D.C., so thank you so much for being here, David. From UNHCR, Nicholas Oakeshott, Senior Policy Officer for Digital Protection at UNHCR, who we’ve worked closely with in the course of our work on the human rights due diligence guidance, and he helped us organize or organized himself a workshop on applying HRDD to UNHCR’s use of technology and complex field settings, and he’s been spearheading those efforts across UNHCR. So I’m going to ask those two panelists a couple of questions to give us a sense of how this looks in practice. Turning to you, Nick, first, since UNHCR has been a real leader in applying human rights due diligence and its use of digital technology, we’ve really appreciated your collaboration. Could you please give us a sense of how you’re applying human rights due diligence, particularly in complex settings like those that UNHCR engage in, including the systems and mechanisms that are in place and how they’re being strengthened? Thanks.

Nicholas Oakeschott:
Thanks, Peggy. I mean, as you’d expect, UNHCR has a wide range of policies and guidance that can help to manage risks in its use of digital technology. They range from privacy and data protection through to procurement, partnership, due diligence and beyond. However, in terms of a formal policy framework on human rights due diligence, that’s less developed and very much in line with what Scott was saying earlier on. But in our digital transformation strategy, which runs from 2022 to 2026, we’ve set the goal that UNHCR’s own use of digital tech will align with international human rights and ethical standards and in line with what was said earlier on about walking the talk. But these standards will also be promoted with states and the private sector with a focus on high-risk technologies, uses and contexts. So our process of engagement with the guidance development has very much been around building UNHCR’s understanding and capacity to apply human rights diligence approaches to its use of digital technologies in order to meet this overall strategic objective. As you mentioned earlier on, in January, we brought together a multifunctional team to implement a simulation of the third draft of the guidance, looking at field-based case studies. This approach allowed us to engage with experts on human rights due diligence from within the UN system, but also to receive advice from an international law firm, DLA Piper, which has expertise in advising the private sector on these issues. And this was facilitated through a strategic partnership that we have with DLA, which gave us access to this advice on a pro bono basis, which has always helped. By looking at the case studies, we were able to identify more clearly the potential implementation challenges, but also where the guidance added value to our existing policies and processes. The second case study, which looked at the innovative use of social media platforms to deliver protection information to people on the move, such as avoiding how they could avoid risks of exploitation and trafficking in online ads related to accommodation or work. That was particularly positive and resulted in immediate follow-up. We’ve had a regional bureau bring together another multifunctional team to undertake a full risk assessment of this approach, and that resulted in some quite important adjustments and a decision to develop some more established guidance on this innovation. We’ve also got, I think, a reasonably clear and positive identification of the way forward. First of all, to meet an immediate priority of the UN Security Council. will consider the guidance, even though it’s still a draft, as part of a multifaceted assessment of UNHCR’s developing approaches to the use of artificial intelligence, including generative AI. This will include the application of UNHCR’s new and expanded general policy on data protection and privacy, as well as the principles on the ethical use of AI in the UN system, which were adopted in September last year. And secondly, we’re going to review our set of existing policies and guidance to see how we can best implement the guidance once it’s adopted. This will also include exploring whether a user-friendly digital tool could help the field and other internal stakeholders in implementation, as well as how best to engage with effective communities and civil society, which is an important but challenging part of the guidance. So UNHCR’s journey down this road has begun, and I think a quite clear and useful way forward has been identified. Back to you, Peggy.

Peggy Hicks:
Great, thanks very much, Nick. It’s really clear that you’ve gotten a head start on a lot of this, and then the rest of us in the UN system will really be able to draw on some of those good practices that you’ve been working on. I’m going to turn to David now and ask you for a perspective from the World Bank. In my experience, it’s not always that easy to talk about human rights in a World Bank setting, so I’m a bit curious to hear how it’s been for those of you within the bank that are working in the area of human rights due diligence, and maybe if you could say a bit about whether you faced any pushback and how you’ve addressed it.

David Satola:
Sure. Good morning and good afternoon. Just a quick sound check. Can you hear me okay in the room? It’s great, David. Thank you. Thank you. Great. Well, thank you all for inviting me here. Despite the early hour, I’m delighted to be here virtually with you and for including me in the panel. I’m sorry not to be in Kyoto in person. Before I get onto some of the specific challenges, I do want to take just a minute and applaud the effort that you all are doing in trying to synthesize these disparate evolving threads. I mean, I think in the past few years, all of us have taken on different approaches to human rights and technology, whether it be in cybersecurity and more recently with artificial intelligence. I think the synthetic approach that you all are doing here to have a broad approach to human rights due diligence is really to be applauded. I also think that, and Katie and others and Scott have mentioned this before, but there are some elements of the process that you’re going through that I think are extremely important and that will resonate with those who have history with the Internet Governance Forum. One is the consultation process that you’ve undertaken. A multi-stakeholder consultation process will only reinforce the strength of this guidance. I can’t underscore enough the issues of capacity building that are mentioned in the document itself. That’s extremely important for us as we are providing financing in these areas for different digital development activities. I’m also struck by the sort of principles-based approach. And I think this is a reflection of one of the main challenges, and it’s not just for us, but it’s for all institutions who are working and trying to involve human rights due diligence, is that it’s difficult to have a one-size-fits-all approach. But if you do it on the principles basis, then that I think is reflected in the document, I think that then that can be achieved. I’d like to echo what Nick said as well, that in the past few years, our organization like UNHCR and others has attacked different things in different ways from procurement to human resources to other things. And now this is an opportunity for us to kind of, again, bring those threads together. So the first challenge I think is that, and this is, I think, exactly what you’re trying to do in this document, is to recognize that there are standards out there that they are evolving in different ways. But this is, I think, a first attempt to try and synthesize that. So that in itself is a big challenge. The biggest challenge for the World Bank in this area is that the way that we do business, the way that our operations are conducted is I think fundamentally different than most other UN organizations. So, and I don’t mean to speak for UNHCR or any of the others, but correct me if I’m mischaracterizing how you do business. When UNHCR does an operational activity, UNHCR is in the field, its staff are doing the work. Whereas in the World Bank context, when we do operational work in the field, we are generally providing financing to our member states to undertake a project. That’s called a recipient-executed, what we refer to as a recipient-executed activity. So we’re one step removed from the kind of direct interaction that most of our other UN family organizations are doing. And that is a principal difference. So one of the challenges that we are facing is that our member states are confronted with the kind of lack of clarity or lack of synthesis and a set of rules to apply. So even if we have a guidance for the UN family, it’s not necessarily gonna translate directly to how our member states might undertake their own due diligence. And with our renewed emphasis on digital as a principal way of doing business and development, I think this will be increasingly a challenge for us. I think I’ll leave it at that for now, but I appreciate the opportunity and look forward to the discussion today. Thank you. Great, thanks very much, David.

Peggy Hicks:
I think it’s really interesting to hear the downstream effects and the way the guiding principles on business and human rights that Katie mentioned are, is that not coming in now? Sorry, microphone, yes, good. Sorry about that. I was just saying that David’s comments about the downstream effects and the engagement, the indirect way that some of the guidance would need to apply given the nature of the way the World Bank works in different settings is really interesting when you look at how it fits with the UN guiding principles on business and human rights framework. So we’ll probably come back to that. But I’ll flip back to Nick now and just to ask a bit from your side, about pushback in terms of human rights due diligence. I know we hear a lot of comments from those that are engaged about some of the challenges they face within their institutions. And it’d be great to hear a bit from your side about what sort of things have come up and how you’ve been able to address them.

Nicholas Oakeschott:
Thanks, Peggy. I think that part of the process of engaging closely with the team at UN Human Rights has been really helpful in helping us to address and think through what the potential pushbacks. It’s important to recognize that UNHCR, as David was saying, is a field-focused organization. And the protection of the forcibly displaced and stateless is a key part, perhaps the key part of its institutional DNA. It’s integral both to the agency but also to its identity. So in this context, new processes could be seen as unnecessary steps to potentially getting in the way of the immediate delivery of protection and humanitarian assistance in challenging emergency contexts, something which is a duplication rather than an atom. However, as the guidance has strengthened from draft to draft, it’s been seen as being increasingly implementable at the field level. And the value add has become clearer, particularly in relation to existing risk management processes. And I’d flag up that in many ways, over the years, we’ve focused on similar questions, but through the use of the UNHCR, over the years, we’ve focused on similar questions, but through the lens of privacy and data protection rather than through a broader human rights due diligence perspective, which I think has obvious pluses in some contexts, but in other contexts is perhaps too narrow in scope. So overall, I’d say that UNHCR sees the guidance as an opportunity to realize the key digital protection strategic goal that I flagged up earlier on, and provides us through experience with a stronger basis for increased engagements with states and the private sector, including technologies on promoting the protection and forcibly of the forcibly displaced and stateless in digital contexts. I think that there are enormous advantages from applying the guidance, even in its draft form to existing field contexts, because it means that we’re more relevant in our approaches and the advice that we can provide to states and the private sector. Back to you, Peggy.

Peggy Hicks:
Great, thanks very much, Nick. And I’m quickly gonna turn back to David just to check in with you to see if you wanted to add on to your comments. I found your notes about the principle-based approaches as being very interesting. And in particular, I know that, especially given the amount of time that we spent on this human rights due diligence guidance, one of our hopes is that it will be a document that does have application beyond the UN system as well. And obviously, when you’re looking at recipient countries and how they engage, perhaps that’s one of the ways in which we could see that happen. But I’d love to get your thoughts on that point, David.

David Satola:
Yeah, thanks, Peggy. And just following up on that very point, and I think that the principles-based approach will enable this. I think one does need to recognize that our members and our member states are the same as your member states. They’re at different levels of development. And so one could call it a maturity levels. There are some big middle-income countries who borrow from the World Bank who are gonna be more sophisticated and have higher capacity to deal with some of these issues. If you take a big middle-income country versus say a small island country with a smaller population, less development, maybe even lower income levels, it’s hard to impose the same model on both. And so I think that the fact that it is a principles-based approach allows for recognition of those different levels of maturity to deal with the thing. I’m not suggesting at all a subjective approach to human rights or a relative approach to human rights. No, I think it’s the due diligence part and the capacity to integrate how one approaches technology and technology issues that would need to be recognized in those contexts. And I think that we find this in our normal lending operations as well. There are some things that are universal that apply across the board. We expect our borrowers to observe the same kind of procurement principles and things like that. So likewise, I think we can hope to achieve a universal approach to human rights due diligence. But in the process, I think we do need to recognize that different countries have different levels of development and economic maturity and that would need to be taken into account. Over.

Peggy Hicks:
Great, thanks very much, David. I think we’ll leave it with Nick and David at that point. And Marwa has been very patient. We’re very fortunate to have Marwa Fattah from Access Now with us. Access Now has been involved, as Scott noted, along this process. And we’d really like to hear from you, you know, Access Now’s views on how the UN’s doing in this area. Why do you think this guides could be important and what you’d like to see as we go forward. Thank you very much, Vicky.

Marwa Fatafta:
Okay, this works. I’m very happy to be here and I hope our colleagues who follow us online can hear me clearly. We, I mean, as a starting point, I think this is a very important step that OHCHR has taken over and to ensure that human rights due diligence is mainstreamed across all UN entities. And we think it’s a step that is, frankly, a bit overdue where we’ve seen the rollout of technologies or the deployment of technologies on a mass scale without sufficient assessment of potential negative human rights risks, which some of them have materialized. And I think that’s important, especially in contexts where there are not necessarily strong rule of law or a human, strong human rights records in countries where vulnerable communities, individuals and communities who may be impacted by the use of digital technologies by UN agencies can have access to effective remedy. So I think it’s a very important step and I really look forward to seeing it, to see the implementation of it and the final draft as well. We are, of course, we have been engaging on business and human rights on a number of fronts, especially with the private sector. And of course, engaging with human rights due diligence has given us a number of lessons learned that I would like to share and I think is important for this conversation. The first one of which is we’ve seen in the guide that the aim of this guide is basically to build the capacity of different UN agencies in headquarters and field offices to be able to conduct human rights impact assessments and use this guide. However, we think that it’s very important to add an element of independent assessment. This is important for a number of reasons, the first of which is oversight and accountability. When those assessments are made internally and especially when they’re not published or the findings of those are not published, it becomes hard for civil society to scrutinize the decision made. We’ve had situations and especially with the private sector where there is a decision to expand in a certain market or use a specific technology where we see clearly in red letters that this technology will lead to negative human rights impact. However, we’re told that it’s fine, you can relax because we’ve done our due diligence, we’ve done our human rights impact assessments and you can trust us that we’ll take care of this matter. Therefore, I think independent assessments are very important. It’s also, and truth be said, I mean, we’re all subject to bias and having an independent third party that can assess the rollout of technologies or specific programs that rely on tech or digital solutions, especially when they’re already being implemented is key in order to avoid a situation where the technology is being used and there is an assessment, but having someone from the outside that conduct this is important. And the second point and it ties to the first one and I’d already alluded to it and that is transparency. Transparency on the process, on the practice, engagement with civil society that allows affected communities to evaluate from their own perspective the extent which these decisions taken by the agency are actually serving their needs and protecting their rights. And this transparency for us is not an option, which I think the guide suggests. We think it’s important and key for the success of this tool. The third point is around enforcement and I mean, human rights due diligence tools are as good as their enforcement. And again, from experience engaging with the private sector and also with some UN agencies, including UNHCR, we have seen that those tools, whereas they are explicitly written or sometimes mandated by internal policy handbooks or internal policies, they’re not necessarily implemented. And it’s especially so in challenging situations such as in humanitarian context where UN agencies have to rush to get refugees registered or to get people across the border. And we are at the end of the day operating in an ecosystem where private companies are also aggressively selling solutions to solve very complex problems. And here we see that again, in such situations, the technology is used or rolled out and the assessments are either not made or made later. And when they are made later or not made at all, we have a situation where these technologies are implemented on a mass scale. So we have a kind of a de facto situation such as the biometric registration of refugees conducted by the UN Refugee Agency. When you have millions of people already registered with their biometric information, which is extremely sensitive personal data being used and processed, it of course exposes individuals and vulnerable communities to a number of risks. But it becomes hard to challenge these systems when it’s already been out. And then the question for us as civil society is, how can we work with UN agencies to mitigate these risks when the more data you collect, the harder it becomes to protect them? So that’s just one example for when there is no human rights due diligence done, what’s the long-term cost of that? And one point also to raise here, and I think David mentioned it, and that is sometimes also we’ve seen that when headquarters are very diligent about enforcing and implementing and doing human rights impact assessments, data protection impact assessments, when it trickles down to the field level where field office operates, sometimes those rules are not necessarily followed. It could be because of lack of capacity or lack of resources or the sector in which they’re operating or the context in which they’re operating. But here it’s key also to ensure that those tools are being implemented at the lowest level where there is direct interaction with affected communities. And so that’s one point to highlight on the enforcement bit. And then last key point to raise here is around public-private partnership. I think that’s very important to help strengthen the Human Rights Due Diligence Guide. When private companies are being procured, We don’t see any information from a number of UN agencies about why they have selected specific countries. There were also examples where companies with shady human rights records have been partnering with UN agencies, like Palantir is one classical example that comes to mind. And when civil society asks for more information or transparency on how company X has been selected, we don’t receive answers. So I think adding or strengthening transparency on public-private partnership due diligence for the companies that are procured are just as important as assessing the negative or potential or foreseen negative human rights impacts of the programs or the technologies themselves.

Peggy Hicks:
Thanks very much, Mayra. It’s really great to get your reflections on it. And your second point related to the transparency in the process involvement of civil society, which you said was key to making this process work. And I think your comments gave us a good example of that. And we need that sort of input about where we’ve gotten and how much further we have to go. That doesn’t mean we’ll necessarily get there all in one step. But it’s very important to have that spotlight and to understand what needs to be done and how we need to move not just from the guidance and not just from the implementation, but to look at some of these key issues about how to make sure that it’s as deep and meaningful and that these questions around transparency and independent auditing and other things are addressed. So thanks for that. With that, I think it’s time for us to move quickly to the question and answer. As I said, we’re very grateful to those of you that have joined us for this session. We’re happy to ‑‑ we have people online, I think, that may come in with questions as well, but we’d be very happy to prioritize questions in the room first. If people want to just ‑‑ or a small enough group, I think you can just flag me, and I’d be happy for ‑‑ I think you need to go to the mic just so the people online will be able to hear it, I’m being told. Anybody have any questions or comments on what they’ve heard? I’m seeing none. Sorry? There you go. Oh, thank you. And if you can introduce yourself as well, please.

Audience:
Sure. My name is Boshree Badi. My question is around, like, these guidelines that are being created. I’m wondering how much of a space there is to actually talk about what’s influencing, again, the decision‑making process around procuring certain technologies within the UN system and thinking about, like, I mean, there’s funding that goes into the system from certain actors that, like, have their priorities and agendas that are clearly set out, but that’s not necessarily something that could be made transparent, I think, within the parameters of how the UN system currently functions. But for, like, an internal mechanism within the UN system, there’s also a lack of clarity within it. So, like, is there anything that’s being developed maybe to make that more transparent internally, even if it’s not something that can be publicly shared? Because I think that’s an important part of understanding the decision‑making process of why certain technologies are being pushed and, like, the underlying narrative around those technologies because there’s, like, the understanding that maybe they’ll improve efficiency, for example, with UNHCR’s use of biometric technology. There was a lot that was discussed about it decreasing fraud instances, but those were seen to be so negligible that it didn’t merit the risks that those populations were being exposed to as a result of the use of those technologies. So I’m wondering if there’s any mechanism that’s being considered there as well. I’m sorry.

Peggy Hicks:
Great. Thanks. No, it’s a good follow‑up question to the comments that Marwa made as well. I’ll just see if there are any others that wanted to come in with questions, and then we can go back to the panel and others for response on that point. Anybody else want to come in? Do we have any questions online, Eugene, that we should bring in? Oh, sorry, please.

Audience:
Hello. I’m Ana Cristina Ruelas from UNESCO, and I have a question about the independence of assessments. How to identify who is going to do the independent assessment when you have, like, different bodies? Like, what is your experience of who will be the independent body that will perform assessments when it comes to UN agencies which have many member states with different views? Should the member states decide different names to perform the assessments? Should civil society decide? Which civil society should decide? What would be your recommendation on that? Because it’s very, like, that will be, like, how to develop that independency. Please. We’ll take this one last question, then I’ll go back to the panel with all three. Hi, my name is Oliver. I can’t name my organization because of security risks. My question is, because UNESCO just stood up, would the human rights due diligence that you’re developing, would it apply to the UNESCO guidelines that are currently also in development? Because a lot of civil society have been asking why the UNESCO guidelines have no due diligence process. Thanks.

Peggy Hicks:
So we have three questions on the table. I think, Nick, if you’re there, maybe it makes sense to go to you first as UNHCR has been coming to the conversation at several occasions. But I think, you know, to potentially broaden it out and just have a sense from you about, you know, how you’re dealing with some of the challenges that have been raised around public-private partnerships and transparency around them and, you know, the different factors that are in play when UNHCR is looking at some of these issues, including the use of independent assessments and other things. Thank you.

Nicholas Oakeschott:
Thanks, Peggy. I think that, you know, it’s a good question, the question about the purposes for which certain technologies are chosen. And I think one good reference point that civil society and other stakeholders now have is UNHCR’s digital transformation strategy, which I’m just going to drop a link into the chat so that you can see there what our objectives are on digital. And you can see in that strategy that it’s very much focused on the people that we serve. Three goals of the five are, you know, digital protection, digital inclusion, and providing more digital services for the people we serve. And so there I think that there’s more of a clear idea on the business side, if you like, of what we want to use digital technology for. And it’s the first strategy that we’ve had, so I think it’s an important reference point. On transparency questions, I think that one of the key opportunities, but also fundamental challenges that we’ve identified in the work we’ve done around the human rights due diligence guidance is how can we effectively engage with civil society stakeholders in the implementation of that guidance. I think that, you know, from talking to experts in the private sector, that’s also a challenge that businesses have faced, and I would very much welcome an opportunity to discuss with Access Now and other stakeholders ideas on how we can make that work. On the one hand, you know, respecting that there may be some confidentiality questions that arise, but also how important it will be to include civil society in those due diligence processes. On the question of independent assessments, I think that that’s a particular challenge within the UN system. There is an independent auditing function that does look at the work of UN agencies, and once the policy that Scott refers to is adopted, that policy will become auditable, if you like. And on the other hand, we have, say, in the context of data protection, established agreements with expert suppliers to help bring both expertise but also some independent rigor to data protection impact assessments that have been undertaken both at the global and the field level. But I think the jury is still out from UNHCR’s perspective about whether independent entities undertaking audits of the implementation of the guidance beyond the existing system would be something that we could work well with. But overall, I think that we’re on a learning process, as I said in my earlier comments, and would very much welcome greater dialogue and discussions with civil society about how we can best make this guidance work. Back to you.

Peggy Hicks:
Great. Thanks very much, Nick. And I’ll turn to David to see if you have any comments on that, and then to the panelists here.

David Satola:
Yeah, sure. Thank you. And just wanted to follow up very quickly on Marwa’s comment and the ensuing discussion on enforcement and related issues. I agree. I think that that lends itself towards accountability, which is definitely required. And while we don’t have anything specific on human rights at the moment, we do have a variety of other tools that are available both to our borrowers and to civil society and the beneficiaries of our work. And some of those are the following. One of them is we have a grievance redress mechanism in our projects. And so every project will have this, so that if there is a negative impact on someone, an individual, for example, they can then appeal to the World Bank to seek redress for whatever harm they’ve encountered. We also have, in terms of the – and I think this might address in part the PPP question or working with the private sector – a lot of the financing that we provide to governments goes to vendors or consultants or contractors. So if it’s a roads project, we’re not going to build the road. The government’s not going to build the road. They’re going to hire someone to build the road. But in that context, we have our fraud and corruption guidelines, which, to borrow the phrase, sort of follows the money and all the way down the chain to the most local subcontractors. So to make sure that they’re doing what they’re supposed to be doing with the money. We also have, in the broadest sense, an organization called the Inspection Panel, which is independent and which can be invoked if there are issues that arise in one of our projects that there was some serious breach or something like that. And we also have internally a group called the Independent Evaluation Department, which retrospectively looks at projects in terms of lessons learned and what worked, what didn’t work. And so collectively, there’s a lot of accountability mechanisms that are there. They’re not specifically designed right now necessarily to address human rights, but there’s no reason that they couldn’t be adapted to include human rights issues. And as Nick said, over the past few years since the entry into force of GDPR, personal data protection is a huge issue for us. We provided billions of dollars of financing in the COVID pandemic, and maybe some of you remember that from a couple of years ago. But the amount of personal data that was being collected by our recipients at that time, we realized, was going to be huge. And we wanted to put in place mechanisms in our lending instruments that would ensure that our borrowers had in place the right kind of legal and technical measures to protect personal data. Some of our borrowers had laws in place, and we could rely on those. In other cases, there weren’t legal frameworks in place, and so we worked with our borrowers to make sure that for those projects, the projects themselves had a framework in place. Now, let me just digress for a moment there. Our members are sovereigns. The World Bank is a sovereign. When we have a lending instrument, when we do a financing agreement, a sovereign-to-sovereign agreement is a treaty. It’s a very powerful instrument. And when we did those COVID projects with countries that didn’t necessarily have a data protection regime in place, we built it into our agreement. And so we were pretty comfortable with the fact that that sovereign-to-sovereign agreement, that treaty, for the purpose of the data that was collected in that context of COVID, was going to be protected. So not perfect, but certainly a tool that we had that we used to make sure that, to the extent that we could, those issues were being addressed. Over.

Peggy Hicks:
Great. Thank you very much, David. Great to get that insight. We only have a couple minutes left. We’re starting to hear noises outside of our room here in Kyoto. Marwa, I’ll turn to you quickly.

Marwa Fatafta:
I don’t want to hold people. Quickly, I couldn’t agree more with the first comment, and that’s an issue we also face. Often, technologies are deployed or used without evidence. And I think it’s evidence-based solutions are very important in a context where, again, private companies are happy to sell you, and I use this term, snake oil, or solutions that could have serious ramifications or negative impact on human rights. And therefore, for us as civil society organizations, we sometimes struggle to understand the rationale why certain solutions that are disproportionate, given their human rights impacts, are being used and justified. So having an evidence to show, for instance, with biometric registration, that there is no other solution but biometric registration that justify the collection and processing of sensitive data, and therefore, based on this evidence and this research. Research, of course, is resource-intensive, time-intensive, and I understand, again, that in challenging contexts, that’s hard to achieve all the time, but nevertheless, it is important to scrutinize the narratives behind certain technologies, such as AI. The point on independent assessment, I mean, I’m not in the business of promoting certain entities, but there are, of course, companies or civil society organizations that are specialized in doing exactly that, doing human rights due diligence. And as someone who had participated in a number of consultations, my job as a civil society organization to ensure that those, you know, as auditors or, you know, companies are speaking to the right people. So they’re not just speaking to Access Now as a global organization, but can actually speak to grassroots organizations, so the people who belong to the communities that might be affected. That’s, I think, something that civil society should continue doing in building bridges, and I understand the difficulty in reaching out to the stakeholders, which I believe Nick had mentioned, and that’s something that civil society can help with. An organization like Access Now, we have partners across the world, and we’re more than happy to connect whenever a consultation needed, and the same, of course, applies to other partners. Great. Thanks, Marwa.

Peggy Hicks:
Quinton, a closing word from Quinton?

Quintin Chou-Lambert:
Yep, sure. Thanks very much. And just bringing it back to the overall role of this non-binding guidance, and it kind of helps to reconcile these two challenges. One is how we have horizontal alignment across the different UN agencies and entities, and then making it principles-based such that it can be translated into those local contexts. It’s not a surefire outcome-guaranteeing kind of thing. To get to that, probably to hard-code it into the operational procedures like procurement, one would need to have it baked into the entity-specific procedures, including the tendering process, the kind of checklists and auditing that goes on those procurement processes. And this guidance can be a beacon for each entity to do that kind of hard-coding. It has to be done entity by entity because each entity has its own governing bodies. For example, in the Secretariat, the General Assembly prescribes housekeeping rules, we call it, but basically the way in which the UN does its procurement and has the criteria for procurement handed down by the GA, whereas UNHCR, other agencies, have more flexibility in some cases. But it can act as this kind of beacon, this guidance, for each entity to hard-code these kinds of principles into its own local procedures. And also, just in closing, as a kind of beacon just for individual people, staff members who are working in the organizations. I recall, for example, during the COVID days when the Secretariat itself was considering how to deal with the pandemic and whether to introduce its own contact tracing, proximity tracking system. In the end, it was a judgment. It was an emergency and it was a judgment. In my opinion, the correct judgment went out, which was that we were not going to do it and that the partner who was offering to do it was not going to be able to meet the privacy and requirements that were appropriate for the case. But it was a judgment call. This kind of human rights due diligence framework offers this kind of load star for the system to both translate the principles into its own kind of regular procedures, but also for individuals who are taking judgment calls on a day-to-day basis.

Peggy Hicks:
Great. Thanks very much, Quinton. We’ve run over time, so just to conclude by saying that we’ve had a good conversation here. Some important questions have been raised around transparency and assessments and enforcement. Those are issues that we will look very seriously at and the interagency working group will take them on board as we’re looking to implement and move forward in this process. And I think it’s coming at it from a previously civil society perspective, I have to say I think from my interaction with the UN agencies involved, there’s a real commitment to trying to move this forward in a positive way. But some of the issues raised are difficult ones for us to solve. The issues of the public-private partnerships and the corporate engagement, I’m in charge of digital transformation from a champion standpoint at my organization. And one of the things in reaching out to other UN agencies to have a sense of how they’ve been able to do what they wanted to do within digital transformation is the real recognition that the funding is not there for it to happen, except in the context of some of these important partnerships. And we’re grateful for that because the UN has to be an entity that functions with all of the tools necessary to protect human rights in my regard, to protect refugees in UNHCR’s regard. So these are challenging things for us to implement, but we really appreciate the input and commit to continuing the conversation as we go forward. Thank you all for staying so late and missing the reception outdoors. I hope you’ll get a chance to enjoy the evening here in Kyoto, and thanks again for all your time. And thanks to our panelists for all their efforts.

Audience

Speech speed

173 words per minute

Speech length

467 words

Speech time

162 secs

Catie Shavin

Speech speed

182 words per minute

Speech length

1397 words

Speech time

461 secs

David Satola

Speech speed

169 words per minute

Speech length

1856 words

Speech time

658 secs

Marwa Fatafta

Speech speed

169 words per minute

Speech length

1696 words

Speech time

603 secs

Nicholas Oakeschott

Speech speed

159 words per minute

Speech length

1524 words

Speech time

574 secs

Peggy Hicks

Speech speed

195 words per minute

Speech length

2195 words

Speech time

675 secs

Quintin Chou-Lambert

Speech speed

166 words per minute

Speech length

646 words

Speech time

233 secs

Scott Campbell

Speech speed

172 words per minute

Speech length

848 words

Speech time

295 secs

Stronger together: multistakeholder voices in cyberdiplomacy | IGF 2023 WS #107

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

John Hering

The analysis includes various speakers discussing cybersecurity and multi-stakeholder inclusion in dialogues. One speaker notes the increasing professionalism of cybercrime, with a growing focus on critical infrastructure sectors. Microsoft’s annual digital defense report highlights this trend. Moreover, 41% of observed nation state cyber operations target critical infrastructure.

Another speaker raises concerns about the integration of cyber operations in armed conflict, citing the situation in Ukraine as an example. Urgent discussions, particularly at the United Nations, are needed to address this rising concern.

The ownership and operation of cyberspace by private entities is also discussed. It is emphasised that cyberspace is primarily owned and operated by private entities, necessitating a proper multi-stakeholder approach to tackle conflicts in this shared domain.

Improving the United Nations’ processes for including multi-stakeholder voices in cybersecurity dialogues is identified as a key issue. The current approach is described as ad hoc and patchwork.

The importance of accountability and understanding existing cybersecurity norms is highlighted. Holding countries accountable for violating norms and focusing on implementation rather than creating new norms are deemed important.

Another speaker advocates for multi-stakeholder inclusion in future cybersecurity dialogues. The non-governmental stakeholder perspective is considered essential for impactful outcomes, transparency, and credibility.

Challenges faced by non-governmental stakeholders in engaging with processes like the Open-Ended Working Group are discussed. The speaker acknowledges the progress made since the first multi-stakeholder consultation in 2019.

Improving the process of multi-stakeholder engagement and learning from successful first committee processes are advocated for. Structured non-governmental stakeholder engagement and a comparison with successful processes are seen as crucial.

The hindrance of multi-stakeholder inclusion in dialogues by escalating geopolitical tensions is mentioned. It is noted that these tensions have blocked voices, including Microsoft, from participating effectively.

The importance of multi-stakeholder inclusion in future dialogues is stressed, highlighting its role in transparency, credibility, and aiding in implementation efforts.

Insights from different stakeholders are valued for a holistic understanding of the issues. Effective dialogues and engagement with governments are seen as important for gaining insights into their perspectives.

The goal of achieving a gold standard of multi-stakeholder inclusion is expressed. Working towards a higher level of inclusion is seen as necessary.

The legitimacy of questioning the involvement of private companies in discussing governance at national or international levels is acknowledged. However, it is argued that these companies should have a voice in such dialogues, with decision-making authority ultimately resting with governments.

The summary accurately reflects the main analysis, covering various aspects of cybersecurity and multi-stakeholder inclusion. It includes relevant long-tail keywords and adheres to UK spelling and grammar.

Joyce Hakmeh

The analysis explores the challenges and benefits of multi-stakeholder participation in UN Information Security Dialogues. One of the significant challenges mentioned is that some states actively block multi-stakeholder participation. Additionally, there is a lack of conviction among states regarding the value that multi-stakeholders bring to the table. States often perceive the multi-stakeholder community as a uniform group with the same agenda, which further hampers their participation. Moreover, there is a lack of strategic and consistent engagement with multi-stakeholders by supportive states. This lack of engagement creates uncertainty for multi-stakeholder groups regarding their accreditation in UN processes.

On the other hand, there is a supportive stance towards increased multi-stakeholder participation. The role of multi-stakeholders in the cybercrime convention marks an important milestone as it is the first time they are attempting to shape a legal instrument within the UN regarding cyber issues. Participants argue that multi-stakeholders bring diverse perspectives, and their input can significantly influence decision-making processes. Furthermore, in the context of establishing new processes in cyber and digital technologies governance, it is crucial to include multi-stakeholder participation from the beginning. Transparency and clear criteria for inclusion and exclusion are seen as essential components of good modalities in these governance processes.

The speakers emphasize the need for multi-stakeholders to prove their value through concrete actions such as providing data, conducting research, and offering capacity building. This is especially necessary because some member states do not fully understand the value that multi-stakeholders can bring. Additionally, the analysis highlights the importance of not solely focusing on the multilateral level but also considering the national and regional levels in digital technologies governance.

Collaboration and input from various stakeholders, including civil society organizations and industry, are seen as mutually beneficial. Multi-stakeholder involvement aids governments in quality control and gathering diverse ideas during negotiations and decision-making processes related to digital issues. However, the speakers emphasize the need for these collaborations and inputs to be more strategic, ambitious, and inclusive, rather than narrowly involving only big tech companies.

Furthermore, the analysis suggests that the current composition of multi-stakeholder groups is primarily Western-dominated, calling for more regional inclusion. It is argued that there is a wealth of valuable experiences and perspectives at the regional and national levels that can enhance UN processes and initiatives.

The analysis also highlights the importance of better coordination among multi-stakeholders. While it is important to improve collaboration with governments, it is equally crucial to enhance collaboration among the multi-stakeholders themselves to ensure diverse voices are included in the discussion.

The fragmentation of cyber negotiations is acknowledged as a present reality, with various negotiations focusing on different aspects of cyber issues. The interconnectivity and overlap of activities in cyberspace challenge the artificial separation between negotiations dealing with international peace and security and those dealing with criminal activities.

In conclusion, the speakers advocate for increased multi-stakeholder participation in UN Information Security Dialogues. While there are challenges such as states blocking participation and lack of conviction, the benefits include diverse perspectives, shaping legal instruments, and influencing decision-making processes. The analysis calls for the development of good modalities from the start, the provision of concrete evidence of value by multi-stakeholders, inclusion of regional and national levels, better coordination, and a focus on inclusive collaboration.

Nick Ashton Hart

The analysis explores the need for increased stakeholder participation in policy-making and decision processes, focusing on cybersecurity and international commerce negotiations. The lack of stakeholder involvement and frustration with current procedures are identified as significant issues that need attention.

One speaker emphasises the value that stakeholders bring to these decision-making processes. The absence of their input not only results in the loss of valuable perspectives and expertise but also undermines the legitimacy and effectiveness of the policies and decisions made. Additionally, frustration is expressed concerning the application and veto process in cybersecurity procedures. The closed nature of the World Trade Organization (WTO) negotiations on electronic commerce excludes stakeholders completely, limiting their ability to contribute and raising concerns about transparency and fairness.

In response to these challenges, one speaker proposes the implementation of a policy on stakeholder participation. Such a policy would transform stakeholder involvement into an administrative process, ensuring their perspectives are consistently considered and incorporated into policy-making. It is suggested that many states would support this policy if a vote were to take place, indicating a growing recognition of the need for increased stakeholder participation.

Another speaker supports a campaign to address the issue of stakeholder participation once and for all. Some states are indifferent to involving stakeholders and find the arguments and disagreements on this topic tiresome. A resolution would save time and energy by establishing a clear framework for stakeholder participation. The importance of stakeholder involvement, particularly in the context of cybersecurity, is stressed. It is believed that their participation would drive a more ambitious cybersecurity agenda, bridging the gap between current offerings in international cybersecurity and the actual need for comprehensive and effective solutions.

In conclusion, the analysis highlights the necessity of enhanced stakeholder participation in policy-making and decision processes related to cybersecurity and international commerce negotiations. The establishment of a clear policy or a campaign to address this issue is crucial to bring valuable perspectives and expertise to these processes and to achieve more effective and legitimate outcomes. Furthermore, stakeholder involvement is essential for bridging the gap between the current offerings and the actual need in international cybersecurity, leading to a more comprehensive and robust approach to addressing cyber threats.

Charlotte Lindsey

In a recent analysis, it has been highlighted that the veto power within the Open-Ended Working Group limits the participation of various organizations, a concern raised by Charlotte Lindsey. This poses a challenge for multi-stakeholder civil society organizations who strive to contribute to multiple parallel processes. However, the analysis also acknowledges that civil society organizations play a significant role by providing valuable data, evidence, and practical recommendations.

Another area of concern is the lack of transparency and clarity in the process for non-state actors to contribute. This issue is seen as a barrier to their meaningful engagement. To promote inclusivity, it is suggested that the scope of participation should be extended to include organizations operating at national and regional levels.

Charlotte Lindsey urges the creation of a dedicated forum that includes all stakeholders, as it would foster legitimacy and help shape future instruments. The involvement of civil society organizations in such a forum could facilitate the implementation of cyber norms by connecting different actors and building partnerships.

Additionally, it is recommended that states establish a mechanism that reflects the multi-stakeholder nature of cyberspace. This would enable relevant stakeholders to contribute to discussions and ensure transparency and credibility in decision-making processes.

The analysis also highlights the importance of increasing the representation of African countries in global processes. It notes that there is a willingness among ambassadors from the African Union in Geneva to engage and learn more about these processes. To foster the participation of African countries, there is a need for capacity-building efforts to enhance the skills of representatives from the African Union in negotiations.

To encourage wider participation, it is necessary to demystify the processes involved. Participants from the African Union reported a misconception that they could not contribute due to a lack of familiarity with the debates. Efforts should be made to provide clear information and guidance to potential participants.

Lastly, the analysis emphasizes the importance of fact-based framing and timely input for effective engagement. Even if organizations cannot actively participate in discussions, the ability to produce valuable input is recognized and valued.

In conclusion, the analysis highlights the need for greater inclusivity, transparency, and recognition of the value that civil society and multi-stakeholders bring to the table. Creating dedicated forums, enhancing representation, demystifying processes, and promoting fact-based engagement are essential steps towards achieving these goals.

Speaker

Joyce Hakmeh is the director of the international security program at Chatham House and actively participates in various UN cyber projects. In her role, she leads these projects, focusing on advancing cybersecurity and addressing emerging challenges in the evolving digital landscape. Hakmeh follows UN cyber processes such as the open-ended working group and the cyber crime convention, which play a pivotal role in shaping global standards and policies in the fight against cyber threats. Moreover, she is part of the international security National Research Institute, further showcasing her expertise and dedication to the field.

Nisha serves as the director of the Cyber Security Institute in Geneva and actively engages in UN processes. She is particularly involved in the open-ended working group and the ad hoc committee on cybercrime. Nisha’s primary focus lies in providing evidence and data-driven analyses of the cyber landscape, aiming to develop a comprehensive understanding of the challenges and potential solutions. By utilizing facts and data, she contributes to the formulation of effective strategies and policies to combat cyber threats and ensure a secure digital environment.

Joyce Hakmeh and Nisha both play crucial roles in the field of cybersecurity, making significant contributions to UN cyber processes. They bring their expertise and experiences to the table, actively participating in discussions and decision-making processes concerning global cybersecurity challenges. Through their involvement, they strive to enhance international cooperation and strengthen partnerships in addressing cyber threats.

Overall, the work of Joyce Hakmeh and Nisha underscores the importance of collaboration and knowledge-sharing in tackling cybersecurity issues. Their commitment to the field and active participation in UN cyber processes demonstrate their dedication to improving the security and resilience of digital infrastructure worldwide. Their expertise and insights serve as valuable resources in shaping effective strategies to combat cyber threats and ensure a safer digital future for all.

Pablo Castro

Pablo Castro, a cybersecurity expert, emphasises the importance of implementing existing norms rather than establishing new ones. He believes that instead of focusing on developing new norms, it is more crucial to focus on effectively implementing the current 11 norms. Castro argues that regional-level implementation of norms should be a priority for Latin America. This approach would ensure a strong foundation of cybersecurity practices and strengthen the overall security posture in the region.

Castro also supports the role of stakeholders in assisting states to improve the implementation of cybersecurity norms. He believes that stakeholders, such as industry experts and civil society organizations, can provide valuable insights, expertise, and resources to help states in the process of moving forward. To exemplify this, he mentions that Chile proposed a new set of Confidence Building Measures (CBMs) specifically aimed at leveraging stakeholder involvement to enhance the implementation of cybersecurity norms.

In addition to implementation, Castro highlights the need for capacity building in the Latin American region. He argues that capacity building is crucial to improve cybersecurity efforts and to bridge any existing gaps in expertise and resources. He mentions that several Latin American states made a joint statement in July, highlighting the importance of capacity building in the region.

Castro also emphasizes the need for a strategic approach to engage stakeholders in cybercrime processes. He suggests creating a clear strategy that defines specific roles for stakeholders in future dialogues, such as the Program of Action (PoA). This approach ensures that stakeholders are actively involved in shaping cybercrime policies and addressing challenges related to international law, norms, and Confidence Building Measures.

Advocating for partnerships between stakeholders and states, Castro calls for increased collaboration in specific tasks. He believes that by working together, stakeholders and states can better address the complex challenges of cybersecurity. He encourages stakeholders and states to establish strong working relationships to foster effective collaboration and improve cybersecurity efforts.

Furthermore, Castro underscores the importance of strategic dialogue with stakeholders. He observes that stakeholder opponents often have clear strategies and goals, making it essential for proponents to engage in more strategic and well-planned dialogues. He suggests developing a counter-narrative to address opposition and effectively advocate for stakeholder participation.

Castro also mentions the significance of working beyond formal meetings and rooms to achieve progress in cybersecurity. He believes that a lot of influence can be exerted outside formal settings, particularly at the regional level. He highlights the major opportunities for meetings and collaboration that regional initiatives present, making them critically important for advancing cybersecurity efforts.

From his analysis, Castro notes the struggles countries face in cyber discussions due to geopolitical and cultural differences. He highlights how these differences can lead to fragmentation in discussions and potentially result in different internets in the future. This underscores the importance of finding common ground and fostering collaboration despite these challenges.

In conclusion, Pablo Castro provides valuable insights into the importance of implementing existing norms, engaging stakeholders, building capacity, and forming partnerships in the field of cybersecurity. His emphasis on strategic dialogue, regional initiatives, and the need for an action-oriented approach through frameworks like the Program of Action demonstrates his comprehensive understanding of the challenges and opportunities in the cybersecurity landscape. Overall, his viewpoints contribute to a more holistic and collaborative approach to addressing cybersecurity concerns.

Bert

The Internet Governance Forum (IGF) and the United Nations (UN) have different discussion approaches. While the IGF promotes equal discussions, the UN discussions are more intergovernmental and less friendly to stakeholders. This discrepancy is concerning as it highlights the lack of stakeholder inclusion and equality in the UN’s discussions on cyber governance. The Open Networking Group faces challenges in discussing real-world threats like cyber espionage. It struggles to have an open discussion on these issues, which is important for addressing the evolving threat landscape. To address this, the Open Networking Group needs to be more transparent and open about cyber espionage discussions. Clear violations should be called out, ensuring a better understanding among stakeholders.

Implementing international law is crucial in cyber governance. The General Assembly has confirmed that international law applies fully, but there is a need to focus on better implementation and understanding of the existing normative framework. The Open Networking Group will dedicate sessions to this question next year. Some argue for new norms, while others believe that a better understanding of existing norms is sufficient.

Inclusive multi-stakeholder involvement is key in decision-making processes related to cyber governance. Non-state participants have been invited to negotiations in the Human Rights Commission, and NGO representatives are involved in government delegations in some countries. The Program of Action (POA) should focus on implementing the existing normative framework and involve non-state actors. This collaboration can facilitate efforts and coordination between stakeholders.

The involvement of stakeholders has been politicized, and moving it from a political process to an administrative matter is suggested. This administrative approach can remove unnecessary barriers and streamline decision-making. A one-size-fits-all forever resolution for stakeholder participation may not be ideal, as future circumstances may require different rules.

The upcoming global digital compact discussions should involve various stakeholders, despite opposition from some countries. The input and perspectives of different stakeholders are essential for an inclusive and effective digital compact. Bert supports a strong role for the mighty stakeholder model and the IGF, advocating for an inclusive approach involving industry partners, academics, and experts.

Negotiations must be inclusive, with representation from different countries. Availability of funding for travel aids representation, ensuring active participation from a broader range of countries. The quality of discussions varies based on the level and diversity of participation. Inclusive discussions lead to a better understanding of the issues at hand.

More funding and support are needed to facilitate multi-stakeholder participation in cyber governance. Denial of funding for extensive travels hinders effective participation. The COVID-19 pandemic has unintentionally democratized multilateral processes, allowing for more remote participation and inclusivity. While negotiations occur internationally, it is essential to engage at the national level as well.

Stakeholder involvement in the global digital compact process is emphasized, utilizing the national IGF for discussions and preparation. Partnerships and value contribution are crucial for effective decision-making, amplifying the impact and improving feedback provision.

In conclusion, there are discrepancies between the IGF and the UN discussions on cyber governance. Open and transparent discussions are crucial for addressing real-world threats. Implementation and understanding of existing norms are necessary, alongside multi-stakeholder involvement and inclusivity. Adequate funding and support are needed for equal and inclusive participation. The COVID-19 pandemic has unintentionally increased remote participation and democratized multilateral processes. National and stakeholder engagement are vital for effective cyber governance. The development of a global digital compact requires multi-stakeholder involvement and partnerships, with organizations having a potentially underestimated impact.

Eduardo

The discussion at hand revolves around questioning the legitimacy of companies participating in multi-stakeholder discussions within the sphere of international law development. This topic is relevant to Sustainable Development Goal (SDG) 16, which focuses on achieving peace, justice, and robust institutions.

Several concerns are raised regarding the involvement of companies in these discussions. One concern relates to democratic issues. It is argued that when companies participate in discussions shaping international law, it raises questions about democratic representation. In a democratic system, decisions about laws and regulations are ideally made by elected representatives who are accountable to the citizens. However, the inclusion of companies in these discussions potentially bypasses this democratic process.

Another point of contention revolves around the conflict of interest that companies may have when participating in these discussions. Companies, by their nature, prioritize their own interests and profits. In international law development, where decisions are made with the aim of benefiting society as a whole, the alignment of companies’ interests with broader societal interests becomes a concern. The question arises as to whether the participation of companies in these discussions could lead to biased outcomes that favor their own agendas.

Furthermore, the lack of direct election by citizens is raised as a valid concern in questioning the legitimacy of companies’ involvement. Unlike elected representatives who are accountable to their constituents, companies operate under their own governance structures. This lack of democratic oversight over their participation in multi-stakeholder discussions adds to concerns about the legitimacy and transparency of the decision-making process.

The sentiment towards these issues is negative, as the concerns raised highlight potential flaws in including companies in multi-stakeholder discussions on international law development. However, it is important to note that Eduardo’s stance is neutral as he is simply relaying a question posed by Amir Mokaberi on this matter.

The analysis emphasizes the complexity of balancing the involvement of various stakeholders, including companies, in shaping international law. The insights gained from this discussion emphasize the need for further exploration and deliberation on how to ensure legitimacy, transparency, and democratic representation in such multi-stakeholder forums.

Marie

Cybersecurity discussions have been ongoing since 1998, but their scale has significantly increased in recent years. There is a clear need for broader multi-stakeholder involvement in these discussions, including the participation of the technical community. However, the current level of inclusivity falls short of expectations.

Collaboration between different stakeholders is crucial in effectively addressing cybercrime issues, both within the United Nations and in other forums. Marie emphasizes the importance of connecting cybersecurity discussions in various domains to promote a secure and trustworthy online environment. The emergence of numerous multi-stakeholder initiatives is inspiring and can potentially enrich engagements beyond traditional diplomacy.

The lack of mention of the technical community in the report of the open-ended working group highlights the need for its inclusion in cybersecurity discussions. Marie insists on continuing dialogues with stakeholders such as the technical community, as their involvement enhances understanding of their potential contributions.

While discussions have grown in scale, it is challenging for developing countries to allocate resources and time to processes primarily taking place in Western countries like the UN. Marie highlights the importance of ongoing discussions at national and regional levels, emphasizing the value of long-term engagement in shaping informed policies.

Marie further emphasizes the significance of stakeholder engagement, drawing from her experience working on cyber issues in the Netherlands. She advocates for the use of platforms like the IGF, RightsCon, and GFC for open discussions and aims to demystify discussions in the first committee for stakeholders.

Capacity-building and the spread of knowledge regarding the normative framework are identified as essential elements in the field of cybersecurity. Marie’s team endeavors to share their knowledge about the first committee to enhance engagement, participating in regional meetings and holding cyber policy discussions.

Marie encourages non-governmental stakeholders to share information, facts, and the impact of projects, as this input can add value to the discussions within the context of the UN. Continuous involvement of all stakeholders and their accountability in taking the right positions are crucial. Marie acknowledges that the process can be frustrating but assures that raised issues do make their way into the final reports.

The idea that all stakeholders, including the private sector and civil society, should have a voice in policy-making dialogues related to cybersecurity is strongly supported. This inclusive approach recognizes the importance of considering a wide range of perspectives in shaping effective and comprehensive cybersecurity policies.

In conclusion, cybersecurity discussions have grown significantly since their inception in 1998. Broader multi-stakeholder involvement, particularly including the technical community, is needed to effectively address cybercrime. Inclusivity in these discussions must be improved, and collaboration between different stakeholders is crucial. Regional and national initiatives, capacity-building, and knowledge sharing are essential for robust engagement. Continuous involvement and accountability of all stakeholders are emphasized to ensure the right positions are taken and all perspectives are considered in policy-making dialogues.

Audience

The analysis reveals a significant issue concerning the lack of representation from African stakeholders in multi-stakeholder discussions. This absence is viewed as a negative aspect, highlighting the need for better ways to enhance the participation of African stakeholders in these discussions. The argument is made that the current level of engagement must be improved to ensure that the perspectives and interests of African stakeholders are adequately represented.

Additionally, the analysis emphasises the importance of stakeholder engagement at both the national and regional level, emphasising that it is crucial to strengthen and improve this engagement. It is believed that by doing so, a more inclusive and effective multi-stakeholder approach can be achieved.

The analysis also identifies a common problem faced by civil society organisations, which is a lack of access to engage with the government. However, it is suggested that national and regional level engagement could offer a sustainable solution in addressing this issue.

Furthermore, the analysis highlights the potential benefits of better engagement, stating that it could help strengthen the broader ecosystem of civil society organisations. This indicates that by actively involving and consulting various stakeholders, a more robust and collaborative approach can be fostered.

The analysis brings attention to the fragmentation of the cybersecurity debate, which is seen as a challenge not only for non-state stakeholders but also for many developing countries. Keeping up with multiple tracks of discussion at the UN is particularly challenging for developing countries, making it difficult for them to actively participate in these discussions.

The analysis also touches upon the polarisation of positions on the future of institutional dialogue after OEWG (Open-Ended Working Group). There is a division between those supporting the continuation of discussions on the proposal of a Program of Action (POA) and those against the idea of something legally binding at the moment. Brazil, for example, supports continuing discussions on the proposal of a POA.

Furthermore, concerns are raised about the potential underutilisation of OEWG if the POA is adopted this year. If the decision to adopt the POA is made two years ahead of the end of OEWG’s mandate on regular institutional dialogue, it is feared that OEWG discussions might be undermined.

The analysis also considers the involvement of users in the multi-stakeholder process, highlighting the importance of including users’ perspectives and addressing issues related to defective use and abuse. The role of Microsoft in involving users in multi-stakeholder processes is specifically mentioned.

Lastly, the analysis emphasises the engagement of young people in the tech industry, advocating for their perspective to be taken into account. It highlights how Microsoft incorporates the youth perspective into its submission and ensures that everything is on track.

Overall, the analysis underscores the need for greater inclusivity and participation in multi-stakeholder discussions, particularly concerning African stakeholders. It also highlights the importance of various levels of engagement, the concerns regarding fragmentation and difficulty faced by developing countries in the UN, and the significance of involving users and young people in the decision-making processes.

Session transcript

Audience:
th th th th th th th th th th th th th th th . . . . . . . . . . . . . . . . . . . . . . . . . . . .

John Hering:
Actually, this is a well-timed conversation because last week Microsoft released its annual digital defense report, which if you haven’t had a chance to dive into in the days since it’s come out, I would encourage you to do so. It’s our summative annual threat intelligence report that we put out, a pretty comprehensive overview of how Microsoft sees the threat landscape. It’s not necessarily the entire landscape. We only see our sliver of the internet on our platforms, but it does give a pretty illustrative view of what the contemporary challenges are. Unsurprisingly, cybercrime continues to be an increasing challenge. In particular, we’ve noticed over the past year it’s increasingly professionalized. That’s to improve the scale and then the impact of cybercrime operations. And then when it comes to nation state activities as well, we’ve seen continued escalations in that space, in particular with a focus on espionage operations over the past year, and 41% of which, in terms of all nation state cyber operations observed by Microsoft threat intelligence teams, were focused on critical infrastructure sectors across various regions of the globe. None of this is especially new. It’s been an escalating concern for decades. But now the integration, obviously, of cyber operations in armed conflict is becoming a rising concern, including in the past year and a half in Ukraine, most notably, which is making conversations around peace and security online, in particular at the UN, all the more urgent. We have seen over the same time period of the last few decades, also the UN stepping up to try and meet the moment and keep pace with an evolving threat environment. Stirring up various working groups and new processes, evolving its mandate to make sure it’s meeting the moment. And this has also introduced a new challenge in how do we include the right multi-stakeholder voices in those conversations. Cyberspace is, after all, a much more shared domain of conflict than perhaps any other, given that it’s inherently synthetic and a lot of it owned and operated by private entities. It also raises important questions about how to ensure the right human rights are protected and the necessary. necessary multi-stakeholder and academia voices are at the table as well. And thus far, we’ve seen sort of an ad hoc patchwork approach to trying to include more multi-stakeholder voices in those conversations. So that brings us to today, and I think a two-fold goal for this conversation. The one, on the one hand, it’s to hopefully keep everyone appropriately informed on where these conversations are at the United Nations and beyond, and to hopefully help people feel like they are equipped to more effectively engage in those conversations. And then two, is going to be to hear from you in the room, from those in the IGF community, about the challenges, recommendations, or guidance you might have around how we might improve the relevant inclusion of multi-stakeholder voices in cybersecurity dialogues. That will be essential, I think, both for our guests on the stage, as well for an after action report that we’ll put together following this session. So to that end, we will save the bulk of the time of this session towards the end for audience Q&A. So that’s not just question and answer, but also commentary, other suggestions, or things you’d like to contribute to this conversation, or that you’d like to hear our guests respond to. But without further ado, then, I’d like to welcome our speakers, first on the stage, and then Charlotte online, to just introduce themselves first, let us know who you are, what organization you’re from, and maybe your relation to the cybersecurity dialogues at the UN. We start at the end of the table, and come on down.

Marie:
Now working. Thank you very much, and thank you for having me on this prestigious panel. My name is Marie Mou, I’m working at the permanent mission of the Netherlands to the UN in Geneva, and I’m First Secretary Cyber, so I’m the incarnation of what cyber diplomacy is from a member state’s perspective.

Pablo Castro:
Thank you very much, John. I’m Pablo Castro, I’m Cybersecurity Coordinator at the Ministry of Foreign Affairs of Chile. I basically cover cybersecurity, cyber…

Joyce Hakmeh:
I’m the director of the international security program at Chatham House. My relationship to this conversation today is that my team, we lead a lot of or a number of projects following UN cyber processes, the open ended working group as well as the cyber crime convention, and we have had our fair share of most of the work that we do.

Speaker:
So, I’m very happy to be here. I’m also very happy to be here as a member of the international security NRI, when you know that there is a trilateral project, there might be a possibility of a multi-stakeholder engagement or attempt to do so. . Thank you. If you’re online, could you introduce yourself? My name is Nisha, I’m the director of the cyber security institute in Geneva, and with my team, we engage in the UN processes, the open-ended working group, the ad hoc committee on cybercrime, and other fora in order to try to bring evidence and data-driven-based analyses of cyber landscape.

John Hering:
After interviewing you and trying to deal with the private matters and I just kind of gave an outline from how to see the environment from the industry side, but let me maybe from Marie and Bert to start us off, how has the conversation around nation-state activity in particular, evolved at the UN in the time that you’ve had there? And where are we living up to and where are we falling short of the international expectations that have been set?

Marie:
Thank you. So after the introduction of the UN development programme, and again I’m going to focus on the cyber industry sense, which is cybersecurity. I work at CreativeOcean Public, but I think there have been a lot of developers within the public sector. So, 13 still or 20 or 20 people who are members of that public. I think those discussions are not really new. They’ve been going on for since 1998. But there is also a broader picture that we need to take into consideration because the cyber security discussion are not new, but what is new is first the scale at which they’re being discussed. So it’s in more and more places, but also the integration of other stakeholders that is pretty new into those processes. When actually we’ve seen on the, we would look at the broader cyber picture, more multi-stakeholder engagement, and that comes back to the 2003 with this. But the fact that in the cyber security strict sense of the discussion, we’ve seen stronger multi-stakeholder involvement, unfortunately does not have yet achieved the inclusivity that we had expected in the first place. And we would like to have more inclusivity. It’s already nice that the open-ended working group now is open to all member states, which was not the case with the GGE. So there is already more inclusivity, but I think we would like to go a bit further and to make sure that all relevant stakeholders can have their voice heard also in those discussions.

Bert:
Thank you. Thank you so much. From my perspective, I have to say I’m very much looking forward to the discussion here, because what I can observe is, of course, quite a discrepancy between the way we discuss things here at the IGF, and where everyone is on an equal footing. And as soon as, if I stay with the metaphor, step your foot into the UN, it becomes very intergovernmental. And by its very nature, it’s not very much stakeholder-friendly. So we always see. that is really an uphill battle, every time in all these processes. And when it comes to your specific question on sort of the threat landscape and how it is being discussed, also there, there’s a bit of a discrepancy between the real world and the UN world. In the real world, you described yourself, you just released your own annual report, which lays out the landscape that is very, which raises many concerns about state actors, non-state actors, the collusion between the two, or cyber crime activities by state actors, et cetera, espionage activities, how it’s combined, how cyber act, malicious cyber actors become more and more involved in disinformation, information campaigns, all this is there. But it’s very difficult, if not impossible, to have a frank discussion on this in the Open Networking Group when we discuss threats. There’s always, there’s a strong sense, it’s an uncomfortable discussion, so to say, and people rather skim over. There’s a stronger interest to discuss things like confidence building, et cetera, than to discuss the hard stuff. Why do we need to build confidence? Because there is a problem of growth in malicious cyber activities. Particularly when it comes to the issue of cyber espionage, we are there for the view that we should become much clearer in calling it out as clear violations of the normative framework when such activities are directed against states or critical infrastructure, et cetera, thank you.

John Hering:
Thank you, Paul, so much. Sticking sort of on the government side of this conversation, and we’ll go right back to you, actually, Bert, and then bring Pablo into the conversation as well. You guys have both mentioned, I think, now, the Open Ended Working Group, which is the current information security dialogue, the second iteration of that body. Previous to that, it was the group of governmental experts, and there were successive rounds of that. As of 2015, there have been established norms for responsible state behavior online, some recognition that international law. ought to govern also state behavior online. There has not been new norms established since that period of time. And I was wondering if you could just sort of shed some light on how should we think of the current status of the open-ended working group? What is its mandate and mission? And then what is the importance of multi-stakeholder inclusion in that?

Pablo Castro:
Thank you. Regarding norms, and especially new norms, I have to be really honest that maybe even from the perspective of Chile, we’re not really thinking probably in new norms. It’s more like the implementation of 11 norms, especially the regional level. It’s probably one of our main interests right now. And I would say this is also very important for our Latin American regions, to try to move forward in this. Not some, expectation could be different, regarding the open-ended working groups. And from Latin America and from our conversation with our colleagues, I would just say right now we have a good coordination with other states. After years that probably stayed up, and Minister of Foreign Affairs didn’t have, you know, someone in charge of cyber. Now it’s possible to do this sort of coordinations. Capacity to build it, for example, is very important. But when it comes to norms, I think implementation is something important, especially at the regional level. And that could be also a good chance for stakeholders, you know, how they can help this process, you know, the states, you know, and improve this implementation. It’s one of the reasons why last year, which has proposed a new CBMs, you know, there’s a working group at the OAS for the establishment of CBMs in cyberspace. It started back in 2017. We have now 11 CBMs, which is quite something. One of them, it’s about the strength implementation of 11 norms, you know. And that was proposed specifically to try to, you know, encourage a state to work more than this. With the assistance of the Organization of American States, I was a good program. It’s very good, very important, this process. And be working, you know, a lot with stakeholders. So I think that it could be a good opportunity, you know, to that stakeholder. they can help a state, you’re meant to move on this. Now, moving forward, the opening of the working group is a tough, it’s a really good question, you know, because as we have different expectation, by the way, for Latin America, for example, capacity building could be probably something really important. We managed in the last, I mean, in July, to make a joint statement, you know, several, I mean, states for Latin America about capacity building. And the current situation, I think, is a little bit complicated. We always have this conversation between the states and our region, because trying to get the consensus, the opening of the working groups, it’s really difficult. And so trying to, you know, to decide which are the subject, the item, do we really move on, it’s also very complicated. So it’s a complicated balance, you know. So far, I think we’ve been trying, you know, to agree on things that we, things that everyone could agree, one of CBM’s, you know, directory portal. And, but I cannot really oversee how we can really move on new topics and new discussion, we can do that. That’s because, of course, the current context of the complicated conversation we have, and I don’t see that’s gonna be easy to resolve in the coming years. I mean, according to the geopolitical situation we have right now. Thank you.

Bert:
Thanks so much. I can subscribe to everything Pablo said. And just to add, our sense is also, we need to focus on how we implement better, and even not only implement, understand better the normative framework as we have it. We also see quite some, there’s some countries out there who try to make, to produce confusion. They say, oh, we only have voluntary norms, we have, therefore we need a legally binding treaty to clarify what the legal obligations are. Ignoring the fact that we have, the General Assembly has repeatedly confirmed that international law, as enshrined in the charter, et cetera, fully applies. Therefore, we need to have more dedicated discussions to look specifically, what does it mean international law applies? So, since we are just finalizing our national position paper, where we’re. We are very happy, we have been lobbying this for a while, that next year, one of the intercessional sessions of the Open Networking Group will be dedicated to the question of application of international law. And as we need to see, it’s a bit, the voluntary norms are a bit muddy the waters almost, it leads to some confusion. We also will have more discussions on the voluntary norms, and all of you again, we don’t need new norms, we need better understanding of the existing ones, what exactly that means, and then of course, particularly, that they have been implemented and the countries who violated them are held accountable. Thank you.

John Hering:
Thank you both so much. I heard accountability, confidence building measures, clarifying the existing norms obligations, all the spaces where we can move forward within the context of the current Open Networking Group and future cybersecurity dialogues, and things which need multi-stakeholder inclusion and participation and engagement. And so to that side, I wanted to bring in Charlotte Online and Joyce from the sort of non-governmental stakeholder perspective. Could you just take us through, I think, first for, you know, I think we have a lot of government folks in the room. What is it like to try and participate and engage in the UN Information Security Dialogues as a non-governmental stakeholder?

Joyce Hakmeh:
Thanks, John. And very, very good question. I think this is something that sort of like an experience that we reflect on quite a lot, and we sort of share stories, the multi-stakeholder community between each other. I think, speaking from our experience at Chatham House, but also observing multi-stakeholder participation more generally, I think there are maybe four issues that I believe act as a challenge to the multi-stakeholder participation. And of course, you know, I’m not going to talk about the biggest one, which is states blocking actively going out of their way sometimes to blocking multi-stakeholder participation in UN processes. The first point I want to make is I guess there is a disbelief or perhaps insufficient conviction from some states about the value that multi-stakeholders bring to the table. And so you basically, and this comes often from states who arguably need this support or could benefit from this support the most. So often the starting point is really sort of making the case about why it is important that you’re at the table and what is it that you can contribute. And so this basically leads to either states not engaging with multi-stakeholders or, so if they tolerate your presence, they don’t engage or they engage at the superficial level. And perhaps this stems from the second point I want to make which is a perception that some states have that the multi-stakeholder community is a sort of a uniform group, it’s like monolith and we sort of all have the same agenda, the same approach, the same objectives. And this is obviously more true when it comes to civil society rather than to industry. But of course that’s not true, right? Because civil society is a very diverse group with sometimes overlapping but more complimentary mandates and the role that they can play is diverse. So if you don’t understand what they can bring to the table then it is hard to sort of engage with them properly. The third issue is, and this is more sort of directed at countries who actually support the multi-stakeholder participation and can be called the champions of multi-stakeholder participation who really kind of like make that point over and over again in UN processes and beyond. I think sometimes the challenge with that relationship is there is a lack of strategic but also consistent engagement. And this could be related to like time issues, resources issues or perhaps sometimes lack of coordination within the government itself between the different agencies. So this means that the relationship with multi-stakeholders isn’t as good as it can be, isn’t as impactful as it can be. Now, I’m not suggesting that this is just the responsibility of government. I think of course this is a shared responsibility, it’s a relationship so it has to go both ways. And perhaps, you know, on this current OEWG specifically, I think the, probably the sort of the word that describes it the best when it comes to multi-stakeholder participation is uncertainty, right? With every session, like multi-stakeholder groups, they don’t know whether they’re going to be accredited or not. I know Microsoft, you’ve had your good share in that, and so the Chatham House, until we finally got the ECOSOC status, which in a way kind of like, you know, gave us that right to be in the room. And you know, the ability to influence UN processes, like in this kind of very complex geopolitical climate that Pablo described, requires strategic planning over time. So if you’re uncertain whether you’re going to be in the room or not, it makes it very hard to actually influence. I want to also talk about these sort of other ways where you can influence, but maybe for later. But maybe I want to conclude with this point that, although this, you know, the participation hasn’t been great, it has been possible, right? And I think from our perspective, it is, it has been a learning curve, and particularly, for example, if we look at the cybercrime convention, this is the first time the multi-stakeholder community is trying to shape a legal instrument within the UN on cyber, right? And so we are learning a lot of lessons that will definitely help us in the future and also help us sharpen our tools.

John Hering:
And Charlotte, you’re up if you’re online.

Charlotte Lindsey:
Yes, I am. Thank you. I agree with the points that Joyce raised. I would just like to focus on a couple of points. I think that while the Open-Ended Working Group is officially open to stakeholders, states have this veto power, which Joyce mentioned, which limits the participation in dozens of organizations, including the Cyber Peace Institute, are regularly vetoed. And I think that that makes it very complicated for us to plan strategically, but also to really be representative. and to bring added value to this forum. I mean, clearly an achievement, what has been that the GGE and the Open-Ended Working Group run these sort of parallel processes and came out with a consensus report, which was aligned. However, it’s very complicated for these parallel processes for multi-stakeholder civil society organizations to be able to participate in all of these parallel processes and to really be able to contribute. The Cyber Peace Institute, we have been able to contribute to the objectives of several of the UN working groups. We’ve submitted comments, recommendations on pre-drafts, on zero drafts, on final reports of the Open-Ended Working Group. We have also submitted multi-stakeholder engagement statement, which we led with a group of other organizations and contributed ahead of substantive sessions. So we are able to find ways to contribute, but it does take a lot of navigation, a lot of engagement behind the scenes to be able to really be able to be present and to put statements and positions forward. I think that we, as civil society organizations, we do have added value that we can bring. And I think that what we have been able to demonstrate and many states demonstrate that they really appreciate these contributions is bringing data and evidence on many of the issues that are being addressed in the Open-Ended Working Group. And we have been able to, for example, bring things like a compendium of best practices on protecting the healthcare sector from cyber harm and bring practical recommendations that can really help negotiations and help discussions. And I think that by bringing these recommendations, we can add the diversity, we can bring voices which really represent the full range of how the cyber landscape is actually being managed and the threats on that. of that, in that landscape today. I would just like to make a couple of final points. I think while we see that a number of governments have really sort of reiterated their commitment towards an inclusive process in which the multi-stakeholder community really does have a voice, we think it really is important that there’s more clarity on what these potential contributions from civil society or non-multi-stakeholders can really bring. And this can encourage other states to really advocate for and pursue this more inclusive process. If there is an understanding of the added value, then each time each organization is not having to bring that. And we think also what is complex ahead of some of the sort of consultative meetings, we think it’s also very complex when documents aren’t shared ahead of meetings or are very late, and therefore it’s really hard to bring, as Joyce mentioned, this very strategic role if we’re not able to actually receive any of the documents, understand what the subjects are going to be, and then also not necessarily able to participate in the room. So we think it would be important to have real clarity on non-state actors and how they can participate in the substantive sessions, clarity on the level of transparency and visibility offered for multi-stakeholder contributions throughout the process. And we think that there also needs to be inclusion, not just at sort of international organizations and civil society organizations operating at an international level, but also those operating nationally and regionally. And this could also help have a more of a global understanding of the challenges, but also the contributions that different actors can play. Thank you.

John Hering:
Thank you so much, Charlotte. And just this, I think, hit all the major points in terms of, you know, as a non-governmental stakeholder who has also tried to engage in these processes before. for, I think covered pretty well what some of those challenges have been, but I do think, to your point, Joyce, this is also a learning process, and I think we should also give credit where credit is due. I remember, you know, the first ever multi-stakeholder consultation for the OEWG that happened, you know, in the room, conference room B at the UN in 2019, and we really have come a long way since then in terms of regularizing things and having much greater inclusion. That’s a credit to, I think, a lot of support from various member states, increasing numbers of member states, and then also the current chair of the OEWG, who I think we should recognize, as well as having worked to create regularized, at least intersessional consultations with non-governmental stakeholders. But I think to the broader point here, indeed, it has been highly ad hoc, and especially for resource-limited organizations, that’s a particular challenge to try and think about how best to structure that level of engagement. So then thinking about maybe the other moving forward and how to sort of begin to be a little more accommodating and inclusive, I’d love to hear from Burt and from Charlotte, thinking beyond sort of just the OEWG and the GGE, Pablo and Marie, I’d love to hear sort of just a comparison to other first committee processes that maybe have greater success with multi-stakeholder inclusion, whether that’s, you know, the ECOSOC status or any other ways that we’ve seen other stakeholders more successfully included in the past.

Pablo Castro:
Okay, I’m going to start. Well, I mean, as you mentioned, I mean, the cybercrime, you know, current process right now, we do have this modality, we agree, it was working pretty well. We can mention also, I mean, all the discussion, like, for example, we’re talking about weapons systems in Geneva. But when I also think about the future dialogues, future process, okay, I have to think about the program of action, which is something that’s coming. be a very good opportunity, you know, to really create a sort of, as you mentioned, and I really love the word strategy, something we definitely need, and in a way also tries to create some or define a specific role for multistakeholder, let’s say in the future BOA, you know, how they can help, you know, or assist in terms of identifying needs, assessment, how to help a state for the implementations. We can, in some way, our case, not to say in a structure, but define some roles in this future dialogue. I mean, that way, we can, I mean, identify some stakeholder good for some, let’s say international law, let’s say 11 norms, let’s say CBMs, or in that way, I think it would be good if we can actually try to start this discussion, you know. I think, well, Cyberspace Institute, for example, has been very good, I mean, reporting this at the website, because this is something that is coming, I mean, sooner in the next couple of years, and that’s going to be a good opportunity for a state, I mean, to think about this. Now, I would like to see also more, I mean, probably partnerships between the stakeholder and other states, something that maybe in Latin America, even from Chile, I would like to do more on this in some, as I said before, in this specific task. This is something I could, we could probably, I mean, start to think and work in the, I would say, near future. Thank you.

Marie:
Thank you. I think I will just go a bit further than just the UN and the first committee. But looking at, first, at a purely, like, UN perspective, I think those discussions we’ve seen popping in into so many different fora. When the pandemic started, WHO started talking about cybersecurity, we were seeing cybersecurity related discussion in the context of e-commerce, but also, obviously, with the ongoing situation in Europe and in Ukraine, in the humanitarian dialogue. So I think we also need to take this into consideration and how other stakeholders are involved into those discussion. Because if we don’t connect the dots and all those discussion as well, then we will never be able to have the open, free, and secure environment that we want and where people trust, that people trust, and that we can all benefit from. So that’s on a positive note. But we are also seeing a growing number of multi-stakeholder initiative outside of the UN, be it at the national, regional, international level. And those are really inspiring. And I think we need to look at how stakeholders, when they are having those multi-stakeholder initiative, how they engage with each other. I find it difficult to really compare because obviously we are in a UN situation or in a multi-stakeholder. But I think we need to look for inspiration wherever we can, and not only in First Committee or purely UN. Because as was pointed out, it’s really new to have multi-stakeholder engagement within First Committee discussion on cyber. So I think that’s one of the things. The other thing that was mentioned a bit earlier on the panel is it’s true that we don’t always really well understand as diplomat, the entire breadth of how much civil society can bring and the stakeholders. And one of the thing I would like to point out is in the context of the open-ended working group, we never mentioned the technical community. If you look at the report, we talk about civil society, the private sector, academia, but the technical community, for example, is not there. So I think that’s also a sign that we need to continue that dialogue, and we need to understand how much other stakeholders can bring to those discussion. And then little by little, I think we will make that space, and that we will hopefully see more participation in those discussions.

John Hering:
Thank you both. And Pablo, I think you… mentioned the elephant in the room here, perhaps, in these conversations, which is the program of action, the sort of recently passed resolution to establish what would be kind of the first standing body that’s gonna be focused on cyber security at the UN. And there was a lot of open questions about that, but I’d like to invite Bert and Charlotte back into the conversation to share a little bit about what that might look like and how that could regularize, perhaps, some more multi-stakeholder inclusion in the UN processes in the first committee.

Bert:
Yes, with pleasure. And again, as I said in the beginning, the discrepancy between the IJF and the General Assembly, we will not overcome this. There are many great initiatives out there. And again, we should also look more what the IJF experience, what that can bring, similar to the Vistas Forum, et cetera. We have to see this, but also, just to mention, what’s the best way forward? We have these limitations, and I must also say, I found it more frustrating in the Open Networking Group, because there, the no-objection procedure that is now the practice for the invitation of multi-stakeholder was used much more extensively than in the Ad Hoc Committee’s cyber grant process, where basically, if you look at the list, it’s a long, long list. More or less, everyone who wanted to participate was able to participate, which is exactly how it should be. There are also other ways. I mean, if you look in the, I have done many things. I was delegate to the CSW in the past. There, you have a practice. Many countries involve NGO representatives in government delegations. Now, this is something, this was also done by some countries in the Open Networking Group after some, with blocked organizations. I’m not totally sure that’s the right message, because I must say, for me, multi-stakeholder, or to put them in your delegation sounds like they’re aligned with you. They have their own voice. I mean, I want you to be there, whether you agree with me or not. That’s the idea. I don’t want them to be part of a government delegation. So, I’m not absolutely sure that’s the right way. I was often a delegate to the Human Rights Commission. Commission. There, it depends a bit on the country, but in a number of negotiations and resolutions, non-state participants are invited to participate in the negotiations as well. So there is precedent for almost everything. When it comes to the POA, I will not go into the question how it should look like, etc. This is a separate discussion. It’s an important project and we’re preparing another resolution for the General Assembly that is happening as we speak. But again, the idea would indeed be to have more stability by having a permanent body, by having it inclusive. A strong focus of such a POA should, of course, be on implementing the existing normative framework, including capacity building, where, of course, multi-stakeholder play a key role. They are major actors in this field, so therefore they also need to have a proper seat at the table. But this we have to see, the POA, we want this to be a UN body, so we still will have to fight that UN rules and regulations apply. So we have to see, we will have these difficult negotiations to have multi-stakeholders as prominently as possible at the table. Because, as the saying goes, if you’re not sitting at the table, you’re on the menu, so to say. So we really want multi-stakeholders to be at the table. Thank you.

Charlotte Lindsey:
Yes, thank you. So, yeah, I think it’s really important to underline that the POA does present a sort of a unique opportunity to try to advance peace and security in cyberspace by really focusing on the implementation of the agreed norms and ensuring practical and needs-driven capacity building. We do think that this initiative needs to address a variety of issues related to the operationalization of the agreed-upon framework that would benefit from real practical implementation and meaningful stakeholder, multi-stakeholder participation. And that needs to therefore be reflected in the modalities. So the modalities for stakeholder participation need to be very much clarified. to make sure that the multi-stakeholder nature of cyberspace is reflected. And the inclusion of all the relevant stakeholders in a dedicated forum would build legitimacy, would shape any future instruments. So this inclusiveness could create a process that really reflects the lived realities, addresses real threats that affect the safety, security and wellbeing of people. And stakeholders can assist states to build their capacity and understanding of how to apply the norm. So I think there’s a real added role that civil society and other multi-stakeholder organizations can play on a practical day-to-day level that if we can contribute would be invaluable. And I think civil society organizations are particularly well positioned to connect different actors and to build partnerships across a variety of communities and geographies and to help the practical implementation of the cyber norms. And we can help in national and regional implementation efforts, including reporting on the progress. So the real added value is there. And I think what is really, and I come back to this point I started with, the modalities, the POA modalities in relation to the scope, the method of establishment, the format, the frequency of meetings, the decision-making structures and stakeholder participation, all of these points are being debated. And we urge that states really create a mechanism that reflects this multi-stakeholder nature. And as some of the previous participants mentioned, it does need to include civil society, industry, academia, the technical community and other experts who can really play a vital role and bring expertise to future dialogues on cybersecurity in the context of international security. And this will really drive much more impactful outcomes from the process and really contribute to ensuring transparency and credibility of the agreed decisions, as well as the sustainability of implementation.

John Hering:
Thank you both so much, points very well taken. I’m putting everybody on notice that we’re gonna have maybe one or two more questions here, and then I would love to hear from folks in the room. Again, either questions or comments on things that you think would be helpful at including more multi-stakeholder voices at the UN or elsewhere in conversations around peace and security online. Speaking of online, if you’re part of the online audience, please do put questions in the chat, and my colleague Eduardo will make sure that he addresses them to the room. But I want to pull over to sort of, I think what’s been mentioned a couple times but underscores a lot of this, which is the geopolitics of the moment. And so to maybe Joyce and then to Pablo to discuss, rising tensions means that this is getting more difficult to have more inclusive, well, seems to be more difficult to have any kind of productive conversation in diplomatic spaces, certainly multilateral ones, but in particular as it relates to multi-stakeholder inclusion, increasingly difficult to have multi-stakeholder voices heard. Microsoft is certainly among many, many other multi-stakeholder voices that would seem to be relevant to dialogues, but have been blocked from participating by respective member states amid escalating geopolitical tensions. How can we address this, do you think, such that we can ensure that we have the necessary voices and the inclusive dialogues we need in future conversations without letting geopolitics play such a weighty role? And Joyce, if you want to start.

Joyce Hakmeh:
Thank you, thank you, John. I think this is a very important question. Like how do we understand our reality and work within the confines of that? I think the, maybe I’ll sort of like split my answer into two kind of parts, or maybe to talk about it from sort of two different lens. So first of all, there is, there will be new processes, right? So we heard about the POA, but outside of cyber, there are sort of like processes that are being established and in cyber, there are calls for new processes. whether leading to something binding or otherwise. So I think it is very, very important, and this point has been mentioned before, is that the starting point ought to be figuring out good modalities for the process, right? So it’s much harder when you have bad modalities to fight for multi-stakeholder participation. It’s much easier when it’s already enshrined in the process from the very beginning. And in that, there has to be transparency. There has to be clear criteria for inclusion, but importantly, clear criteria for exclusion, right? And I think we can perhaps also aim to be a little bit more ambitious than just that, because even if a certain member state can object and can say why it’s objecting, and this won’t really go much further than that, I think maybe we should be more ambitious and ask for maybe some sort of formal procedure to resolve disputes when it comes to multi-stakeholder participation. I think we should, if we believe multi-stakeholderism is the kind of way forward in digital technologies governance, which it should be, right, then we ought to have it more sort of like part and parcel rather than something we sort of like every time try and beg for, right? It has to be there, and it has to be unquestioned. But of course, this is a journey, and bit by bit, and as you said, we’ve already had some successes, and we hope to build on that. The other thing that I want to, sort of the other lens, is how we sort of, what can we do with existing processes and in the kind of geopolitical context that you described. I think an important point is that while it is very important to be in the room, and if you’re not on the table, you’ll be on the menu, that’s probably maybe true, but also I think it is also important to know that the ability to influence is not just in the room, you know? There’s a lot that can be done outside the room, and arguably, you can have a better impact outside the room. When we take the floor in the open-ended working group, they give us three minutes to speak. You know, how much can you influence in three minutes? That’s very, very arguable. So I guess this sort of like combining this with other initiatives outside of the UN processes is extremely important. And working on that sort of relationship with states on a long-term basis. I think the, I talked about the fact that some member states don’t understand the value of multi-stakeholders. And I think there is an onus on multi-stakeholders to actually prove through actions what their value is. Charlotte talked about data and research and the importance of that. Capacity building is absolutely important. And then through actions, member states can understand why multi-stakeholders are valuable. And they will then become, so the champion’s circle will expand beyond the current few. And I think also importantly is to focus on not just multilateral, we talked national, but also regional. And Pablo talked about OAS and the different kind of initiatives there. Because that has also like a huge potential for influence. If you can get ECOSOC status, then do. Because that will help you overcome a lot of challenges. And maybe kind of a final point, I think we are working on new areas, emerging areas. And I think we can’t use sort of like always or just old or existing models to solve new and emerging problems. I think it’s very important to be innovative, to be creative, think outside the box. Particularly that we as multi-stakeholder community have limited resources. So yes, we might be able to participate in meetings if the door is open, but we might not even if they let us, right? So I think there’s also the need to think about how can we do it creatively and differently than the way we do it now.

Pablo Castro:
Thank you. Well, I agree 100% of what we were saying. So I’m not quite sure that that’s something more valid. But thinking about this, I think while you mentioned modalities and also transparency in the regional aspect, let’s come back again for the strategy council. I think we definitely need more, this is the perspective of government, to work more with the stakeholders in terms to how to face this problem. And how again, I mean, create your own strategy for doing so. I don’t think there has too much dialogue, again, for the perspective of mine. region, Latin America, that we really do a lot in terms of every time we have new meetings opening up in the working group or cybercrime, we really have this chance to start to have this dialogue, you know, with stakeholders, you know. We are trying to move on in this last year with the Dutch initiatives in Chile that we managed to organize a dialogue with stakeholders and representatives from Ministry of Foreign Affairs in the region, which is really good, to basically discuss, you know, open up the working group, aid the economy. But I think it is, I think we definitely need to try to, I mean, to work more in a strategy to face this, because the members say that they are actually against participation, they already have a strategy now, they have a goal, you know, that’s the problem, we are not facing something that they like it or not, they really have a very clear mission, a goal to stop this. So I think we are not maybe have this sort of, this is my impression at least, this sort of coordination to say, okay, they have a strategy, they want to do this, how we can actually create the counter-narrative, you know, and do more than this. And I agree also with Jules very much, is a lot of things we can do at the margins on all these meetings, you know, especially at the regional level, at the U.S. or in Africa, et cetera, which is probably has the most chances, you know, to come and meet together, you know, and really thinks about things that are also are important to move on, as you mentioned, capacity building, implementations, those are the thing that in some region are really critical, really important, and we can have the chance in that case to work together, you know, at the space. Thank you.

John Hering:
Thank you so much, Pablo. You brought up a really good point at the end there that I want to circle back on at some point here, which is sort of what are the opportunities for engagement outside the U.N., and how can sort of cyber diplomats in that community help to facilitate that, and what can others from the nongovernmental community do? But sort of before diving in there, I do want to invite, now that we’re sort of in the latter half of the program, anyone in the room or online who has a question or a comment or other ways to contribute to this conversation and invite them to please take the floor. There are microphones in the aisles here. And there is certainly the chat box online. And Eduardo, if you’re able to come on, maybe you could ask the first question if there is one.

Eduardo:
Sure, John. Maybe actually I’ll pass the floor over to Nick Ashton-Hart, who’s had his hand up and I think wants to make a comment. Go for it, Nick.

Nick Ashton Hart:
Good morning from New York. It’s like 2.30 or something here. No, 3, sorry. Yeah, I wanted to follow up on the point that Joyce made. I mean, I agree with everything everyone has said about the value of stakeholders and what we bring to the table. I think we all know that’s true. But I think we have to do something about it. Because just like when women got the vote, they didn’t get the vote because those who had the vote decided it would be the right thing for them to get the vote. They got the vote because they went out and said, you’re giving us the vote, right? And made it unavoidable. And I follow a lot of processes at the UN. The cybersecurity process are frustrating because of this theater of the absurd of applying and then being vetoed. The WTO negotiations on electronic commerce are completely closed to all stakeholders. It’s the least open process. So believe it or not, it’s actually somewhat better in the first committee. But I spend a lot of time with delegates in New York. I think they’re tired of having this stakeholder argument every time a new first committee process is launched. I know they’re tired of it. I think a majority of states think it’s a lot of wasted time going. on arguing about this. It’s the same argument every time. And I do believe that there is appetite to make a set policy on stakeholders that would turn it into more of an administrative process that happens each time a first committee process is convened, especially related to the internet. And then that would be the end of it. The decision would be taken, we would be able to participate and that would be that. States would still take decisions and we would speak last and all the rest of it. But we would have something more like what we have at the ad hoc committee on cyber crime, where it’s an administrative process really. It’s not a political process, which is what of course it’s being turned into. And I think as stakeholders, that’s something we believe we want. We’re going to have to advocate for it. We’re going to have to do the legwork on the ground with the delegates, get someone to propose a general assembly resolution. And I think we would win. I think we would win on votes if there’s voting. It wouldn’t be consensus, of course, because the states that don’t want us, don’t want us. And that’s the way it is. I think we would have a clear majority in favor of an administrative process just because we’re right, basically. We’re right. But also even for the states who are somewhat, who don’t care that much one way or the other, they’re tired of fighting about it and wasting a great deal of time arguing over the subject. So I think it would be interesting to, if any of the rest of you have thoughts on that, it would be interesting to actually mount a campaign to solve this problem on a horizontal basis once and for all. Because I think that’s the only real way we’re going to get a solution. And the honest truth is the states would be far better off if we were around to bug them because they need a more ambitious agenda when it comes to this. cybersecurity really. I mean if you look at what’s on the table to be decided at the OEWG and you look at what’s going on in international cybersecurity, there is a huge gap in need versus

John Hering:
what’s actually being addressed. Well taken Nick and thank you so much. Well I will leave it to the panel on the table to see if there’s anyone that would like to take up that thought about moving this to an administrative matter as opposed to a political process and whether or not Nick’s read of the appetite has some accuracy and validity to it. I would be happy to try to

Bert:
answer, to try to respond to Nick’s question. It’s a good point. You’re right that people are very tired of this question because it comes up again and again and it’s particularly because it has become such a politicized question and it has been politicized by a number of countries and it will be difficult all along. I think the idea whether a one-size-fits-all forever resolution of the General Assembly, how might the stakeholders should participate in such processes, is an interesting idea. It has to be discussed. I see a number of drawbacks, namely it’s so difficult if you make this totally unclear for what type of future process. I’m not sure we get the best result. We might get better results on the specific process, on the specific circumstances, than minutes for any future process where it might become quite narrow and it might be a difficult process. But it’s something to be discussed. My concern is if it would succeed, if then a new process would be set up, we would then again have a fight whether the agreed framework is being applied or whether specific rules have to be decided upon. So we might come back to square one. But we certainly it’s an urgent matter and the issue where I think which is particularly urgent is we are discussing here a lot both in sessions and informally about the upcoming process of negotiating particularly global digital compact which is is part of a much broader process preparing for the summit for the future where we need strongest possible mighty stakeholder involvement in the general assembly process which is sort of as I said already intergovernmental by nature where we have the challenge that basically our key objective is out of this process a reaffirmation of the mighty stakeholder model. Also in our view a strong role for the IGF, but then again to get there the process must be as mighty stakeholder oriented as possible and this will again be an uphill battle. It’s even not clear whether the mighty stakeholder arrangement would be made specifically for the global digital compact negotiations or for the entire process. I would be in favor of doing it specifically for the global digital compact because there’s a better understanding of the world and for instance when you negotiate a new agenda for peace where basically I think there’s a sense that states have a much stronger role to play. But it’s also I think we need to see I mean where are countries who are I mean there’s some countries who are very critical they’re opposed to mighty stakeholder involvement for a number of reasons. We also need to do more work why it is of benefit to all of us. It has become a political issue for me it’s an issue of expertise of quality control. I can only say I come from a country our capacity in the area of cyber digital etc. is limited. We benefit a lot from talking to industry partners from academics experts etc. without them we can’t survive these such negotiations. This is from where we get ideas inputs with a quality check of our ideas and I’m sure it’s the same for others. And it’s also therefore important that mighty stakeholder involvement is as inclusive as possible and as representative because there’s also a sense mighty stakeholder means basically big tech companies sitting at the table it must be clear that this must be broad and every effort must be made it is as inclusive as possible. Thank you.

Joyce Hakmeh:
And maybe if I can add I agree with everything you said but and I agree with the sentiment behind Nick’s message that. We need more passion to have this issue resolved, and we need to be a little bit more strategic and have more ambitious plans. But on your point, Bert, about how you benefit from multi-stakeholder’s input, I think it also goes both ways. Because we benefit also when we speak with governments about what’s on their mind, how they’re thinking about the different priorities. Sometimes, even if we follow online, if we’re in the room, we might not know what’s really going on. So speaking to them is also very valuable to us, because it makes our role much better, if we have our fingers on the right pulse. So I just wanted to add that.

John Hering:
Absolutely. Thank you both so much. Thank you again to Nick. Even if we don’t have something that’s going to be the be-all, end-all, I think even moving towards what is a gold standard of multi-stakeholder inclusion, what in the US we call the Cadillac of multi-stakeholder inclusion. But noticing that we’re in Japan, maybe it’s the Lexus of multi-stakeholder inclusion, I think could be a good framework to work towards. I think we have a question in the room here.

Audience:
Thank you very much, our speakers, for such an interesting conversation and discussion. I think I’m just going to point out the elephant in the room. We are talking about multi-stakeholderism. And I was just looking at the representation of different multi-stakeholders from the panel, and I don’t see representation from African stakeholders. So I guess my question would be, how involved are African stakeholders in these discussions and debates? And what can they do to improve their participatory role in these discussions? So I understand maybe government actors, there could be different processes being followed. But with the private sector, academia, civil society, what exactly is being done to increase or to improve their participation in discussions like this? And just giving this as an example, like we talk about inclusion. if we are not going to have African voices being part of these discussions, it becomes a bit difficult to understand how we approach our multi-stakeholderism. Thank you.

John Hering:
Absolutely. Thank you so much for the question, and I will leave it to those on the table to comment on multi-stakeholder inclusion and participation in the dialogues from across geographic regions and lines of difference.

Joyce Hakmeh:
Yeah, thank you for your question. I’m happy to take a stab. I think you’re absolutely right. I think we talk about multi-stakeholder participation, but if we look at the composition of the multi-stakeholder groups, it tends to be more sort of Western-dominated. So you’re absolutely right, that there’s a need for inclusion that goes also at the regional level and not just bring different actors, but also actors who represent different regions. And that’s why I talked about the importance of regional efforts, that we don’t put all our focus just on UN processes, because there’s a lot going on at region level, at national level, and the experience from those stakeholders who are very much on the field would be absolutely very, very valuable to the UN processes and beyond. So I think, you know, definitely agree with you there. And, you know, I think also we need to be honest about how multi-stakeholders coordinate with each other. And I don’t think it’s great either. I think there is definitely room to improve, but as I said, it’s a learning curve on several different fronts. The focus for today is how we work better with governments, but also there is a bigger question around how we work better with each other and how we bring more voices into the debate.

Marie:
Thank you. Thank you for the question. I think, indeed, there is a lot that still can be done, but it’s also a capacity issue, I think. And coming from developing country, I think it’s even more difficult to dedicate some time to come to New York and to come to those processes. And I think that’s why also the initiative at national, regional level are so important. And it was mentioned earlier, it’s not only about what you were saying in the room, it’s actually the ongoing discussion that you have with your representative that will go to New York. and will represent those points. And having those long run discussion, not only a one go during the open-ended working group, but really like an ongoing discussion where you actually bring to your governments, to your people that will represent, be present in the room negotiating, you give them the arguments that they will need to shape an informed policy that will benefit also, not only us, but like everyone, every stakeholder groups. And that’s completely part of the entire process. So we have the luxury that we can do it. We also have some diplomats that are there in different countries that can also have those discussion, not only with our national stakeholders, but also with other stakeholders from other regions. But really like we need those information to take informed policy decision that we will then bring to those fora. And thank you, Nick, for being a very dedicated stakeholder being still up at 3 a.m. for this discussion. But that’s exactly the kind of stakeholders that we need, like really dedicated as well. And we understand that it’s a capacity issue as well. So wherever you can go at any level, try to like bring your expertise and knowledge so we can take better informed policy decisions.

John Hering:
I believe Bert and then Charlotte online also asked for the floor, so.

Bert:
Okay, very briefly, sorry. I think a very important point, just two comments. One is, it’s the same challenge also on the government side, how to ensure to have negotiations that are inclusive. What I noticed, for instance, if you compare the open networking group with the ad hoc committee negotiations, through a number of measures, including that some funding is available for travel, far more countries are represented by experts from capital in the cybercrime negotiations than in the open networking group. And you see that the quality of the discussion is quite different in a way. I mean, I find it, I learn a lot from listening to the different. perspectives, and that’s extremely positive. The same applies on the multi-stakeholder side. Some initiatives have been taken to facilitate, to provide funding, to participate in such meetings, etc., but of course we all agree, even for us. I mean, we sometimes get denied the funding for travel to New York because it’s too expensive to spend two weeks in New York, and I’m sure it’s even worse for multi-stakeholders. And so we have to see what more is possible there to allow participation, because if you have seen it once, then you also understand better how it works, and then that’s maybe the advantage, that’s also one of the positive side effects of COVID, there’s a sort of a democratization of such multilateral processes. All of this is now hybrid, all of this is screened, and you can participate much more easily. And again, a key role is, any government in New York, the position formulated is back in capital, so you have, you need to work with the people in capital so that the people who sit in New York, Geneva, or wherever, they press the right button or make the right statement. So a lot of the work has to happen at national level in any event. Thank you.

John Hering:
Thank you, and Charlotte, if you’re going to take the floor, please do.

Charlotte Lindsey:
Yes, just very quickly, and I think it’s a really important point, particularly about African representation. It’s something that we tested also a year and a half ago, where we invited ambassadors from representatives of the African Union in Geneva to come for a half-day workshop on all of these processes. There was definitely an appetite, there were representation from most countries of the African Union at ambassador level, so there’s definitely an appetite to engage and to learn more about these processes. And I think it’s also really important to demystify these processes, because we heard feedback, for example, that, oh, well, you know, we specialize more on human rights. Well, actually, a lot of what’s been discussing at the Open-Ended Working Group is about human rights, and so there are very transferable skills. It’s just sometimes the language is very exclusive. or very difficult for people to feel that, oh, I haven’t followed these debates for many years, therefore I can’t contribute. And actually what we saw was that there were very key messages and participation possibilities from the representatives of the African Union that could very easily transfer their skillset into these negotiations. So I think there’s an appetite. We just need to focus much more on the capacity building side.

John Hering:
Thank you all. I think we have two questions I saw in the room. Patrick, were you at the mic a moment ago? And then the young woman over here. And then back over to you, Eduardo, if there’s anyone online after that.

Audience:
Hi, I’m Patrick Pawlak from Carnegie Europe. My question was partly asked and partly answered. So let me use the microphone to push back a bit and get a bit more precise answers. In answering to the colleague’s question, many of you said, yes, the engagement with stakeholders at the national and regional level is important and we have to do it more. How exactly do you envisage this? We have three governments on the podium. Could you describe to us how each of your governments engages with your civil society ahead of the open-ended working groups? I know for the fact that actually we very often talk about this engagement of multi-stakeholder community. It happens through the side events during the open-ended working group sessions or any other events there. And very often those meetings are really used as a fig leaf, let’s say, for the lack of engagement of the national level. So if you could share some concrete examples, that would be great. Speaking about national engagement with civil society, a lot of organizations from many countries around the world will tell you that actually they have no access. It will be easier for Joyce from Chatham House to talk to anybody in the world, to cyber ambassadors and get the access, that for the regional civil society organizations who are completely ignored, right? So how do we break that sort of a ceiling at the national level? And thirdly, I think that disengagement at the national and regional level indeed might be a more sustainable. sustainable solution, if we really want to create, let’s say, better functioning cyber diplomacy engagement, simply because so many countries in the world actually have this shrinking space for civil society organizations. So by creating the opportunities for engagement around cyber issues, we’re also contributing to strengthening the broader ecosystem of civil society organizations. So yes, I agree, but I wonder how you think we could do this in a more specific way. Thank you.

John Hering:
Thank you so much, Patrick. And maybe a question over here as well, and we’ll just sort of take both together.

Audience:
Thank you very much. I’m Larissa Calza, Head of Cybersecurity at the Ministry of Foreign Affairs of Brazil. I would like to, first of all, thank all the panelists for their interventions. And I would have one question building up on a point that I believe Charlotte made about fragmentation of the debate on cybersecurity and how detrimental it was in the period from 2019 to 2021 to the participation of non-government stakeholders. Well, for Brazil, fragmentation is a huge concern. It is a challenge not only to non-state stakeholders, but also to most developing countries. It’s always difficult to have enough delegates to follow multiple tracks at the UN. And so one question I would have is, well, we’ve spoken a lot about the POA and in very supportive terms, and Brazil very much supports continuing discussions on the proposal. But it is not a consensus within the UN. We have observed recently a fragmentation on states that support a POA, states that still are very much in favor of starting negotiations on a legally binding instrument, which us nationally feel that it is not quite a moment for it yet, though we do not oppose the idea of something legally binding. So I guess my question would be, do you see a a risk of having this fragmentation once again, given the polarization of positions on the future of institutional dialogue after the OEWG? And second, if the POA is indeed adopted this year, how do we avoid that the OEWG in a way is undermined or has its discussions emptied due to a decision being made two years ahead of the end of its mandate on regular institutional dialogue? Thank you very much.

John Hering:
Thank you both so much. I will leave it to you to sort of take the questions in turn or in the order that you’d like. Anyone on the stage would like to hop in? Or of course, Charlotte online.

Joyce Hakmeh:
So I can’t answer the question, what governments are doing to engage the society? I know my government doesn’t do anything, so yeah, how they should do it, I suppose. But I can sort of like, because this is of course like a very important problem, and we think about it as well, because inclusive governance is one of the sort of strategic priorities for our work at Chatham House. And just someone talked about the appetite that exists, and I agree with that. I think there’s a huge appetite. We organized, I think it was last year, a conference in Jordan about, and we have a representative from the MFA here, about the cyber diplomacy in the Arab region, and what are their perspectives? And I was amazed by the turnout, by how much eagerness there was from not just governments, but also non-state stakeholders to be part of this conversation. But there is of course like issue of, you know, sort of subject matter expertise with these UN processes, and as Charlotte mentioned, it can be a little bit too intimidating, because if you’re not, I mean, even for us, if I miss one OEWG session, I’m like, I don’t know what’s happening anymore, you know? It’s very hard to kind of stay on top of these very lengthy negotiation process. and be, and feel like you have the expertise to contribute every time in an informed way. And so I guess there is sort of responsibility on both sides. If we look at the list of accredited organizations to the Cybercrime Convention, which as Bert mentioned, they were all accredited after maybe a little bit of a pushback. I think there were around 160, something like that. But if you look at how many organizations actually participate, I think maybe 20, something or maybe a little bit more in terms of consistently participating, and although there is the opportunity for online engagement, et cetera. So there is also this, if you want to engage, you need to put in the effort, and that’s very true. But there is maybe perhaps how do we encourage that? I think maybe sort of the governments, the way they have been supporting developing states to come to the negotiations, perhaps there could be some funding dedicated to bring in multi-stakeholders more into the debate. And I know, Patrick, you’ve done work on that in the past, and I think more initiatives like this would be extremely important. On the fragmentation point that was mentioned, I think the question was should we be concerned about fragmentation with new processes? I think, to be honest, I think the fragmentation is already here to a certain extent. I mean, because we engage in the open-ended working group and the cybercrime process, you feel that there is this huge desire to keep those different conversations separate. And of course, this one is dealing with international peace and security, this one is dealing with criminal activities. But the reality of cyberspace is that the lines are not sort of like this division is not very clear. Sometimes it’s artificial and the distinction is not as clear-cut. So there are overlaps that need to be understood. If we take, for example, as you probably know now, the open-ended working group is trying to operationalize this point-of-contact directory about each state will have one organization dedicated to sort of answer responses and requests to de-escalate. So, we need to be more conscious in terms of how do we make sure that we have the same expectations in the cybercrime convention about 24-7 networks and having a point of contact, et cetera. Of course, they will have different mandates, but, as we know, in a lot of states, like, there will be one agency doing different, like, the same role, right, doing sort of cybersecurity stuff, but also cybercrime stuff, so we need to be more conscious in terms of where do these, where the touch points are, how do we understand them, and how do we make sure that we have the same expectations about the same things, and I think here, really, like, multi-stakeholders can play a very big role, right, in bringing those sort of nuances together and kind of, like, talking about them in a more sort of clear way. So that’s my answer.

Pablo Castro:
» Thank you. Let me start with a question from Larissa. This is a very good question, by the way, because we have this internal discussions, you know, in our countries, like, Brazil, Argentina, et cetera, about the situation we are right now, as I mentioned before, about this geopolitical content, which is quite difficult, and the problem is not just in the open-ended working group, also in the cybercrime, if you go to this cautionary weapon system, we cannot, it seems that we have this sort of fracture, you know, that is already there, so what we can do, basically, I mean, what we can really, I mean, one of the reasons we, regarding the open-ended working group, I mean, we have this discussion in the UN, we have this discussion in the IPOA, what we supported, I mean, the pay from the, not the very beginning, but from the 2021, was because it was action-oriented, so it was something, say, okay, we have this discussion in the UN, which is, well, by the way, Chile voted against the open-ended working group back in 2019, as I say, for the comedy. I can agree with the idea, because it’s something that, basically, we definitely need from the perspective of our country, Latin America, action-oriented, you know, focus on capacity building, implementation, we can keep the discussion, I mean, about international law application, et cetera, but we have very critical needs that we need to in some way achieve. So that’s the reason why we’re against this. We support it, but you’re right that we have this sort of things, what we can do now, that we have the Open Networking Group, the POA, the POS part, of course, the discussion, the Open Networking Group, the regional dialogue. And I’m not quite sure that I have the right answer. In a way, I think it solves something connected with what’s going on today, I mean, worldwide. This situation is going to have been stayed, I mean, for a long term, or we cannot just give a point that we can actually create or establish some concern. The discussion is quite frustrating, by the way. The cybercrime is sometimes even impossible in order to agree in some technicals, I mean, the practical solution, because we have this, to know this problem, you know. And it’s not so simple, because some states have their own view, principles, and values, and other states are avoiding different ones. So it’s a cultural problem, geopolitical problems, maybe in the next future, where we have different internets, I don’t know. But I agree with those, and this factor is already, I mean, the way I can manage this is going to be something that definitely we need to discuss more and see what can we remove on. But I agree with you that it’s something that several states have a lot of concern about how to deal with the process. So it’s not just, it’s a very important matter, I mean, part of our discussions. Regarding Patrick’s questions, always very good questions, very fundamental questions. I have to confess to you, Patrick, that, you know, sometimes we are, I see a lot of states that regard the multistakeholder in our statement, you know, we’re very clear that we support multistakeholder engagement, and so on, and at the very end, you come back to capital and realize that probably I did not do it, I mean. enough, you know, to work with them. That is true. I have to confess, in my case, when I started cyber security at the Ministry of Foreign Affairs ten years ago, I had no idea about this thing. All it was, I think, was Microsoft, the first time, kind of a secret in Singapore that taught me that Microsoft has a cyber diplomacy approach. I just came back to our Ministry of Foreign Affairs to explain to my bosses, you know, ambassadors about the role of Microsoft in international security, to try to make them understand that. So that’s been quite fascinating. I mean, my background is non-proliferation, I’m in control. You don’t see that, you know, in other processes. Sometimes, you know, and I can tell from our reality, and maybe also Latin America, is a lack, you know, of people, you know. You still don’t have too much, you know, expert on our Ministry of Foreign Affairs. You have the capacity, you know, to cover one thing or another. You know my case, I have to cover cyber security, cyber crime, and many other things. So we’ll have to have more time, you know, to engage, you know, on what I would like to do more with Stacey Holder is to work in some specific line of actions, you know. Again, the idea of a strategy, you know. If it comes to international law, I mean, China has maybe, you know, have some idea to do something, I mean, next year in Chile, you know. When it comes to CBS, or we can, I don’t know, with the implementation, IHL, which is something very important in Switzerland, be one of the champions in this, you know, how we can actually work with some specific, Stacey Holder thing in our regions. It’s something that can be done. It’s sometimes a lack of time, a lack of resources, you know, many things to do back in the capital. But I would give it again, I mean, there’s something that we did with Netherlands, you know, which is dialogue, you know. I don’t think we have too much dialogue in our regions, you know, when we can. And for that dialogue, something that we agree with the Dutch Ministry of Foreign Affairs was to invite representatives of the Ministry of Foreign Affairs, not just people from Ministry of Interior or CSIRS, you know, to bring, I mean, the people in charge of cyber security, the Ministry of Foreign Affairs, to talk. and engage with multi-stakeholders, you know, and talk about, you know, processes we have, you know, at the UN level, you know. And I would add just to keep in mind that the important roles that regional organizations play on this. And most of engagement of stakeholder be thanks to the OAS, and I think in other regions it’s the same. Chile’s now the chair of the SICTA, you know, the Inter-American Combat Against Terrorism, with the cybersecurity programs placed there. So it’s something with, and especially on the implementation, CBM is something that we definitely need to, would like to do more, and engage this stakeholder more in this process. But I totally agree with you that it’s not maybe good enough we’re doing right now. Thank you. Francesco.

Marie:
Yeah, I couldn’t be a better advocate for our way of doing stakeholder engagement than you are. But yeah, maybe I’ll give you a bit of like my background, and how it was when I started working on cyber issue, and in the Netherlands, that was a few years back. So at national level, it was back in the preparation of the GGE, and the Open-Ended Working Group in 2019. And back then, I went back to The Hague from Geneva, and we were having actually a consultation with other stakeholders to actually, before we entered into those rooms, more or less open, then we would be able to have an informed policy position. And I’m not saying we’re doing it enough, and I think that’s one thing, probably not enough, but we’re already trying at that level. The other thing that we are doing is, obviously, those conferences, I think Bert pointed out quite well that the IGF is a place also where we can have lots of open discussion. And we should also grab those opportunities that we have at the IGF, at our national, the IGF at also the regional IGF, to also talk about those issues that we are facing in the first committee and then grab all the expertise that is there. Because there are so many people around here, they know so much more than we would, then we really need to grab those opportunities. I’m talking about the IGF, but I can also talk about non-UN forum like RightsCon or the GFC conference in Accra next month, for example. I think we need to take all those opportunities to really also ourself, engage with the stakeholder community as well. I mean, capacity building, we are doing lots of capacity building and we’re also trying to bring this knowledge about what the first committee is about, what we’re discussing, what is the normative framework, what are our objective, and really looking into the implementation and what it means also for people. So then when they are informed, they can also engage. So Charlotte, as you said, we need to also demystify what’s happening in the first committee. And I think that’s on our side, also an effort that we need to do, because it’s already complex for us to understand how those processes work. So for someone who doesn’t have been there for so long or is not engaged in everyday or in all the discussion, or can be engaged in all the discussion, it’s even more complicated. So I think on our side, we also need to do more on demystifying those processes and explaining what you can bring to those discussion. I have to say, we have the luxury of having a nice cyber policy and we have 34 now cyber diplomats around the world. So we also participate in regional meetings and so on and so forth. And we try to grab all of this, but we also try to share our knowledge and our experience to make everyone be able to also engage. and we still have so much to learn and I’m sure some others have better ways of doing, but I think it’s about exchanging on how we do and then learning from others on what they have been doing and then we can just like improve the way we engage, but it’s true that we have the luxury of having a bit more people, so I’m happy to share, but also really happy to get some feedbacks on how you would like us to engage with you, because that’s the only way we can make it better.

Bert:
Thank you. Thanks so much. Also, as I mentioned already, for us it’s important, we learn a lot from others, both governments, other multi-stakeholders, it’s of critical importance. Joyce, you said that many multi-stakeholder were accredited to the ad hoc committee, but not so many make use of it, and that’s of course a challenge and of course even for governments, I mean these cybercrime negotiations, this is this year, three times two weeks. It’s a huge investment. It’s difficult to take, and there indeed, if you’re not following it closely, it’s difficult to do so. So that’s a challenge in terms also of resources. And by the way, if I may go back for one second to the global digital compact, which is coming up. Also there, I hope that many multi-stakeholder will make the investment, because it’s important that one does. I was a bit concerned that everyone was invited to provide input and so on, already late last year, until I think March or April this year. And then I think nobody ever heard what happened to the input. And then we had the policy briefs, which I can’t imagine would really somehow be a reflection of the input received. So I hear, also hear when I talk to people about the process, there’s a lot of, there’s some who say, well, is it really worthwhile to invest? It’s so difficult anyway, there’s such limited access, and so far our input has not been appreciated. That’s a huge concern to me, because we need multi-stakeholder involvement in the process in order to get the reaffirmation of the multi-stakeholder model as an outcome. So that’s certainly an issue. in the global digital compact, we have a very special committee, and it is responding to the question of Patrick, how do we involve multi-stakeholders, and maybe I start with the global digital compact. There basically we used a national IGF to discuss the process, but also to prepare input. So we had both a government input and a multi-stakeholder input, but we used a national sort of IGF for it. And also, I have to state, there was a huge circus, second Saskatchewan conference on technology security platform where we basically bring all the people together. Telecom, they’re hugely interested, how this treaty turns out because it has some serious implications for them. But some of them also actively participate in, but we regulated exchanges with them. So, there’s a lot of interest, and we have to make sure that they’re fully aware of it. So, we have a relatively small country, the interest is limited, because most people are not really, for them, it’s not clear what’s in it for them. So, there we need to mobilize interest, so that they’re fully aware of it. But as I mentioned, we’re working on a national position paper on international law. There we’re now finalizing our government draft, and this one we want to consult with mighty stakeholders, particularly, of Brazil, Oscar Diaz, Douglas Gilbert and there are many stake holders from Brazil. Soon, we will have very important stage questions to go before the Paul committee with the idea of the POA. I also think it makes a difference in terms of implementation, et cetera. We will permanently settle the question of micro-stakeholder involvement, hopefully, at least for that process. But again, also there, the idea certainly would be that we use the current open-ended working group to discuss in detail how this should be figured out, what the element should be, and then any sort of configuration would be the follow-up to the open-ended working group. But we will have to see how it pans out. It’s negotiations on this ongoing, but we very much hope that in the end we have an inclusive process and we end up with one mechanism after the open-ended working group, because also more, as we discussed, for any one of us, it’s difficult to entertain. Thank you so much.

John Hering:
Thank you. Thank you all for this. We’re coming up on time here, so I’m just going to say, Eduardo, is there a question online? Otherwise, I think we’ve exhausted things in the room, and I’ll move to just a final quick lightning round question.

Eduardo:
We do have a question. I wonder if we have time to answer it, but Amir Mokaberi was questioning the legitimacy of companies participating in multi-stakeholder discussions, especially in the field of international law development, nor making, due to their democratic nature, conflict of interest and lack of election by citizens. So I wonder if you have a quick response to that. John?

John Hering:
Yeah, maybe I’ll take that one. No, I think it’s a fair question and a fair thing to be concerned about in terms of what is the proper scope and size of private industry engagement in any conversations that relates to governance, whether at a national level or international level. The only thing I’ll say is Microsoft makes products and services that we sell, and that helps to augment the digital domain. And we certainly don’t want to be contributing to a space that is getting increasingly unsafe and more unstable. And so supporting these dialogues is critically important to us as an organization that is a large technology company. But I think we are always clear in that, and we would want to always be as transparent in this as possible in saying that. that obviously governments make the decisions here. We don’t. We are, you know, I think together with our other multi-stakeholder partners pushing for a voice at the table, not a vote. And that seems to be always the proper boundary and limitation there. So I hope that answers that question well enough. And I will just say in the last couple minutes, a lightning round. If there are sort of non-governmental stakeholders in the room who have not engaged before in any of these processes at the UN, what would be just a quick piece of guidance on the way they could be most impactful in helping to support government dialogues on cybersecurity at the UN? Anyone can start.

Marie:
I’ll start because I’m at the back of the table. I’ll be short because we don’t have so much time, but I would say, so approach us. We’re not, we will listen and be there, provide information, numbers, facts, show the impacts of the project that you’re doing like in the different countries, in the different regions, report on what’s happening. Those information can only give us like, give an added value to the discussion that will happen in the context of the UN. But also if you start following it from a, like it’s not a one go. If you start following it, then come back to us and tell us, oh, you did this, but you haven’t yet talked about that or this. And actually I have to say, I find it like very sad that people are saying that they don’t see the impact of what they bring to the table. But I can say that for some of the outcome from the open-ended working group and GGE report in 2021, there are actually things there that I heard happening at the beginning of the process, when it was inside event in discussion that we had with civil society, the private sector, academia, and actually they made their way through the end report. So actually it’s a long process. It’s frustrating because it takes time. and you don’t always have everything you would like to see, but it made its way through. So just like continue and all this accountable for making sure that we are taking the right position when it comes to those discussions.

Pablo Castro:
Yeah, I would agree on what Marie said. And also, I mean, I encourage to approach the states, you know, in just our conversation in New York, in Vienna and other places, but also in capital, you know. Most of our work that we did with stakeholders is because they approach to us, you know, proposing, you know, side events. We did one very good in July regarding the toolkits for implementation of norms for marginalized stakeholders. That was very interesting with other states, I mean, Mexico and Colombia. And most of this relation has been because thanks to this stakeholder approach to us, you know, to propose ideas, to make exchanges view about what thinking about their, I don’t know, the next POA or open-ended working group or cybercrime conversations. So I encourage to know to take on the winner state, you know, of course, during this, our meetings in New York and Vienna, we had the chance to create a sort of, I would say even friendship, you know. That’s one of the things I really like about State Hall, you know, you share beer, go dancing, whatever. And then, I mean, just come back, you know, and say, hey, let’s go to work on something or just have, you know, some meetings. And we were having this conversation very much with, for example, Microsoft, with cybercrimes. I always really like, of course, also the submission of documents. Probably we never really, I mean, thanks, I mean, to the stakeholder about this really good documents. Sometimes even in our state, we didn’t be using some of the very good ideas in the office today. I mean, it’s never really, I mean, recognizes, I mean, incredible. We’re very good documents, and we need that just for both, I mean, conversation. So thank you for that

John Hering:
We are one minute over so 20 seconds for everybody else No, please

Joyce Hakmeh:
Okay, 20 seconds, and I think maybe choose One thing that you think you can contribute value and not try to do everything if you’re new to this, right? If you want if I want to look at the way WG in July They agreed on an animal progress report with the like a whole loads of recommendations very concrete actions, right? If you’re you know a CSO like industry, whatever you want to be involved Look at those recommendations and see can I contribute to one or more from my perspective whether it’s national or regional? I maybe sort of take that as your first step and gradually, you know You’ll feel that you are being more involved than as Marie mentioned like, you know states are reading are listening So your input will make a difference. I

Bert:
Would fully subscribe to that Build also partnerships with others and then again what also often notices we receive a lot of proposals ideas sometimes They general sometimes very specific and very often it happens that you pick certain elements up in a statement in Negotiations as an argument etc. But rarely you write back to the organization’s. Thank you. I use this here there. So therefore I’ve thought I want to get better with this because it’s difficult sometimes to measure impact and Often you might not hear about it and you might have no idea how it might was used and you might have more impact

John Hering:
That you think that you have. Thank you. Thank you. And Charlotte if you’re online for the last word for 20 seconds

Charlotte Lindsey:
Yes, I am. I would just say very quickly In terms of engagement, I think what is critical is fact-based Framing and the timing of the input particularly for states so that they can then take that input engage So even if you can’t speak at the table that you can produce that input, but you have to do it in a timely way

John Hering:
Thank you so much. Thank you all so much for showing up, especially late in the day here to everybody online Especially folks like Nick who are up at 3 in the morning really appreciate the engagement and look forward to seeing you all throughout the week here at IGF. And thank you to our panel. And please join me in giving them a big round of applause.

Audience:
I am a computer science major at Georgia Tech. And I was just, we were talking multi-stakeholders. I thought it was just… See that again? Oh, oh yeah, that was really crazy. That was so crazy. It makes me so happy. No one expected that. No, no. We were talking about multi-stakeholders, I thought it was really interesting that no one really brought up the defective use, abuse and how users should be more involved in the multi-stakeholder process. So I was wondering, at Microsoft, when you’re talking about multi-stakeholder capabilities, how do you involve users? Do you involve users, or what does that look like? Good question. I just importantly have to think about what our role is in setting up these societies. Engaging them. We’re engaging them. And our goal is to sort of provide the opportunity for them to sort of get a part of it and say, hey, we have this new gadget that we want. Along the process, we’re inherently not doing this in the first place, until we find ourselves in a happy place. So I said, hey, how are we getting the youth perspective into Microsoft’s submission, into Oh, hey! Yes, how are you? Good, good. Everything’s on track? Yes. I mean, for us, yes. But I think we still have, like, a couple… As I’m working on it. We don’t have to freak out. No, not yet. Not yet freaking out time. No, no, good, good. I mean, the only reason why I’m also pushing a bit is because I’m… Especially for the people… Especially for the people who need funding. But I’m going to go through my sessions. Yeah, exactly. But because others don’t require the funding. So then we… I’ll send, like, in any case… Yeah, exactly. So are you staying until 5? Ah, thanks. Okay. Okay. Yeah, yeah, exactly. Yes. Okay. Yes. Yes. Yes. Yes. Yes. Thank you very much. Thanks. Thanks. I’ll see you tomorrow then. Thank you. Thank you.

Audience

Speech speed

141 words per minute

Speech length

1357 words

Speech time

576 secs

Bert

Speech speed

203 words per minute

Speech length

3094 words

Speech time

914 secs

Charlotte Lindsey

Speech speed

171 words per minute

Speech length

1493 words

Speech time

523 secs

Eduardo

Speech speed

170 words per minute

Speech length

104 words

Speech time

37 secs

John Hering

Speech speed

211 words per minute

Speech length

2806 words

Speech time

797 secs

Joyce Hakmeh

Speech speed

204 words per minute

Speech length

3109 words

Speech time

913 secs

Marie

Speech speed

184 words per minute

Speech length

2146 words

Speech time

699 secs

Nick Ashton Hart

Speech speed

187 words per minute

Speech length

662 words

Speech time

213 secs

Pablo Castro

Speech speed

198 words per minute

Speech length

3043 words

Speech time

923 secs

Speaker

Speech speed

191 words per minute

Speech length

124 words

Speech time

39 secs

Searching for Standards: The Global Competition to Govern AI | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Michael Karanicolas

During a session on AI governance, organized by the School of Law and the School of Engineering at UCLA, the Yale Information Society Project, and the Georgetown Institute for Technology, Law and Policy, Michael Karanicolas hosted a discussion on the development of new regulatory trends around the world. The focus was on major regulatory blocks such as China, the US, and the EU, and their influence on AI development globally.

The session aimed to explore the tension between the rule-making within these major regulatory blocks and the impacts of AI outside of this privileged minority. It recognized their dominant position and sought to understand their global influence in shaping AI governance. The discussion highlighted the need to recognize the power dynamics at play and ensure that the regulatory decisions made within these blocks do not ignore the wider issues and potential negative ramifications for AI development on a global scale.

Michael Karanicolas encouraged interactive participation from the audience, inviting comments and engagement from all present. He stressed the importance of active participation over passive listening, fostering an environment that encouraged inclusive and thoughtful discussions.

The speakers also delved into the globalised nature of AI and the challenges posed by national governments in regulating it. As AI consists of data resources, software programs, networks, and computing devices, it operates within globalised markets. The internet has enabled the rapid distribution of applications and data resources, making it difficult for national governments to control and regulate the development of AI effectively. The session emphasised that national governments alone cannot solve the challenges and regulations of AI, calling for partnerships and collaborative efforts to address the global nature of AI governance.

Another topic of discussion revolved around the enforcement of intellectual property (IP) rights and privacy rights in the online world. It was noted that the enforcement of IP rights online is significantly stronger compared to the enforcement of privacy rights. This discrepancy is seen as a result of the early prioritisation of addressing harms related to IP infringement, while privacy rights were not given the same level of attention in regulatory efforts. The session highlighted the need to be deliberate and careful in selecting how harms are understood and prioritised in current regulatory efforts to ensure a balance between different aspects of AI governance.

Engagement, mutual learning, and sharing of best practices were seen as crucial in the field of AI regulation. The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the latest developments and challenges in AI governance. It also emphasised the importance of factoring local contexts into regulatory processes. A one-size-fits-all approach, where countries simply adopt an EU or American model without considering their unique circumstances, was deemed problematic. It was concluded that for effective AI regulation, it is essential to develop regulatory structures that fit the purpose and are sensitive to the local context.

In conclusion, the session on AI governance hosted by Michael Karanicolas shed light on the influence of major regulatory blocks on AI development globally. It emphasised the need for inclusive and participatory approaches in AI governance and highlighted the challenges posed by national governments in regulating AI. The session also underscored the need for a balanced approach to prioritise different aspects of AI governance, including intellectual property rights and privacy rights. The importance of engagement, mutual learning, and the consideration of local contexts in regulatory processes were also highlighted.

Tomiwa Ilori

AI governance in Africa is still in its infancy, with at least 466 policy and governance items referred to in the African region. However, there is currently no major treaty, law, or standard specifically addressing AI governance in Africa. Despite this, some countries in Africa have already taken steps to develop their own national AI policies. For instance, countries like Mauritius, Kenya, and Egypt have established their own AI policies, indicating the growing interest in AI governance among African nations.

Interest in AI governance is not limited to governments alone. Various stakeholders in Africa, including multilateral organizations, publicly funded research institutions, academia, and the private sector, are increasingly recognizing the importance of AI governance. This indicates a collective recognition of the need to regulate and guide the development and use of artificial intelligence within the region. In fact, the Kenyan government has expressed its intention to pass a law aimed at regulating AI systems, further demonstrating the commitment towards responsible AI governance in Africa.

However, the region often relies on importing standards rather than actively participating in the design and development of these standards. This makes African nations more vulnerable and susceptible to becoming pawns or testing grounds for potentially inadequate AI governance attempts. This highlights the need for African nations to actively engage in the process of shaping AI standards rather than merely adapting to standards set by external entities.

On a positive note, smaller nations in Africa have the potential to make a significant impact by strategically collaborating with like-minded initiatives. International politics often stifle the boldness of smaller nations, but when it comes to AI governance, smaller nations can leverage partnerships and collaborations to amplify their voices and push for responsible AI practices. By working together with others who share similar goals and intended results, the journey towards achieving effective AI governance in Africa could be expedited.

In conclusion, AI governance in Africa is still in its early stages, but the interest and efforts to establish responsible AI policies and regulations are steadily growing. While there is currently no major treaty or law specifically addressing AI governance in Africa, countries like Mauritius, Kenya, and Egypt have already taken steps to develop their own national AI policies. Moreover, various stakeholders, including governments, multilateral organizations, academia, and the private sector, are recognizing the significance of AI governance in Africa. Despite the challenges that smaller nations in Africa may face, strategic collaborations and partnerships can empower them to actively shape the future of AI governance in the region.

Carlos Affonso Souza

In Latin America, several countries, including Argentina, Brazil, Colombia, Peru, and Mexico, are actively engaging in discussions and actions related to the governance and regulation of Artificial Intelligence (AI). This reflects a growing recognition of the need to address the ethical implications and potential risks associated with AI technology. The process of implementing AI regulation typically involves three stages: the establishment of broad ethical principles, the development of national strategies, and the enactment of hard laws.

However, different countries in Latin America are at varying stages of this regulatory process, which is influenced by their unique priorities, approaches, and long-term visions. Each country has its specific perspective on how AI will drive economic, political, and cultural changes within society. Accordingly, they are implementing national strategies and specific regulations through diverse mechanisms.

One of the challenges in regulating AI in the majority world lies in the nature of the technology itself. AI can often be invisible and intangible, making it difficult to grasp and regulate effectively. This creates a need for countries in the majority world to develop their own regulations and governance frameworks for AI.

Moreover, these countries primarily serve as users of AI applications rather than developers, making it even more crucial to establish regulations that address not only the creation but also the use of AI applications. This highlights the importance of ensuring that AI technologies are used ethically and responsibly, considering the potential impact on individuals and society.

Drawing from the experience of internet regulation, which has dealt with issues such as copyright, freedom of expression, and personal data protection, can provide valuable insights when considering AI regulation. The development of personal data protection laws and decisions on platform liability are also likely to significantly influence the shape of AI regulation.

Understanding the different types of AI and the nature of the damages they can cause is essential for effective regulation. It is argued that AI should not be viewed as purely autonomous or dumb but rather as a tool that can cause both harm and profit. Algorithmic decisions are not made autonomously or unawarely but rather reflect biases in design or fulfill their intended functions.

Countries’ motivations for regulating AI vary. Some view it as a status symbol of being future-oriented, while others believe it is important to learn from regulation efforts abroad and develop innovative solutions tailored to their own contexts. There is a tendency to adopt European solutions for AI regulation, even if they may not function optimally. This adoption is driven by the desire to demonstrate that efforts are being made towards regulating AI.

In conclusion, Latin American countries are actively engaging in discussions and actions to regulate AI, recognizing the need to address its ethical implications and potential risks. The implementation of AI regulation involves multiple stages, and countries are at different phases of this process. Challenges arise due to the intangible nature of AI, which requires countries to create their own regulations. The use of AI applications, as well as the type and nature of damages caused by AI, are important considerations for regulation. The experience of internet regulation can provide useful insights for AI regulation. The motivations for regulating AI vary among countries, and there is a tendency to adopt European solutions. Despite the shortcomings of these solutions, countries still adopt them to show progress in AI regulation.

Irakli Khodeli

The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ago by 193 member states, demonstrating its widespread acceptance and importance. The principles put forward by UNESCO are firmly rooted in fundamental values such as human rights, human dignity, diversity, environmental sustainability, and peaceful societies. These principles aim to provide a solid ethical foundation for the development and deployment of AI technologies.

To ensure the practical application of these principles, UNESCO has operationalized them into 11 different policy contexts. This highlights the organization’s commitment to bridging the gap between theoretical principles and practical implementation. By providing specific policy contexts, UNESCO offers concrete guidance for governments and other stakeholders to incorporate AI ethics into their decision-making processes.

One of the key arguments put forth by UNESCO is that AI governance should be grounded in gender equality and environmental sustainability. The organization believes that these two aspects are often overlooked in global discussions on AI ethics and governance. By highlighting the need to disassociate gender discussions from general discrimination discussions and emphasising environmental sustainability, UNESCO aims to bring attention to these crucial issues.

Furthermore, UNESCO emphasises the significant risks posed by AI, ranging from benign to catastrophic harms. The organization argues that these risks are closely intertwined with the pillars of the United Nations, such as sustainable development, human rights, gender equality, and peace. Therefore, global governance of AI is deemed critical to avoid jeopardizing other multilateral priorities.

While global governance is essential, UNESCO also recognises the significant role of national governments in AI governance. Successful regulation and implementation of AI policies ultimately occur at the national level. It is the responsibility of national governments to establish the necessary institutions and laws to govern AI technologies effectively. This highlights the importance of collaboration between national governments and international organisations like UNESCO.

In terms of regulation, it is evident that successful regulation of any technology, including AI, requires a multi-layered approach. Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure comprehensive and effective governance. The ongoing conversation at the United Nations revolves around determining the appropriate regulatory mechanisms for AI. Regional organisations such as the European Union, African Union, and ASEAN already play significant roles in AI regulation. Meanwhile, countries themselves are indispensable in enforcing regulatory mechanisms at the national level.

To achieve coordination and compatibility between different layers of regulation, various stakeholders, including the UN, European Union, African Union, OECD, and ASEAN, are mentioned as necessary participants. The creation of a global governance mechanism is advocated to ensure interoperability and coordination among different levels of regulation, ultimately facilitating effective AI governance on a global scale.

Additionally, bioethics is highlighted as a concrete example of how a multi-level governance model can function successfully. UNESCO’s Universal Declaration on Bioethics and Human Rights, along with the Council of Europe’s Oviedo Convention, serve as global and regional governance examples, respectively. These principles are then translated into binding regulations at the country level, further supporting the notion that a multi-level approach can be effective in governing complex issues like AI ethics.

In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding AI ethics in fundamental values, providing specific policy contexts, and emphasising the importance of gender equality and environmental sustainability, UNESCO aims to ensure that AI technologies are developed and deployed responsibly. This requires collaboration between international organisations, national governments, and other stakeholders to establish regulatory frameworks at different levels. Ultimately, a global governance mechanism is advocated to coordinate and ensure compatibility between these levels of regulation.

Kyoko Yoshinaga

Japan takes a soft law approach to AI governance, using non-binding international frameworks and principles for AI R&D. These soft laws guide Japanese companies in developing their own AI policies, ensuring flexibility and adaptation. Additionally, Japan amends sector-specific hard laws to enhance transparency and fairness in the AI industry. Companies like Sony and Fujitsu have already developed AI policies, focusing on responsible AI as part of corporate social responsibility and ESG practices. Publicly accessible AI policies are encouraged to promote transparency and accountability. Japan also draws on existing frameworks, such as the Information Security Governance Policy Framework, to establish robust AI governance. Each government should tailor AI regulations to their own context, considering factors like corporate culture and technology level. Hard laws on AI risks may be dangerous due to their varying nature, and personal data protection laws are essential for addressing privacy concerns with AI.

Simon Chesterman

The analysis of the given text reveals several key points regarding AI regulation and governance. Firstly, it is highlighted that jurisdictions are wary of both over-regulating and under-regulating AI. Over-regulation, especially in smaller jurisdictions like Singapore, might cause tech companies to opt for innovation elsewhere. On the other hand, under-regulation may expose citizens to unforeseen risks. This underscores the need for finding the right balance in AI regulation.

Secondly, it is argued that a new set of rules is not necessary to regulate AI. The text suggests that existing laws are capable of effectively governing most AI use cases. However, the real challenge lies in the application of these existing rules to new and emerging use cases of AI. Despite this challenge, the prevailing sentiment is positive towards the effectiveness of current regulations in governing AI.

Thirdly, Singapore’s approach to AI governance is highlighted. The focus of Singapore’s AI governance framework is on human-centrality and transparency. Rather than creating new laws, Singapore has made adjustments to existing ones to accommodate AI, such as changing the Road Traffic Act to allow for the use of autonomous vehicles. This approach reflects Singapore’s commitment to ensuring human-centrality and transparency in AI governance.

Additionally, it is mentioned that the notion of AI not being biased is covered under anti-discrimination laws. This highlights the importance of ensuring that AI systems are not prejudiced or discriminatory, in alignment with existing laws.

The text also emphasises the need for companies to police themselves regarding AI regulations. Singapore has released a tool called AI Verify, which assists organizations in self-regulating their AI standards and evaluating if further improvements are needed. This self-regulation approach is viewed positively, highlighting the responsibility of companies in ensuring ethical and compliant AI practices.

Furthermore, the text acknowledges that smaller jurisdictions face challenges when it comes to AI regulation. These challenges include deciding when and how to regulate and addressing the concentration of power in private hands. These issues reflect the delicate balance that smaller jurisdictions must navigate to effectively regulate AI.

The influence of Western technology companies on AI regulations is another notable observation. The principles of AI regulation can be traced back to these companies, and public awareness and concern about the risks of AI have been triggered by events like the Cambridge Analytica scandal. This implies that the regulations of AI are being influenced by the practices and actions of primarily Western technology companies.

Regulatory sandboxes, particularly in the fintech sector, are highlighted as a useful technique for fostering innovation. The Monetary Authority of Singapore has utilized regulatory sandboxes to reduce risks and enable testing of new use cases for AI in the fintech sector.

In terms of balancing regulation and innovation, the text emphasizes the need for a careful approach. The Personal Data Protection Act in Singapore aims to strike a balance between users’ rights and the needs of businesses. This underscores the importance of avoiding excessive regulation that may drive innovation elsewhere.

Furthermore, the responsibility for the output generated by AI systems is mentioned. It is emphasized that accountability must be taken for the outcomes and impact of AI systems. This aligns with the broader goal of achieving peace, justice, and strong institutions.

In conclusion, the text highlights various aspects of AI regulation and governance. The need to strike a balance between over-regulation and under-regulation, the effectiveness of existing laws in governing AI, and the importance of human-centrality and transparency in AI governance are key points. It is also noted that smaller jurisdictions face challenges in AI regulation, and the influence of Western technology companies is evident. Regulatory sandboxes are seen as a useful tool, and the responsibility for the output of AI systems is emphasized. Overall, the analysis provides valuable insights into the complex landscape of AI regulation and governance.

Audience

During the discussion on regulating artificial intelligence (AI), several key challenges and considerations were brought forward. One of the main challenges highlighted was the need to strike a balance in regulating generative AI, which has caused disruptive effects. This task proves to be challenging due to the complex nature of generative AI and its potential impact on multiple sectors. It was noted that the national AI policy of Pakistan, for example, is still in the draft stage and is open for input from various stakeholders.

Another crucial consideration is the measurement of risks associated with AI usage. The speaker from the Australian National Science Agency emphasized the importance of assessing the risks and trade-offs involved in AI applications. There was a call for an international research alliance to explore how to effectively measure these risks. This approach aims to guide policymakers and regulators in making informed decisions about the use of AI.

The discussion also explored the need for context-based trade-offs in AI usage. One example provided was the case of face recognition for blind people. While blind individuals desire the same level of facial recognition ability as sighted individuals, legislation that inhibits the development of face recognition for blind people due to associated risks was mentioned. This highlights the need to carefully consider the trade-offs and context-specific implications of AI applications.

The global nature of AI was another topic of concern. It was pointed out that AI applications and data can easily be distributed globally through the internet, making it difficult for national governments alone to regulate AI effectively. This observation indicates the necessity of international collaboration and partnerships in regulating AI in order to mitigate any potential risks and ensure responsible use.

The impact of jurisdiction size on regulation was also discussed. The example of Singapore’s small jurisdiction size potentially driving businesses away due to regulations was mentioned. However, it was suggested that Singapore’s successful publicly-owned companies could serve as testing grounds for regulation implementation. This would allow for experimentation and learning about what works and what consequences may arise.

Data governance and standard-setting bodies were also acknowledged as influential in AI regulation. Trade associations and private sector standard-setting bodies were highlighted for their significant role. However, it was noted that these structures can sometimes work at cross-purposes and compete, potentially creating conflicts. This calls for a careful consideration of the interaction between different bodies involved in norm-setting processes.

The issue of data granularity in the global South was raised, highlighting a potential risk for AI. It was noted that the global South might not have the same fine granularity of data available as the global North, which may lead to risks in the application of AI. This disparity emphasizes the need to address power dynamics between the global North and South to ensure a fair and equitable AI practice.

Several arguments were made regarding the role of the private sector in AI regulation and standard-setting. The host called for private sector participation in the discussion, recognizing the importance of their involvement. However, concerns were expressed about potential discrimination in AI systems that learn from massive data. The shift in AI learning from algorithms in the past to massive data learning today raises concerns about potential biases and discrimination against groups that do not produce a lot of data for AI to learn from.

The speakers also emphasized the importance of multi-stakeholder engagement in regulation and standard-setting. Meaningful multi-stakeholder processes were deemed necessary for crafting effective standards and regulations for AI. This approach promotes inclusivity and ensures that various perspectives and interests are considered.

Current models of AI regulation were criticized for being inadequate, with companies sorting themselves into risk levels without comprehensive assessment. Such models were seen as box-ticking exercises rather than effective regulation measures. This critique underscores the need for improved risk assessment approaches that take into account the nuanced and evolving nature of AI technologies.

A rights-based approach focused on property rights was argued to be crucial in AI regulation. New technologies, such as AI, have created new forms of property, raising discussions around ownership and control of data. Strict definitions of digital property rights were cautioned against, as they might stifle innovation. Striking a balance between protecting property rights and fostering a dynamic AI ecosystem is essential.

The importance of understanding and measuring the impact of AI within different contexts was highlighted. The need to define ways to measure AI compliance, performance, and trust in AI systems was emphasized. It was suggested that pre-normative standards could provide a helpful framework but acknowledged the lengthy time frame required for their development and establishment as standards.

Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of resources, case studies, and knowledge. The mutual benefit between academia and industry in research and development efforts was acknowledged, emphasizing the significance of partnerships for effective regulation and innovation.

In conclusion, the discussion on regulating AI delved into various challenges and considerations. Striking a balance in the regulation of generative AI, measuring risks associated with AI usage, addressing context-specific trade-offs, and promoting multi-stakeholder engagement were key points raised. The impact of data granularity, power dynamics, and the role of the private sector were also highlighted. Observations were made regarding the inadequacy of current AI regulation models, the need for a rights-based approach focused on property rights, and the importance of understanding and measuring the impact of AI within different contexts. Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the discussion to support these points.

Courtney Radsch

In the United States, there is a strong focus on developing frameworks for the governance and regulation of artificial intelligence (AI). The White House Office of Science and Technology Policy is taking steps to create a blueprint for an AI Bill of Rights, which aims to establish guidelines and protections for the responsible use of AI. The National AI Commission Act is another initiative that seeks to promote responsible AI regulation across various government agencies.

Furthermore, several states in the US have already implemented AI legislation to address the growing impact of AI in various sectors. This reflects a recognition of the need to regulate and govern AI technologies to ensure ethical and responsible practices.

However, some argue that the current AI governance efforts are not adequately addressing the issue of market power held by a small number of tech giants, namely Meta (formerly Facebook), Google, and Amazon. These companies dominate the AI foundation models and utilize aggressive tactics to acquire and control independent AI firms. This dominance extends to key cloud computing platforms, leading to self-preference of their own AI models. Critics believe that the current market structure needs to be reshaped to eliminate anti-competitive practices and foster a more balanced and competitive environment.

Another important aspect highlighted in the discussion is the need for AI governance to address the individual components of AI. This includes factors like data, computational power, software applications, and cloud computing. Current debates on AI governance mostly focus on preventing harm and exploitation, but fail to consider these integral parts of AI systems.

The technical standards set by tech communities also come under scrutiny. While standards like HTTP, HTTPS, and robot TXT have been established, concerns have been raised regarding the accumulation of rights-protected data by big tech companies without appropriate compensation. These actions have significant political and economic implications, impacting other industries and limiting the overall fairness of the system. It is argued that a more diverse representation in the tech community is needed to neutralize big tech’s unfair data advantage.

The notion of unfettered innovation is challenged, as some argue that it may not necessarily lead to positive outcomes. The regulation of AI should encompass a broader set of policy interventions that prioritize the public interest. A risk-based approach to regulation is deemed insufficient to address the complex issues associated with AI.

The importance of data is emphasized, highlighting that it extends beyond individual user data, encompassing environmental and sensor data as well. The control over and exploitation of such valuable data by larger firms requires careful consideration and regulation.

A notable challenge highlighted is the lack of oversight of powerful companies, particularly for non-EU researchers due to underfunding. This raises concerns about the suppression or burying of risky research findings by companies conducting their own risk assessments. It suggests the need for independent oversight and accountability mechanisms to ensure that substantial risks associated with AI are properly addressed.

In conclusion, the governance and regulation of AI in the United States are gaining momentum, with initiatives such as the development of an AI Bill of Rights and state-level legislation. However, there are concerns regarding the market power of tech giants, the need to focus on individual components of AI, the political and economic implications of technical standards, the lack of diversity in the tech community, and the challenges of overseeing powerful companies. These issues highlight the complexity of developing effective AI governance frameworks that strike a balance between promoting innovation, protecting the public interest, and ensuring responsible and ethical AI practices.

Session transcript

Michael Karanicolas:
Hi, Simon. How are you? Can you hear us? I can indeed. Great to see you also. And we can hear you. You can hear me? Just give me a thumbs up if you can. Can you hear? I can hear you. I’m not sure if I’m coming through on your side. Welcome. So just off the top, I want to invite folks that are sitting in the back to come join us at the table. We want this to be as interactive as possible. So please don’t be shy. It’s OK if you’re doing your emails. We won’t judge. We just want we want people to be participating in the conversation as opposed to, you know, 75 or 90 minutes of us talking at you. Welcome to today’s session, Searching for Standards, the Global Competition to Govern AI. My name is Michael Karnikolas. I’m the executive director of the UCLA Institute for Technology, Law and Policy, which is a collaboration between the School of Law and the School of Engineering. And this session is co-organized with the Yale Information Society Project and the Georgetown Institute for Technology, Law and Policy. Our objective today is to foster a conversation on the development of new regulatory trends around the world, particularly through the influence of a few major regulatory blocks, particularly China, the U.S. and the EU, whose influence is increasingly being felt globally. And the tension between rulemaking within these centers. power and the impacts of AI as they’re being felt outside of this privileged minority. As part of that conversation, we have a fantastic set of panelists. We’re not going to be setting aside a specific time at the end for Q&A. Rather, we’re hoping to run this session more as an inclusive conversation. So what that means is that after an initial round of short three-minute interventions from each of our panelists, strictly policed three minutes, we’ll have a set of discussion questions. And for each of those discussion questions, after a couple of interventions from our panel, we’re going to be inviting interventions and comments from the rest of you to engage on these questions as well. So please, again, for those of you who are just joining us in, come join us here at the table and participate. So without further ado, let’s kick things off with a set of short introductory comments from our panelists to discuss trends in AI governance related to their region and area of specialization. Out of deference to our wonderful host country, I’m going to start with Kyoko Yoshinaga, who is a project associate professor of the Graduate School of Media and Governance at Keio University and also an expert at GPAI’s Future of Work Working Group. Kyoko.

Kyoko Yoshinaga:
Thank you, Michael. Welcome to Japan. I’m Kyoko in Kyoto. Okay. So let me, first of all, give you a brief overview of AI regulations in Japan. Japan adopts soft law approach to AI governance horizontally while revising some sector specific laws. It’s not really known worldwide that Japan took the lead in introducing principles for AI research and development designed to guide related G7 and OECD discussions. In 2016, the then Internal Affairs Minister, Minister Takaichi proposed eight principles of AI R&D principles. Transparency, controllability, safety, security, privacy, ethics, user assistance, and accountability as non-binding international framework which was agreed by participating G7 and OECD countries. And it contributed to the OECD’s AI principles. Japan has social principles of human-centric AI which was developed by the Cabinet Offices Council as principles for implementing AI in AI-ready society. And there are seven principles to which society, especially state legislative and administrative bodies should pay attention and they are human-centric, education literacy, privacy, ensuring security, fair competition, fairness, accountability, and transparency, and innovation. Then we have AI R&D guidelines made in 2017 which added collaboration to the previous eight principles of AI R&D principles which I mentioned earlier for developers and business operators of AI. And we also have AI utilization guidelines which consists of 10 principles to address the dangers associated with AI systems. And it was also developed by Ministry of Internal Affairs and Communication. But these were for the developers, users, and data providers of AI. This user perspective guidelines was made because AI may change its implication and output continuously by learning from data in the process of its users, process of its uses. Also, we have governance guidelines for implementation of AI principles issued by the Ministry of Economy, Trade, and Industry, which guides how to analyze the risks associated with AI system implementations and offers some examples to help organizations adopt suggested principles. So these non-regulatory, non-binding soft laws are used by prominent Japanese companies to develop their AI policies and communicate them to external parties. As for the sector-specific, I wouldn’t go into detail right now, but Japan is amending sector-specific hard laws, such as the Act on Improving Transparency and Fairness of Digital Platforms and Financial Instruments in Exchange Act, which requires businesses to take appropriate measures and disclose information about risks. Also, for doctors, there is notification from the ministry that the doctor owes the responsibility of final decision in the treatment that uses AI. So we have soft law approach at horizontal level combined with some hard law approach by revising existing laws. Thank you.

Michael Karanicolas:
Let’s go next to Carlos Afonso Sousa, the Director of the Institute for Technology and Society of Rio de Janeiro and a professor at Rio de Janeiro State University Law School.

Carlos Affonso Souza:
So thanks, Michael. It’s a pleasure to be here among friends to discuss this very important topic on how we think about regulation of AI in the region. So in this my brief introduction, just to say a bit that it seems that even though AI like national strategies that do end up like sharing a common language, of course, like different states, they will have different priorities, they will have different approaches, and they will have, of course, different long-term visions about like how AI will end up producing relevant economic, political and cultural changes in society. And especially when we look in the region and by region, I think about Latin America, we see that different countries are looking to the issue of governance and regulation of AI. And we have for that specific purpose, Argentina, Brazil, Colombia, Peru, Mexico, all being very active in this discussion. But one thing that I would like to pinpoint that we can see right now in the region, and I think that’s something that we might scale up to a discussion to different regions, is how we’re moving through almost like this three-step process, in which it all began with a very broad ethical principles about AI, that end up turning to a second phase in which different countries end up designing their different national strategies. to think about AI and we now seems like we are in this third phase in which different countries are actually actually regulating about AI through hard law through different mechanisms and that’s I think one of the the greatest moments for us to take a look on especially because governance and regulation on itself it’s a form of technology and we need to understand how are we approaching those different topics concerning the future of AI and making sure that regulation and governance is appropriate to deal with the challenges that we’re facing right now and at the same time come up with solutions that could be future proof in terms of the challenges that we are going to face forward so I’ll just stop here in this very brief introduction but just to to take to provide this quick look on the region and seeing different countries going through this different stage on thinking about national strategies regulation and governance tools for for AI so thanks Michael perfect Courtney Raj is the director of the Center for Journalism and Liberty at

Michael Karanicolas:
the Open Markets Institute and a member of the IGF’s multi-stakeholder advisory

Courtney Radsch:
group thank you so much so in the United States the focus right now is on creating frameworks for figuring out what governance of AI should look like and what regulation should look like and I think one of the challenges is that we talk about AI as if it is a brand new thing without actually thinking about its components and breaking down what exactly it is we mean by AI including the infrastructure data cloud computing computational power as well as decision-making so right now a few of the major kind of regulatory initiatives or standard-setting initiatives include the blueprint for an AI Bill of Rights by the White House Office of Science Technology and Policy which is mainly focused on risk management and mitigation it includes a set of five principles and associated practices that are designed to help guide the design use deployment of automated systems these are like automated decision-making systems so again only one small component of AI designed to protect the rights of the American public in in this age through safe effective systems dealing with algorithmic discrimination protections data privacy notice and explanation human alternatives considerations and fallbacks and it is intended to inform policy decision and guide regulatory agencies and rulemaking but it is non-binding the OSTP is soliciting input to develop a comprehensive national AI strategy and it is focused on promoting fairness and transparency and AI meanwhile the National AI Commission Act which is a proposal that would create a 20-member multi-stakeholder Commission to explore AI regulation within the federal government itself is focused on responsible AI and specifically how the responsibility for regulation is distributed across agencies their capacity to address regulatory challenges alignment among enforcement package actions and a binding risk-based approach much like the EU I would say so there is support for the creation of a new federal agency dedicated to regulating AI which include which could include licensing activities for AI technology, although there are alternative views which think that some of this regulatory expertise should be embedded within each individual agency. There is also at the federal level the Safe Innovation Framework, which sets priorities for AI legislation, focusing on security, accountability, and protecting foundations and explainability, as well as a proposed privacy bill, the American Data Protection and Privacy Act, which would set out rules for AI, including, again, risk assessment obligations. The federal agencies are providing guidance to regulated entities. So for example, the FTC is regulating deceptive and unfair practices attributed to AI and are increasingly using their antitrust authority to impose some antitrust impositions looking at whether they can break it down some companies. I’d also just add that at least nine states have enacted AI legislation, with another 11 with proposed legislation. And we need to, I think, look at competition interventions as well, which is not yet part of the regulatory landscape, but is occurring with some court cases happening alongside these regulatory standard-setting initiatives. Thank you.

Michael Karanicolas:
So we have three fantastic panelists in the Zoom as well. Let’s go first to Simon Chesterman, who is the David Marshall Professor and Vice Provost for Educational Innovation at the National University of Singapore, as well as the Senior Director of AI Governance at AI Singapore.

Simon Chesterman:
Thanks so much, and I’m sorry not to be there in person. But coming from the Singapore and Southeast Asian perspective, I think one of the challenges that every jurisdiction is facing is we’re wary both of under-regulating and of over-regulating. and you expose your citizens to risk, over-regulate, and particularly for small jurisdictions, you risk driving innovation elsewhere. And so when the European Union adopts the AI Act, META might determine that it’s not gonna roll out threads, but it’s not gonna withdraw from that market completely. If a very small jurisdiction like Singapore adopted something like that, it might lead some of the tech companies to opt out of that jurisdiction completely. So that’s one of the sort of baseline considerations that I think is operative here. A second consideration is that in these discussions, certainly over the last eight or so years, the tendency has been to try and come up with new sets of rules, very much like sort of Isaac Asimov laws of robotics, that will address this problem of AI. But as Courtney just said, AI is not that new, and indeed laws are not that new. And I think that kind of approach often misunderstands the problem as too hard and too easy. Too hard in that it assumes that you’ve gotta come up with entirely new rules, whereas a lot of my own work has been essentially arguing that most laws can govern most AI use cases most of the time. But that approach also misunderstands the problem as being too easy, because I think it fails to understand that the real devil is in the application. It’s in the application of rules to new use cases. So in Singapore’s context, rather than coming up with a whole slew of new laws, we have had some tweaks. So for example, the Road Traffic Act had to be adjusted so that leaving a vehicle unattended wasn’t necessarily a crime, which would be a problem for autonomous vehicles and so on. But at the larger level, what we’ve really focused on is two things, human centricity and transparency. And the majority of the model AI governance framework that was adopted here back in 2019 is looking at use cases, what this actually means in practice. Because saying that AI shouldn’t be biased is merely repeating anti-discrimination laws. Discrimination should be illegal, whether it’s done by a person, a company, or by a machine. But applying that to particular use cases can be a challenge. So recently, Singapore released AI Verify, which is a tool which is intended to help companies police themselves, help organizations police themselves and determine whether or not they’re actually holding themselves up to the standards that they’ve been espousing and whether more work needs to be done. So I’m looking forward to a really interesting discussion, but I’ll hand the time back to the chair. Thank you very much.

Michael Karanicolas:
Thanks. Let’s go next to Tamiwa Elori. Tamiwa is a postdoctoral research fellow at the Center for Human Rights at the University of Pretoria. Tamiwa, are you there? Oh, yes. Yes, I am.

Tomiwa Ilori:
Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiatives in Africa on AI governance. And quickly, according to the African Observatory, on Responsible Artificial Intelligence, there are at least 466 AI policy and governance items or used in this conversation initiatives that make direct reference to AI in the African region. And it covers quite a broad period from 1960 to 2023. Those initiatives are categorized in various ways. First, some are categorized as laws, some are categorized as policies, some as reports, some as organizations or projects. Currently, across the region, there is no major treaty or law or standard when it comes to AI governance. When it comes to policies, there are just about two to three of them. And when it comes to… organizations and projects that are currently all working on AI governance, there are about 25 of them. So I wanted to give just about a high level summary of what is happening with respect to initiatives across the region. And these initiatives cover at least 17 policy areas, including access to information and accountability, data sharing and management, digital connectivity and computing and so on. Generally, these initiatives are led by government, multilateral organizations, public funded research, academia and the private sector. And the jurisdiction this initiative cover include the national, we’re looking at countries like Mauritius, Kenya and Egypt already have a kind of national AI policy. Then we have regional initiatives such as the AU Working Group on AI and also documents that refer tangentially to the regulation and governance of artificial intelligence systems, such as digital transformation strategy that covers from 2020 to 2030. Then also the African Union Data Policy Framework. Then we also have jurisdiction like global jurisdiction, like the OECD AI initiatives, and also subnational initiatives. Quickly, that said, artificial intelligence governance in Africa is still very much in its infancy. And most approaches for now are soft, but we already seen growing interest towards a hard blow approach. And that was just a recent example from, for example, the Kenyan government that has signified interest to now pass a law with respect to regulating AI systems. However, while governance may tarry for a while, interest are increasing from diverse key stakeholders such as governments, businesses, civil society, regional institutions, and many others. What this signals is that governance will not only catch up. We not only have to catch up, I mean, but when it does, it needs to be dynamic and respond to the unique challenges faced by Africa as a region in order to ensure that we do not replicate ongoing inequalities. I will stop there for now. Thank you.

Michael Karanicolas:
Thank you. And finally, let’s go to Irakli Khodeli, who is a program specialist for UNESCO to introduce their initiatives in this area.

Irakli Khodeli:
Thank you very much, Michael. Good day, everyone. Thank you for inviting UNESCO to join this panel. My name is Irakli Khodeli. As announced, I’m from the Ethics of Science and Technology team of UNESCO, and I’ll be contributing to our discussion today from a specific angle, angle of UNESCO’s recommendation on ethics of AI, focusing on the tool for its proven potential to guide countries on AI governance and AI regulation. The recommendation was adopted, so in a way, I’ll be very happy to be bringing in a global perspective on AI governance, global because the recommendation that I’ve mentioned was adopted almost two years ago by 193 member states of UNESCO. It is grounded in overarching fundamental values, such as human rights, human dignity, diversity, environmental sustainability, peaceful societies, and then these broad values are translated into 10 principles. There was a lot of mention of the principles already. There is perhaps nothing new in the UNESCO principles, either some of, for instance, Kyoko has mentioned some of the principles that were guiding the national discussions in Japan and also OECD principles were mentioned. What does make UNESCO’s framework distinctive is the specific emphasis on gender, because UNESCO believes that this should be actually disassociated from the general discussion on discrimination, because there are some specific and. and severe harms and threats to gender diversity, gender equality, and there’s also an emphasis on environmental sustainability because oftentimes in the global discussions this is under overlooked, this dimension. And then finally these values and principles are translated into concrete policy action areas by the recommendation to show the governments how you can actually operationalize these principles in specific policy contexts, whether this is education and scientific research, whether it’s economy and labor, whether it’s healthcare and social well-being, etc. There are 11 different, in communication and information, there are 11 different policy areas of the recommendation. Now there has been, as you’re aware, a lot of discussion globally focusing on the risks posed by AI, ranging from benign to catastrophic and from unintended to very much intended and deliberate harms. And we understand that the risks are significant and these risks are also cross-border. This AI also is closely related to pillars of the UN, such as sustainable development, human rights, gender equality, peace. So in this sense, a UN-led effort in our view is critical, not only because AI requires a global multilateral forum for governance, but also because unregulated AI could undermine other multilateral priorities like the sustainable development goals and others. So what I would like to postulate today in our discussion is that UNESCO’s recommendation represents a comprehensive normative background that can guide the design and operation of the global governance mechanism. I will end with saying that despite this focus on the global governance, we must admit that the successful regulation happens at the national level. Ultimately, it is the national governments that are responsible for setting up institutions and laws for AI governance. And here again, the recommendation on ethics of AI comes in handy because we are currently working with governments around the world, both in the global north and global south, to help them make a concrete use of this recommendation by reinforcing their institutions and the regulatory frameworks based on this overall ethical framework. Thank you very much. Really looking forward to engaging

Michael Karanicolas:
with these discussions with you today. Thanks. So I think that’s a fantastic framing of different initiatives as they’re taking place in different parts of the world and by different agencies. I want to start now by opening things up with a discussion of the north-south, global north-global south, majority world-minority world dynamics that are at play in the broader regulatory landscape, and particularly the pressures from standard setting emerging from major regulatory blocks and the challenges that that creates in trying to make space for, particularly for smaller nations or for voices from the majority world. Why don’t we, I think that Simon might be a good place to start there in terms of the challenges that smaller nations face in trying to make their own way from a regulatory perspective, and then we’ll maybe go to someone else from there.

Simon Chesterman:
Sure, thanks so much, and again it’s great to be part of this conversation. I think as Carlos said earlier, there are phases that countries go through. It starts with principles, but indeed those principles themselves, this sort of set of ideas that, as Irakli said now, we’ve seen in the UNESCO document, you can actually trace their origins back through primarily western technology companies. It was around 2016 to 2018 that western technology companies, partly because it was around that time that the Cambridge Analytica scandal revealed to many that the risks of errant AI went beyond a weird Amazon recommendation or a biased credit or hiring decision to actually potentially impacting elections, and now we’ve seen with generative AI everyone’s suddenly realizing that AI could actually affect their jobs. So we’ve seen this spread around the world, and it is now I think a truly global discourse, but there are three challenges I think facing small countries in particular. The first is the one I’ve already highlighted, the sort of whether to regulate, because if you’re a small jurisdiction and you regulate too quickly, one of the concerns is that all you will do is drive innovation elsewhere. That can happen to big countries as well. An example of this is when in In 2001, the United States imposed a moratorium on stem cell research, and that really just led to a lot of that research moving elsewhere. So that’s the first question, whether to regulate for fear of driving innovation elsewhere. The second is when to regulate. And here, there’s a useful idea that some of you might be familiar with called the Collingridge Dilemma. This goes back to David Collingridge’s book called The Social Control of Technology. And basically what he argued back in 1980 is that in an early stage of innovation, regulation is easy, but you don’t know what the harms are. You don’t know what you should do, and the risk of over-regulation is significant. The longer you wait, however, the clearer the harms become, but also the cost of regulation goes way up. And so I think, again, for smaller jurisdictions, there is this wariness of losing out on the benefits of artificial intelligence. And as we carry on this discussion, I do think it’s important to keep in mind that there are risks associated with over-regulating as well as under-regulating AI. The third challenge, and this faces many countries around the world, but again, in particular, smaller countries, is that in many ways, the biggest shift over the past decade of machine learning is the extent to which fundamental research as well as application has moved from public to private hands. Back 10 years ago, at the start of the machine learning revolution, a lot of the research was going on in publicly funded universities. Now, a decade later, almost all of it is happening in private institutions, in companies. And that means a couple of things. Firstly, it means it greatly shortens the speed of deployment from an idea to application. We’ve seen that in generative AI in particular. But secondly, again, it limits the ability of governments to constrain behavior, to nudge behavior, or even to be involved in the deployment cycle. So with those ideas, I’ll hold off, but again, I’m really looking forward to an exchange of views. Thank you.

Michael Karanicolas:
Let’s go to Carlos next, and then I would like to hear from someone in the room. So please express your interest now if you’re interested in intervening in this.

Carlos Affonso Souza:
And since we’re talking about regulation, of course, architecture is a form of regulation as well, and the architecture of this room might be non-inviting for people to come up and provide their ideas and to join the conversation. So please feel free to do that. And just to offer a quick segue on what Simon was saying, one challenge that we might face when we think about governance and regulation of AI in the majority world is that AI might be invisible, might be something really ethereal, hard to grasp on how regulators could enter into this discussion to begin with. And of course, that will end up creating the effects that examples that we can take out from different countries that have already regulated this topic end up as creating a very strong influence on the models, the categories, the way in which these conversations is actually being set up in those countries. And we now face a challenge because we want countries from the majority world to be protagonists in the discussion about governance and regulation of AI. But at the same time, when we think about especially the largest countries in the majority world, they end up serving mostly as a resource for users of the AI most famous applications than anything else. And this is something that we need to pay extra attention because when we think about the regulation and the governance of AI, we need to think about what we are trying to communicate, what are we addressing? Because one thing is the deployments and the creation and the design. of AI. And another thing is the usage of this AI application. And when it comes to the majority world, we can see pretty often that the applications were not going to be designed, created in those countries, but they will be used heavily in those countries. So it’s quite obvious that the discussion about how to regulate not only the creation, but also the use of those applications will be key for the successful experiences of those initiatives on regulation and governance. I’ll just stop here, Michael.

Michael Karanicolas:
Perfect. Let’s go here to the back and then over this way afterwards.

Audience:
Thank you so much, Dr. Ali Mahmood. I’m from Pakistan. I’m heading a provincial government entity that is involved in policymaking. It was interesting to listen to Simon who mentioned that under-regulation and over-regulation, that has to be a balance. And the thing is that we have a national AI policy. It’s in the draft stage and we’re currently getting input from a lot of stakeholders. Now, it does touch upon the aspect of generative AI because it’s the newer phenomena, but it’s had a really disruptive effect in so many ways. And we talk about the ethical use of AI, but as long as generative AI, for example, I’ll consider a use case that as long as it’s assistive in nature, it’s acceptable. But beyond that, it can be considered unethical. So I just want to learn from the panelists that how can we strike a balance over there? Because at the government level, if we look at the education sector, there are a lot of problems already being raised by different government institutions, educational institutions, universities, that generative AI is misused. So at a policy level, I would like to know, I mean, how can we address this problem? Thank you. All right. Thanks very much. Sliming Zhu from Australian National Science Agency. So we basically, our staff chairs Australian’s AI standards body, and also we developed Australian’s ESCO AI principles back in 2019. So as a science agency, interestingly, we are not allowed to comment on policy and regulation because we provide scientific evidence into these policy discussions. But I think I want to raise two points. One is no matter the standardization and the policy and the regulation, you need to measure the risks, the size of the risks. And that’s a scientific question. And we need to have an international research alliance on how to measure those risks. Once you fully understand how to measure those risks, then you can probably reduce those risks and make a very informed decision. The second point I want to make is, a lot of these are trade-offs decisions. Only those risks are well understood and measured, you can make those trade-offs. For example, when US Statistical Bureau released their data for further research, they had to make a very concrete trade-off between data utility and privacy. You have to make a conscious decision to sacrifice some privacy for the gain of some benefits. And that informed decision is done by stakeholder groups from privacy advocates. But it’s even more complex than that. After Lady Dice, I think there are studies coming out to say, actually, privacy, utility, and fairness, they all have trade-offs. Having privacy-preserving approaches sometimes harm fairness and sometimes promote fairness. You have to have the fairness foundation for that as well. And many of these are context-driven. I don’t know whether people have seen the recent CHATGBD vision system model. And they basically said, one of the use cases they have is blind people wanting to have face recognition. That’s the number one feature they requested. Because blind people do say, I just want to have the same level of ability of normal humans in a room to recognize face. But based on face recognition risks and the various legislations, they are not allowed to have that. And that’s the number one feature they have requested. But I think that’s an interesting discussion to say how the standardization of policy and the science will enable this kind of trade-off decisions. Thank you. Milton, did you? So I just wanted to raise a question about something. I can’t remember which panelist said that when we talk about regulation, we’re necessarily talking fundamentally about national governments. And if you look at what AI consists of, break it down into its component parts, as Courtney said, you’re looking at a combination, really, of data resources, software programs, networks, and devices, computing devices. And all of those are globalized markets. And we… with the internet, and here’s where I’m trying to create a link to internet governance, which is what this forum is supposed to be about, although we might wanna rename it the AIGF. The internet makes it all very easy to distribute applications and to distribute data resources very quickly and hard to control. So I understand that many forms of applications will be regulated at the national level. Like medical devices or something, where you have a nicely defined thing, but AI as a whole is going to be a very globalized form of human interaction, and I don’t think that national governments are all going to solve this by themselves.

Michael Karanicolas:
Let’s hear from Tamiwa, and then maybe one more intervention, and then we’ll move on to the next question.

Tomiwa Ilori:
Thank you very much, Michael. And discussing not-so dynamics, especially from an African perspective when it comes to AI governance, for me, I think the race towards global AI governance will favor the boat. And while I will not delve into the ethics of that sentence, it is the reality, especially in Africa, especially with how the region is often bedeviled with importation, especially of standards, and sometimes even being referred to only as standard stakeholders, not people who design standards for themselves. And we know in international law, as it is in international politics, smaller nations are seldom bold, and they often end up as pawns or testing grounds for bad governance attempts. However, in my view, smaller nations in this context can be bold if. if they strategize and work together with like-minded initiatives or systems. And when I use the word small, I also use small in terms of progress with AI governance and initiatives on the ground. The way I see it, it is a long way for a small nation to move alone, but that journey towards responsible AI governance could be shorter if we work with others who may share maybe similar goals and intended results. That would be my quick contribution on that. Thanks.

Michael Karanicolas:
So let’s go to one more intervention from the room and then I’m going to move on to it. Yeah.

Audience:
Janet Hoffman, Germany. I have a question to Simon Chesterman on the situation in Singapore. You pointed out that Singapore is a small jurisdiction and thereby always face the risk of driving companies out of the country. But I was thinking of the fact that Singapore has quite a number of really successful companies under public ownership. So I was wondering whether that not creates perfect conditions for regulatory sandboxes where you can in fact test what type of regulation works and what effects it has on the companies.

Michael Karanicolas:
Sure. Simon, did you want to respond to that? Sure.

Simon Chesterman:
So it’s a great question. And indeed, regulatory sandboxes is something we’ve been exploring in particular in the fintech sector. The Monetary Authority of Singapore has used this technique, which is not unique to Singapore. The basic idea is you give a kind of safe regulatory playground where there are reduced risks that enables companies to test out new use cases. But the larger point about the danger of driving innovation. elsewhere really is a concern not limited to Singapore’s domestic economy, but to attracting the big tech companies apart from anything else to Singapore, which we saw 11 years ago when Singapore adopted the Personal Data Protection Act. Its legislation was specifically said to be aiming at balancing the rights of users against the legitimate needs of business. And so I think the combination of the small size, the openness to the world and the regulatory flexibility of a country like Singapore does give us an opportunity, but we’ve still got to operate within those kind of constraints. Maybe if it’s appropriate, I can very quickly just respond to earlier comments. I didn’t catch his name, but the gentleman from Pakistan. One of the key arguments that I think needs to be spread around the world about the use of generative AI is that if you’re going to use these things in particular, if you can use them in a public sector context, you’ve got to keep in mind two things. Firstly, that if you share data with these generative AI systems like Chattopadhyay and similar capabilities, you’re essentially sharing that data with private agencies. So you need to be very careful what you share. The second is it needs to be clear that whatever comes out of it, if you use that, you are responsible for it. And then really quickly, Li Ming, great to see you at a distance even, and I think it was Milton, I’d link both those two to say that the levels of regulation that we’re talking about, you need to think of three. We do need the regulatory hammer. As Irakli said, you do need states are the only entities with real coercive powers that are going to be essential for harsh regulation when that’s needed, and that’s going to be an important level. But above and below that, you also need self-regulation, you need industry standards, you need interoperability that will in practice be the most common form of regulatory intervention, that kind of standard setting. do need some measure of coordination, not just coordination of standards, which is what I think Milton was talking about, but also to Li-Ming’s point, you need the ability to share information about crises. And so I won’t get into it now, but elsewhere I’ve written, as others have, about possible comparisons with the International Atomic Energy Agency and the efforts to share information about safe uses of nuclear power in exchange for a promise not to weaponize that technology. I want to pick up on something on what you mentioned previously about the

Michael Karanicolas:
Collingridge dilemma for the next question. We’re in relatively early phases of it. There are a lot of unknowns of this technology, but there are also a lot of clear manifestations of potential and existing harms. So the regulatory questions are certainly not speculative on this, but we are in the relatively early phases of implementation, wide-scale implementation at least. Are there lessons to be drawn from previous eras of tech governance in how we approach the regulatory picture? Are there successes and failures of previous regulatory frameworks that can teach us about what works and what doesn’t? Maybe I’ll go to Courtney on this first. Thanks so much.

Courtney Radsch:
Yeah, so my work is primarily focused on the so-called global south or majority world with a focus on the Middle East. And I think if you look at previous eras of tech governance, whether social media, search, app stores, online marketplaces, even standards, they were all rolled out and they remain controlled by a few monopolistic tech firms. And so we need to really take this as instructive. The debate about AI governance has failed to grapple with the issue of market power. We are taking the economic ownership and control of AI as a given. And while the discussions around how to prevent AI from inflicting harm are important and the issues of preventing exploitation and discrimination are absolutely necessary, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market. I think that is the biggest takeaway. No matter how well we design our rules, we will struggle to enforce them effectively on corporations that are too large to control, that can treat fines as the cost of doing business, and that can decide to simply, for example, censor news in an entire country if they don’t want to comply with the law, as we saw Meta do in Canada recently. So I think that we have to look at AI again in its component parts, as I mentioned earlier, and think about the dominance that we’re already seeing by literally a handful of big tech firms that are providing the leading AI AI foundation models, they are taking aggressive steps to co-opt independent rivals through investment, partnership agreements, and their dominance over, for example, key cloud computing platforms. We know that between Meta and Google and Amazon, for example, nearly a thousand startup firms were bought with no merger oversight, no FTC intervention. This has to change because I think, as we’ve discussed the kind of small, large divide, the big economy, small economies, this is kind of relevant, but also irrelevant when you have massive firms that are creating new capabilities, creating new technologies that national governments do not have power over. And so I think that we have to look at reshaping the structure of markets and ensuring that we crack down on anti-competitive practices in the cloud market, look at common carrier rules so that, for example, regulators should be considering forcing Microsoft, Amazon, and Google to divest their cloud businesses in order to eliminate the conflict of interest that incentivize them to self-preference, for example, their own AI models over those of rivals as we have seen in app stores, in search, in the way that Amazon constrains and forces small businesses to comply with the rules it sets because if you’re not on Amazon marketplace, it’s very hard for you to do business. In many countries, if you’re not on Google search, you might as well not exist if you’re a news organization. So there’s much to be learned, but we need to get out of this idea that this is somehow some new, really scary thing that we’re trying to govern. I mean, like, all right, again, the components, data, how do we separate out data, computational power, software applications, cloud computing, and think about each of those component parts as well as thinking about risk assessments, these more risk frameworks that are really at a far end of the application layer and implications of a certain subset of AI systems.

Michael Karanicolas:
The multidimensional nature of how power is concentrating is certainly well taken. Let’s, if we’re thinking about a longer view of technological developments and regulation, maybe let’s go back to Iraq, UNESCO, who certainly has been present through, UNESCO at least has been present through a lot of these different areas of governance and would be interested to hear their thoughts.

Irakli Khodeli:
Sure, thank you very much, Michael, and also to all the other participants. participants for very insightful comments and discussion. This is actually a really nice question for me to also get back to something that Milton has mentioned in terms of the difficulty for member states to govern something that is so cross-border in nature, and that has to do with things like the flow of data across borders, internet, et cetera. Because that relates to how have we, are there cases where we have successfully regulated an emerging technology? And my answer there goes that you need to have, and I might be reiterating some of the points that Simon has made and other speakers have made, is that the successful regulation of any technology, in our view, takes regulatory frameworks existing at different levels. Of course, at the global level, that’s precisely, again, responding to Milton’s questions, that’s precisely why you need a global governance mechanism that coordinates and ensures compatibility and interoperability between different layers of regulation. Usually at the global level, you have the softest level of regulation, it could be a declaration, it could be a recommendation, but it could also be a convention, which would be a more binding document. And this is what, at the international level right now, at the UN level, that’s what the conversation is about, is what kind of regulatory mechanism that you have. And let’s not forget the importance of the regional organizations and regional arrangements. Of course, European Union comes immediately to mind, and it has been mentioned many, many times, but ideally, we would want to have the same type of movement within African Union, within ASEAN. in Asia, Council of Europe already has a movement, a concrete process towards an instrument, of course OECD, so a regional organization, regional regulation would be very important. Then again national, we cannot avoid the fact that whether it comes to redressing cases where harm has been done or enforcement of different mechanisms, then the national level is indispensable. And let’s not also forget sub-national level. Courtney has mentioned for instance a lot of state-level activity on AI regulation. We’re also aware that in other countries also similar processes exist. In India for instance there is a lot of legislative activism at the state level, below the national level. So all these different levels I think can effectively work together to regulate the technology. And I’ll end with a concrete example. Bioethics for instance is something that UNESCO has been engaged for a long time. Simon has mentioned stem cell research. Perhaps that is not the best specific example because maybe for the US that is an example of over-regulation. But bioethics, so we have the Universal Declaration on Bioethics and Human Rights at UNESCO that all member states, all countries around the world basically, have signed on to. Then you have an example of a more stronger framework, Oviedo Convention of the Council of Europe, also in bioethics that provides more stringent framework. And then that is translated into specific very binding and strong regulation at the country level in the European countries to protect people against the risks that are emerging from biological and medical sciences technologies. So that’s a concrete example, thanks.

Michael Karanicolas:
Yeah, I think that the structural framing is helpful. I would add trade associations as well and private sector standard-setting bodies as well, which can be enormously influential. I’ll also note though that you talk about how these different levels of regulation, these different structures can work together. They can also compete and work at cross-purposes, which I think adds interesting dimension to how norms get set and applied. Let’s go to another comment in the room and then to Carlos.

Audience:
Can you use this one? Just a moment. We have lots of microphones right here, probably too many. We can diffuse them out. So I think the microphone was broken, it was not just my fault. Anyway, Ingrid Volkmann, University of Melbourne, Digital Policy. I think this debate about power is really interesting. And it’s power between, or new power dynamics rather, between the global North and South, with the Western companies producing a lot of data across the world, etc. We’ve addressed that. But I think there is another dimension and that is the granulation of data. Because in the global South, perhaps there is not the same quality of fine granulated data that’s available in the global North. So through that process alone, I think there is a lot of risk that could be produced through AI. And I don’t know a solution to that. I know that the ITU has a lot of initiatives around AI for good, with farming and medical, etc. in the global South. But I think this issue of data granulation is perhaps also another one that could be addressed in the power debate we’re having. Thank you.

Carlos Affonso Souza:
Super quickly, since we are discussing what are the lessons learned on the at least 25 years of thinking about internet regulation, maybe one thing we should take into account is that copyright and freedom of expression were the two issues addressed early on in the regulation of the internet. And by the time social media ended up appearing in the global scenario, we had this surge of personal data protection laws that was fundamental for us to understand what internet regulation looks like in the last decade. So when we shift gears into the discussion about AI regulation, it looks like we have at least two very interesting questions in comparison to this experience that we had with the regulation of the internet. So first one is like how much the modeling of personal data protection laws such as concerning about risk analysis, we will end up influencing the way in which AI regulation will be shaped. And the second one is the decisions that were end up being taken on the issue of platform liability in different countries in different regions. And how can we to on a certain way, take that into the discussion about the damages caused by AI? Because it looks like, first of all, we need to ask ourselves what type of AI we’re talking about, what type of damages are we talking about. And especially, I think we have an entirely different discussion when it comes to AI. Because when it comes to AI, we have this opportunistic discussion in which if the AI application end up causing trouble and damage to other people, the AI application is dumb. And you have, sorry, quite the opposite. The AI application is super smart. And the robot, the application, decides on itself to cause the harm. But on the opposite end, when the AI application end up providing you profits, such as in this discussion about copyright, you want the machine or the application to be as dumb as possible. So you as a developer, you as the deployer of the AI, you end up having all the profits of having this type of application out there in the market. So I think this is a type of discussion that we didn’t have back then in the discussion about internet regulation. And this is very unique to the discussions that we have right now on AI, this opportunistic usage of the autonomy of the AI application. I think this put us in a very different set of questions.

Michael Karanicolas:
I think the copyright example, anytime you’re talking about learning from previous eras of regulation, the copyright example is incredibly salient. And I would say, I think that even today, enforcement of IP rights online today is vastly stronger than, say, enforcement of privacy rights. And the reason for that, it’s entirely a legacy of the early prioritization of harms that were viewed as the most pressing and the most urgent to address early on in regulatory efforts. And I think the point about needing to be deliberate and careful in selecting how harms are understood and prioritized in the current regulatory, as we grapple with developing new regulations now, is incredibly important. Because this will ripple forward over time as these technologies continue to proliferate.

Courtney Radsch:
Yeah, just to build on that, I mean, I think we have to think about also that there’s a political economy of the protocols that are created by technical standards setting bodies. This idea that, for example, robot TXT, that’s a standard, or HTTP versus HTTPS, are standards that were created in technical communities without necessarily considering the political economic implications of the abilities that were being created through those standards. And so to build on your two points here on copyright, the ability to just hoover up all of this rights-protected data to create large language models without any compensation to content creators, news producers, et cetera, has huge repercussions on the economics of certain industries. In my own work, I focus on the journalism industry, but also then on broader society, work, et cetera. And so I think we need to take that into consideration, that technical standards are not neutral. They have political economic impacts. And we have to think about neutralizing big tech’s unfairly acquired data advantage proactively. And we should think about the fact that a representative from Meta stood on the AI high level panel as a representative. We are recreating a lot of the problems of the past by elevating the same big tech companies versus seeing a greater diversity of technology and technical community. You have an overabundance of the big tech representatives and corporate tech representatives in a lot of the multi-stakeholder processes. So we need to, I think, reorient that.

Michael Karanicolas:
OK, so we’re doing a lot of beating up on big tech and the tech sector at the moment. So let me ask then, the next question I want to get to is, what is the appropriate role for industry in regulation and standard setting? What does it mean to have meaningful multi-stakeholder engagement? I want to go to the room. So we have one over there. And I want to know, is anybody here from either industry, from tech sector, who could contribute as well, either here or from the back? No? Yes, well, let’s go to the corner first.

Audience:
Can you hear me now? Okay. This is Guo Wu, actually from TWIGF. And it’s kind of interesting. I learned the natural language processing in 1980. I don’t know how many of you is, you know, learning the natural language processing in the early days. And it’s really a big difference between the 1984 and now. Because in the early days, we are talking about the algorithm, but these days we are talking about a massive database. You know, the machine is learning from the massive database. But I don’t know anybody who do have a study in this kind of situation. Because today, the AI is learning from massive of the database. But the problem is, let’s try to think about a two group. A group is a group of the people that produce the huge data. And such the machine can learn from this huge data from the group A. And think about it as another group of the people. They don’t produce a lot of data. Let’s mean machine cannot learn enough from this group B. Now when the AI machine, after this whole study, and then try to generate their comment or whatever the result, is that it’s possible because this data is less, this is more. So there would be generate kind of discrimination. to the good B and prefer the good B. I don’t know anybody study such of a case. Yeah, I’m gonna make one more call for anybody from private sector industry to discuss the role of the private sector in regulation, in crafting good regulations and standard setting. If nobody speaks up, you don’t get to complain if our outcome document is not fair. So well, let me frame it this way then. I mean, we hear a lot about multi-stakeholderism. What does it mean to have a meaningful multi-stakeholder process in terms of crafting either standards or regulation in this space? Why don’t we go to Kyoko first and then someone in the room who would like to chime in or in the Zoom.

Kyoko Yoshinaga:
So let me talk about the industry role. Industry should consider developing or using responsible AI as part of their corporate social responsibility or as part of their environmental, social and governance, ESG practices. Since the way in which they develop and sell or use AI will have huge impact on society as a whole. So I would like to point three main things what organizations can do. One is to create guidelines on the development and use of AI, including a code of conduct, internal R&D guidelines and AI utilization principles and provide publicly accessible documents such as AI policy on how the organization develops and utilizes AI systems. Like we did for privacy policy. And this is very meaningful. And I know this because I was working in a think tank developing AI. AI systems. I was in charge of AI risk management and compliance. But when we made those and we made AI policy publicly available, all the people involved in this developing AI process will have responsible, will be really responsible in making this ethically. And so it’s like a manifestation, but I think this manifestation is very important and effective for the developer companies and user companies to be responsible for making and using AI appropriately. And many companies in Japan, like Sony, Fujitsu, NEC, NTT data, they already developed AI policies based on the guidelines, which I mentioned at the beginning. And it seems to be working well, even if we have non-binding guidelines, software approach. I’m seeing a similar situation now of what I’ve seen back in 2005. In 2005, Ministry of Economy, Trade, and Industry created what we call Information Security Governance Policy Framework. At that time, there were many information security incidents and the government realized that we need to do something. And I was in the think tank assisting to make that Information Security Governance Policy Framework. But we made three tools for establishing information security governance. One is we made an information security benchmark to help organization rigorously and comprehensively self-assess the gap between their organization’s current condition and the desirable level of information security. Second, we made a model for information security reporting to encourage companies to disclose their information security efforts. And three, we made a guideline for business continuity planning to encourage companies to develop such plans. Now, this initiative has led many companies to build robust information security governance. So in the context of AI governance, creating informed frameworks may encourage management to establish robust AI governance within their organization, perhaps functioning as part of their ESG efforts.

Michael Karanicolas:
So let’s go to Simon next, and then someone else in the room if they wanna join in, if they wanna chime in.

Simon Chesterman:
Thanks, yeah, on the role of companies, I do think it’s sort of amazing how things have changed. So back in 2011, Ryan Kala, who’s a great scholar in this area, wrote something very silly, I think, where he argued that in order to encourage research into AI, we needed to give companies immunity from suit. Otherwise, the risks would be so great that they wouldn’t innovate. Now, clearly that hasn’t happened. Jump forward to today, and you’ve got companies lining up to call for regulation. But they’re doing that for at least three reasons. One reason is, I think many of them do actually accept that some regulation would be useful. Second, they know that some kind of regulation is coming, they’d like to be part of that conversation. But thirdly, especially for the big market players, they know that if the regulatory costs go up, that becomes additional barriers to entry for their competitors, so it’s good for them. So by all means, I think it’s important to involve companies in these processes. And I echo what Kiyoko and others have said about the importance of standards, the importance of these emerging interoperable standards is gonna be very, very important. But we’re also gonna be clear-eyed about the incentives that drive these companies, which is to make money, which is one reason why a lot of this stuff that’s being deployed now is really, seems to be making money in two ways. that have been revealed as the sort of money-making aspects of AI. The first is to monetize human attention, and we’ve seen that through the surveillance capitalism, the experience of social media. And the second is to replace human labor. And so for all these reasons, I do think it’s important to involve companies, but also to understand that their role is, yes, they’ve got to pay attention to ESG and so on, the triple bottom line, but ultimately they are businesses. And if we, the community, or if regulators make a determination that these companies are too big, then it’s necessary to either… You’ve got three choices. You’ve got the litigation path, which the US is going down at the moment with the slim, slim possibility that some of these FTC or DOJ actions might actually lead up to the breakup of companies. You’ve got the European approach, which is to say, okay, we’re just going to identify gatekeepers and these six companies are now going to be subject to a much heavier regulation. Or you’ve got the Chinese approach, which is to say, well, just through executive action, Alibaba is going to be broken up into six companies and address the problem that way. So yes, by all means, I think it’s important to involve companies, but also to understand their perspectives, where they’re coming from, and not expect them to be turkeys that vote in favor of Christmas.

Michael Karanicolas:
Yeah, I think that that leads pretty neatly to the next question that I wanted to raise, which is related to risk-based versus rights-based approaches towards regulation and challenges of, not to say self-regulation, because I think that there’s probably going to be good consensus in this room and in most rooms that we don’t want to, we’re not satisfied with a self-regulatory solution, but the emphasis within a lot of early regulatory models on self-assessment and risk assessment as a critical component of regulatory structures. That’s an important part of the EU’s regulation in this space, draft regulation in this space. In the US, the AI Commission Bill has explicitly endorsed a risk-based assessment model. Are there thoughts on challenges and the role of this kind of assessment in effective regulation, the role of companies in carrying out these assessments, challenges in finding an effective avenue towards developing an effective framework if it relies on internal assessments by companies and the need to develop that in a robust way? Melton?

Audience:
Oh, yeah. Well, I think the risk assessment approach that is in one American bill and in the European bills are kind of a joke. Basically, they are asking for self-assessments. And this is not because I’m anti-industry and don’t trust them to do this. I think it’s just going to be a ticking-the-box exercise. And the point that I think we need to think about is that you don’t know what the risks are going to be in many cases. These things don’t exist yet, right? And so that’s what makes me laugh about the European model. So people are supposed to sort themselves into the five different risk levels. But how do you know what the risk is until it happens? And so I don’t believe in this kind of ex-ante forms of regulation where the government pretends that it is all-knowing and it thinks it can decide. And I’d like to bring your attention more to a rights-based approach based on property rights, which is whenever you have a new technology, you create new forms of property. So we saw this in the domain name industry, Michael. We saw these things were nothing. They were given out for free. And then suddenly, they were valuable. And they conflicted with trademark rights. So we had a policymaking process ex-post where we figured out who had a right to what. And now with so-called surveillance capitalism, we are discovering the value of data resources. And we have to renegotiate the boundary between who owns or who controls what data when users interact with platforms. And I think it’s a mistake to view that as this extraction process where a helpless human just gets data taken away from them by you’re engaged in an exchange. You are getting something. And you are giving up something. And we have to decide how that data gets monitored and owned and regulated. And that’s not an easy problem. So I would think that with AI, the issue is going to be a lot about property rights. And it’s interesting to see how we’re replaying some of the conservative protectionist stuff about a copyright now. Remember, when we started the internet, some copyright people were saying, every time you move a file from one server to another, you’re making an illegitimate copy. And that would have killed the internet. They wanted the definition of property rights in digital items to be so strict that we simply would not have had an internet.

Michael Karanicolas:
So we have to be careful about that. I’ll also say it’s interesting because regulatory ambiguity in other use cases can lead to overly cautious approaches if it’s accompanied by either aggressive enforcement and or coercion. or extremely severe penalties. So in the speech realm, a vague law is always viewed as being really dangerous because if it can be aggressively enforced, people see a really clear of the line. But it’s just not clear to me that any of the proposed AI regulatory frameworks would incorporate or could incorporate that level of enforcement. And so that’s why I think it’s interesting that ambiguity can work both ways, but it’s unlikely to work that way in this context. Let’s go, Courtney, and then, yeah.

Courtney Radsch:
Yeah, I think one of the problems, to definitely agree with Milton on the risk-based approach, you just don’t know, and that limits what you’re even talking about. We’re not talking about, for example, regulation that is aimed at reclassifying some types of companies as common carriers or having public utility type requirements. Common carrier requirements, for example. I think that, I mean, yes, the way that we address property rights on the internet gave rise to the internet. On the other hand, the way that we implemented some of those copyrights or lack thereof, and some of the digital advertising structures that emerged has killed off a large part of the news media industry, which is considered an essential component to democratic systems. So there is a trade-off. I think we have to think unfettered innovation is not necessarily good. So I think that the rights versus risk-based assessment does not get to many of the issues at stake. We talk a lot about individual level data. I think you’re right, Milton, you know this. User data, there is some exchange, but there’s a lot of data that is not individual data. It’s sensor data, it’s environmental data, it is data about movement and data about data. That is also incredibly valuable and currently dominated again by larger firms that have more access to data, et cetera. So we have to, I think really, I feel like the rights and the risk-based approach are important for a specific subset of AI, particularly when we’re talking about maybe generative AI, decision-making AI systems in certain sectors, but that is only a small component of AI. And we have to think about public interest-oriented regulation and a wider set of policy interventions.

Michael Karanicolas:
So let’s, did you raise your, oh, I’m sorry. Oh, yeah.

Audience:
Thank you. I wanted to respond to Milton and politely disagree. First of all, I mean, no, that’s nothing new. It, my objection concerns the fact that you think it’s ridiculous to ask platforms to assess the risks. First of all, all companies have lots of experience with risk modeling as a technique. It’s not new to them. They’re used to doing that. And now they are asked to assess the risks vis-a-vis some fairly specific groups, vulnerable groups, to wellbeing, to all sorts of things. And as we know, through various leaks, they know themselves what they’re doing to specific user groups. Also, that is not new to them. And finally, the DSA gives researchers now privileged access to data produced by platforms in the area of risk. And I think it will be possible to some extent for research to assess how platforms assess the risks they impose on societies. It will be, I think, fairly interesting to see, particularly how platforms deal with the question of general risks to society. I have no clue how they’re going to operationalize that. They will have to do that to tick boxes, but there are research groups that will be able to hold them to account on the ways how they approach this problem.

Michael Karanicolas:
Yeah, I think a lot of us poor non-EU researchers are jealous of our colleagues that are gonna be able to do some really interesting research based on that. I have-

Courtney Radsch:
At least funding to do that, right? You’re relying on underfunded civil society and academia to provide oversight of powerful, wealthy companies that do their own risk assessments, but may fire the people who find the risks or bury that research. So it’s not a perfect solution.

Michael Karanicolas:
All right, so let’s… So Carlos, did you want to enter in? And then I think we have a comment on the call.

Carlos Affonso Souza:
So just very quickly to react to Milton’s provocative comments on the status of regulation. And I think there’s something for us to take into account here is that for countries to have something about regulation of AI, it’s almost like a brand, a signal of being part of a group that is thinking about the future. And that’s been leading us to situations in which we come up with regulation that are far from perfect, but we keep hearing that in different countries, in different discussions, people say it’s better than nothing. So I think this is a moment in which we should think about, should we be happy with having something that is like better than nothing, just to be part of a group of countries that have already. connected to something, I think quite the opposite. We are in a very important moment in which we could learn from the experience from abroad, coming up with lessons learned and best practices to come with interesting and innovative solutions. But just to react to Milton’s comments, I think that when we look to some, especially thinking about the influence of the European solution to some of those topics, we have shadows of the European solutions appearing in different countries, solutions that might not even function properly, but legislators will say, hey, we have done something. So by the end of the day, better than nothing.

Michael Karanicolas:
Yeah, I think there’s an interesting tension between the need for, the undoubted need and benefits of engagement and mutual learning and sharing best practices and the importance of factoring local contexts into regulatory processes, right? Like obviously I don’t know that the world benefits from 195 different, radically different frameworks, but it can also be problematic when countries sort of cut and paste, say an EU model or an American model into their local context, which can either lack an appropriate regulatory structure of related legislation, right? Like the EU Act, but without the GDPR and without the DSA and without the Digital Markets Act that are also important components of the same regulatory ecosystem, or that just import a conception of harm that’s not necessarily fit for purpose based on a local context. We had a comment in the corner.

Audience:
Thank you. My name is Sonny. I’m from the National Physical Laboratory of the United Kingdom. There’s a few words, and I thought we were drilling towards what I was trying to say earlier on, but it’s actually maybe helped me a bit more now that everyone else has spoken. So the word assessment, I think is where I’m going to start. And so that’s expressly a measurement activity is probably where I’d start with that one. So how are we going to measure all these things? So how are we going to measure compliance, performance? How am I going to measure how I can trust a system or that it’s safe depending on the context, because every context is different. So there’s a lot of thought. It’s a bit of a paradigm shift that’s coming, and it’s coming in our part of the world as well as just generally. And when I mean our part of the world, I mean the measurement part of the world rather than any geographical part. So how do we measure these things? And before standards, things called pre-normative standards, which can be anywhere from two to 20 years in development before you get to the standard. So this is where how you can measure what it says on the tip. And so there’s a lot of work that needs to go from that side of things. And so the kind of work that missed us in America is what MPL does in the UK. So there are a hundred signatories to this thing around world nation states. So there could be an interesting platform where some collaborations and multi-stakeholder approach occurs. And the reason I say that is the organizations like us, we sit on that cusp between industry, academia, and civil society. And then just to hit on the, what can industry do for us part is we need to collaborate with industry. They provide access. They will provide access to resources, be that compute, be it to the models. They bring us access to case studies and use cases. And also that knowledge and understanding, we can help them, they can help us to help them. Because then we open things up and we get different lenses can be supported on various things. And then the last thing is this context thing. So it’s not just about quantitative measurement, it’s about qualitative. So what does a socio-technical test bed to measure the trustworthy outputs of AI actually look like? that’s something that the world needs to work on together. Thank you. Thanks.

Kyoko Yoshinaga:
Yes, I understand that the EU, US and Japan are all taking risk-based approach right now. And I think it is important to examine what the risks are beforehand. But regulating this risk precautionary with hard law is somewhat dangerous. Because the risks varies according to context. Also, the level of AI technology varies among countries. And so, we should not impose hard law to other countries. But to agree with the basic principles and leave it to each country to decide whether to take hard law or software approaches. Like factors like corporate culture, if the companies are obedient or not, safety, the level of technology, it should all be taken into account how to regulate AI. For example, one of the threats caused by AI is the intrusion to privacy. For example, surveillance or real-time biometric ID systems. In that case, it is important to have personal data protection law. And these factors vary among countries. So, we should not say this law is better than the other. I think each government should make regulations on their own way considering these factors in their own context. Thanks. So, that just about takes us to time. I was a bit daunted when I saw

Michael Karanicolas:
the IGF schedule get released and saw that there were so many different sessions on AI. I’m not going to say that ours is the best. I might put that in our outcome report. But I certainly I learned a lot from the perspectives expressed here, both among our panelists and from the rest of you in the room. And I think that it’s an incredibly important conversation given both the importance and urgency of these challenges and this unusual combination of having something that urgently needs attention, but it’s also incredibly important to get right. So thanks again to all of our panelists. Thanks again to all of you who participated and I look forward to keeping the conversation going. Thanks, Michael. Thanks everyone. Thanks, Michael. Thank you. Have a nice day.

Audience

Speech speed

172 words per minute

Speech length

2755 words

Speech time

964 secs

Carlos Affonso Souza

Speech speed

150 words per minute

Speech length

1572 words

Speech time

627 secs

Courtney Radsch

Speech speed

166 words per minute

Speech length

1868 words

Speech time

674 secs

Irakli Khodeli

Speech speed

138 words per minute

Speech length

1211 words

Speech time

527 secs

Kyoko Yoshinaga

Speech speed

124 words per minute

Speech length

1186 words

Speech time

574 secs

Michael Karanicolas

Speech speed

159 words per minute

Speech length

2197 words

Speech time

828 secs

Simon Chesterman

Speech speed

201 words per minute

Speech length

2278 words

Speech time

681 secs

Tomiwa Ilori

Speech speed

141 words per minute

Speech length

743 words

Speech time

316 secs

Scramble for Internet: you snooze, you lose | IGF 2023 WS #496

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator 2

The discussions held at the Internet Governance Forum shed light on the ongoing struggle of Global South countries to ensure internet access and treat it as a basic human right. These discussions reveal a disparity in approaches to internet access and function between the Global North and the Global South. While the Global North countries have a different approach towards internet access, the Global South representatives focus on the fundamental aspects of internet functioning and the importance of internet as a fundamental human right. This discrepancy in perspectives highlights the disparities and challenges faced by countries in ensuring equal access to the internet.

Furthermore, following its withdrawal from the G8 in 2014, Russia has shifted towards aligning more with the Global South. Although specific reasons for this shift are not mentioned, this change in alignment could potentially impact Russia’s stance on global issues and its interactions with other countries in the future.

The discussions at the Internet Governance Forum offer a vital platform to address the crucial issues related to internet access and governance. By acknowledging and understanding the differing perspectives and challenges faced by countries in the Global South, there is an opportunity to bridge the digital divide and promote equal and inclusive access to the internet for individuals worldwide.

Moderator 1

The perspective of the global South is essential in discussions about fragmentation, particularly regarding technology and infrastructure issues. These countries often face challenges due to vulnerable infrastructure and poor internet governance, which can lead to frequent internet shutdowns. Such disruptions can have significant impacts on the economies, education systems, and overall development of these nations.

International cooperation is emphasised as a key approach to address these challenges. By promoting partnerships and collaborations, it becomes possible to ensure that all countries and regions have equal access to technological equipment and innovation. This is particularly important in bridging the existing digital divide between the global North and South.

Representatives from the global South tend to highlight the fundamental significance of the internet in discussions about fragmentation. They argue that access to the internet should be considered a basic human right, as it facilitates communication, access to information, and opportunities for socioeconomic development. Their perspective is influenced by the ongoing struggle to guarantee internet access for their populations, which is often hindered by various factors such as limited infrastructure, socioeconomic disparities, and inadequate internet governance frameworks.

It is interesting to note the stance of the Russian Federation in these discussions. Despite being geographically considered part of the global North, Russia has shown alignment with the perspectives of the global South. This shift in alignment became more noticeable after the country’s withdrawal from the G8 in 2014. It indicates that Russia is placing greater importance on addressing the challenges faced by the global South, particularly concerning fragmentation and internet governance issues.

In conclusion, the global South perspective holds significant weight in discussions about fragmentation, as these countries grapple with issues of infrastructure vulnerability and internet governance. International cooperation is crucial to ensure equitable access to technology and bridge the digital divide. The global South emphasises the essential nature of the internet as a basic human right, while the Russian Federation’s alignment with the global South highlights their shared concerns regarding fragmentation and the need for inclusive internet governance.

Roberto Zambrana

The internet was initially designed to connect the scientific and academic community, but it quickly expanded as people recognized the benefits and wanted to join for services like email and access to information. This early growth and widespread adoption of the internet marked a positive development.

However, as the internet continued to expand, issues started to emerge. One major concern was the security of the internet. With more users and an increase in the exchange of information online, there was a greater risk of cyber attacks and breaches. Governments also took actions that could be seen as leading to the fragmentation of the internet, potentially dividing it into smaller, controlled networks. These negative aspects raised concerns about the future of the internet.

Furthermore, the technical dimensions of the internet itself presented challenges. New protocols that altered the original architecture had the potential to lead to fragmentation. The introduction of the Hypertext Transfer Protocol (HTTP) was a significant advancement that facilitated the growth of the internet. However, changes like these could also contribute to fragmentation if not carefully managed.

Another factor that could contribute to fragmentation is the lack of actions to provide internet services to everyone. In many parts of the world, particularly in the Global South, over half of the population remains unconnected to the internet. This lack of accessibility and the failure of stakeholders to take action to address it hinder the expansion and unification of the internet.

Despite these challenges, there is recognition that maintaining respect for internet sovereignty is crucial. The internet should be treated as an entity deserving of respect, and there should be active exchange and adherence to the principles on which it was originally designed. This positive stance suggests that upholding internet sovereignty is necessary to preserve the integrity and functionality of the internet.

In conclusion, the internet’s original purpose was to connect the scientific and academic community, but it quickly evolved as people sought to benefit from its services. However, challenges such as security issues, potential fragmentation caused by technical changes and government actions, a lack of actions to provide internet services to all, and the need to maintain respect for internet sovereignty have emerged. These issues represent significant hurdles that need to be navigated to ensure the continued growth, accessibility, and integrity of the internet.

Dr Milos Jovanovic

Internet fragmentation is a complex issue that takes on three forms: Technical, Governmental, and Commercial. Technical fragmentation concerns issues with the underlying infrastructure of the internet, such as inconsistent network protocols and incompatible standards. Governmental fragmentation involves internet access and information flow being restricted by governments through censorship and content filtering. Commercial fragmentation involves business practices that prevent certain users from creating and spreading information, such as targeted advertising algorithms.

To maintain a sovereign internet, it is important to focus on critical infrastructure, ensuring the stability, security, and resiliency of the internet’s underlying infrastructure. This includes protecting information channels through encryption techniques.

However, geopolitical issues and interests hinder the development of a minimum common framework to manage internet fragmentation. Different regions hold different perspectives and approaches to internet governance, leading to fragmented development and lack of consensus.

Emerging technologies like Artificial Intelligence (AI), Blockchain, automation, and 5G/6G networks also impact internet fragmentation. AI presents challenges in defining its boundaries and ethical use. The implementation of these technologies can either exacerbate or alleviate fragmentation, depending on how they are developed and deployed.

Internet fragmentation is expected to continue and deepen due to a multipolar world and shifting power dynamics. Challenges exist in parts of the world, such as Africa, that are less connected. Bridging the digital divide and ensuring equitable access can help mitigate the negative effects of fragmentation and reduce inequalities.

In conclusion, addressing technical, governmental, and commercial aspects of internet fragmentation, ensuring critical infrastructure, considering the impact of emerging technologies, and promoting global cooperation are necessary to manage and reduce the negative impacts of fragmentation.

Olga Makarova

The analysis delves into two main topics: technological revolutions and internet fragmentation. It asserts that these revolutions follow a cyclical pattern that can be predicted. The cycle begins with an eruption and frenzy, characterized by rapid growth and excitement surrounding a new technological advancement. This is followed by a crash, where the initial enthusiasm subsides, leading to a decline in the market. Regulatory intervention then comes into play, as authorities step in to establish rules and guidelines to govern the technology. Finally, the revolution reaches its ultimate maturity, where the technology becomes an integral part of society. Currently, the analysis posits that we are in the midst of the fifth technological revolution, referred to as the information and telecommunication age.

Moving on to internet fragmentation, the analysis suggests that this phenomenon can occur due to a combination of technological, political, and economic factors. The internet is described as a collection of interconnected but autonomous systems. Fragmentation, as the analysis points out, lacks a clear-cut definition, making it a concept that is difficult to pin down. It argues that fragmentation may manifest in various forms, leading to potential consequences for connectivity and access.

Furthermore, the analysis proposes the idea of employing mathematical models to gain an understanding of and predict internet fragmentation. It highlights an older model from 1997 that quantifies fragmentation in terms of distribution, intentionality, impact, and nature. The analysis expresses optimism about the potential usefulness of mathematical models in comprehending the complexities of internet fragmentation.

In conclusion, the analysis provides valuable insights into the predictable cycle of technological revolutions, specifically focusing on the current information and telecommunication age. It also explores the potential for internet fragmentation, noting its potential consequences on connectivity and access. Additionally, the proposal to employ mathematical models as a tool for understanding and predicting internet fragmentation adds another layer of interest to the analysis. Overall, it offers a comprehensive overview of these topics, shedding light on past trends and potential future developments.

Otieno Barrack

The analysis explores the topic of internet governance, with a particular focus on its relevance in the Global South. It highlights the fact that many nations in the Global South are utilising systems and solutions that were largely designed in the Global North. This reliance on infrastructure not specifically tailored to their needs has resulted in a number of issues, such as internet shutdowns due to weak infrastructure.

The rise of internet shutdowns in the Global South is a growing concern, as they have a significant impact on local internet economies. This emphasises the need for internet governance to be applicable at a local level, despite its global public good nature. Design principles specific to internet infrastructure in the Global South need to be considered to ensure effectiveness and reliability.

Investment in the correct technological competence is also crucial. The private sector must invest in the appropriate technological capabilities to prevent infrastructure compromise. Poorly executed investments in technological competence can result in significant problems and hinder the development and stability of internet systems.

Additionally, the government plays a key role in creating a level playing field for all actors in internet governance. Their involvement ensures that the interests and needs of various stakeholders are taken into account. By fostering a fair and inclusive environment, the government can help promote the stability and growth of internet systems.

The analysis also highlights the negative effects of internet shutdowns on both local and global internet economies. Studies have shown that these shutdowns incur significant costs that extend beyond the immediate disruption of internet access. This further underscores the importance of addressing internet governance issues and safeguarding the stability and accessibility of internet systems.

In conclusion, the analysis emphasises the importance of relevant and applicable internet governance at a local level in the Global South. It stresses the need to consider region-specific design principles, as well as the significance of private sector investment in the appropriate technological competence. The role of the government in creating a fair and inclusive environment for all actors in internet governance is also highlighted. Lastly, the detrimental impact of internet shutdowns on local and global internet economies serves as a compelling argument for addressing these issues and ensuring the stability and accessibility of internet systems.

Session transcript

Olga Makarova:
development. She studies the relationship between technological development and financial bubbles. In 2020 Forbes named Carlotta Perez of five women economists worthy of our attention. She came to the conclusion that every technological revolution follows the same cycle. It all starts with an eruption, followed by frenzy, lots of ideas, lots of money. Then a crash and a turning point. At this stage, governments step in to regulate. And then comes synergy and maturity. According to Carlotta Perez, we are still living in the era of the fifth technological revolution, the age of information and telecommunications. And we have not reached the turning point yet. So the question now, what could be the turning point? What will happen after? What should be the institutional recomposition? What might synergy and maturity mean for the Internet? Can we increase this process and how? Each technological revolution causes many changes in society. This one gave rise to Web 2.0 and digital empires. However, the national state systems has not passed away since the Internet advent. While our virtual lives are in full swing in the digital empire’s vastness, our real life still takes place within the sovereign state borders. So, the questions are, could the growing confrontation between sovereign states and digital empires be responsible for the turning point start? Do we need a mature Internet? And what should it look like? We have not got proper answers to all these questions, but we are confident that we don’t want many fragmented split Internets to overrun the mature Internet. Internet fragmentation has a myriad of verbal definitions, sometimes emotional, sometimes sophisticated, but never precise. Some form of fragmentation can be useful for the entire Internet. Google’s QUIC is a case in point. But no definition can answer one important question. The question is, where is the very red line that crosses the boundary between fragmented and unfragmented Internet? The problem is complicated by the fact that fragmentation concepts treat the Internet as an unfragmented, pre-existing whole. But that’s not true. The Internet is a structurally… What we have just discussed in the lecture questions is the uneven fragmentation of Internet activities. It is a fundamentally fragment set of autonomous systems. The following question arises. For many years it was the shift in lifestyles and gatherings to amplify the perception of fragmentation issues which everyone can reach in their own way. So we constantly bump into the various forms of mild slow which states where you stand depends on where you see. Foods are laid out to совfication, but because everyone doesn’t know why, an ambiguous, mathematical model could help, but it hasn’t been created yet. So the question is what needs to be done to reach consensus? It seems that in trying to find the consensus, we need to find the foundation of the model. The foundation of the model is the internet invariance defined by ISOC in 2012. It is a great foundation. Any bridge of any invariance could be considered a form of fragmentation. But we also have bad news. In a universally understood model, Junt kernel can be used for various options. Here they are. affecting the entire internet. For example, any attempt to confiscate all IP addresses of one or more states would have dramatic consequence for the internet. We will account an example of deep structural fragmentation. We will get a chance to see real split internets without trust, unique identifier, globality, and much more. A similar case almost happened in March 2022 when some officials sent a demand to deprive Russia of all allocated IP addresses. But the technical community made the only correct decision not to do so. And this saved not only Russian users, but also the entire internet. When someone tries to punish someone by stripping them of the internet’s core values, there is punishing the entire internet by stripping it of its core values. But how many people think it’s obvious? So while this case shows that only the existing internet government’s ecosystem can protect Russian internet users, I’m afraid we will have to prove it. And probably the only way to do it to create a mathematical model of risk assessment. The entire internet is similarly impacted by sanctions that limit the ability of market participants to make payments for the facilities and services necessary to provide global internet access. The question is how to prove it. Filtering and blocking undesired content and platforms is a political development. All states, without exception, do it. Each sovereign state has its own undesired content blocking policy. The concept of undesired content is read by each sovereign state in its own way. Some sovereign states may apply similar blocking policies for a certain period of time. If you want to see for yourself, check out Blocking Website as Proxies for Policy Alignment by Nick Merrill and Steve Weber of the Center for Long-Term Cybersecurity at the University of California, Berkeley. In March 2022, access to some global platforms was blocked in Russia, some of Internet traffic disappeared and we were sure we would never see it again. Fortunately, we were mistaken. Customers were looking for an alternative. Finally, the Russian customers changed their preferences and started using other platforms. The graphics shows the very relocation of Russian customers with their content from one platform to another. What do Metcalfe’s Law and Dunbar’s Number have to do with this case? Metcalfe’s Law states that the network influence is proportional to the square of the connected user’s number. Metcalfe’s law is constrained by practical limitations, such as infrastructures, access to technology, and bounded rationality, which can be defined by Dunban’s number. Dunban’s number is a suggested cognitive limit on the number of the people with whom a person can maintain stable social relationships. This example allows to suggest that before March 2022, there were several Russian clusters on these platforms connected to each other and to other clusters by Metcalfe’s law and Dunban’s number. Links to the other clusters have forced Russian users to look for global alternatives. However, the limited number of such links prevented fragmentation of the user experience. The blocking did not have a significant impact on the content and user experience. The blocking had a significant impact on some platforms in some region. The good news is that some bounds may affect individual platforms, but not content. The bad news is we cannot predict how many resources need to be blocked to reach the border of the unfragmented Internet and cross the red line. Today, we can only analyze as post-factual, but we need an accurate prediction. Experts suggest four ways to avoid Internet fragmentation. The questions are, which way is right? Why one and how to reach consensus? Look like we can’t do without dull figures. We have a set of technical, political and commercial developments that may have an impact on fragmentation. Each development can be quantified in terms of its distribution, intentionality, impact and nature. Each case can be viewed as a function of these variables. The function value can be used to quantify one or more key dimensions. The question is, why don’t try to define a formula for fragmentation? We seem to be in dire need of scientists and science centers. Has anyone ever tried to define a formula for fragmentation? The good news is that the answer is yes. A part of this model is in front of you. The bad news is that the model was created in 1997. The Internet has changed increasingly since then. So, we don’t know if we can use this model or not. We need to check up. But verbal descriptions are not always convincing. Sometimes they are complex, full of emotion, and virtual versions of Maezler. Dolphins have a more powerful impact. Let’s put aside irrationality. Let’s get scientists involved. Let’s get started trying. Thank you. And here are some important references that I used to prepare my presentation. Thank you.

Moderator 1:
Thank you so much, Olga. That was a very, very interesting and comprehensive analysis. Thank you so much. I hope it will serve as a good basis for not only this discussion, but for many other deliberations on the topic. Because what we value usually in the Russian position is the comprehensive and inclusive approach with all the points. And we just witnessed the very, very profound approach to the topic. Thank you so much. We have Barak Otieno online. If you can continue from the region of Eastern Europe, which is Russia belonging to according to UN classification, even though it’s one-eighth of the world’s surface, we can move to Africa. And Barak, if you’re with us, can you please share the approaches of the technical community from your region?

Otieno Barrack:
Good morning. Good afternoon. Good evening, everyone. I hope you can hear me, Mr. Moderator.

Moderator 1:
Yes, yes.

Otieno Barrack:
Thank you very much. It’s 2am in Nairobi, but the beauty of the internet is that we have a common platform to be able to share in this course, especially on matters of global importance. I think looking at the subject and taking up from where the previous speaker has left, I would like to look at this from a perspective of Global South, especially in terms of the issues that we are dealing with insofar as internet development is concerned. My background is largely in internet infrastructure and internet policy development, both at regional and at policy level, and I’m a believer in the mantra of the Internet Governance Forum of thinking locally and acting globally. I think internet governance is more important if it is relevant at local level, despite the fact that it’s a global public good. And what I would like to stress insofar as internet fragmentation is concerned is that, especially for global South nations or developing South nations, it’s important that we take into consideration internet design principles. The Internet Society has continuously emphasized on the right internet design principles as it is. Most of the regions in the world, the global South, not limited, are using a system or a solution that was largely designed in the global North. And I think when I say design, context is very important. designing of buildings. You may find, for example, in parts of the global south, some designs which take into consideration environmental factors such that people don’t live in permanent houses, for lack of a better word. You find nobody communities that build temporary structures that consider the harsh or the hot weather in those particular areas. If I just juxtapose this or compare this to the internet, what should be the recommended design principles that each of the region should consider? I’m saying this because design is key, because it inevitably affects the structure of the internet and can easily result in fragmentation. We have seen the rise of internet shutdowns, especially in global south nations, where probably the design is not robust and there are single points of failure or single points where infrastructure can very easily be controlled or taken advantage of. Again, when we are looking at countries that have been affected by internet shutdowns, and I think the Internet Society and other organizations have actually done extensive studies on the cost of internet shutdowns on the global internet economy. We also see a scenario in which areas in which we witness a lot of internet shutdowns do not have established internet governance mechanisms. When I say internet governance mechanisms, I’m looking at national fora or opportunities such as this that bring together stakeholders to discuss on equal footing matters that affect internet governance in those particular jurisdictions. When I add to this, it’s also important for all stakeholders. to pay careful attention to their roles and responsibilities because this also inevitably affects internet, or rather affects the issue of internet fragmentation. When I look at, for instance, the technical, let me just look at the stakeholders in a local internet governance ecosystem, I’ll start with private sector. When private sector does not invest in the right technological competence, you find that we have half-baked engineers who then build infrastructure that can easily be captured, for lack of a better word, or that can easily be compromised. When I say compromised, it can be compromised either locally, or it can be by state actors, or it can also be compromised by non-state actors. We have seen situations in which cyber criminals take charge of internet infrastructure that affects various publics, or that affects various private sector interests. I would also like to consider, say, the role of government. Government is key because it creates a level playing field for all actors. So you find when governments don’t pay attention to internet governance conversations, there’s a likelihood that there’ll be a wrong impression, or feeling that they are under threat, and they are likely to respond wrongly whenever they feel that they are under threat by creating scenarios that result in internet shutdowns. As I have mentioned earlier, internet shutdowns have a profound effect on local internet economies, leave alone the global internet economy. Let’s bring into perspective the role of academia. shapes the skills of the engineers who build the local Internet and those who build the global Internet. So if academia is not paying attention to Internet architecture, to best practices, there’s a likelihood that we will end up with wrong architecture that can very easily result in fragmentation. And last but not least, I will talk of again two important actors and I’ll look at the media and I’ll look at the technical community. The media is an important watchman and the media should continuously point out whenever any of the stakeholders is not in step with what they’re supposed to do, or whenever any of the stakeholders is misusing the privileged opportunity that they have insofar as Internet governance. is concerned. So this would be my initial comments with respect to the subject of Internet fragmentation. And I must say that, especially for global South countries, there’s a scramble to implement various technologies, whether satellite related, whether fiber optic cable, which if we don’t pay attention to important Internet architecture development principles, it’s likely to result in a lot of Internet fragmentation. So I’ll stop at that and return the floor back to you, Mr. Moderator. Thank you.

Moderator 1:
Thank you so much for your very comprehensive and interesting speech. And I believe that the global South perspective is key when we are speaking about fragmentation, exactly to avoid situations when fragmentation may be a result of the lack of technologies and critical infrastructure. countries of the Global South. And that’s why we need international cooperation to ensure that all countries in all regions have the same level of technological equipment. And I believe that we will continue with the Global South perspective now. And Roberto, please, we are now moving to the LAC region. Can you share the perspective of the technical community and civil society of the LAC region and tell us your insights? Thank you.

Roberto Zambrana:
Thank you very much, Roman. And also I want to say hello to everyone in this panel and attending the session and in the distance to Barack, a very close friend as well. Well, I would like to maybe review a little bit of the history of internet that many of us will know. And if we remember back at the end of the 60s, the first and most important motivation was to get together everyone. I mean, in that moment, what I’m calling about everyone was the scientific and academic community. So nobody was thinking about security, nobody was thinking about superignity. No. The idea was to actually try to everyone get connected to this network that was starting to grow. It reached some other places in Europe and Asia and, well, as we all know, these big networks that started to be called as internet, then try to connect everyone. And then in the 90s, that’s another fact that we have to remember is that when the private companies, of course, started to put this kind of services, not only for the scientific community, but for the citizens as a whole in all our nations. Once again, it It was important to get everyone connected. I will say that the people wanted to be connected. The people wanted to have these services that we had, like email, access into information, et cetera. So suddenly, many, many people started to join to this network. We are talking about not tens of thousands, but maybe hundreds of thousands and millions. And something that increased this growth was the invention of the HTTP, the protocol that allows to navigate the internet. But then, of course, some other issues started to appear as well. Security issues, people that was taking advantage of this kind of infrastructure to do some bad things. And I think that’s where the society initially, of course, started to worry about these issues, and then, of course, the governments. And they deployed some sort of actions that perhaps could be understood as various ways of fragmentation. We all know that now. But I will say that those are not the only actions that we could westernize about. I will say that in terms of technical dimension of internet, claiming that we can have a better internet, maybe a more secure internet, and that we may actually be, let’s say, putting some other features to current protocols, we could actually have some better way of connections, more secure connections, more efficient connections. And then, we can see in this other technical dimensions that could also threat to the way internet was supposed to be from the beginning, as Barack was saying, to think about what. that the architecture with the principles of the Internet, as we know, and as we want it to be in the future, could be threatened by this kind of new initiatives. And one reason for that is that if we remember as well, back in those years, at the beginning of the Internet, one important entity coming from this technical community was IETF, which is currently the one organization that works with this large technical community and allows, of course, to come up with very clever, very interesting, and very evolved protocols during all these years. And I could say that from the information that I got even recently, because there are some interesting options that we can find now in the boot, talking about these new protocols, of course, they didn’t come up in a community or from a community in this way. Those new protocols might be interesting, might be good, but, again, it’s difficult to think about the results of these initiatives if we don’t see a community behind this, a big community that can have, in this case, technical decisions coming from the bottom up. So that’s another thing that we need to reflect on. And finally, another way of fragmentation that I think, and particularly this affects us in the global south, is related with business models for providing Internet services, of course. And in this case, I wouldn’t say it’s an action that actually any of the multistakeholders is doing that might be actually another way of fragmentation. But in this case, I think it’s a lack of actions, actually. A lack of actions, independently, if this is coming from the government or from the private sector, or even from the civil society when they have to demand this kind of services. The problem is that this lack of action is preventing that many people, that I will say in the Global South, is more than half of the population, is still not connected. That’s another big problem. And of course, at least for me, that’s another important way of fragmentation, if we’re trying to analyze all these different ways. So finally, we were listening about the other approach. And I understand very well about sovereignty. I understand the position exercising the rights of using its mandate. I mean, I’m talking about the different governments in different places. And in the ones particularly that, of course, in the way to maybe face some particular problems regarding security or some other motivations, finally, they decide to define laws that could be understood as another way of fragmentation. And that’s something that I started to reflect on during the last year. If we consider internet as an entity, as once some years ago, we started to consider the world, the mother world actually, or the mother earth, better said, as an active entity, as an entity that we need to respect, as an entity that we need to exchange with. And if we go and analyze internet as another complex entity in which we actually, part of our lives, we spend this then we also need to respect some sort of rights and and I will relate those rights with the principles that Internet was designed from the beginning, and we all need to keep them also in the future, and we also need to talk about Internet’s sovereignty as well, and I think with that concept,

Moderator 1:
it’s time to go back to Europe, and Dr Jovanovic, who also was a speaker in the previous edition of this session, can you please tell us something new, maybe, what you didn’t mention last time, and maybe reflect on those interventions which we expected from Europe today, hopefully, that will help you kind of in thinking about Internet fragmentation. Thank you.

Dr Milos Jovanovic:
Thank you, it’s my pleasure to be here in Kyoto to discuss this topic. When we speak about Internet fragmentation, the whole idea is that it’s a very complex issue. It’s a very complex issue, and it’s very difficult to see like three types of fragmentation. It’s technical fragmentation, governmental fragmentation, and commercial fragmentation. And speaking about geopolitical perspective, because I can’t, you know, separate all what’s happening in speaking about Internet fragmentation from the geopolitical perspective, I would put focus on governmental fragmentation, because I think it’s a very complex issue. And I think it’s a very complex issue, and it requires some, you know, group of users and certain users of the Internet to create, distribute, or access information. So it’s all about information. And you, Roberto, mentioned Internet sovereignty and information sovereignty. And I think it’s a very complex issue. And I think it’s a very complex issue. information sovereignty, and so on. And so it’s really important to discuss this. On another hand, we have technical fragmentations and the aspect about condition which underline infrastructure and some systems to fully operate. We saw some accidents in the past about it. And of course, we see commercial fragmentations, speaking about business practices, which prevent also certain users to create some informations and to spread information across the globe, speaking about their own interests and what they think is right. So when we speak about technical fragmentation, there are many aspects, speaking about routing corruptions, for example, which is really important. So blocking of new GTLDs, some alternate of DNS zones. This is really important, speaking about DNS system and who controls DNS system. When we speak about sovereign internet, which there are examples in China and other countries where they developed sovereign internet, it’s all about how we route our traffic inside of country. And speaking about small country, I’m from Serbia. I think we have a challenge regarding our inter-routes and all what’s happening right now. And it’s all about how we want to think in a way, how we want to secure our own infrastructure. Because when we speak about technical aspects, we usually speak about critical infrastructure. And it’s crucial for, I would say, sovereign internet of every state of the United Nations. After that, we came to some different approach, speaking about TOR, anonymization services, and VPNs, and so on and so on. This is also part of technical. fragmentation aspects. On the governmental side, there are, you know, also different points of view, and I will start with filtering and blocking services with some kind of censorship, but we shouldn’t say that it’s censorship if some governmental organizations say, okay, this is our right to protect interest of our citizens. You know, so that’s a good example, you know, and we see what’s happening right now in geopolitical perspective, you know, speaking about fragmentation processes, you know, between East, West, North, South, and, you know, for example, China is a good example. You can’t, you know, access many Western, I would say all Western services in China. When we speak about Russia, it’s also about Roskomnadzor and who protects, you know, rights of citizens of Russian Federation, all data should be stored, and so on, and so this is a part of governmental aspects, and, you know, that’s normal, because when we speak about Internet, I wouldn’t say that there is someone who has the right to say, you know, ownership of Internet is in our hands, so it’s, you know, decentralized network. That’s how I see, you know, from logical perspective, you know, there are different aspects, speaking about attacks on national networks, cybercrime, architectural and routing challenges inside of every country and between continents, you know. You know, last year when we were in Africa, we discussed, you know, lack of connectivity in some parts of Africa, you know, it’s a less connected continent, so that’s also an issue, you know, because if you speak about fragmentation, you know, we should, you know, see some parts of world where people do not have access to the Internet, you know, so after that, you know, when we discussed in last years and in different forums as well, you know, there are international frameworks, you know. We should speak about common approach, how to solve some challenges, nevertheless of, you know, what’s happening right now and with a geopolitical perspective, because, you know, if you want to achieve, you know, sustainability, which I think is really important, you know, we should focus on building minimum common framework, how to deal with such challenges, because, you know, living in a 21st century, many people do not think and they don’t think that such events are possible, and I would say geopolitical confrontation and, you know, fragmentation and so on and so on. But I think it’s crucial to understand that it’s very important to think how to sustain, to make this sustainable and to grant all people across the globe to access services. And, you know, we also have from governmental side, when I said accessing different services, many people would think about social networks, about controlling information channels, traffic flows and so on and so on. But I think, you know, it’s sovereign right of every state to control their own information flow. And in this circumstances, we should think about minimum common framework and how to make this all sustainable, because there are different interests of, you know, every player in this global arena, including, you know, global east, global west, global north, global south, you know, we should focus on building sustainable approach, and that’s my perspective. Moving back to commercial fragmentation, this is, you know, a challenge speaking about interconnection agreements, about policy interoperability, speaking about internet of things and emerging technologies, speaking about artificial intelligence, blockchain, there are different, you know, approaches and aspects and so on. So, blocking, discriminatory, you know, discriminatory aspects, speaking about net neutrality, what is neutrality, geo-blocking aspects, content, you know, potential cyber attacks on critical infrastructure as well, because if you use some, they would say, non-secure equipments. For example, in Serbia, in a country where I belong, you know, we have some agreement and our government signed that we will not use equipments in our critical infrastructure, which is from non-secured, you know, countries. So what that means, you know, this is also part of commercial fragmentation, you know. And for example, I will give an example from the United Arab Emirates. They signed a contract with Huawei speaking about 5G, you know. So speaking about commercial aspects, about infrastructure, about hardware, about, I would say, all what is critical infrastructure in every country, it’s also part of fragmentation on some, I would say, industrial level. So it’s a huge discussion, you know. If you want to use, for example, some Western hardware equipments and so on, do we belong in, I would say, geopolitical, you know, geopolitically to some, you know, aspects, I would say, and policy, and so should we respect this or not? So that’s always about, you know, what’s happening right now, speaking about NVIDIA microchips and some different server equipments and so on and so on. So I would say that, you know, we see internet fragmentation processes. From my perspective, it’s all started, you know, 2015, you know, 2015 year, 2014, you know. But now we are going deeply in these aspects, and we see three segments, I would say, technical, governmental, and commercial fragmentation, and it’s not all… It’s not only about, I would say, theoretically how we see, it’s about technically. And I know this forum, I mean Internet Governance Forum, there is, as Roberto mentioned, a huge technical community and in the last days we discussed some techniques speaking about anonymization and so on and so on. It’s also about how to secure your own information channels. So we speak about encryption techniques, which is really important, and you know, another topic which is very important is about how to secure metadata of communication. So this also includes ISP as a providers and other stakeholders in this process. So yeah, I want to conclude that right now I see, and I always mention this, you know, colleagues know, and Dr. Chukow and Mr. Glushenko also knows this, I always conclude, you know, speech with what we see right now as, you know, evidence, as a real thing, that we see three technical and I would say technological zones, you know, we see a Western European zone, we see Russian technological zone, we see Chinese technological zone, and it’s a good example when you visit China, you can’t use Western services. In Russia there is a strict laws, you know, all data of Russian citizens should be stored in the territory of Russian Federation. In a Western part of world it’s a huge discussion about Huawei equipments and the non-secure equipments, ZTE, Chinese initiatives, you know, we speak about 5G right after, I would say that China won 5G battle, and right after that, you know, American companies founded the 6G alliance, where they bring together all American companies in a position to try to won 6G battle, so it’s all about automation, about the new emerging… technologies, artificial intelligence, but this is a good, I always ask this, you know, who can define what is artificial intelligence exactly? And a few days ago, actually I think it was your day or first day, WindSurf proposed that artificial intelligence is machine learning. So when we speak about artificial intelligence, we speak about different algorithms, techniques, machine learning, data mining, and so on and so on. So speaking about only artificial intelligence, artificial intelligence, artificial intelligence, I think it’s useless. So we see some emerging technologies, of course, a machine learning, AI, blockchain, different processes, but it’s all about wider aspects. So it’s all connected with 5G, 6G, automation processes, smart city, sustainability, agenda 2030, and so on and so on. Global approach, speaking about fighting against, you know, against pollution, for example, China is a good example. 15 years ago, you know, you see how Beijing was and now. So there are initiatives in this, how we use technologies to fight against real, real problems. And I want to add at the end, you know, as a conclusion, that this fragmentation processes will continue. I don’t see that we are going in a direction of, as I proposed, you know, before, of minimum common frameworks, you know, how to deal with such challenges. I see, you know, strong direction in a way that this fragmentation processes will continue, will deepen, and this is all connected with geopolitical and strategical processes, which definitely started. And I would say that this is, speaking about internet technology and all aspects, it’s just a part of, you know, shifting of power from, you know, the West to the East, and of course we see some tendentions and some processes of global north-south cooperation because, you know, their colleague from Africa, they mentioned the challenges and so on, so, yeah, this will continue. I don’t see that internet fragmentation will stop, and I mean technological fragmentation and all, so, and I see this as a part of multipolar processes, so, the processes of rising of multipolar worlds. Thank you very much. Thank you.

Moderator 2:
Thank you very much, Milos. This is indeed a very insightful presentation, and as we see, even judging by the attendance of our today’s audience, this topic is still more interesting for global south representatives. We do not see global north here, and when global north countries host discussions on the topic of fragmentation, they discuss completely different things. Global south representatives tend to discuss the fundamental aspects of functioning of the internet, because they are still struggling to ensure internet as their basic human right, and this is the difference between approaches, and it’s not only in the sphere of internet, in the sphere, even in the sphere of the values, I would say, because very different level of development always causes such, let’s say, existential disputes, and we are happy to continue to convey the points of view of the global south countries, even though Russian Federation is… is the geographical North country. At the same time, after the commonly known events in 2014, when Russia withdrew from the so-called G8, me as an expert in this topic, in the sphere of G8, G20 and BRICS, I believe that it was the transition period of actually in a turning point of Russia to go into global South, which is quite interesting. We will see historically what it will lead to. The history seems to be repeating. I asked my colleague. Yes, yes. I asked my colleague, His Excellency Vladimir Glushenko, to summarize the discussion and share his vision and the vision of the expert community who participated in stock taking of GDC process conveyed by the Center for Globality Cooperation, please. Hello. Yes, good morning, everyone. Roman, thank you very much for giving me the floor. And I would like firstly to thank all our experts that expressed their very valuable and interesting views on such a hot, I would say, topic as internet fragmentation. Indeed, we have been discussing this topic for quite some time, if I’m not mistaken. The substantive discussions of internet fragmentation started at the IGF of 2019 in Berlin. And since then, really, this discussion has never stopped. Well, for me personally, I like the expression of, I don’t know which expert, but he or her or. She said that, indeed, the Internet has never been unfragmented. So there has been always a problem of fragmentation. And this is why the Secretary General of the United Nations, Antonia Guterres, just decided to suggest a global digital compact. And one of the priorities of this future document of the soft law is avoiding of Internet fragmentation. Really, it’s difficult to say at the moment that the global digital compact can do something to stop the fragmentation of the Internet, but it’s quite capable to formulate the universal rules and principles of decentralized development of national segments of the Internet. This document, I hope, can launch the international dialogue on the future of Internet on the basis of a common vision, if it contains provisions with clear criteria for the responsible behavior of all interested actors in the digital sphere. To my mind, in the most optimistic scenario, the global digital compact should define the framework and criteria for the operation and accountability of global digital platforms, ecosystems and metaverses. And it should also ensure that respect the right of UN member states to independently determine the parameters of the circulation of information and content within their jurisdictions. And this will greatly reduce tensions in the international discourse on the principles of freedom of expression and self-expression in the digital age. It will make it possible to demonopolize the rights and practice of individual countries and IT giants to censor the flow of information solely in their own interests. So, it was sort of a quote from the… from the contribution of the part of Russian export community. I’m not, of course, I’m not representing the whole Russian export community, but the organizations that took part in the discussion of the Global Digital Compact. And I’m sure that the discussion on the Internet fragmentation will, of course, continue. And to my mind, the name of today’s session, You Snooze, You Lose, is very well, very well characterizes the state of discussion around Internet fragmentation. I am sure that the IGF community has been doing very good work in this sphere, and specifically, I would like to thank the public network on Internet fragmentation for very substantive discussions and very interested outcomes. With this, I would like to thank again our today’s speakers and experts, and wishing all of you a very fruitful IGF. Thank you.

Moderator 1:
Thank you. Thank you very much, Vadim. And to be time efficient and not to delay the session, let us please conclude here. Thank you, everyone, for this morning exchange of views. It was very, very insightful, very interesting. I believe that those experts globally who will watch the broadcast and who were online with us and present in the audience had the chance to make their own conclusions about some new ideas our speakers presented. And I imagine that this is certainly not the last discussion. on this important topic and I kindly invite everyone to continue enjoying this beautiful forum sessions and have a productive experience for the next workshops and sessions. Thank you very much, have a great day and good ending of the forum. Thank you. And thank you technical team. Thank you. So, you.

Dr Milos Jovanovic

Speech speed

151 words per minute

Speech length

2080 words

Speech time

826 secs

Moderator 1

Speech speed

115 words per minute

Speech length

496 words

Speech time

258 secs

Moderator 2

Speech speed

135 words per minute

Speech length

850 words

Speech time

379 secs

Olga Makarova

Speech speed

112 words per minute

Speech length

1503 words

Speech time

807 secs

Otieno Barrack

Speech speed

130 words per minute

Speech length

990 words

Speech time

455 secs

Roberto Zambrana

Speech speed

144 words per minute

Speech length

1137 words

Speech time

473 secs

Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator – Hiroshi Esaki

The analysis covers a wide range of topics related to smart and sustainable solutions, the ethical use of technology, green designs, energy efficiency, the role of the younger generation in technological change, government-initiated smart cities, multi-stakeholder approaches, data ownership, and the future of education infrastructure. The overall sentiment of the analysis is positive, highlighting the potential benefits and necessary actions in each area.

One of the key arguments is the integration of smart and sustainable solutions in universities, which play a crucial role in shaping the minds of the next generation. The analysis emphasizes the need for universities to embrace the digital revolution and create campuses that are both state-of-the-art and environmentally friendly.

The importance of green designs and retrofitting existing structures to enhance energy efficiency is also highlighted. The panel stresses the significance of adopting net-zero footprint strategies and aligning with global standards, focusing on making existing buildings more energy-efficient rather than solely focusing on new construction.

Another area of focus is the G20 Global Smart Alliance, which aims to establish global norms for the ethical and responsible use of smart technologies in cities. The analysis expresses support for the alliance’s work and emphasizes the importance of setting global standards to ensure ethical use of technology for sustainable development.

The analysis also discusses the expansion efforts of the Global Smart City Alliance, which includes more than 36 pioneer cities globally. It highlights the importance of collaboration and knowledge sharing among cities to address common challenges and promote sustainable development.

The role of the younger generation in driving technological change is also emphasized. The analysis recognizes the power and potential of younger people in shaping the future and emphasizes the importance of investing in their education and empowerment.

There is also mention of the view that government-initiated smart cities can be a mistake, arguing for a multi-stakeholder, agile approach involving academia, industry, and government support.

The importance of data ownership is discussed, with a focus on individuals having ownership of their own data. The analysis highlights the need for discussions on data privacy and usage to ensure ethical and responsible data practices.

In terms of the future of education infrastructure, the analysis expresses optimism and discusses the role of advancing technologies in shaping educational settings. It mentions the Smart Campus Blueprint as an initiative to integrate technology into educational environments.

Overall, the analysis provides valuable insights into the various topics discussed. It emphasizes the significance of integrating smart and sustainable solutions, establishing global norms for responsible technology use, expanding smart city alliances, retrofitting existing structures, empowering the younger generation, adopting multi-stakeholder approaches, prioritizing data ownership, and embracing technology in education. The analysis encourages individuals to actively contribute to these efforts by joining initiatives such as the G20 Global Smart Alliance Network.

Audience

During the discussion, Taro emphasised the significance of STEM education, encompassing the fields of science, technology, engineering, and medicine. He stressed the need to prioritise these disciplines in the education system as they play a crucial role in driving innovation, economic growth, and societal development.

Taro argued that STEM education offers students a comprehensive understanding of the world and equips them with the necessary skills to navigate challenges in the rapidly advancing technological landscape. By fostering an interest and aptitude for STEM subjects, students can develop critical thinking, problem-solving, and analytical skills highly sought after in today’s workforce.

Supporting his argument, Taro cited statistics highlighting the increasing demand for STEM professionals in the job market, as well as the higher salaries typically associated with careers in these fields. He also referred to studies demonstrating the positive impact of early exposure to STEM education on students’ academic performance, engagement, and career prospects.

Encouraging active participation, Taro invited the audience to pose relevant questions, creating an inclusive environment where different perspectives could be shared and discussed. This facilitated a deeper exploration of the topic and a more holistic conversation.

In summary, Taro’s emphasis on STEM education stems from the belief that it is crucial for preparing future generations to thrive in an increasingly technology-driven world. Through a focus on science, technology, engineering, and medicine, students can acquire the skills and knowledge necessary to contribute to innovation, solve complex problems, and drive societal progress. The audience was encouraged to engage in the conversation by asking thought-provoking questions, leading to a more comprehensive understanding of the topic at hand.

Corey Glickman

The analysis focused on various aspects of sustainable urban development and energy efficiency in India and the United States. It highlighted the need for promoting equitable wellness and resilience in urban landscapes, acknowledging that smart monitors and controls in transport, buildings, environment, life, events, infrastructure, and utilities can enable communities to transform the urban landscape. The vision for a zero-carbon built environment includes the goal of achieving equitable wellness and resilience for all.

Decarbonization efforts were seen as requiring democratized action and support from all stakeholders to succeed. It was argued that enforced decarbonization standards at the government level without the involvement of the community, experts, learning institutions, and businesses can lead to failure. The transformation towards decarbonization takes place when there is participation from various stakeholders, ensuring that everyone’s needs and perspectives are considered.

The analysis expressed concern about the increase in building construction in India, which has led to a significant rise in building energy use. With India poised to become the fifth-largest economy in the world, the construction of new buildings at a rate of 8% annually has contributed to the escalating energy demands. However, it was also recognized that India has inherent advantages for building energy efficiency. These include a strong tradition of passively cooled buildings, a wide occupant tolerance to heat, a ready supply of local sustainable construction materials, inexpensive labor and craft costs, and careful use of resources.

Collaboration between the United States and India was emphasized, particularly in the field of building energy research and development. The U.S.-India joint center for building energy research and development, called CBERD, was highlighted as an example of such collaboration. It aims to develop building technologies that improve energy efficiency, comfort, and health safety. Through CBERD, significant collaborations between Indian and U.S. scientists have taken place, resulting in the development of nine new technologies, more than 100 peer-reviewed publications, and fostering mutual respect.

One notable aspect of the collaboration between the United States and India is the development of tools and resources for energy-efficient building design. These tools and guides aim to provide best practices for designing low-energy buildings and are specifically suited to the cultural, climatic, and construction context of India. They serve as valuable resources for the public and contribute to the advancement of sustainable building practices in the country.

The analysis also discussed the importance of digital transformation and leadership alignment in sustainable city development. Partnerships between the University of Tokyo and Microsoft were highlighted as contributors to this transformation. The adoption of technologies like digital twins and IoT devices was noted since these technologies already exist and can be utilized in the process of digital transformation. Furthermore, it was emphasized that alignment between visionary leadership and the actual implementers of policies is crucial for successful implementation.

The analysis advocated for using existing policies as a starting point for building sustainable urban environments, suggesting that the Green Sustainability City Alliance is working on embodied carbon for existing buildings and sustainable procurement as initial policies. However, it acknowledged that issues can arise due to complexities in zoning and challenges from local and national governance.

Localization was presented as an important factor when implementing policies related to sustainable urban development. It was acknowledged that what works in one city may not necessarily translate to another, and additional actions may be required upstream or downstream for policies to make sense in different contexts.

The discussion highlighted the positive role that policy discussion and collaboration can play in accelerating progress towards sustainable urban development. It was noted that policy leaders often have open attitudes towards discussions and are willing to share their networks, facilitating collaboration and the exchange of ideas.

Finally, the analysis acknowledged the significant role that global IT companies, particularly Microsoft, and other hyperscalers, will play in shaping the future of smart buildings and campuses. These global IT companies are viewed as instrumental in establishing the digital backbone necessary for sustainability and efficiency. The analysis also identified a potential winning formula for smart city development, which involves collaboration between university-based academic research, major IT service providers, and policymakers. This combination has been observed to be effective, particularly when implementing projects that involve academic-led investigations in controlled city areas or airports, supported by major IT service providers and policymakers.

Overall, the analysis offered valuable insights into the various aspects and challenges of sustainable urban development and energy efficiency in India and the United States. It emphasized the need for holistic approaches, stakeholder involvement, collaboration, and the leveraging of existing resources to achieve sustainable and resilient urban environments.

Hiroshi Esaki

The analysis highlights the potential of digital technology in enhancing energy efficiency, particularly through the use of cloud computing. It suggests that adopting digital technologies can result in over 80% energy savings. A footprints analysis reveals that following the EP100 plan can increase renewable energy usage to 25-30%. Therefore, digital technology can improve energy efficiency by up to 50%.

The analysis also emphasizes the positive impact of cloud computing and sharing economy in reducing energy consumption. Migrating from on-premise computers to data centers can lead to a 30-40% energy cut, thanks to high-performance HVAC systems. Additionally, cloud computing can save 70-80% energy through sharing economy.

Digital twin technology is highlighted as a tool for optimizing energy usage in system operation. A 12-year-old implementation resulted in a 31% energy productivity improvement, and current digital twin technologies can further reduce energy use.

Redesigning physical systems using digital technologies can significantly reduce carbon footprint. Comparative cost analysis shows improved energy productivity when digital transportation replaces physical transportation.

Collaboration between academia and industry is essential for effective decarbonization strategies. An example is provided where Tokyo University achieved over 30% decrease in energy consumption through collaboration. Young students working with seniors are seen as crucial for the future.

Hands-on experience and technology usage are emphasized, not just as theoretical study tools. A visit to Microsoft’s Redmond headquarters illustrates the importance of a concrete touch in the system.

Criticism is raised towards the government-initiated ‘smart city’ approach, advocating for a multi-stakeholder action involving academia and industry.

The concept of democratization is discussed, particularly in relation to data privacy and ownership. It emphasizes the need for a multi-stakeholder discussion.

In conclusion, digital technology has transformative potential in improving energy efficiency and reducing energy consumption. Cloud computing, sharing economy, and digital twin technology are key drivers. Collaboration between academia and industry is crucial, and hands-on experience and technology usage are essential. The government-led ‘smart city’ approach is criticized, and democratization in data privacy and ownership is highlighted. Policymakers, industry professionals, and researchers can benefit from these insights for a sustainable future.

Masami Ishiyama

Microsoft is leading the way in sustainability by adopting a comprehensive approach. By 2030, they aim to achieve carbon negativity, water positivity, and zero waste. This ambitious goal demonstrates their commitment to reducing their environmental impact and addressing sustainability challenges across their entire company. Microsoft is actively involved in various sustainability initiatives, including the G20 Global Smart City Alliance project, showing their dedication to collaborating with other organizations to drive sustainable change on a global scale.

Data and technology play a crucial role in Microsoft’s sustainability strategy. They have developed innovative solutions that leverage data analytics and technology to optimize energy usage and reduce their environmental footprint. For example, their smart building solution, in partnership with Ionic and equipped with Power BI, Azure IoT, and Dynamics 365, has shown a 6-10% reduction in annual energy consumption. Microsoft also utilizes one of the world’s largest corporate real estate data stores to optimize operations and save money, highlighting the value of data in driving sustainability efforts. Their operational platforms, Data and BI, along with Azure Digital Twin, contribute to enhancing sustainability by providing efficient data management and processing capabilities.

Microsoft recognizes the importance of data ownership and privacy in the digital age. They are committed to safeguarding customer permissions and protecting their data against potential threats. By empowering customers to have control over their data, Microsoft ensures transparency and supports their data privacy concerns. This strong emphasis on data ownership aligns with the principles of industry innovation and strong institutions outlined in the Sustainable Development Goals (SDGs).

The implementation of effective smart campus strategies exemplifies Microsoft’s commitment to sustainability in both their internal operations and external collaborations. For instance, their partnership with Temple University has resulted in optimizing energy efficiency and reducing resource usage. Microsoft’s smart campus strategy involves streamlining processes, identifying clear Internet of Things (IoT) use cases, managing construction schedules, and maintaining accurate floor plans. By prioritizing energy optimization and resource management, Microsoft demonstrates their dedication to creating sustainable campuses and positively impacting the environment.

Furthermore, Microsoft provides software solutions, such as Azure Digital Twin, that have the potential to reduce electricity consumption. By utilizing this technology in buildings, energy efficiency can be improved, contributing to the goal of affordable and clean energy outlined in the SDGs.

Data ownership and governance concerns are major obstacles in today’s digital landscape. Microsoft recognizes the growing importance of generative AI and data and supports the need for clear data ownership and controls. They assert that data ownership belongs to the customer and that a multi-stakeholder decision-making process is crucial in addressing data ownership concerns. This stance aligns with the principles of peace, justice, and strong institutions highlighted in the SDGs.

Overall, Microsoft’s comprehensive sustainability approach is demonstrated through their goals of carbon negativity, water positivity, and zero waste by 2030. Their involvement in global sustainability initiatives, use of data and technology to optimize energy usage, commitment to data ownership and privacy, successful implementation of smart campus strategies, and software offerings for reducing electricity consumption all showcase their dedication to sustainability. Microsoft’s approach not only aligns with the SDGs but also highlights their commitment to responsible corporate citizenship and driving positive change.

Session transcript

Moderator – Hiroshi Esaki:
We’ll give him a microphone. Good morning, everyone. I’d like to warmly welcome all of you to this vital session where we delve into the concept of smart campuses and their potential to revolutionize the way our universities operate, not just technologically, but also with a perspective of social, economic, and environmental responsibility. Universities play an integral role in shaping the minds of the next generation. And as we stand at the cusp of a digital revolution, it is imperative for these institutions to integrate smart and sustainable solutions into their infrastructure. Today’s session will unveil the intricacies of creating campuses that are both state-of-the-art and sustainable. Today we are honored to have with us, sorry for that, today we are honored to have with us Mr. Corey Krigman, Task Force member from the G20 Global Smart Alliance, and Mr. Masami Ishiyama from Microsoft Japan, and Dr. Hiroshi Ezaki from the University of Tokyo. And I’m Yuta Hirayama, as a moderator and advisor to the G20 Global Smart Alliance. This is a session overview. We also should write on the inspiring new public-private partnership, or PPP, led by esteemed institutions and corporations. A notable highlight of this initiative is the collaboration between the University of Tokyo and Microsoft, alongside other key players. This initiative is facilitated by the G20 Global Smart Alliance, which I belong to, and aims to build a global campus network. The essence of this network is to harness the potential of IT, networking, data security, and governance practices to foster cutting-edge research on sustainable design and emerging technologies. We also explore the pathway to achieving a net-zero footprint through pioneering digital infrastructures that leverage IT, IoT, generative AI, and more. The focus is not just on creating new green designs, but also on retrofitting existing structures to make them energy-efficient, aligning with global standards and supporting the green economy. Okay. This is a session overview, and I’m opening and introducing now, and after this, you know, I try to explain about what the G20 Global Smart Alliance is, and then I will move to the other speakers. Good. So maybe you may not know what the G20 Global Smart Alliance is, and this activity was born in 2019. So at that time, Japan government was the G20 presidency, and we tried to put the smart city, the world’s smart city, it be kind of the topic in G20 discussions. In 2020, also, Saudi Arabia government is also pushed to discuss about what the importance of the smart city. 2019, 2020, there are so many smart city projects built all over the world. On the other hand, technology governance is the issue. For example, the privacy issue, or vendor lock-in issue, or fragmented business model is also very difficult. So we tried to, our mandate is, sorry for that, our mandate was to bring together global stakeholders to establish and advance a set of global norms for the ethical and responsible use of smart technologies in cities. So that is what we wanted to do. After that, actually from 2019 to 2022, we developed five principles for responsible and ethical smart cities, and also we developed some model policies. Model policy is like, you know, there are so many technology governance issues over there. However, we gathered many experts from all over the world, and like Esaki-sensei and Corey is one of the task force members, but we discussed what the issue in the city and what, you know, policy should be more, like, prioritized to adapt to the cities. And we discussed a lot, and then we are developing some policies. For example, one of that is the accessibility policy. So like, you know, there are so many, you know, accessibility issues over there, so we try to put such policy to the cities, and then we try to increase that kind of, you know, reduce that kind of gap in here, and also, like, privacy impact assessment policies also we developed. This is very important policy for many cities, and in Japan, this policy, it was introduced by the cabinet office, and then now gradually implementing to some cities. For example, the Tsukuba city, one of the super city, is implementing this policy to their cities. On the other hand, and also open data policy is also very important, but our, you know, project was not only developing the policy, but how to implement this kind of policy to the cities, and then we try to develop some, you know, city network here, and then now, you know, globally, we have more than 36 pioneer cities, and also we have some, you know, local cities, local kind of the regional alliances is over there, and like Japan, we have more than 37, 8 cities in Japanese community, and also now we are developing Latin America or ASEAN network, so we are developing such kind of regional alliances, and, you know, in 2021, the Global Smart City Alliance was received the Governance and Economy Award in Smart City Expo World Congress. So our project was kind of, you know, popular these days, but I know many people doesn’t know this, so today I’m very honored to introduce our project. And lastly, last March, we had a joint event with Japan government, and so these photos are the G7 official public-private event high-level roundtable for the G7 Sustainable Urban Development Minister’s meeting, and in this event, Dr. Ezaki-sensei and Corey had met in the session, and now we are starting to discuss about today’s main topic, green building policies. So I’d love to introduce what the Global Smart City Alliance was, you know, did. Okay, my story is too long, so I’d like to pass to Corey. So, Corey, are you okay?

Corey Glickman:
Yes, I am. Can you hear me? Yeah, yeah, I can hear you. Please. Yes, excellent. Okay, well, first of all, thank you very much. So I’m Corey Glickman, and I just want to spend a few minutes talking a bit about the transformation component. So first part is talk about the overview of the transformation of the built environment for wellness across multiple sectors, and that would include the idea of residence, agriculture, administration, industry and commerce, education and research, infrastructure services, and transportation and communication, and these components make up the diverse community activities that we all experience in our urban environments. And what works very well that we know is putting in smart monitors and controls across all aspects of cities, we would focus on areas of transport, buildings, environment, life, events, infrastructure and utilities. And when we do this, we enable communities to transform the urban landscape. Next slide, please. So there are four aspects that we synthesize or levers that we use in this idea of transforming the built environment. The first one is decarbonization. So radically reduce the emissions for a zero carbon built environment. Second is democratization. So provide equitable wellness for resilience for the living environment. The third is digitalization, having a digital backbone that smartly connects our buildings, our distributed energy resources, our people and our businesses. And the fourth is demonstration. The ability to visualize our hypothesis and our tests that sets the direction for the next generation of city transformation experts. These are absolutely vital for us to be able to show what progress can be made and what ideas can be put forward across this year. And then lastly, what I’d like to talk about very quickly is the vision. So we create this vision for a zero carbon built environment by promoting this equitable wellness and resilience. And probably the most important lesson that I can share with you, having done this for several years now in several cities around the world, and what we’ve done with the G20 and our partners here, is we know that decarbonization is actually a user-centric, multi-stakeholder approach. That will fail when it’s enforced by governments that are not supported by democratized action. That means you can set those standards as a government level and policy level, but if everybody does not contribute and participate, it is going to fail. We see that happen. So the action item that we can most leave you with is that you need to demonstrate by leading. You need to have the whole community participate, particularly those that are experts and those that are in their learning institutions and those in the businesses. And when that happens, that democratization, teams with government, teams with public and private entities, is when you truly see transformation take place. So with that, I’d like to thank you for your time, and I’d like to pass it on to the next speaker.

Moderator – Hiroshi Esaki:
Okay. Thank you, Corey, for those enlightening insights. Moving forward, it’s crucial for us to view these transformations through the lens of one of the tech industry giants to discuss Microsoft’s vision on achieving net zero with digital. I’d like to welcome Mr. Masami Shiyama. Over to you, Masami.

Masami Ishiyama:
Thank you. This is Masami from Microsoft Japan. So I’m going to introduce the Microsoft Sustainability Initiative and Smart Campus Matter very quickly. So the reason why I’m here is that Microsoft is a task force member at the G20 Global Smart City Alliance project, as Yuta-san said. And also, another reason is that Microsoft has just announced, agreed and signed the strategic MOU with the University of Tokyo on the green transformation last August. In this agreement, Microsoft is exploring a way to support the University of Tokyo’s effort to achieve net zero emission through the use of our technology. So I will touch on those details later on in this session. So firstly, let us introduce how Microsoft has been tackling the sustainability agenda as a whole company. Here is a bit of history on our journey and the future goals. Since back in 2009, Microsoft established our first carbon emission reduction goal. For more than a decade, we have a steady build on our commitment to innovation and investment in technologies. Onward to 2050, we will continue to reduce by removing the company direct or electricity use emission since we were founded in 1975. Big commitment and big announcement. And this slide, here is a simplified view of our future goals. Carbon negative, water positive, zero waste by 2030. And we are also building a planetary computer to better monitor, model and manage the world ecosystem and protect more land than we use. Across the company, we are driving this ambitious goal internally and helping set best practice and new standards for business around the world with software-driven innovation. Already, we see a new area of solution emerging driven by data. Through our work with customers and partners such as managing data using advanced analytics, machine learning and a virtual model in the cloud, we are helping organization in many aspects. As you can see, we are building on space topic, supply chain topic, also circular economy topic and also smart grid infrastructure solution topic. When it comes to data, as the G20 alliance focused on technology governance, discussion often lies around the ownership and control of data. At Microsoft, we have fundamental principle. Your data belongs to you. We don’t use your data for our business. When you or your customer desire to open up your data, we commit to safeguarding your permissions and protecting your data against potential threats. Today’s main is building and space, so let’s see our own example first. When it comes to sustainability campus at Microsoft, we run like a medium size of city that is scattered across the globe. That vision is to build, deliver and operate connected, accessible, sustainable and secure workspace that creates the best employer experience. So this is customer number one for us, for smart building solution. Our initial effort to reduce power consumption in our building was focused on the headquarter, Microsoft Redmond campus, which spanned 125 buildings serving more than 60,000 people. Across the campus, there were multiple building system, 60 million annual UTT spend. Microsoft used Ionic, who is a partner solution running on Azure and extended with Power BI, Azure IoT and Dynamics 365 to remotely monitor and manage the building across the campus. As a result of initial effort, Microsoft achieved a 6 to 10% reduction in annual energy usage with implementation payback in less than 18 months. So when we think about the smart campus, employer experience or student experience is very key, meaning such as productivity, hybrid, wellness or access. In order to improve the employer or student experience with the campus, we need to platform and operations that help optimize how we build and run our real estate. We have two operational platforms, Data and BI, and also Azure Data Twin, and six operational functions on the right side. So today’s agenda is the smart campus, so my slide will touch on Data and AI and Azure Data Twin today. So first one, Data and BI. So we run one of the world’s largest corporate real estate data store, which we rely on to optimize the operations and save money. There are about 20 resources, sources of data inputted. However, the real value comes from the ability to combine the data source for insight. For sustainability example, we have UTT cost data for electricity, natural gas, fuel, including transport fuels, waste, including recycled and water. The next level up is to apply machine learning to it. Like two use case, number one is space optimization, batch data plus Wi-Fi MAC address. Number two is energy efficiency, a more smart start. So another one is Azure Data Twin, and other foundational platform. So it’s to create the digital replicas of our physical world. The digital twin is a normal world. Our physical world means things, place, people, and state, and the slide shows example of each. So like data, having the digital representation of physical world is only valuable when we use it. For example, sensor system that detect environmental conditions such as temperature and air quality. We have a lot of smart campus space practice and case studies around the world, but we’re going to introduce the campus universities case study. So this one is about Temple University in Philadelphia. Temple University facility and operations need to create a smart building strategy to optimize operation across its 240 buildings to reduce cost and enhance service for its school, business, employers, and student. So Microsoft partner eMagic utilized Microsoft Azure Digital Twin solution in five buildings on Temple’s Philadelphia campus as the initial phase of the integrated facility management solution. This solution enables the university to cut cost, optimize energy efficiency, and reduce technology and resources and improving service level on the campus. So as I mentioned at the beginning, based on those technology component and case studies, as I said, we are exploring a way to support the University of Tokyo’s effort as a first step to achieve net zero emissions through our technologies. Of course, the University of Tokyo has been doing various activities about green transformation so far, such as the Sustainable Campus Project starting in 2008, and also participation in the Net Zero, Race to Zero campaign, and also publication of the UTokyo Climate Action last year, starting last year. The goal of our first campus GX project is to help them improve energy efficiency from sustainability perspective. This has both environmental impact and its technology architecture could apply to other smart campus scenarios outside of the University of Tokyo, not in Japan, not to all over the world. So as we mentioned, the G20 Smart City Alliance focus on technology governance. Microsoft stick to a basic rule, as I said, your data is yours. As I stated in the bottom right corner of the slide, open data environment. This one, yeah. And we started the campus GX project as a pilot, which aimed to reduce energy through smart campus technology, and have been discussing the architecture and how to adapt the technology. With that, we will expand the current smart campus pilot project, which aims to reduce energy consumption with Microsoft technology, collaborating with GUTP, Green University of Tokyo project, which Esaki-sensei is leading, to create smart building reference architecture, which would influence other smart building policy and the entire industry. So this is last slide of my session. I’ll end by mentioning some lesson learning that Microsoft about the smart campus. Number one, starting with data, begin by collecting and analyzing data from sensor and system to identify the campus issue and opportunities. The data insight forms a foundation for effective strategy. Number two, optimize process. Before introducing the new technology, optimize the existing process for effective strategy. So number three is define IoT use case. So let’s specify a clear use case for IoT device, such as monitoring energy consumption or improving security. Number four, importance of the floor plan. So it is crucial for smart campus implementation. So let’s have an accurate floor plan, so that’s a key. Number five, lastly, the construction schedule. So properly manage construction schedule for new infrastructure and technology, meeting the budget and deadline requirement. So thank you for listening, and hand over to Dr. Esaki-san.

Hiroshi Esaki:
Thank you for introduction. I want to share with you a concrete number or concrete action based on the vision the Microsoft or Hirayama-san or WEF are having. The important thing is we should show what we can do using digital technology or using the Internet. First one is many of you may not know about EP100, that’s the electrical energy productivity 100%, which means using the digital technology, you want to improve the efficiency, especially energy efficiency by double, meaning the same work can be done by a half of energy. That is relatively quite easy in the case of digital. For example, when we use Google or Microsoft regarding their application, when we have the cloud computing, more than 80% of energy saving be able to do. That is not a false number, that’s really, really we can do. This is the footprint in 2022, how many carbon footprint each country has. The important thing is this is the how many or high ratio of the renewable energy introduction in each country. Some of the country already 90% or 80%. Most of the developed country probably 30% or 20%, means it’s large percentage of renewable energy we have to introduce that you may consider. When you think about EP100, the number you have to introduce into your world about renewable energy going to half, that’s the real number. This is, for example, Germany or UK or Spain or Ireland, when you have every single industry, every single factory or campus went to an EP100, we can reduce the power energy consumption into 50%, then only 25% of the increase in the renewable energy. In the case of Germany or UK or Spain, you can think about this as a practical number you can do. This is India, USA, and Japan. We need just a plus 150% renewable energy increase. That would be possible to do, not the five times, not the 10, 10 times larger renewable energy. That is the power of digital or the internet, you can realize. Also, I want to put in front of you three techniques for decarbonization. First one is going to already built system that’s as-is system solution. Second thing is energy grabs by the digital twin for the system operation. That means there are many opportunity to apply data-centric operation or artificial intelligence that this IGF team, that’s going to be applied to quite easily when we have accurate data. Second one is to be for the future infrastructure design. That is quite important for developing country or emerging countries, even for developed countries. So in the case of design, we must reduce the number of physical resources using digital technology. Also we design the system by design, the construction and operation, how we use the digital technologies. This is one of the example when you think about both IT or by IT as-is and to be. The left-hand top, that’s going to be explained by the Microsoft, that is digital twin. That is graphing whole of the system behavior or how the system are going to do. Important thing is the computer itself be able to analyze and visualize the system operation when you have the digital twin. This is one of the example 12 years ago, I hacked, I’m sorry, I digitized digital twin at my university against the earthquake, shocking in Japan. My campus spending 66 megawatt. My building consume one megawatt. When we have digital twin, we can reduce 31% or 22% energy saving. I don’t want to say energy saving, that is energy productivity improvement, going to be 30% or 20%. It was 12 years ago, technology going to be improved a lot. So more complicated, more good, digital twin going to be done. And also at that time, we are academia. Microsoft is industry. Important function of the academia is want to have interoperability. So we hate lock-on by Microsoft, nor Google, nor Met, even that, right? That is important thing is a multistakeholder discussion should have those kind of global standard for interoperability. So next one is the ASIS of IT, that is yet another interesting thing you can do. This is the actual example, practical resolution. Also this is more than 10 years ago, BMW in Germany has their own IT set of facilities. They analyze all of the tasks in their company. Then they realize only 20% of the tasks require small latency and very critical data. It must be in nearby their facilities or 80% of the tasks allowing large latency and no critical data like R&D simulation or the others means 80% of the tasks be able to migrate to 100% renewable energy country, which is Iceland and Sweden, right? Since the Internet or computer system can be globally distributed, then you can select the location or soil, whatever you want. That is 12 years ago lesson learned, we did. Tech technology be able to apply those kind of thing, right? So this is the lesson learned from this, 100% renewable energy, going to be done in somewhere on the earth. Then also some of the on-premise computers be able to go into the data center. Then at least 30 or 40% energy cut be able to, due to the very high performance HVACs. When you use a cloud, as I mentioned, 70 or 80% be able to cut by sharing economy. Sharing economy is also good, not only for the power saving, but also the resource reduction. The physical resource like computers or HVACs or the other, or building itself, large reduction of the system be able to do. So the other one, especially for developing country or emerging countries, 2B, how you think about design infrastructure. This is the cyber first I mentioned. By IT, for the 2B environment, think about assuming you have sophisticated good digital technology. So this is one of the example. This is the logistics in about 200 years ago. It was exclusive logistics system every single industry, every single company has. That is the exclusive use, exclusive build, the infrastructure. What the very good invention by the human being was container and pilot. This went to sharing economy in physical package transportation. When you have a container or a pilot, every single material be able to put into the same package. The package be able to transfer by airplane, train, ship, or car, whatever you have, which is a completely perfect sharing economy for existing material or merchandise as well as future materials. One of the example using this particular infrastructure was Amazon. So this is before the Internet. What the Internet did was exactly the same thing as container and pilot. Digital information going to be transferred everywhere on any technology like Wi-Fi, glass fibers, copper fibers, and also any material digitized thing going to be able to transfer everywhere on the earth, like text, video, voice, whatever you have, a program as well, or recipe for the 3D printer as well. One of the other thing I want to share is the cost of carbon footprint regarding the physical object transformation versus digital object transformation. The huge cost is going to be different. Huge energy productivity improvement can be done, replace the physical transportation to digital transportation is going to be done. This is actual number, material, electricity versus digital bits, two order of magnitude. This is real number I discussed with a power company in Japan, how the difference cost on the operational cost, the investment cost, install, and operation, and replacement, then digital bits are going to be 100 once compared to electricity. Electricity versus material, yet another two order of magnitude difference. This is very interesting. So this is the reason why I put in those slides is we want to show the demonstration, what we can do at the concrete number from the figures. Thank you.

Moderator – Hiroshi Esaki:
Thank you very much, Esaki-sensei. We have now arrived at our interactive session. So this is a golden opportunity for all attendees to pose questions, share thoughts, or discuss any of the topics we’ve touched upon today. So does anyone have any questions here? Maybe a first note. I think, Corey, are you there? Maybe you wanted to introduce one video, right?

Corey Glickman:
Certainly.

Moderator – Hiroshi Esaki:
So could you introduce shortly about the video, and I will ask the IT operator to start the video.

Corey Glickman:
Absolutely. So this video represents a program that I had worked on with Berkeley University, with India, and with the U.S. government, looking at the transformation of cities in the use of the technologies of the areas that we’ve discussed. So the way to view this video is a program that was ran for seven years and went across three countries, and it’s sharing some of the lessons and some of the activities that took place. If you would like to run the video, that would be great. Yeah, okay. India is poised to become the fifth largest economy in the world. As more buildings are added at a healthy rate of 8% every year, building energy use is skyrocketing. Trends in the Indian construction, especially the new construction, the urban heat increase and the high occupancy levels in India present unique challenges to the building ecosystem. India enjoys many advantages, including a strong tradition of passively cooled buildings, a wide occupant tolerance to heat, a ready supply of local sustainable construction materials, inexpensive labor and craft costs, and careful use of resources. At Lawrence Berkeley National Laboratory, we are committed to working with Indian research community, industry, and government to develop building technologies that enhance building comfort, push the envelope for efficiency, and improve the health, safety, and life of building occupants in both countries. The United States and India have been collaborating on a U.S.-India joint center for building energy research and development called CBERD. CBERD is a dynamic public-private partnership that involves academic research institutions and partners in both countries that do collaborative research to bring new energy efficiency technology to both U.S. and India. In CBERD, we deploy what we call a three-by-three model. The first three is make sure that we advance government policies, industrial practice, and research findings about energy-efficient buildings, and the second three is making sure that we understand how to design them right, how to build them right, and how to operate them right. Only when this happens, we are able to implement on a wide scale throughout the economy energy-efficient buildings with technologies that are highly cost-effective and are able to reduce energy consumption per square foot by about a factor of five below what is the norm. Through the collaborative research between U.S. researchers and Indian researchers, over the last five years of CBIRD, we have developed nine new technologies, 40 significant exchanges between Indian scientists and U.S. scientists, more than 100 peer-reviewed publications, four patent disclosures, and we have more than 10 demonstrations. One of the guiding principles of doing that was to bring together information technology and physical systems. U.S. has had a long lead for building world-class physical systems, facades, HVAC systems, high-efficiency chillers, and so on. India has a fantastic depth in technical prowess in information technology. Our goal was to bring them together in a way that benefits both countries, and each country gets more than what they put in. Working shoulder-to-shoulder on common problems, developing joint publications, joint technologies, having joint demonstration projects, has led to such a deep mutual respect and understanding that I couldn’t have imagined we would be ending at this point. The expertise that the U.S. scientists brought in in this Indo-U.S. collaborative project on building energy efficiency was very helpful. It helped in accelerating the research, developing products and processes which can be deployed and make a real difference in the building sector in India. Another way we collaborate between the U.S. and India is by developing tools and resources for the public that are available on our websites, as well as new facilities like this game-changing facility called FlexLab. FlexLab is the world’s most advanced testbed for energy-efficient technologies. FlexLab is also a testing system to allow us to integrate the systems with the electric grid, with batteries and photovoltaic systems. I want to mention the new Best Practices Guide that is a tool for how to design energy-efficient buildings, and it has a lot of information on designing the façade, the HVAC systems and other components for low-energy buildings. These best practices are particularly suited to the cultural, climatic and construction context of India. The guide is based on three core principles. One, using a triple bottom-line framework for energy-efficiency decision-making, using financial capital, environmental capital and enhanced working environments as a theme. Two, aggressive but achievable energy performance targets. And three, creating a shared set of values across all stakeholders, from building owners, developers, builders, architects, engineers and policy makers. The strategic insight into design, the idea of integrating the building with its electromechanical systems in conceptualizing solutions is a real lesson here. It is the technical depth, the analytical framework and the advice that is given, whereas as the guide goes across various climatic zones and looks at different technical solutions, is extremely helpful indeed. I think it’s a great piece of work. I feel like India is being propelled into a digital and decarbonized future, and buildings are a prime opportunity to actually use this advantage and really make and shape the future.

Moderator – Hiroshi Esaki:
So Corey, thank you for introducing the video. As Esaki-sensei mentioned, India and the US and Japan are not advanced in using renewable energy, right? So I think we have much space to increase this kind of field. So Corey, I want to ask you, based on your experience in the digital transformation landscape, what do you believe are the primary obstacles, not only the universities, but today we discussed about green building policy in universities, but not only the university but more like the business field. So what is the obstacle of this field? Do you have any thought?

Corey Glickman:
Sure. I would say experience has taught us that the vision really has to be led, I think, with a portion of the city. So just as you’re talking about the University of Tokyo teaming with Microsoft, that is a great place to start. You can define what is a smart space or a smart city. And so an obstacle would be, although you have to have very large ambitions, you need to choose a section that is doable, and you need to start fast, actually. And many of these technologies, these digital twins, and these ideas of IoT devices, they exist, right? So I would start with tried and true technologies. If you think too far out that only technologies you can depend on five years, 10 years from now are being discovered, you’re not going to move very fast. You should start with known technology, do something that’s sizable, but also look at scale and do responsible R&D. And I think the biggest obstacle is ultimately aligning the visionary leadership to the actual implementers, right? It goes back to that democratization and getting people on the ground to do this. Ideas of digital twins and visualization is a huge way of overcoming this and really having success. Great. Thank you. So I think you are developing the green building model policy in the G20 Global Smart Alliance, right? So if possible, could you introduce some point about you are developing the policy? Certainly. So one of the programs that we are leading is looking at what we call the Green Sustainability City Alliance right now. And it’s about taking policies that, of course, would make sense for cities, but there’s a lot out there, right? Many organizations doing things. So what we looked at was saying, let’s look at existing policies and start with areas that would have the most impact and build upon others’ work already versus reinventing or going in a different direction. So our first policy is actually embodied carbon. And we said embodied carbon for existing buildings. We’re going to do new buildings eventually, but we take existing structures first. And then the second part that we’re going to be looking at for policy is actually procurement. So the idea of sustainable procurement. How do you choose the right materials? How do you get to the right economics coming across there? And then the third area we’re still exploring. It takes about six to eight months to do a policy. We’re just finishing the embodied carbon one. And we’re starting the sustainable procurement. We’ll likely be zoning. And zoning is so important, but it’s a very complex government issue, locality issue. And I would say the lesson that we’ve learned over and over again that we hear from everybody, it’s about contextualization or localization. You can take a great policy that works in London or that works in Tokyo. And does that translate to Kyoto? Or does that translate to another city? You probably have to do something upstream or downstream in order for that policy to make sense, right? And I would say the other one is that when you ask other policy leaders who are working on these programs, they’re very open to discussing and to sharing their networks. And that’s another very powerful thing. I think often policy groups try to work in their own silos, and they don’t reach out enough. And when they do, you can quickly accelerate what’s taking place. So that’s really what we’re looking at right now.

Moderator – Hiroshi Esaki:
Great. So thank you. So what role do you see for global IT companies shaping the future of smart campuses or smart buildings? I mean, so now, yeah.

Corey Glickman:
So they’re going to play a very key role. Because ultimately, these systems have to live in a digital backbone, right? They have to be digitalized for this to work. So that’s the hyperscalers. This is the Microsofts, right? This is these tool sets that come across there. So IT global, even as we talk about whether it’s generative AI or other areas that are more traditional about running systems, think of this. All buildings already run off of systems. We already have systems that look at our economics, that look at our energy, that look at our mobility. However, as we look at sustainability, and we look for these efficiencies that Dr. Esagai was talking about, we have to build things upstream and downstream connectors to those backbones. When he talked about BMW, unless it works in their centralized system, they’re building attachments. They’re not rebuilding things from scratch. And that’s what’s important for this consistency. Because it’s this specialized factory approach combined with academic R&D leadership, I think is really what does very well. And I will say that the winning formula that I see right now is what I’m seeing taking place at this point at this table. And what it means is this. If you can take a university academic-led project and look at something like an airport or a controlled part of the city, and you can get a major IT global service provider with that, and with the policymakers, you have the chance to have that winning formula.

Moderator – Hiroshi Esaki:
Thank you very much. So back to the Tokyo University’s cases. So I think you already realized more than 30% decreasing of the energy consumption, right? So what is the key point? I think you have more key kind of the issue to implementing such kind of decarbonized decision. Do you have any thoughts?

Hiroshi Esaki:
Well, simple thing is we love technology, and we love Earth, and we love globe. So also, we really love the students. They’re working together. Also, they are future power to change the world. So that’s an important thing when we have a collaboration with industry and academia. In the case of academia, not only the senior professors, they don’t have any power anymore, right? The younger people have a lot of powers and experience to the future. So when I talk with a colleague, he initiated leading universities’ collaboration about such a technological hackathon or demonstrations. In his slide, there is a demonstration is quite important, right? How we show the fact or knowledge, experience, sharing those things is quite important, not only by document, by real experience. But touching to the computer system in the arriving building or campus, that is quite important. So that is that we share with Microsoft when we went to the Redmond headquarter office. We really share. Engineers or executives should touch on real system. They realize what’s going on. Then think about the real solution or concrete solution, not the politician we are. That’s the colleague firstly mentioned. The mistake of the smart city at this point of time is government-initiated, not a multi-stakeholder action. We didn’t. So we must have multi-stakeholder, agile approach with academia and industry, with supporting by government. That is the important model we want to share based on the practical experience. That is the IGF should do. The other thing is the democratization. That is yet another point the colleague mentioned about. Not controlled by the single large company nor large government. The data itself owned by users, right? So how to protect those privacy or intellectual property? Then though we must have the kind of collaboration in the case of the public sectors, infrastructures or private sectors. That kind of very careful, very healthy multi-stakeholder discussion about how to manage the data privacy or data usage is yet another thing. Important thing is that is not determined by government. It must be determined by multi-stakeholder discussion.

Moderator – Hiroshi Esaki:
So do you want to introduce? Okay. So thank you, as I can say. Then move on to the, I’d like to ask to Ishii-san about. So this is actually, so when I heard the, you know, Microsoft, you know, Azure Digital Twin, I’m very interested because, you know, using the IT software, you know, so this means, you know, we use the electricity, but we can reduce the electricity, right? So this is kind of a compliment, but, you know, this is very interesting. So as Esak-sensei mentioned, but, you know, like the Microsoft is definitely the giant. And if you provide such kind of software to each building, maybe many building owners or, you know, some developers or they kind of, you know, worry about that, right? So as a technology governance issue, so what is the obstacle on your business field? Or if you have any thoughts or, you know, things, could you share this?

Masami Ishiyama:
Yeah, thanks for your questions. Well, as Yuta-san said, governance of the IT and also data is very important. So we see that not only the general IT, but also now the generative AI is very, like, appearing very rapidly. So as I said, the ownership of the data and control of the data is really important, even more important than ever. So as Dr. Esak said, the multi-stakeholder decision-making is really important. So to do that, it’s – so we think about the – how I can say? We think about the ownership of the data. So that could be the obstacle. As Microsoft said, Microsoft said that data ownership is the customer, but we need to – multi-stakeholder need to recognize that to move forward very smoothly. Yeah, I guess.

Moderator – Hiroshi Esaki:
Thank you very much. But Corey mentioned, like, you know, so as a global smart alliance, we are developing the kind of green building, you know, model policies. But I think, you know, for many companies, if we have such kind of the guideline, you know, model policy, I think it’s very easy to discuss for, I mean, you know, what is the standard. And, you know, if we know that this is very, you know, easy to implement such kind of thing. So I think, you know, we really needed to implement such kind of policy to the market. Yeah. Thank you very much. So still we have three, four minutes. So if the, you know, participants have any questions, I’d like to ask to the speaker, but do you have – no? Oh, yeah, online also. Okay, I can’t see any questions. So maybe, you know, after the session, if you want to communicate with each speaker. Okay, could you read this?

Audience:
Taro mentioned, I think it should be science, technology, engineering, medicine. STEM, the education thing. So please feel free to add any question.

Moderator – Hiroshi Esaki:
Okay, so could you back to the slide? Sorry. So I just want to mention some points. So I know, you know, in this venue, you know, there are so many experts here. And definitely, you know, what we discussed today, we are a lot of experts. And if you want to join the G20 Global Smart Alliance Network, let me know that. So there are so many, you know, experts, policy makers, academia and private sector, you know, experts joining our, you know, project. And they are, you know, discussing about what, you know, policy should be implemented to the city. And, you know, we are always welcome. So let me know if you want to join this. And as a conclusion, so thank you very much for participating today. What an enlightening session we have with the HUD. From understanding the Smart Campus Blueprint to discussing cutting-edge technologies role, it’s clear that the future of education infrastructure is on a promising path. A special thank you to our esteemed speakers for sharing their knowledge and to all attendees for their active participation. We don’t have any questions. Let’s carry forward these learnings and insight to make our campus smarter and our world a better place. Thank you and see you in the next session of IGF. Thank you very much. Thank you. Thank you very much for coming. Thank you very much.

Audience

Speech speed

114 words per minute

Speech length

26 words

Speech time

14 secs

Corey Glickman

Speech speed

161 words per minute

Speech length

2403 words

Speech time

898 secs

Hiroshi Esaki

Speech speed

118 words per minute

Speech length

1826 words

Speech time

931 secs

Masami Ishiyama

Speech speed

125 words per minute

Speech length

1653 words

Speech time

796 secs

Moderator – Hiroshi Esaki

Speech speed

143 words per minute

Speech length

2013 words

Speech time

844 secs

RITEC: Prioritizing Child Well-Being in Digital Design | IGF 2023 Open Forum #52

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

During the discussion, different concerns and questions were raised regarding various aspects of children’s digital life. One of the concerns highlighted was the issue of tokenism and the need for genuine child participation. The Belgian Safer Internet Center, which operates under the InSafe umbrella, was mentioned as actively working towards achieving a true representational group of young people. The sentiment expressed was one of concern, aiming to avoid using children as tokens and instead promoting their meaningful involvement in decision-making processes.

Another concern raised was the need to provide guidance on the evolving capacities of children. Jutta Kroll from the German Digital Opportunities Foundation mentioned the existence of a special group on age-appropriate design within the European Commission, indicating a recognition of the importance of tailoring digital content and experiences to suit children’s developmental stages. The sentiment expressed in this regard was one of questioning, suggesting a desire to better understand how to navigate the evolving digital landscape in a way that benefits children’s well-being and educational development.

The importance of involving parents in their children’s digital life was also emphasized during the discussion. Amy from ECHPAD International highlighted the importance of parents being actively engaged in their children’s gaming and digital experiences. Additionally, Carmen, a parent, expressed the view that online life is not a necessity for children, underscoring the critical role of parental education in safeguarding their well-being in the digital world. This sentiment emphasized the need for parents to stay informed and involved to ensure their children’s online safety and well-being.

Another worrisome issue identified was the lack of pedagogical understanding among developers. Carmen expressed concern regarding developers’ limited experience in educational theory and practice, highlighting the importance of incorporating pedagogical expertise into the development of digital content and platforms aimed at children. This worry reflected the need for developers to have a deep understanding of how children learn and develop so that digital resources can effectively promote quality education.

Finally, the speakers questioned the next steps to address these concerns. David from the Association for NGOs Insurance Group in the Asia-Pacific region specifically raised the issue of creating guidelines for parents, educators, and workers. This standpoint emphasized the necessity of establishing clear guidelines and engagement strategies to support parents, educators, and those working with children in effectively navigating the digital landscape and ensuring children’s well-being and educational growth.

Overall, the speakers stressed the importance of promoting online safety and well-being for children. Genuine child participation, appropriate guidance for evolving capacities, parental involvement, pedagogical understanding among developers, and the creation of guidelines for parents, educators, and workers emerged as key areas of focus. These observations highlighted a collective desire to ensure a positive and supportive digital environment for children, where their rights, education, and safety are prioritized.

Shuli Gilutz

Digital play is increasingly recognised as a crucial component of children’s well-being and development. Research has shown that digital play can provide positive experiences that promote children’s overall welfare. It is considered one of the most important ways for children to interact with the world. However, there is a pressing need for the design industry to prioritise the creation of safe, engaging, and beneficial digital play experiences specifically tailored for children.

Many designers are eager to create positive and empowering digital play experiences for children, but they lack the necessary training and guidance to do so effectively. Collaborative efforts are underway to work with designers and understand their requirements. The aim is to develop a comprehensive guide that will enable them to create positive digital experiences for children.

The project is built upon research, and the current stage involves consulting with designers from companies across the globe. The ultimate goal is to provide businesses with a guide that is grounded in real data about children and technology. The team hopes that this will dispel myths and misconceptions surrounding the topic and educate designers on best practices.

Creating a guide for businesses based on real data about children and technology is crucial in ensuring that child-friendly digital experiences are prioritised. By aggregating information from global companies, the team plans to develop a prototype that will serve as a valuable resource for designers. The final product, expected to be released in the autumn, will provide designers with the knowledge and insights necessary to create safe and beneficial digital play experiences for children.

In addition to the design industry’s responsibilities, there also needs to be a broader shift in designing for children. Instead of viewing it as a mere regulatory requirement, there should be an understanding that this is the future. Designers must embrace the challenge of creating a fully holistic environment for children to thrive in, focusing not only on safety but also on their overall well-being.

Companies that fail to adapt their design approaches to meet the needs of children may ultimately be left behind. The industry must pivot its perspective and prioritise designing for children. This shift in approach is vital to ensure that children have access to digital experiences that enhance their development and well-being.

Beyond the design industry’s role, parents also play a crucial part in supporting their children’s digital play experiences. Engaging in digital games with their children helps parents understand the gaming world and actively participate in their children’s activities, thereby contributing to their well-being. Furthermore, direct discussions between parents and children about concerns and motivations are proven to be effective in helping children understand the importance of activities such as playing outside or balancing their digital and non-digital pursuits. These conversations enhance children’s understanding and overall well-being.

In conclusion, digital play is a critical aspect of children’s well-being and development. The design industry needs to prioritise the creation of safe, engaging, and beneficial digital play experiences. Efforts are underway to develop a guide based on real data about children and technology for businesses to ensure child-friendly design practices. There needs to be a broader shift in designing for children, viewing it as the future and creating a fully holistic environment. Companies that fail to adapt may be left behind. Parental engagement and direct discussions with children are essential in supporting their well-being.

Adam Ingle

LEGO Group is committed to prioritising the well-being of children in their digital products. They actively avoid incorporating addictive qualities or manipulative design patterns into their games. By doing so, LEGO ensures that children can engage with their digital experiences in a healthy and balanced manner.

In addition to designing responsible digital products, LEGO Group is taking the initiative to improve overall digital experiences for children. They are collaborating with UNICEF to drive this effort and aim to elevate industry best practices. By working together with other industry leaders, LEGO Group intends to create a coalition that will promote better digital experiences for children worldwide.

Recognising the online safety crisis, LEGO Group is actively promoting proactive measures and cultural change within the digital industry. They understand that the failure to invest in children’s well-being can lead to potential harm and a loss of trust in the digital industry as a whole. By addressing the crisis head-on, LEGO Group demonstrates their commitment to protecting children and building a safer online environment.

Adam Ingle, a prominent advocate for children’s well-being, believes in a holistic approach to digital design. He emphasises the importance of not only focusing on safety and protection but also nurturing children’s creativity and imagination. Ingle argues that an overemphasis on addressing online harms could result in sterile digital environments. He believes that a certain level of flexibility and age-appropriate design is necessary to create engaging and beneficial digital experiences for children.

Moreover, Ingle calls for governments and policymakers to establish regulatory frameworks that incentivise the development of productive digital experiences for kids. He highlights that current discussions primarily revolve around addressing online harms and urges for a broader perspective that considers the impact on children’s well-being. Government intervention, according to Ingle, can play a crucial role in fostering child well-being in the realm of digital design.

To implement age-appropriate design, LEGO is actively involved in the EU’s AADC (Age Appropriate Design Code) method. This method allows tailoring privacy policies, default settings, and aspects of game design to cater to the specific social interaction needs of different age groups.

When it comes to teenagers, finding the right balance between their social connections online and the associated risks is crucial. It is acknowledged that some level of social connection is necessary for teens’ well-being, as it enables them to form organic friendships online. However, measures can be implemented to mitigate the risks associated with teens’ online interactions, such as disabling certain features for younger age groups and promoting online safety education.

In conclusion, LEGO Group’s commitment to prioritising children’s well-being in their digital products is evident through their conscious design choices and collaboration with UNICEF. They actively address the online safety crisis and advocate for a holistic approach to digital design that balances safety, protection, creativity, and imagination. Adam Ingle’s call for regulatory frameworks and the promotion of age-appropriate design further underscores the importance of creating productive and beneficial digital experiences for children.

Sabrina Vorbau

The strategy for a better internet for kids is being revised through a co-creation approach. This approach involves actively involving children by consulting them across Europe. Open discussions with adults, mainly focusing on parents and teachers, have also taken place. Additionally, experts from various fields including industry, academia, and policymakers from the national level have been invited to provide their insights. This collaborative effort ensures that the revised strategy takes into account the perspectives of all key stakeholders involved.

The importance of involving young people in policy decision-making is emphasized. By including children and young people in all aspects of the decision-making process, it ensures that the policies and tools implemented effectively meet their needs. This can be achieved through various means such as conducting consultations, involving young people in expert groups, and actively cooperating with them in organizing events like the Safer Internet Forum. This approach recognizes the expertise that young people possess and highlights the significance of their input in shaping policies that concern them.

Meaningful youth participation is considered vital in the pursuit of better internet policies. While progress has been made in this area, more efforts are needed to ensure that children and young people are involved as part of a multi-stakeholder approach. It is crucial to see young people as experts in their own right, rather than merely as a necessity in decision-making processes. By acknowledging their expertise and actively involving them, it maximizes the positive impact of policies and initiatives implemented.

Furthermore, there is a call for more stakeholders, particularly industry and policymakers, to implement the policies that have already been established. The big plus strategy, which is seen as a significant policy framework, plays a crucial role in ensuring children’s well-being. It is essential that this policy is effectively utilized and applied to achieve its intended goals. By implementing these policies and involving key stakeholders, including industry and policymakers, a more robust framework can be created to address the challenges and concerns surrounding children’s well-being in the digital world.

In conclusion, the co-creation approach to revising the strategy for a better internet for kids involves the active involvement of children, consultations with adults, and engagement of experts from various backgrounds. The inclusion of young people in policy decision-making processes is essential to ensure that their needs are effectively met. Meaningful youth participation, along with the implementation of existing policies, particularly by industry and policymakers, is crucial for achieving a safer and more inclusive internet environment for children. The big plus strategy sets the framework for addressing children’s well-being, and it is vital that it is adequately implemented.

Josie

The session concentrated on the significance of prioritising children’s views and well-being in the digital environment. Shuli Gillets, a renowned expert in child-centred design with over 20 years of experience, discussed the power and importance of designing technology that has a positive impact on children. Gillets stressed the need to focus on three key principles: protection, empowerment, and participation.

Adam Ingle, the Global Lead for Digital Policy at the LEGO Group, explained the motivation behind prioritising this issue. He argued that businesses have a responsibility to uphold high standards of safety, privacy, and security in their digital products. Ingle advocated for policies that give children more agency online and highlighted the potential risks associated with neglecting to invest in the well-being of children.

Professor Amanda Third introduced the Ritech Responsible Innovation in Technology for Children framework, which aims to create a digital world that prioritises children’s well-being. She emphasised the importance of conducting research centred around children and their experiences in the digital age. Additionally, an ongoing research project on responsible innovation in technology for children was discussed.

The session concluded with panelists sharing their thoughts on taking action to achieve positive design for children’s well-being. They underlined the need for collaboration between government, industry, and young people, as well as the importance of taking tangible steps in the pursuit of this vision.

In summary, the session provided valuable insights into the importance of prioritising children’s well-being in the digital environment. It highlighted the role that design, policy, and research play in creating a positive and secure digital space for children.

Amanda Third

The analysis examines various aspects of children’s digital play experiences, covering topics such as wellbeing, safety, participation, and design. It explores both positive and negative elements, providing a comprehensive understanding of the subject.

On the positive side, the analysis highlights the diverse and enjoyable experiences that children have with digital play, emphasising the joy and connection it brings. It also acknowledges the positive impact of creativity on children’s wellbeing, underscoring the importance of involving children in design processes.

In terms of safety, the analysis recognises that children face challenges online, including encounters with inappropriate content and potential safety issues. It emphasises the need for measures to protect children from these risks.

The analysis also explores the concept of child participation, noting its role in developing protective capabilities in children. It stresses the importance of reaching out to vulnerable and diverse children through partner organisations with expertise in engaging these groups.

A key focus of the analysis is the development of a wellbeing framework that supports the enhancement of children’s wellbeing through digital play. This framework, based on data analysis and children’s experiences, proposes indicators and measures to evaluate the impact of digital play experiences. Ongoing research involves testing the effectiveness of this framework through real-world digital play experiences.

Additionally, the analysis emphasises the importance of understanding children’s digital play experiences comprehensively. It advocates for actively listening to children and incorporating their perspectives into the design and evaluation process. This approach ensures that the framework and subsequent considerations reflect children’s actual experiences and needs.

The analysis also touches on the rights of the child as a guiding principle in this context, suggesting that any actions or decisions should be taken consciously and with a strong commitment to upholding children’s rights.

In conclusion, the analysis underscores the significance of children’s digital play experiences, providing insights into both the positive and negative aspects. It emphasises the need to ensure children’s safety, enhance their wellbeing, promote their active participation, and consider their diverse needs. Through ongoing research and the development of a wellbeing framework, the analysis aims to provide evidence-based solutions that contribute to the optimal design and enhancement of children’s digital play experiences.

Session transcript

Sabrina Vorbau:
really a co-creation approach where we tried, where also the European Commission endorsed, to really make it a multi-stakeholder approach when we are talking about better internet for kids. Together with our colleagues from the INSAFE and the INHOPE network, some of them are sitting here, or SAFER Internet Center, so really the contact point for us at national level, they did a consultation with children and young people across Europe, I think more than 750 children were consulted on their needs, on their priorities, and this was really the foundation of the revision of the strategy, to really take it to the young people first, to understand what they’re doing online, what they’re concerned about, but also what they enjoy online. In addition to this, we then also did an open consultation with adults, so mainly focusing on parents and teachers. This went mainly through social media, we developed a survey, we also translated the survey in all the EU languages, and we gave opportunity to teachers and parents to complement what the young people already mentioned to us. And then the last stage was of course also to invite other experts to reflect on what should be included in the policy, so that was of course industry, but also academia and policy makers from the national level. So we can already see that the process of revising the strategy really happened with everyone around the table, including children and young people. And then last May, the new strategy was adopted, and it’s really put at its heart and its front children and young people. It’s based on three pillars, child protection, child empowerment, and child participation. And I think especially pillar two and pillar three are really, really important. We do believe, and there’s really great endorsement and support from the European Commission to make sure that really young people are part of the action, that they’re considered as experts as well, that they have a seat around the table when decisions are being made, but also when new technologies are being developed. So it really encourages stakeholders to make sure when they work on Better Internet for Kids related policies or tools to really invite and include the young people in this process. Of course it’s a policy, it’s a policy document on Better Internet for Kids, so it was also very important to make, to create it in such a way that children and young people are aware of what is written in the strategy, aware of their rights. So this is why we also worked on a youth and child-friendly version of the strategy. I brought one copy here, but you can find it online, which also really happened in a co-creation process with the young people. They advised us on the wording, how this child-friendly version should be formulated. They also advised us on the colors they choose, and they said okay, these icons, these colors, this is really like what attracts us, what we like. What was also interesting, that they advised us to put a sort of like cluster in the end to better explain some terms. I think for us the term policymakers, we all know what that means, but for the young people it was not clear, they didn’t understand what that meant. So that was really refreshing and helpful for us to really understand how we should go about it. This is also translated once again, because that’s also really important. Of course the the common language is English, but we really want to reach young people at national and local level. So we also, with the help of our colleagues at the Safer Internet Center, made sure this is translated in all the EU languages. What happened until then, almost a year after, when it comes to implementation on our side, and again this is with support of the European Commission, we really try to include young people in all our actions. When, for example, we are doing consultations with stakeholders, when we form expert groups, we are inviting young people to be part of these groups. Those young people we are working with are young people that are working at national level together with the Safer Internet Centers. They are typically between the age of 13 to 18, 19 years old, and they have the opportunity through the Safer Internet Centers to also get involved in the work we are doing. Maybe I conclude with a very tangible example. Every year we are hosting our annual conference on behalf of the European Commission, which is the Safer Internet Forum, and what happened last time, that for the first time we involved the young people in the whole development process of the conference. We had a small group that we worked with really on the program. We discussed what should be the key topic of the conference, what should be the slogan of the conference, how should the visual identity look, and what should we do on this day, what kind of sessions do you think would be useful, what do you think works when engaging with stakeholders, who should we invite to speak at the conference. And I think this was a very, very refreshing process, and I think that’s also the point we’re trying to make, to really try to involve the young people from the beginning, from the early stages on, and not give them a finalized document or a finalized tool and a policy and say, okay, this, please use this, we feel it’s useful for you. So we have to educate with children and young people and not to them or for them. So I think I conclude it here. Thank you. Thank you so much, Sabrina. I think we’ll

Josie:
return to a few of the concepts you’ve introduced. Firstly, you know, the three pillars of the strategy, protection, empowerment, participation, I think really speaks to the spirit of this project, but also the importance of prioritizing children’s own views. And we’ll hear from Amanda about the first phase of this, which really embodied that, I think. Our final perspective to complete the triangle for this first part, I’m such a pleasure to introduce the newest member of our UNICEF team. Shuli Gillets is a global expert in child-centered design with over 20 years of experience working both in industry and academia, leading UX research, design, and strategy of digital experiences for children and families. In the past decade, Shuli has served as a Google Launchpad UX mentor, a teaching fellow at Tel Aviv University, and a founding board member of Designing for Children’s Rights Association, and now a member of UNICEF’s Business Engagement and Child Rights team. So welcome, Shuli, and taking us from the policy or government perspective to the industry perspective. And governments, as we know, have an essential role in creating the enabling environment for businesses to respect children’s rights. The actions of industry itself is another essential piece of the puzzle when it comes to prioritizing child well-being in the digital environment. Some of us in the room might be wondering, you know, why are we focusing on design specifically? What does this mean? Can the design of digital experiences really matter for children? And is good design possible? And what’s the power of designing positive technology for children? It would be great to hear your views on that.

Shuli Gilutz:
Thank you. Good morning, everyone. It’s great to see everyone here, and thanks for that, Josie. It’s always great to start talking about children being part of this, but after we hear from children and we hear their need for this, we really have to find a way to make impact in a broad sense. And regulation, legislation, and policy are important tools in children’s positive digital play experiences, and extremely important in guiding and limiting industry in protecting children online. However, in digital play, impact goes beyond mitigating harm. And while that is still the baseline and critical, research has shown, and we’ll hear more about it soon, that digital play can afford positive experiences that promote children’s well-being in different ways. And that is really what we’re trying to do and reach out to companies to help them achieve this. So I’d just like to mention a few terms we’re gonna all refer to so you know what we’re talking about, because they can be used in different ways. So there are many digital experiences for children online. Why do we talk about digital play? I mean, children do a lot of things. So first of all, play is one of the most important ways in which children interact with the world. I mean, and develop an essential knowledge and skills and experiences. That’s also why it’s a child’s right, and everybody here knows that. And children treat digital play the same way they treat physical play. They don’t make those differences. That’s for us, the older generations. And they expect the same safety and joy they have from all the physical play. And of course, that’s not the case, as we know, because it came in later. So we want to help create the environment for them by guiding industry to do so. And Ritech, this project, looks at children’s well-being. So we define children’s well-being by a spotlight on children’s own lived experience. So their subjective experience with digital play. How do they view it? What makes it a good experience or bad experience for them? We found, talking to children, that safety and security is key. But there are also additional outcomes that make up well-being in the eyes of children when it comes to digital play. Like empowerment, social connection, competence, and creativity. In many cases, digital play is a critical lifeline for children’s well-being, enabling all these in a way no other context can. So when we talk about good design in Ritech, we talk about where designers and industry can help support these interactions and then that kind of thriving with children. And most designers today want to create a positive and empowering digital play experience for children, but they don’t know how. I mean, they haven’t trained either in child rights nor in child psychology or in any way. They’re just designers. And they would like to do the right thing. So this is a complementary piece to policy work that is like a top down approach. We were looking at a bottom-up initiative to give designers and industry the tools to create positive digital play experiences and promote the benefits that those have for children. What we’re doing now is working with designers to understand their needs and develop a guide for business that they can implement easily in their design process. To create online experiences that are safe and private and also connective, creative, expand learning, competence, curiosity, and creativity. And of course, fun, exciting, joyful, and inspiring. Thank you.

Josie:
Thanks, Shuli. And that’s a great segue to the second part of the of the session. And next slide, please. Where we will dive in a little bit more to this particular project, Ritech Responsible Innovation in Technology for Children. And I won’t give a long preamble, only to say that this is the question that we’re reflecting on. How can, practically, businesses and policymakers create a digital world that prioritizes the well-being of children and maximizes the opportunities and the potential for positive impact? And with that, I’d like to introduce the next speaker, Adam Ingle. Next to me is the Global Lead for Digital Policy at the LEGO Group, where he helps LEGO maintain high standards of safety, privacy, and security in their digital products, and advocates for policy that empowers children online. Previously, Adam led the Information Commissioner’s Office Emerging Technology Unit, assessing the data protection impact of emerging technologies, and advised both industry and government on how to mitigate privacy risks. Can you tell us, Adam, a little bit about what motivated the LEGO Group to prioritize this topic? And from your perspective, what are the potential pitfalls associated with businesses failing to invest in getting it right when it comes to designing for children’s

Adam Ingle:
well-being? Thanks, Josie. So, at LEGO, kids are at the center of everything we do. You know, they really are the DNA of the company. It’s wonderful to see a child here listening to this talk as well. I mean, really, we associated with our physical bricks. That’s what everyone knows us for. You know, even in the booth that we have out in the Exhibition Hall, everyone comes up to us and say, what is LEGO doing here? What is LEGO doing with the digital space? And yes, I mean, we have this great history of being there in physical play, but we also want to be where kids are. And increasingly, that is online. And, you know, we also need to carry over our commitment to learning, our commitment to safety, our commitment to child well-being from the physical to the online world. And while we’ve been online and, you know, building games and building digital experiences for many, many years now, we want to understand what best practice is. And that isn’t just best practice in safety and protection, that’s best practice in enabling children to grow, to learn, to thrive online. But you can’t just make that out of thin air. You’ve got to do the research, you’ve got to do the hard yards, you’ve got to work with fantastic people like Amanda and Julie and others who have, you know, deep expertise in these areas. So that was really the impetus for starting this Rightech project. It’s to, along with UNICEF, it’s to understand, you know, fundamentally at a research level, what are the building blocks that support child digital well-being? And how can industry really commit to building these products in a way that’s empirical and measured and sustainable? So we want to be the flagship digital service, the flagship kind of industry provider building well-being in our digital products. We want to lift industry best practice, we want to build coalitions in this space. We all know that the, and I feel like this phrase has been said many many times at this conference, but the internet is not designed for kids. Digital experiences aren’t designed for kids. They should be. That should be the future. And I think there’s increasing consensus around this, so we want to drive that alongside UNICEF through this project. And I think it all starts with really embedding these things in our company first. So for example, we’ve already begun the process of, you know, internalizing the initial Ritech findings. So we have a responsible child engagement team. They actually run this project internally for Lego, but they’re also a horizontal team that consults on child rights, child well-being, child issues across all digital design and gaming experiences at the Lego Group. We’ve got responsible digital engagement managers, we’ve got responsible gaming managers. They’re all looking at the Ritech framework and as our product teams build and develop experience for kids, they’re consulting with these managers whose mandate is child well-being and making sure that these aspects are reflected in our digital design experiences. We have a responsible gaming framework, which is a kind of a must check box thing for any games that we make that includes healthy game design. So that talks about, you know, how do you build games that help children emotionally regulate, that don’t have addictive qualities, that don’t have negative enforcement cycles, that don’t have manipulative design patterns. So that’s already integrated into our gaming experiences there. We’re also building kind of digital design cards and digital design principles. So for example, these kind of build on not just the Ritech work, but some of the work that’s come out of the Digital Futures Commission in the UK. So they have kind of key tenets like how do you ensure safety, how do you allow for open-ended play, how do you enhance imagination and creativity. So kind of building on those best practices, as well as the findings from the Ritech framework. And we’re also, you know, wanting to actually measure our company’s performance and the gaming performance that we have on well-being. So we’re building a well-being KPI at Lego to actually push product teams, developers, to meet as a criteria for success, kind of well-being outcomes. Now that’s difficult to do over in the process of doing that, but that’s, you know, a key aspiration of actually performing and measuring against well-being. And I think I can share briefly kind of an outcome from the initial research. So we used one of our games in the Ritech phase 2 research, Lego Builders Journey. And this is kind of a challenging puzzle game with a strong narrative. And initial findings kind of associated, you know, the experience of increased competence, relatedness, and belonging that kids had. Because they were able to enact with Lego minifigures, and they were empowered to explore the game, and were awarded for kind of success. And they kind of had this open and imaginative play experience based in a Lego world. because they had open-ended play, because they had this sense of autonomy and agency that did increase kind of findings of competence. So we’re already kind of seeing the existing games designed and measured against the framework, but I think when this becomes much more formalized and robust, you know, and we build it in, you know, we can really augment and enhance those outcomes. So I think that’s all what we’re doing internally and I think, you know, the success of it so far and the sense of positive feedback we get is itself a reason to do it, but it’s also just the right thing to do. I think the pitfalls of industry not doing this is, you know, really losing, one, the creating potential for extreme harm for kids at this really kind of crucial development age, but two, just like losing a sense of trust and we already see a massive trust deficit in the digital industry at the moment. You know, there’s an online safety crisis happening at the moment. We’ve seen, you know, reports from the U.S. Surgeon General talking about teen mental health crises, issues across the board and that’s, there’s a regulatory response happening to, you know, to ensure that we mitigate some of these harms, but, you know, that’s, it’s not going to solve the challenge if regulation just gets handed down and industry is forced to do it. We need to be proactive and you actually need a cultural change in industry in order to ensure that, you know, the harms are mitigated and not just mitigated, but the well-being is enhanced. So that’s really what we’re trying to do. I’ll leave it at that. Thank you, Adam.

Josie:
Really interested to hear that experience of how do you build this into incentive structures within the company, you know, making it a KPI is a really interesting example and I’m sure we will have time to unpack parts of that in the discussion, but also a nice segue. You mentioned the framework. You might be thinking, but what framework? Well, next slide, please. This is, we will hear a little bit about this piece. It’s a little bit difficult to read on screen, but it’s reproduced in the handout in front of you and this massive banner. I would like to introduce Professor Amanda Third, who is a professorial research fellow in the Institute for Culture and Society, co-director of the Young and Resilient Research Center at Western Sydney University and a faculty associate in the Berkman Klein Center for Internet and Society at Harvard University. She’s an international expert in youth-centered participatory research and has led child-centered projects to understand children’s experiences of the digital age in over 70 countries, working with partners across corporate, government, not-for-profit sectors and children and young people themselves. It’s a real pleasure that you’re able to join us. Amanda, can you tell us a little bit about this framework, what we mean by this phase

Amanda Third:
one research and what does this tell us? Yeah, sure. Thank you, Josie, and good morning, everyone. It is really nice to see everyone, especially the younger members of our audience here this morning. Before I leap into talking about the framework, I would just begin with a little reflection that it’s been so nice over the last few IGFs to see our conversations progressively mature and move away from thinking only about protection and to think about protection and participation in tandem. It’s really, really refreshing and I’m really pleased that wellbeing is finally making a big splash on the agenda for children’s digital practices because, of course, the work that I have done and many others have done too shows that actually when children engage with digital media, whether that is scrolling through videos to watch, choosing which games to play or who to interact with online, that question of their wellbeing is really top of their mind constantly. They’re constantly reflecting on whether or not this is good for me at some level and they make their choices accordingly. So, it’s really time for us to take wellbeing very seriously. So, in this project, we were very excited to be able to work with almost 400 children across 13 countries, predominantly in the Global South, and to re-analyse the data from 30,000 survey participants to work out how children’s digital media practices impact their sense of wellbeing and what we can do to really augment their wellbeing through good design. So, basically what we did was we used a creative and participatory-based workshop method to engage with children in languages that they speak in their own contexts to really dig deep into their experiences of digital play. And what we found from that was that children have got very kind of diverse experiences of digital play, but one thing that really stood out across the sample was that digital play brings children a lot of joy and a lot of connection with others and that there’s really a lot for us to work with there in terms of augmenting their experiences online and supporting their wellbeing. So, also though, and as Aditi was gesturing towards in her opening words, children also though really do understand that there are limits to digital play. They’ve got a very strong sense that their safety is at stake. They do have unpleasant experiences and actually what really came through as we spoke to them this time around is that their experiences of diversity really, you know, diverse children have or meet with different kinds of obstacles online, discrimination, barriers to their good participation and culturally inappropriate content, things like this. So, really we do need to pay very good attention to diversity online. They overwhelmingly talked about how wonderful digital play experiences are for connecting with other people online and I think this is the reality for children. They do interact with other people online. They mostly interact with their friends. They occasionally interact with strangers but, you know, that those social dimensions are really things that we need to foster because they bring children a lot of joy and that has positive impacts for their wellbeing. Safety for them is also a priority. So, they are calling on governments and in particular our private enterprise to really safeguard their wellbeing online. They want us to do more to make sure that they are protected and this includes everything from the most serious risks of harm right through to things like the ways that they might encounter advertising in those settings. They also talked about how games are one way for them to express their creativity and sort of talked about creativity as an integral part of their digital play experiences and clearly creativity comes along with a whole range of benefits from, you know, sort of feeling empowered and to take action to express oneself. These are all things we know are positively correlated with wellbeing. So, these are some of the things that came out of the interactions with children and then what we did was we sort of distilled, analysed this in conjunction with the survey data and we distilled it into this wellbeing framework that you see in front of you. So, the eight pillars of this interim framework and I stress that it is an interim framework, it is going to be revisited shortly but these are, if you like, the design principles that we need to take forward and to use to shape the digital play experiences that children have online and you can see they very closely correlate with the kinds of experiences I’ve just very quickly summarised for you. So, from here to the other thing that we’ve done to support this framework is we’ve developed a series of indicators and sample measures that we can then use to measure whether or not digital play experiences are hitting the mark. So, this is a sort of an attempt to, if you like, embed children’s experiences at the heart of our measurement processes to make sure that we are really, really making the impacts we intend. Okay, and so I think, you know, there’s still more work to be done here. This is only phase one that we’ve completed so far. We’re about to do complete phase two but I think what’s really come through very strongly is, as Sabrina was pointing to, well actually all of us have pointed to in different ways, the importance of engaging children in these design processes and I think if you’re here in this room you’ve already got some inkling that this is important somehow and I know I’m preaching to the converted here but what I would urge you is to really stay attuned to the meanings of engaging children and young people. Let’s not get lazy about the ways that we think about participation. Let’s not turn it into a tick box. Let’s make sure that we continue to reflect on our practices, reflect on what value children can bring to these processes and really continue to refine the ways that we do these things over time because I think by doing so, not only do we get better results in terms of the design of products but we also build the next generation of change makers. Thank you.

Josie:
Thank you so much, Amanda. If I may, I have a quick follow-up question which is to ask you a little bit about, you know, we keep saying phase one, phase two, research and of course research takes time and the project is ongoing but can you tell us a little bit about what does this phase two research actually consist of and what can we expect to see? Yes, so thank you, Josie and I’ll make it

Amanda Third:
quick because I know we’re under pressure but phase two is a new phase of research carried out by a range of different institutions around the globe, interestingly. So, the Centre for the Digital Child in Australia, New York University and the University of, oh I’m going to get this wrong, Sheffield, thank you. That was my instinct but, you know, I’m a little jet lagged and what they are doing now is they are taking the framework and testing that against a particular, you know, a set of real world digital play experiences and they’re doing that in a range of different ways using different methods to really understand how children’s experiences play out and how then we might need to refine the framework accordingly. So, we’re doing everything from measuring, you know, sweat and heart rates right through to sort of like the more ethnographic style of research which is talking to children about their experiences as they play and we’ll integrate all of that into a revised version of the framework and roll that out with designers through a range of

Josie:
initiatives. Great, thank you so much. We are coming close to the section where we will have a bit of interaction and invite you to chime in with questions but before we do that, very briefly, Shuli, can you tell us just for those in the room what can they look forward to in terms of the next steps and how they can be involved? Yes, so as Amanda mentioned, we

Shuli Gilutz:
really started this project based on research. We want to base everything we do on real data. There’s a lot of, you know, myths going on around children and technology but after we do that, we want to take that into practice and use that for impact. So, the stage we’re working on now in parallel to summarizing the research is creating a guide for business. So, in order to create the guide for business, it’s not just about finding a way to summarize all the research but it’s really to create something that businesses will use and we’re talking about executive levels but also like we mentioned designers in practice. So, what we’re doing is actually talking to designers from companies that create digital play all over the world. It’s very important for us to reach out and get a diverse group of companies, not only ones that create in English for English speaking kids but a large sample from all over different countries and we’re working with country offices and that comes all over the world to do that and talking to designers about their challenges and needs and designing for children and we’re going to have all that information aggregated and find a way to create some guide for them which will be something applicable for their design process, design tools and assessment for applying Write Tech framework. So, the next stage after we finalize all the information from the companies is actually to create kind of a prototype for the designers and test it, pilot it with different countries that are designing different digital experiences and then hopefully by next fall we’ll have something to show everybody that has been developed together with all these companies from all over the world. If you would like to chat more about that please visit us at our booth and I’m sure we

Josie:
will be able to discuss at more length. We are challenged to think about really actions and concrete things through these sessions. So, to wrap up the panel part, I’d like to just invite each of us one by one in 10 seconds, 20 seconds, just one action that you think should be prioritized by any stakeholder group whether that’s government or industry or young people when it comes to achieving this vision of positive design for child well-being and then we’ll throw it open but this will really help us I think try and distill everything that we’ve spoken about. May I invite Sabrina to start?

Sabrina Vorbau:
Yeah, sure. I would say meaningful youth participation. I would hope and Amanda said that there is you know more that progress has been made but more needs to be done. So, I would wish for a multi-stakeholder approach where we would consider children and young people to be an equal part of it. So, to consider them really as expert and not as a necessary and coming back just to the big plus strategy, I think it’s a very great piece of policy. It sets the framework, it’s there so I would encourage all the other stakeholders especially industry and policymakers to really implement it to put it in action. It’s there, it’s meant to be used so I think that’s the the only way forward.

Josie:
Fantastic, thank you. Adam?

Adam Ingle:
I’m sure Amanda surely might cover off the industry expectation so I’ll be a bit policy wonky and say that I really would welcome I think and Lego would really welcome governments and policymakers to actually recognize the need for a holistic approach to digital design. So, right now there is a lot of discussion and rightly so around addressing online harms but an over focus on harms can lead to sterile environments and we actually need to build experiences and have the regulatory frameworks that incentivize experiences that allow us to tick off on all these eight competencies. That is safety and protection is one but kind of creativity, imagination and you need some level of flexibility in design in order to do that. So, government’s thinking about how you holistically increase child well-being in digital design and creating frameworks that enable companies to design like that.

Shuli Gilutz:
Thanks, I’ll talk about companies and industry. I think there needs to be a shift from looking at designing for children just something that’s regulated and they need to do by law and they’re different ages and you know complying with different frameworks like GDPR or COPPA or others. The shift should be to understanding that this is the future. There is no going back. We have to design a fully holistic environment for children to thrive in not just to be safe in and whoever isn’t doing this will be just left behind. So, I think industry really has to change pivot the way it’s looking at designing for children and I hope that will happen.

Amanda Third:
Okay, it’s always tough going last on this little tweet link thing. So, I think I would challenge us to continue to really problematize some of the distinctions that we make. Often what we do is we pitch protection against participation. We talk about them as two separate things and I think there’s a lot of value in thinking about how participation breeds protective capabilities. So, that would be the first. The second would be to really look closely at young people’s practices or children’s practices. Sometimes we dismiss their practices out of hand and we say they’re mindlessly scrolling or they’re just mucking around, but actually those things we need to look closely at. There’s a lot going on in those little spaces that support and sustain their well-being and there’s again a lot of fertile ground there for us to talk about. The last thing I would say is, this is really not tweet link, sorry Josie, but the last thing I would say is that design is really, really, really important. But we’re also investing a lot of hope that design is going to solve a lot of problems. So, for us to think about what are the limits of design and where do other pieces of the puzzle need to fit in.

Josie:
Fantastic. Thank you. Thank you to our panelists. Now is the time to please raise your hands. We will have roving microphones and we’ll take a few questions at one together and then we’ll portion them out to two panelists. So

Audience:
let’s start and go around. Please. Hi everybody. I’m Niels from the Belgian Safer Internet Center. We work under the InSafe umbrella where Sabrina is a part of. Something that stays a constant struggle for us in order to avoid using child participation as a sort of tokenism as I said before or simply a box to tick. How can we reach like a true representational group of young people? Like without a constant focus on this we enforce this Matthew effect where representation can even be a misleading thing. Because when only privileged people are being reached we get the wrong idea about a certain situation. So is there any interesting research or findings that are best practices about this? Because for example at the Belgian Safer Internet Center we’ve been experimenting over the past years. For example when we were doing trainings with parents we would allow them to bring their children for example. A small thing but which allows more people to be part of something. But I’m looking for more ideas here because this stays a constant struggle. Thank you. Yeah thank you. My name is Jutta Kroll from the German Digital Opportunities Foundation. First of all I want to thank you not only for the presentations but for the wonderful approach and project. I really really believe in it. My questions regard the principle of evolving capacities of children. You’re talking about designing for children’s well-being but they are not all the same and therefore I’m really interested how that can be done. In parallel to the big plus strategy the European Commission has set up a special group on age-appropriate design which is working in this regard. I would like to know whether this could be brought together. Thank you. Great question. Thank you. I think we had a few on this side of the room. Oh we have another mic yes. Yeah thank you very much. Yeah my question well one of my questions has been stolen but absolutely I think I think just to add to Jutta’s point sorry my name is Amy from ECHPAD International. To add to Jutta’s point I guess this how do we navigate the difference between platforms designed for children platforms used by children and how can we build in a kind of an experience that is flexible enough so that older use sort of an experience that doesn’t work for them but also children are supported. And I guess the second thing is about parents. We hear often that you know some research shows just the ongoing importance of parents being involved in the gaming life and the online digital life and accompanying children in that and what does the framework address that in some way to kind of also bring parents on that journey. Thank you. My name is Carmen so I speak on capacity as a mom today. I come from nuclear physics and internet systems to different worlds but I’m also mom. You just told my question because it is very important to involve parents because when I gave birth to my children they didn’t come out with the phone. So we provide them a phone and actually I have two daughters and they don’t have a phone. They only use their computer when they are at school. They don’t live online. They live outside. So we’re talking about something we give for granted that the children they will live online their life. They’re not gonna live there online and as you said there is no turning back but there is a turning back because we can walk in parallel ways. The online life and the offline life. So if we only if they only live online now we take away all the senses so we won’t feel pain anymore when we walk on top of a brick. Lego brick. So and it’s very nice. I see them playing and every year they get from Santa a lot of Lego and this makes the children build up this new world together. And then I was pretty worried to hear that the developers they have no pedagogical experience. So we expect this from from the teachers. So I would expect this also from the developers to have this kind of knowledge. Otherwise you just give something to the children and they have to figure it out. And some parents you should educate the parents because we see a lot of parents we give they give a telephone to children and they think it’s the babysitter of the children and they don’t explain all the threats that they’re online. So they give data and all these kind of things. So there are there are certain. It’s pretty interesting what you said and I love your speech when you said you involve the children which is extremely important. But first you should educate parents as well because this is not a substitute to a parent. So it’s like giving a nice Tesla to the children and just say just go out and drive. It doesn’t work like that. Thank you so much. Thank you to you. We have five minutes left but I notice we have one question from behind us and then. OK. There’s David from the Association. We are also working for working with NGOs insurance group in Asia Pacific region. So my question is basically about our next step and also the engagement of other stakeholders. First about the next step. Knowing that right now is creating the guideline or the guide for business and policymakers. And it’s also at the other audience was mentioned. Parents engagement is very important in a sense and also the workers. So I’m just wondering for the next step. Would that be any guideline also for parents and as well as the workers who work closely with children as well as educators. So there’s the first question. And the second question is about right now is on phase two for the research. I’m just wondering for for NGOs and also for Institute from the other regions. How can we involve in like stage two stage three or afterwards. So there’s many about next step and for our actions. Thank you so much. We’re going to have to be very economical with our answering. But I think that I’ll be very quick just to respect that we have an online participation as well. And we’ve got a young person who’s obviously very passionate and is doing sounds like doing amazing things in Bangladesh. So he’s been quite active in the chat and he’s wondering how he can become involved in global initiatives and like this to represent kind of children at a global scale and also really agreeing with the points that have been made that you know a child understands children’s priorities most. So really reinforcing this importance of having developers you know have this insight and really respect children. So that that one’s from the online chat. Brilliant and it’s

Josie:
wonderful to see that engagement coming off live. We have three minutes left and I know the next session is in the room preparing to get ready. I’m gonna have to cluster this into representation and diversity and in the access we have to children in in in the research side evolving capacities and parents. Takers. Okay we’re just doing a round where we’ll each have maybe 30 seconds to to answer whichever question spoke to you most. Please Amanda and then we’ll go

Adam Ingle:
this way. I’ll be super quick and I’ll before I forget I’ll mention that for all the everyone that attended this early session I’ve got Lego loot if you want it at the end so please come see me and I can give you some stuff. So on the evolving capacities matter and kind of age-appropriate design and designing for brackets you know there there is methods to do this and and you know as as it’s already required by age appropriate design codes I mean Lego is a part of the EU AADC method as well. So we need to think you know what’s an appropriate level of social interaction for a 10 to 13 year old 13 to 16 year old you know. You actually probably need for teens well-being some level of social connection you know form organic friendships online. However that comes with risks you know you are have a contact risk with strangers so maybe you disable certain features for 13 to 10 year olds. Equally the level of communication and language that you use can be tailored so privacy policies default settings certain aspects of game design can be kind of tailored in a certain way. So there is through the wonders of kind of technology ways to really tailor these these different things. So I’ve got more to say but I’ll stop that. Thank you. Thanks. Okay I’ll quickly talk about the points that

Shuli Gilutz:
raised about parents. I think it’s critical and it always comes up because it’s a big challenge for parenting today. So parenting is hard and we all appreciate that. It’s hard to teach your children something that you didn’t have when you were growing up and I think the two main recommendations that come out with a lot of research and working with parents and families are number one play with your kids. So when parents don’t know what’s going on here or in the Xbox they tend to put in all these myths and stuff and they can’t really help and support their children and making good decisions and that’s what we’re really doing in parenting. So once you play with the children this goes back to child’s participation and learn what they’re actually engaging and then you can have a meaningful discussion you can see the well-being you can see the good things but you can also see when it’s not that great and then you can really talk about it so that’s very very important even if you think you don’t want to play this game go sit down play learn and have a discussion you may even enjoy it and the second one is talk to children about what you’re worried about about the playing outside why why do you want them to play outside that will be a very interesting discussion and children appreciated because children just wanna do you have well-being they want to do what’s fun and good for them they want to be healthy they want to enjoy and that’s why they still play with Legos and they still play outside because it’s fun it’s great it’s good for them so have those discussions with children rather than try to tell them what not to do without really knowing what’s going on in their lives thanks very quickly on

Amanda Third:
the representation question spot on I would say something controversial and say I don’t think representations a useful idea when you’re doing child participation I think what we need to pay attention to is reaching out through partner organizations who have deep expertise engaging vulnerable and diverse children to reach the children that that will give us a diversity of opinion and then I think we have to really make sure that we are tailoring our methods to make sure that we can can speak meaningfully with with different kinds of children right and that means often letting go of this idea that there is a perfect research method and a perfect way of engaging with children and going with the flow being guided by your sense of the rights of the child and and and yeah you know moving forward consciously I guess yeah thank

Sabrina Vorbau:
you and the final word no I just wanted to thank everyone for for the reflections and I think everyone that posted a question and made a comments I think that’s our job to try to connect these dots I think children and adults they need to have a conversation we need to approach this as a conversation and not educating to them and for them but with them thank you so much for all your

Josie:
participation and let’s continue the conversation outside and yeah thanks

Audience:
yeah yeah I’ve got to make sure I’ve got it right right I think it’s at 15 yeah to 15 to 315 yeah focus as you said yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah yeah

Adam Ingle

Speech speed

180 words per minute

Speech length

1632 words

Speech time

544 secs

Amanda Third

Speech speed

171 words per minute

Speech length

1713 words

Speech time

600 secs

Audience

Speech speed

148 words per minute

Speech length

1354 words

Speech time

550 secs

Josie

Speech speed

164 words per minute

Speech length

1170 words

Speech time

427 secs

Sabrina Vorbau

Speech speed

169 words per minute

Speech length

1254 words

Speech time

444 secs

Shuli Gilutz

Speech speed

178 words per minute

Speech length

1433 words

Speech time

484 secs

Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Albert Antwi Boasiako

Ghana has made significant progress in integrating child protection into its cybersecurity efforts. The country has passed the Cybersecurity Act, which focuses on child online protection. Additionally, Ghana has established a dedicated division within the Cybersecurity Authority to protect children online. This demonstrates Ghana’s commitment to ending abuse and violence against children, as highlighted in SDG 16.2.

Furthermore, Ghana has seen a remarkable improvement in its cybersecurity readiness, with a rise from 32.6% to 86.6% between 2017 and 2020. This progress aligns with SDG 9.1, which aims to build resilient infrastructure and foster innovation.

Research and data have played a crucial role in shaping Ghana’s cybersecurity policies and laws. Through research, Ghana has identified the challenges faced by children accessing inappropriate content online, leading to more comprehensive child protection strategies. This highlights the importance of evidence-based decision-making, as emphasized in SDG 9.5.

However, Ghana has faced challenges in implementing awareness creation programs, particularly in reaching a larger percentage of the population. With a population of 32 million children, Ghana has only achieved around 20% of its awareness creation goals. Overcoming this challenge is crucial in combating cyber threats effectively.

Fragmentation within governmental and non-governmental spaces has been a significant obstacle in child online protection efforts in Ghana. To address this, Ghana needs to institutionalize systematic measures and promote collaboration among stakeholders. This will ensure a unified approach and enhance response effectiveness.

Albert Antwi Boasiako, a proponent of child protection, advocates for the integration of child protection into national cybersecurity frameworks. Albert emphasizes the importance of research conducted with UNICEF and the World Bank in shaping cybersecurity policies, aligning with SDG 16.2.

Public reporting of incidents is also essential for maintaining cybersecurity, as supported by Albert. The establishment of the national hotline 292 in Ghana has proven effective in receiving incident reports and providing guidance to the public. This aligns with SDG 16.6’s objective of developing transparent and accountable institutions.

Implementing cybersecurity laws can pose challenges, particularly in certain developmental contexts. Factors like power concentration and specific country conditions can hinder their practical application. Overcoming these challenges requires continuous effort to ensure equal access to justice, as outlined in SDG 16.3.

In the African context, achieving uniformity in cybersecurity strategies is crucial. Discussions on streamlining online protection and combating cyberbullying in Africa are vital for better cooperation and enhanced cyber resilience across the continent.

Ghana supports regional integration for successful cybersecurity implementation, sharing its expertise with other countries. However, fragmentation within the region remains a challenge that needs to be addressed for effective collaboration and coordination in countering cyber threats.

In conclusion, Ghana’s efforts to incorporate child protection, improve cybersecurity readiness, and promote evidence-based decision-making are commendable. Overcoming challenges related to awareness creation, fragmentation, law implementation, and regional integration will contribute to a more secure digital environment for children in Ghana and beyond.

Marija Manojlovic

Online child safety is often overlooked in discussions surrounding digital governance, which is concerning as protecting children from online harm should be a priority. This issue is further exacerbated by a false choice that is frequently posed between user privacy and online safety. This notion that one must choose between the two is flawed and hinders progress in safeguarding children in the digital realm.

The fragmentation within the digital ecosystem hampers progress in advancing child online safety. Marija, a leader in the field, has observed that collaboration and coordination among various stakeholders, including governments, the private sector, and academia, are crucial. However, there is an alarming level of fragmentation that impedes progress and the development of effective strategies to ensure children’s safety online.

One positive aspect that emerges from the discussions is the recognition that failures and learnings should be shared openly. Marija proposes that companies and organizations not only share what has worked but also what has failed. Transparency and the sharing of experiences can lead to better solutions and a more cooperative approach to addressing online safety challenges.

To truly drive change, it is essential to understand the root causes of digital challenges. Marija suggests moving upstream and examining the design and policy choices that contribute to online safety issues. This entails exploring how societal norms and technological design enable child exploitation, gender-based violence, and other online hazards.

Creating a unified digital agenda is crucial for maximizing the benefits of digital technologies and ensuring online safety for children. Misalignment in digital agendas can hinder progress, but engaging in meaningful discussions and sharing innovative solutions can help establish an internet environment that is beneficial for all, particularly children.

An evidence-focused and data-informed approach is necessary to effectively protect children online. Marija emphasizes the significance of testing, experimentation, and the sharing of results to inform decisions and shape policies. Building evidence through a cooperative spirit between different stakeholders is key.

Ghana serves as a unique example where child protection has been institutionalised in their cybersecurity work. This highlights the importance of countries actively integrating child protection into their cybersecurity strategies and policies.

However, it is disheartening to see that the innovation ecosystem is not always inclusive of individuals who require safety measures due to various reasons, including concerns for their well-being. This exclusion reinforces the need to address safety concerns to create a more inclusive and diverse innovation ecosystem.

The intersection of online child safety, inclusive digitisation, and gender balance should not be disregarded. Ensuring online safety is crucial for promoting inclusivity and achieving gender equality in the digital realm.

More work needs to be done in preventing gender-based violence and image-based abuse online. These serious issues require attention and effective strategies to protect individuals from harm.

Additionally, it is essential to challenge and address the prevailing narratives and perceptions of these digital challenges that are rooted in gender norms. Overcoming these deeply ingrained biases and stereotypes is crucial for creating a safer and more equitable online space.

While the internet presents numerous opportunities for young people, their participation and protection must be prioritised. Their experiences and perspectives need to be recognised and incorporated into decision-making processes to ensure their safety and well-being.

Moreover, it must be ensured that existing vulnerabilities, such as the gender divide, toxic masculinity, and extremism, are not exacerbated in the online world. Digital platforms should actively work towards a safer and more inclusive environment that nurtures positive interactions and discourages harm.

Lastly, increased investment in the field of online safety and protection is needed. Governments, industry leaders, and other stakeholders must allocate resources and finances towards robust initiatives that safeguard children from online threats.

In conclusion, addressing online child safety is essential and should not be overlooked within the digital governance discourse. It is imperative to dispel the false dichotomy between user privacy and online safety, overcome fragmentation, and foster collaboration among diverse stakeholders. Sharing successes and failures, understanding the root causes of digital challenges, building a unified digital agenda, adopting an evidence-focused and data-informed approach, institutionalising child protection, promoting inclusivity, challenging gender norms, ensuring youth participation and protection, and increasing investment in online safety are all integral to creating a safer and more inclusive digital environment for all, particularly children.

Mattito Watson

The analysis examines four speakers discussing various aspects of USAID’s strategies and initiatives related to youth and digital experiences. Firstly, it is noted that USAID’s digital strategy was released in 2020, indicating their adoption of digital technologies in development practices. As one of the largest development organizations globally, this digital adaptation is significant in terms of reach and impact.

Additionally, USAID has implemented a child protection strategy, demonstrating their commitment to safeguarding children’s well-being. Mattito Watson, who leads the child protection efforts within USAID’s child, children, and adversity team, plays a key role in this area. Moreover, USAID has a youth strategy that emphasizes collaboration and partnership with young people, rather than a paternalistic approach.

The analysis highlights the importance of involving youth in decision-making processes. To facilitate this involvement, USAID established a digital youth council, which serves as an advisory body and nurtures future leaders. The council consists of 12 members, including a gender-balanced representation of seven girls and five young men, underscoring USAID’s commitment to inclusivity.

Understanding the digital experiences of youth is vital. Mattito Watson’s efforts to comprehend the digital experiences of different youth demographics have led to the establishment of the Digital Youth Council, reinforcing the commitment to engage and empower young people.

In conclusion, the analysis reveals USAID’s strategies and initiatives to involve youth and incorporate digital experiences. The release of the digital strategy, implementation of child protection and youth strategies, and the establishment of the digital youth council showcase USAID’s efforts to stay relevant and foster inclusive development practices. By recognizing the importance of involving youth and understanding their digital experiences, USAID is taking a forward-thinking approach that can drive positive change and reduce inequalities in line with the Sustainable Development Goals (SDGs).

Andrea Powell

The internet has brought both great opportunities and risks for children. On one hand, children now have more access than ever to knowledge, entertainment, and communities, empowering them in various ways. However, there are also troubling aspects of cyberspace, with the dark web being used for criminal activities.

In terms of digital diplomacy and internet laws, there is a call for coherence. The belief is that everything that is forbidden in real life should also be forbidden online, and everything guaranteed offline should also be guaranteed online. Efforts have been made to implement this belief, such as discussions on how to apply the UN Charter or Geneva Convention within a conflict.

Solutions to digital challenges should come from a cooperative effort involving all stakeholders. Governments, companies, civil society organizations, and researchers all have different responsibilities and prerogatives that can contribute to problem-solving in the digital sphere.

One pressing issue is the lack of attention and resources given to child protection online compared to other areas. The field of child protection online is weaker, with less funding and organization, especially in comparison to efforts against terrorist content.

Creating an environment where there is effective testing and sharing of solutions to digital issues, such as age verification, is crucial. Different approaches to age verification exist, each with different levels of privacy, efficiency, and centrality. Finding the right balance is important.

Image-based sexual violence is a growing global issue that disproportionately affects vulnerable groups. There are over 3,000 websites designed to host non-consensually shared intimate videos, and young people are increasingly exposed to this form of violence. Survivors often experience psychological distress, trauma, anxiety, and even suicidal thoughts. Shockingly, over 40 cases of child suicide as a result of image-based sexual violence have been uncovered.

There is a need for better knowledge and public awareness of image-based sexual violence. Most law enforcement agencies lack knowledge of the issue, and public misunderstandings perpetuate victim-shaming attitudes. Global regulation and policies need to be harmonized to tackle this issue effectively. Barriers to addressing the issue include the need to prove the intent of the abuser, and it is argued that online sexual violence should be classified as a serious crime.

Tech companies are also called upon to take more accountability and engage proactively. Currently, there are over 3,000 exploitative websites that could be de-indexed, and survivors are left to remove their own images, effectively cleaning up their own crime scenes. Tech companies should play a more active role in preventing and dealing with image-based sexual violence.

In order to support victims of image-based sexual violence, global standardization of support hotlines is necessary. The InHope Network provides a model of global hotline support for child online sexual abuse, and this approach could be expanded to address the needs of victims of image-based sexual violence.

In conclusion, while the internet provides numerous opportunities for children, it also poses risks that need to be addressed. There is a call for coherence in digital diplomacy and internet laws, solutions to challenges should involve a cooperative effort from all stakeholders, child protection online requires more attention and resources, image-based sexual violence is a pressing global issue that demands better knowledge and regulation, tech companies should be more accountable, and global standardization of support hotlines is crucial.

Henri Verdier

The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation, security and safety, and violence and gender issues. It reveals that a significant portion of online crime occurs on the dark web rather than social networks, with real-time videos of crimes for sale. To combat this, the analysis suggests increasing police presence, investment, and international cooperation. It also highlights the issue of internet fragmentation at the technical layer, which needs to be addressed. Additionally, there is a disparity in trust and safety investment by internet companies, with greater investment in larger markets and less in smaller ones, especially in Africa. The analysis argues for equalizing trust and safety investment. Market concentration is also opposed, with a call for a more balanced approach to internet companies. Contrary to popular belief, the analysis argues that innovation and regulation can coexist, with regulations sometimes driving innovation. Furthermore, the analysis emphasizes that security, safety, and innovation are not mutually exclusive, and solutions can be found by considering all three. The analysis also explores the interconnectedness of violence and gender issues, noting that social networks play a role in radicalization and that violence often targets gender and minority groups. Ignoring gender issues can lead to overlooking other interconnected issues. In conclusion, the analysis provides a comprehensive examination of various topics and offers valuable insights for addressing these complex issues.

Cailin Crockett

The analysis highlights unanimous agreement among the speakers on the importance of addressing gender-based violence, particularly online violence. They argue that all forms of gender-based violence stem from common root causes and risk factors, often driven by harmful social and gender norms. Furthermore, they emphasize that these crimes are majorly underreported.

The Biden-Harris administration strongly supports efforts to end all forms of gender-based violence. They have taken a comprehensive approach to tackle the issue, including setting up a White House Task Force dedicated to addressing online harassment and abuse. This demonstrates their commitment to promoting accountability, transparency, and survivor-centered approaches with a gender lens. The administration acknowledges that gender-based violence has ripple effects on communities, economies, and countries.

In combating online violence, the speakers underline the importance of prevention, survivor support, accountability for both platforms and individual perpetrators, and research. These pillars form the basis of the strategy against online violence. The task force comprises various government departments, such as USAID, the Justice Department, Health and Human Services, Homeland Security, and more. The Biden-Harris Administration has already outlined 60 actions that federal agencies have committed to taking to address online harassment and abuse.

The speakers note that the United States’ federalist nature leads to multiple approaches being taken across different states and territories to address abuse issues. This diversity reflects the unique challenges and needs of each region. Additionally, they assert the need to balance the interests of children with the rights of parents, as parents may not always be inherently able or willing to represent the best interests of their children.

Investing in prevention and adopting an evidence-informed approach are crucial in addressing gender-based violence. The administration recognizes the importance of maximizing options and support for survivors of abuse to effectively prevent and combat violence.

The CDC’s analysis, titled ‘Connecting the Dots’, aims to identify shared causes of violence across the lifespan. This research contributes to a better understanding of the various forms of interpersonal violence and helps inform prevention strategies.

Finally, the speakers call on civil society to demand government investment in tackling these issues. They emphasize the importance of allocating resources to effectively combat gender-based violence and online violence. This partnership between civil society and the government is crucial for making progressive changes and achieving the goal of ending all forms of violence.

Overall, the analysis emphasizes the urgent need to address gender-based violence, with particular emphasis on online violence. It acknowledges the comprehensive measures taken by the Biden-Harris administration and stresses the significance of prevention, survivor support, accountability, and research. The speakers’ insights shed light on the diverse approaches taken across the United States and highlight the importance of balancing the rights of children with the rights of parents. Investing in prevention and evidence-informed policy is considered essential, and the CDC’s efforts to identify shared causes of violence are valued. Lastly, civil society plays a vital role in advocating for government resources to effectively combat these issues.

Salomé Eggler

The extended summary of the analysis highlights the significant role played by GIZ in integrating child online safety into its projects. GIZ is committed to incorporating child online safety from the outset of its projects, ensuring that the protection of children in the digital space is a top priority. This proactive approach underscores GIZ’s commitment to safeguarding children’s rights and well-being.

Furthermore, GIZ takes a comprehensive approach to ensure child online safety is embedded in every aspect of its projects. By integrating safety requirements at every stage, GIZ creates genuine child online safety projects specifically designed to address the unique challenges and risks faced by children online. This holistic approach is crucial in effectively protecting children from online threats and promoting their digital well-being.

To aid in the implementation of child online safety, GIZ utilises user-friendly tools that do not require extensive expertise in child protection. The Digital Rights Check tool is one such example, helping to assess projects in terms of human rights considerations, including child online safety. This tool allows GIZ to evaluate the extent to which its projects uphold fundamental rights and make necessary adjustments to ensure the protection of children’s rights.

However, the analysis highlights the challenges faced in implementing child online safety. Various cross-cutting issues, such as gender, climate change, and disability and inclusion requirements, need to be balanced with child safety considerations. This requires GIZ practitioners to find a delicate balance between these competing priorities to ensure that child online safety is not compromised. Moreover, limited budgets and time constraints further complicate the implementation process.

Nevertheless, the analysis indicates that increasing digitalization projects present an opportunity to mainstream child online safety. As GIZ’s digital projects continue to expand, there is a chance to incorporate child online safety into more frameworks and tools. By leveraging the digital rights check and other appropriate measures, GIZ can ensure that child protection considerations are integrated into larger projects, leading to a safer online environment for children.

Overall, the sentiment towards GIZ’s efforts in integrating child online safety is positive. GIZ’s commitment to embedding child online safety into its projects and using tools to assess projects in terms of human rights, including child online safety, demonstrates a proactive approach towards protecting children’s rights in the digital age. However, the challenges associated with implementing child online safety, along with limited resources, highlight the need for ongoing commitment and collaboration to overcome these obstacles.

In conclusion, GIZ’s role in integrating child online safety is crucial. By prioritising child protection from the outset of projects, adopting a comprehensive approach, utilising user-friendly tools, and capitalising on digitalisation opportunities, GIZ demonstrates its commitment to creating a safer online environment for children. Continued efforts, collaboration, and resource allocation are essential to overcome challenges and ensure the effective implementation of child online safety measures.

Moderator

Omar Farouk, in collaboration with UNICEF and the UN Tech Envoy, is actively involved in Project Omna, aiming to tackle pressing digital issues such as cybersecurity, bullying, and privacy on a global scale. The project is focused on addressing the challenges faced by children in the digital space and ensuring their safety.

The importance of balancing child safety and economic growth in the digital realm is a key aspect of the discussion. It is evident that as the world becomes increasingly interconnected, it is crucial to protect children from the potential harms that exist online while fostering an environment that promotes economic growth and innovation.

One of the primary arguments put forward is the need for strong partnerships between government, businesses, and civil society to effectively address child safety in the digital space. Collaborative efforts among these stakeholders are crucial in developing strategies and implementing measures that protect children from online threats. By working together, they can leverage their respective expertise and resources to create a safer digital environment for children.

The summary highlights the related topics of child safety online, government-business partnerships, civil society, and the digital space. It is evident that these topics are intertwined and interconnected. Effective child protection in the digital space requires cooperation and collaboration among all these stakeholders.

Furthermore, the discussion emphasizes the role of partnerships in achieving one of the Sustainable Development Goals (SDG 17: Partnerships for the Goals). This demonstrates the global recognition of the importance of collaboration in addressing complex challenges like child safety online.

The summary does not mention any specific supporting facts or evidence. However, the involvement of UNICEF and the UN Tech Envoy in Project Omna provides a strong indication of the credibility and importance of the initiative. Additionally, the fact that the summary mentions the need for partnerships suggests that there is evidence supporting the argument for such collaborations.

In conclusion, the expanded summary highlights Omar Farouk’s involvement in Project Omna, undertaken in partnership with UNICEF and the UN Tech Envoy, to address critical digital issues. The discussion emphasizes the necessity of balancing child safety and economic growth in the digital space and calls for strong partnerships between government, businesses, and civil society. By working together, these stakeholders can effectively tackle the challenges faced by children online and create a safer digital environment for all.

Julie Inman Grant

The issue of online safety for children is a significant concern that requires attention. Children make up one-third of global internet users, and they are considered more vulnerable online. The sentiment towards this issue is mainly negative, with arguments emphasising the need for safety measures and awareness to protect children.

One argument highlights that the internet was not designed for children, and thus, their safety should be considered. This emphasises the negative sentiment regarding the lack of adequate safeguards for children online. The related Sustainable Development Goal (SDG) is 3.2, which aims to end preventable deaths of newborns and children.

Another argument focuses on the long-term impacts of children becoming victims of online abuse. Victims of child abuse are more likely to experience sexual assault, domestic violence, mental health issues, and even become offenders themselves. This negative sentiment highlights the serious societal costs associated with online abuse of children. The related SDGs are 3.4, which promotes mental health and well-being, and 5.2, which aims to eliminate all forms of violence against women and girls.

Education and awareness are seen as crucial factors in addressing online safety for children. The positive sentiment is observed in the argument that prioritising education and awareness regarding internet safety is essential. Programmes and initiatives aimed at parents and young people demonstrate the commitment to promoting safety. The related SDG is 4.7, which focuses on education for sustainable development and global citizenship.

The inadequacy of age verification on online platforms is highlighted, with a negative sentiment towards platform responsibility. The argument is that platforms need to improve age verification, as even eight and nine-year-olds are reporting cyberbullying. It is emphasised that young children lack the cognitive ability to handle risks on such platforms. The related SDG is 16.2, which aims to end abuse, exploitation, trafficking, and violence against children.

The importance of developing technology with human safety, particularly children, as a core consideration is emphasised. The positive sentiment is expressed in the argument that the welfare of children should be considered from the beginning of technology development. Anticipating and mitigating risks is crucial to ensure their safety. The related SDGs are 9.5, which promotes enhancing scientific research and technological capabilities, and 16.2, which aims to end abuse, exploitation, trafficking, and violence against children.

The effectiveness of self-regulation in dealing with cyberbullying and image-based abuse is questioned, expressing a negative sentiment. It is argued that self-regulation is no longer effective, with evidence suggesting a 90% success rate in removing cyberbullying content and image-based abuse. The related SDG is 16, which focuses on peace, justice, and strong institutions.

Cooperation between regulatory bodies and industry is advocated as necessary for prevention, protection, and proactive and systemic change. The positive sentiment is observed in the argument that such cooperation is essential to effectively address the issue. Initiatives and networks have already been established to work together in removing abusive content. The related SDG is 17, which emphasises partnerships for achieving goals.

It is noted that there is no need to start from scratch when building regulatory models for online safety, expressing a positive sentiment. The argument is that localized materials have been developed in multiple languages to ensure wider accessibility, and sharing experiences, including mistakes, can help prevent future harm. The related SDG is 16, which focuses on peace, justice, and strong institutions.

Lastly, it is argued that online safety must be a collective responsibility, reflecting a positive sentiment. The argument emphasizes that no one will be safe until everyone is safe. This highlights the importance of individuals, communities, and organizations working together to ensure online safety for all. The related SDG is 16, which focuses on peace, justice, and strong institutions.

In conclusion, the importance of online safety for children is a pressing issue. The negative sentiment arises from concerns over their vulnerability and the long-term impacts of online abuse. Education and awareness, improved age verification, technology development with child safety in mind, and cooperation between regulatory bodies and industry are crucial for prevention and protection. The limitations of self-regulation are observed, and the need for collective responsibility is emphasized. Addressing these issues is vital to ensure a safer online environment for children.

Audience

During the discussion on the protection of children’s rights, several key points were raised by the speakers. One speaker emphasised the need to draw practical measures to prioritise child rights. This is particularly important in addressing issues such as abuse, exploitation, trafficking, and violence, which are central to SDG 16.2. The speaker highlighted their work at the Elena Institute, a child rights organisation, and their involvement in the Brazilian Coalition to End Violence.

Another speaker emphasised the importance of laws and design in avoiding fragmentation and effectively implementing new ideas. This is crucial in the context of child rights, as effective implementation requires a holistic approach. The speaker did not provide any specific supporting facts for their argument, but the need for coordination and coherence in policy and legislation is broadly recognised in this field.

The discussion also touched upon the need for better cybersecurity strategies and laws to protect online users, especially in African countries. The speaker highlighted the progress made by Ghana in this regard and stressed the importance of addressing cybersecurity in the context of digital inclusion and progress. They suggested gathering best practices and suggestions at both the national level and civil society level to combat issues such as cyberbullying.

There were also concerns expressed about balancing parental supervision tools with a child’s right to information and seeking help. The speakers pointed out the high rates of online abuse in Brazil, and the potential risks of violence coming from within the family, highlighting the need for caution with supervision tools.

The debate over prevention measures, such as sexual education, in conservative countries was mentioned as well. The discussion highlighted the challenges faced in advocating for such strategies, as they can be seen as taboo in conservative countries. The importance of finding practical approaches to deal with child abuse and exploitation, while considering cultural and social contexts, was emphasised.

In conclusion, the discussion emphasised the importance of practical approaches in safeguarding children’s rights. It called for the development of effective strategies and laws to address issues such as abuse, exploitation, and violence in both physical and online contexts. It highlighted the need for coordination, coherence, and best practices at multiple levels, including national and civil society. The debate also shed light on the challenges of balancing parental supervision tools with a child’s right to information and the difficulties in advocating for prevention strategies in conservative countries. Overall, the discussion underscored the need for comprehensive and contextually sensitive approaches to protect and promote children’s rights.

Ananya Singh

The USAID Digital Youth Council plays a crucial role in involving youth in digital development. The council has been created by USAID to ensure that the voices of young people are incorporated in the implementation of their digital strategy. They provide a platform for youth to have their voices heard and influence the development strategies. This initiative is aligned with SDG 4: Quality Education and SDG 8: Decent Work and Economic Growth.

The speaker, who is part of the USAID Digital Youth Council, actively works towards providing the platform for youth to have their voices heard and influence development strategies. This highlights the importance of giving young people a voice in shaping digital development. The sentiment is positive towards this argument, as it recognises the need for youth to have a platform to be heard.

Furthermore, the council has been instrumental in guiding the implementation of the USAID’s digital strategy and raising awareness about digital harms. They have co-created sessions on emerging technologies, which indicates their active involvement in shaping the digital landscape. This is in line with SDG 9: Industry, Innovation, and Infrastructure, and SDG 17: Partnerships for the Goals.

Moreover, the council members have designed apps to educate young people about digital harms, showcasing their creativity and commitment to addressing challenges in the digital world. This demonstrates the council’s dedication to empowering young people and equipping them with the necessary knowledge to navigate the digital space safely.

Involving youth in decision-making processes has been found beneficial, and the Digital Youth Council exemplifies this. Ananya Singh, a member of the council, was allowed to share the stage with USAID administrator and U.S. Congress representatives, indicating the recognition and importance given to the council’s involvement. Additionally, young council members were involved in planning and speaking at multiple sessions of the USAID at the Global Digital Development Forum, further highlighting their active participation in decision-making processes. This aligns with SDG 16: Peace, Justice, and Strong Institutions.

Overall, the Digital Youth Council’s work has been a success story in empowering youth and promoting digital engagement. By providing a platform for young people’s voices to be heard, guiding the implementation of digital strategies, raising awareness about digital harms, and actively participating in decision-making processes, the council is contributing to the advancement of SDGs and ensuring that youth are active and equal partners in digital development.

Session transcript

Marija Manojlovic:
Welcome, everybody. Welcome to people in the room around this huge roundtable at the end of the day. My name is Maria Manojlovic, and I’m director of SAFE Online. This event is called SAFE Digital Futures, Aligning the Global Agendas. I want to welcome participants as well, and if you’re joining us online, please, we have an online moderator, my colleague, Natalie Shroop, so please drop in the chat quickly where you’re joining us from and feel free to drop in the questions throughout the session. We will be monitoring the chat and making sure that we can respond to your questions. As I said, my name is Maria. I lead the work on SAFE Online as part of the End Violence Global Partnership. We are the only global fund focused on the safety of children issues online. We fund system strengthening across different sectors. We fund research and data, and we fund technology tools that are looking into tackling harms and risks to children in digital environments. So far, we have invested around $100 million in over 100 projects, making impact in over 85 countries. So this work, we interact with a wide variety of players and stakeholders from various sectors and fields of engagement. We interact with governments, private sector, and industry. We work with child protection organizations, with civil society organizations, as well as industry and academia. And through that engagement, we have realized one thing, and this is the reason why we have organized this session today on the alignment of various digital agendas, which is that there is alarming level of fragmentation in this ecosystem, which is truly hampering progress in many aspects. But in particular, when it comes to safety of children, children are too often left aside and not even considered in the discussion of digital governance and development. So some of the common reactions when you work in this field are the following. We have had literally people turn their backs on us when we mentioned children in our interactions in relation to online issues. They would say, well, we work on infrastructure or protocols, or we work on connectivity or access, but we really don’t deal with kids. Our engagement is focused only on women and girls. Sorry, we can’t really speak about kids more broadly. Or we just work on human rights more broadly, but kids are not really part of that. Or we really just care about education. Education is really critical in access, but safety is not that critical for what we do. But somehow placing children and safety in the overall global agenda on digital development and human rights has been particularly hard. Then there is the famous privacy and safety dichotomy, the tension between how do you assure privacy of users at the same time ensuring that users can. And not only you, I mean, I actually hate the term users. It’s humans and people. Like users, it’s not some other category of creatures roaming around. So when we think about privacy and safety, we need to be thinking more broadly about how do they interact at the level of humans. But when you speak about prevention and response to online child sexual exploitation and abuse, that’s even more harsh distinction. So you will find, if you find yourself at the end of the spectrum who cares about online safety, you will end up being accused of various things. Like the latest really fancy thing that we’ve been accused of is that we are trying to end privacy online, which sounds really cool. But it’s just, it’s unbelievable. So we believe that this dichotomy is really false. And we believe that more nuanced conversations are needed. We believe that we should not be forced into choosing which one matters more. And when we know that we can and should have both. So as we were discussing this yesterday with Maria Ressa and Justina Arden was saying, you can and should have both. And this should not be a matter of choice that we should be making. And sometimes having much more deep and upstream discussions is going to be needed for us to be making some nuanced and meaningful contributions in that regard. So now that I can vent and complain a lot, let me be more positive. What are the causes of this misalignment? And I believe there are a few things that we can think about. But in order to advance the state of the internet, which is beneficial for humans, and in order to maximize the benefits of digital technologies, we have to invest efforts to understand where these misalignments come from and how we can overcome them so that we are in fact more aligned and more impactful. That is why I believe that the most important, this is the most important discussion that we can have. And in order to do that, I want to ask you to do a couple of things. First one is let’s move more upstream. Instead of focusing on manifestations of the issues in these various fields, like technology facilitated at GBV or gender-based violence, lack of connectivity and access, cyber crime, and so on, to actually upstream design infrastructure and policy choices that enable these things. We are repeatedly seeing that the driving forces and engagement techniques behind radicalization, violent extremism, political extremism, misogyny, child sexual exploitation and abuse are very similar, both from societal and norms perspective, as well as from the technological point of view, in terms of how the design choices of digital platforms are enabling these phenomena and how not only they’re enabling them, making them worse and exacerbating them. Second thing that I want to ask you to do today is share learnings and failings openly, not only what has worked and succeeded, but what has not in your previous engagements, so we can do better. You will hear a lot from our speakers today about that, but also speak about solutions and approaches that work across the landscape. How to engage with governments, how do you create political will, what will it take to do that, how do you engage with industry and create incentive for more action, accountability and transparency will be critical. So today we have brought a lot of speakers, eight speakers and experts from various fields to help us frame this discussion. We will not make them speak all at once, so don’t be scared. I will introduce them throughout the session, but we want to make the session as interactive as possible, so we’re going to split the session in three segments. We’re going to have speakers introduce for five minutes their catalyzing, igniting remarks, and then we will open the floor for discussion. We have asked people, and again asking people to please come to the table if you want to join us at the roundtable, but also people online, please drop in your questions in the chat and we’re going to be making sure that you can participate. For all of those online, yes, please get engaged. And finally, there’s a huge diversity of perspectives and expertise in the room, so please be respectful when you speak, and this is a safe space for people to express their opinions. For people who are new to this field of online child sexual exploitation and abuse, there may be some sensitive things said here and some triggering facts, so we’re just giving some warning to you. Take care of yourself. If people need to step out, please step out at some point. And then again, be mindful that we all want to speak, so please keep your remarks concise and focused. And with that, let’s dive in. So, the first segment we’re going to talk about today, we have labeled cybersecurity and online safety, but again, these are such fluid agendas, and you will see how we are going to try to unpack all of that. I will kick off with my question to Ambassador for Digital Affairs, Andrie Verdier from France, and what we want to do is see how various agendas around cybersecurity and online safety interact around issues of child online safety. So, Andrie, in many ways, we spoke this morning as well, but you basically sit at the intersection of this issue that you want to discuss today. You are someone who has worked in private sector, public sector. You have worked in academia. You have worked on digital commons. You have worked on counterterrorism. You have been one of the instrumental people leading on the Christchurch call from French side, but also on the Paris call for cybersecurity. And most recently, and that’s how we started interacting, you’ve been leading the work on child online protection lab. So, as somebody who is wearing literally like 15 hats, can you tell us a little bit more about, from the global perspective, but also French perspective, how do you see all of these issues aligning, and what have you learned throughout these engagements, and what are the opportunities and challenges around these potential efforts to make these things more aligned? Over to you.

Andrea Powell:
Wow, in five minutes. Thank you very much for the invitation and for the opportunity to exchange with such a panel. Yes, as you said, we try, like our friends of the US, for example, to build a global and coherent digital diplomacy, because everything is interconnected and you can start from cybersecurity or education or else, at the end of the day, you have to to be coherent. And since I’m the first speaker, probably you, all of you will say the same, but let’s recognize that internet is something great, even for children, that they have access to more knowledge, more entertainment, more communities, more empowerment than ever, and that something is disbalanced now. So first, we have some troubles with the cyberspace itself, with the dark web, which is a very efficient tool for criminal activities. We did commoditize a lot of things, like payment or taking a room, which is very efficient for a lot of businesses, even for crime business. We have big companies that are very, very big, monopolistic, and why not, but sometimes they have unexpected negative externalities from their business model. And for example, we can observe filter bubble or echo chambers or radicalization. And regarding all of this, we have to find solutions that does respect the promises of internet. That’s the first point. And for this, yes, I was thinking this morning during another panel, 30 years ago, John Perry Barlow did write the Declaration of Independence of Cyberspace, because at this time, we could consider cyberspace as something external to the society. There were a place somewhere named cyberspace. Today, we could say cyberspace did eat the world. So it did contaminate and transform everything. And so to start to answer to your question, we have two principles for diplomacy. First, everything that is forbidden in our life should be forbidden online. And everything that is guaranteed in our life should be guaranteed online. So freedom of speech. So we have to forbid what is forbidden and to protect what should be protected. That seems very simple, but we all remember how difficult it was to implement. For example, when I go to New York to discuss in the UN about international law in the cyberspace, we are speaking about few very simple laws, like Geneva Convention. But we did discuss during 25 years to be sure that we do understand in the same way how we will apply the UN Charter or the Geneva Convention within a conflict, which is just one topic. So this idea that seems simple is not so simple to implement. But the second thing is that we, government, we didn’t build this system. We don’t understand how it does work. I’m an entrepreneur. I did create three internet companies, small, but I understand, but I don’t build it. I did never seen the algorithms themselves. I don’t access to the source code. So the companies and of course, civil society and researchers, but the companies has to be part of the solution. So we need not even a multi-stakeholder approach, but an efficient multi-stakeholder approach, which cannot just be a room where we discuss politely. We need to put the pressure, we need to ask for results. We need to, everyone in the room has responsibilities and prerogatives, and sometimes a business model or mandates, but we don’t have any other way. We need to be sure that we will find the solution all together and that the companies will contribute to find the solutions. Here I’m speaking generally about terrorism, harassment, gender balance, and child protection. If we go to child protection, what I did learn in this journey is that this is a very difficult topic, maybe one of the most difficult. First, as you say, this is very difficult to engage the conversation on those issues. People don’t want to recognize that, I don’t know, in France, for example, 25% of children less than 10 years did accede to pornographic content. That’s a big problem and we know that 20% of the adults did, were a victim of some kind of sexual offense in their life. So that’s one person out of five. So people don’t really want to recognize this because it would conduce them to change a lot of things and we could recognize that there is much less money in this field than, for example, in the fight against terrorist content. Regarding terrorist content, you have strong organization, you have a lot of technologies, you have a lot of monies. Or let’s mention this, if you want to, all of us, we could try, if you try to publish a small part of a Hollywood movie online, in 10 minutes it will be removed because Hollywood did finance solutions to detect this and to intervene very fast, very quickly. So this is a weaker field with less money and, I’m too long? Okay, I finished, with a wide range of issues and that’s the second thing, because everyone agreed to protect children, but here we can speak about strong and heavy criminality like human trafficking or whatever you can imagine, child abuse, but you can go to harassment, online harassment, so something lawful but harmful. But you can even speak about the consequences of some algorithms regarding the way you observe your body, for example, and the connection between some over-representation of some pictures and anorexia, and we should pay attention to this. So this is a wide range of topics, very impressive, not the same level of heaviness, if I may. So I will conclude and we’ll continue, but you did ask for a project, something more positive. As you know very well, we are trying to launch the Child Online Protection Lab. The idea here is to build evidences all together, in a cooperation spirit between companies, civil society organizations, research and governments, because I feel that one part of the issue, this is a very ideological conversation, everyone we should make this or this, and no one tests, no one experiments, no one shares the results. So for example, and I finish with this, if we just speak about age verification, so which should be normal, you should be able to test the age of someone pretending to go to a pornographic website, but you have dozens of approaches and some of them are better for privacy, others are more efficient, others are centralized, decentralized, etc. So we need to see the details and to test and to implement and to share the results. That’s one approach France will encourage deeply during the next years. Thank you. Thank you, Henri, and I like really

Marija Manojlovic:
the focus that you put on evidence and data that can really help us bridge these debates, but also bring back home the actual work on solutions, not only at the level of principles, and I think that’s something that we can also jointly think about as one of the ecosystem pushes to be more evidence-focused and more data-informed in our discussions and experiment more cross-sectorally as well. So thank you for that. So moving on from this kind of a very global and an interesting initial intervention, I would like to move to Dr. Albert from Ghana. Dr. Albert is the Secretary General of the Cybersecurity Authority of Ghana, and Ghana is really unique in many ways, but one of the ways that we really are particularly interested in is that it is a unique example in the world where issues of child protection have been streamlined fully into the work on cyber security at the national level. You are director of the Cybersecurity Authority and you have been at the center of those developments. Can you tell me a little bit more about what were the key factors leading to this outcome? The fact that you’ve managed to institutionalize child protection as part of the cybersecurity work, what was the political will, the ripeness of the issue, public attention, institutional setting, legislation, what were kind of the key driving forces behind that? In five minutes. Thank you.

Albert Antwi Boasiako:
Oh, certainly. Thank you. First of all, a pleasant afternoon to my colleagues here, participants, but also our colleagues who have joined virtually. Maria, I want to thank you on behalf of my government for the invitation, but not just that. The support your institution and violence against children has rendered to us over the last few years. You’re right. I think I’ve been around for a while. For the past six years, I’ve been leading Ghana’s cybersecurity development as a national advisor and then as director general of the agency responsible for cybersecurity development. You are right. A number of competing factors there. There’s a national security interest. Of course, the issue of terrorism, cyber terrorism comes up. There’s a private sector interest issue of protecting critical information infrastructure, the intelligence aspect of cyber, the civil society, academic part, but you can’t take away the critical concerns around children. I think at a national level, one always expects to have a 360 degree about some of this development. I think we’ve achieved some successes when we started this process. Ghana’s cybersecurity readiness according to the IT ranking was around 32.6%. That was the middle of 2017. At the end of 2020, Ghana’s rating jumped to 86.6, basically university cycle from F to A. A number of things have been done. Permit me to highlight some of this. I think one approach we also adopted, of course, the political commitment is key. I keep on telling my colleagues, I’m lucky to be running Ghana’s cybersecurity because I have the support of my government. My minister runs when she’s presented with a sound policy or personal matter. We don’t delay. Of course, my government is the president is committed to that. I think within the past six years, it’s been quite a sighted journey, notwithstanding the challenges, including financial challenges. We had a unique approach in terms of based on this approach on data. Research is key. We had to conduct research for this process to reference. We work with UNICEF in 2017 to look at opportunities, but also risks of children and interesting dynamics. On one hand, which you all know, there’s a trend in which children are using internet and devices. Consequently, we established that four out of 10 children had also had contact within a part with sexual content. On one hand, you have a positive development with respect to opportunities for children, even on seven underserved communities using the internet. On the other hand, you also have this disturbing trend in which they are always coming into contact with content that certainly had a potential to impact on their well-being. We also had to do a research with World Bank support, with Oxford University, the cyber security capacity maturity model, which also highlighted the gaps around the protection of children. This research led to a number of interventions. The first one was legislation, we’re going to pass a cyber security act that incorporates child online protection as a whole division within my authority, but also the law criminalizes certain sexual offenses. We’re lucky to tackle what has become the sextortion quickly within our law. It has had a lot of positive impacts afterwards. Of course, awareness creation also was put into legislation to make it mandatory for the states to lead that process. That is one aspect of the institutionalization of child online protection, but we had to also look at policy aspect of things. We developed a child online protection framework, which incorporated a number of best practices, including the WeProtect framework, but also ITU guidelines that we provided. They’re pretty important. As part of the institutionalization, I’ve mentioned my agency has the division for child online protection headed by a director, a very senior person. It’s not just that. Through the work we did with UNICEF and support from my agency, we established a child online protection forensic lab, which was the first one in the south region. I was to help investigative bodies in terms of forensic evidence to support the work because deterrence is key. Criminal justice response is also one of the areas that I need to look at it as a national response mechanism. Most importantly, and this is where I think I draw a lot of inspiration from, the institutional arrangement. Certainly, my experience, somebody needs to lead. You need a champion, but you need to carry people along. Different agencies, the gender ministry has a responsibility, education ministry, civil society, academia, the telecommunication service providers. We needed to bring all these actors together. I think anybody who has visited us has seen we’ve achieved a lot of success. There’s a consensus on the table on a way forward to be able to address child online. The last two areas that we’ve also achieved success is incorporating awareness creation around those risk areas and children into our national program. Ghana launched what we call a safer digital campaign. We came out with four pillars, government, business, public, but also children. Specifically, dedication. This has been institutionalized. You don’t treat awareness creation around the risks that children are facing as a sub-team. No. I think that’s one area we’ve achieved a lot of success is going through the schools across the whole country in terms of raising awareness with the collaboration with UNICEF. The last one is also reporting. We need to empower the public with the children to report. Ghana, when you call 292, it’s free. You can call that on a smart device or any other device and you can report incidents. We’ve become lucky. This is just to conclude. Initially, when we set this national hotline, we thought it was going to receive only incidents. In other words, the citizens, children are going to report only after they have been more affected. No, it has changed. It’s becoming more like a tool for them to seek guidance. When they make a call, somebody says, send me your note, or click on this link, they’re able to call. We advise and encourage them, please call 292 free. Don’t pay anything, 24 hours, and just at least conduct some minimum due diligence. I think that has been personally, as a public servant, the most important deliverable, a service good for the public. I really want to recommend that we look at those options as a best practice. Of course, there has been challenges. I wish we can speed up in terms of awareness creation. Ghana is big, 32 million. I don’t think I’ve been able to achieve my awareness creation mandate even 20%. I feel very uncomfortable. There’s a huge gap. I think we need to scale up our efforts, and the needs are there. Thank you.

Marija Manojlovic:
Thank you so much, Albert. It was really good. I do want to say this morning, when we talked about working Ghana, one thing that really struck me was how you eloquently described, well, actually, your strategic intent behind immediately legislating, because you wanted to remove this uncertainty around, there is political will now, but there may not be next political cycle. How do you immediately ensure that you institutionalize this thing, and also create incentives for ecosystem ownership? Not only that it’s you who has political will that leads on these things, but make everybody take the bit of responsibility and accountability over it, and create that ecosystem responsibility be shared. Thank you for that. Julie, I want to get back to you now. Your work is globally known. You are the first governmental regulator and independent agency focused on online safety. You’ve done tremendous work, both for Australians, but also global population and across. We use your resources all the time, and they’re really always the highest quality possible. eSafety is a regulator, but also it’s an agency that works on prevention of various forms of crimes. You have wide range of powers and functions that you try to apply really comprehensively. What is interesting about your agency is that many people don’t know that it started as only being focused on children. It’s kind of went from children to become everything else. It’s really been great to have you here to give us kind of a sense of how, because it seems it was centered around kids, how do you see kids’ issues now being embedded in this broader risks and harms ecosystem, and what are some challenges and opportunities for us to make that, as you did, a very big and part of a joint up effort? Over to you. That is a great and very hard question to encapsulate

Julie Inman Grant:
in five minutes, but really it was actually a political decision that it would be focused on children initially. There was a well-known media personality who is open about her mental health struggles. She had a nervous breakdown. She was very active on Twitter. I was interviewing for a role with Twitter to start their trust and safety and public policy roles across Southeast Asia, Australia, and New Zealand. She tragically ended up taking her life. It became known as the Twitter suicide, and a petition started to government that just said, government, you need to step in and do something. This was in 2014. Because of concerns about freedom of expression, the ICT minister at the time, Malcolm Turnbull, who became the prime minister, said, we’re going to start small with children’s e-safety, because nobody can argue that children aren’t more vulnerable than adults. We took a bunch of functions from across the government, put it into the Children’s e-safety commissioner, and that included, we’re the hotline for Australia, child sexual abuse material, taking reports on terrorist content, but also set up the world’s first youth cyberbullying scheme, where we serve as a safety net when the platforms fail or they miss cultural content, and the seriously harassing, intimidating, humiliating content targeting children doesn’t come down. When I took the role in January 2017, I was asked to set up the revenge porn portal. I said, no, I’m not going to call it revenge porn. Let’s call it what it is, image-based abuse for everyone. That’s how that started, but I think it’s really important to know that we take a vulnerability lens to everything we do, and nobody can argue, again, that children aren’t the most vulnerable cohort online, because the internet clearly was not made for children, although children make up one-third of the global internet users. And young people today don’t differentiate between their online and offline lives. It is their playground. It is their school room. It is their friendship circle. All that said, we had a very stunning national research, the Australian Child Maltreatment Study, that found that a stunning 28.5% of Australians have experienced sexual abuse by the time they’re 18. That’s more than one in four. Beyond thinking about it as an online issue or an individual issue, which is why we take down content, because it’s retraumatizing, but the comorbidities that exist that follow a child throughout their entire life, they’re more likely to experience sexual assault later in life, to be in family and domestic violence situations, to have drug and alcohol dependencies, to have serious mental health and suicidal ideation, and also to become sex offenders themselves. So we need to think about this in terms of the long-term societal costs as well. And did you know that our Canadian counterparts found that 20% of survivors are recognized on the street for the child sexual abuse series that they’ve been seen in? So you can imagine how traumatizing that is. So when we have that debate about adults’ privacy versus a child’s dignity or a child’s right to be free from online violence, I think, what about a child’s right to privacy when they’re being tortured and abused? We need to really rethink about how we rebalance this. So what have we done in just three broad areas to address this? We have these complaint schemes where we’re taking trends analysis all the time. So we know that kids are actually coming to us younger reporting youth-based cyber bullying because when kids on TikTok and Instagram and Snap at eight or nine, so now we’re getting reports of cyber bullying of kids. And this goes back to Henri’s comment about age verification. We need the platforms doing a better job. Like eight and nine-year-olds have no business being on these platforms. They don’t have the cognitive ability to be able to address this. So we do the fundamental research. We’ve got the programs. We know that 94% of Australian children have access to a digital device by the time they’re four years old. So parents need to be the front lines of defense. We’ve got a program for parents of under fives to be safe, to be kind, to make good choices, and to ask for help. And then when they get into the primary years, it’s about the four hours of the digital age, respect, responsibility, digital resilience, and critical reasoning skills. We have youth advisory committees so that we can hear from young people about what is going to work for them. So we have them running our scroll campaign. So it’s authentic and it’s resonating. We have them writing letters to big tech saying, this is what we want from you. We want you to take abuse seriously. We want you to take action. We are your future customers, users, humans. But then we also have systemic and process powers where we’re compelling more transparency from the major platforms on what they’re actually doing to address child sexual exploitation and sexual extortion and harmful algorithms. And next week we’ll have a major announcement and enforcement action. We’ll be holding five more companies to account in this area. So the more that we can shine light on what is and isn’t happening, the more we can push safety standards up. And that goes to the whole idea of safety by design as well. Again, we can’t have safety be an afterthought. The welfare of children to be an afterthought. We really need to revolutionize the way that technology is developed with humans and safety at the core. Again, not after the damage has been done. We need to get ahead of technology changes so that we’re anticipating the risks. We’re never going to get a hold of generative AI if we’re not focusing the scrutiny on how the data is chosen and it’s trained. And if we wait until it’s extricated out into the wild, we’re going to be playing a huge game of whack-a-mole or whack-a-troll, as I like to say. There we go.

Marija Manojlovic:
Thank you, Julianne. And thank you for always grounding us back into the research and data that you collect and how you always try to think in terms of long-term engagement. Because engaging with kids as young as zero to five, we are building future for healthier engagements later on. And from the prevention lens, that’s really, really critical for prevention. Because we are seeing that, again, perpetration is also starting to be done earlier and earlier. And we keep on engaging with just a certain group of kids, which is like adolescents, but no engagement with younger generations. So thank you for that. I know that you and Dr. Albert will need to leave at some point. So I’m just going to give that heads up to people. But with that, I’m going to give the floor for anybody who wants to ask any questions at this point in time after the first round of interventions. If there are any questions or comments, please now raise your hand and we can pass you on the mic. Or if there is anything online that is coming in, do you want to… So is there anybody in the room who has… Oh, there it is. There is one. I think you can use the mic over there. Yes, thank you.

Audience:
Hello, I’m Ana from Brazil. I work in the Elena Institute. It is a child rights organization. And we are part of the coordination team of the Brazilian Coalition to End Violence. And I would like to hear your thoughts based on INSPIRE. How can we draw some measures and practical measures to think about the priorities in this area? Is the law, is the design, or how can we think about standards to avoid the fragmentation and to implement all these new ideas that you were talking about?

Marija Manojlovic:
I’m looking at Julie, but anybody can pick up the mic, please.

Julie Inman Grant:
I think we all will probably agree that self-regulation is no longer enough. And this sounds strange coming from a regulator. I don’t think purely regulation is going to be enough either. And that’s why we have this 3P model with prevention, protection, and what I call proactive and systemic change. And that does mean working cooperatively with the industry to achieve outcomes. We have a 90% success rate in terms of getting cyberbullying content and image-based abuse down because we work informally and cooperatively with the networks. And that is the way we get that content taken down more quickly. To sort of solve this issue as more governments are thinking about how they set up either their own independent regulatory authorities or how they start small if they don’t have the political will, we’ve started the Global Online Safety Regulators Network. We now have six members of the network. I’m going to be calling you Dr. Albert soon. But we also have observers who don’t yet have independent regulators who can learn from these models. But please go to esafety.gov.au. We have a strategy. We’re trying to do as much capability and capacity building as we can. We were the only ones for a while doing it, trying to write the playbook as we go along. And we’ve made a lot of mistakes. We’re happy to share those as well. But I don’t feel like anybody needs to start from scratch. Even if it means we’ve localized a lot of our materials into multiple languages, take it, use it, localize it in a way that works for you. We’ve got to be in this together. None of us are safe until all of us are safe.

Marija Manojlovic:
Thank you so much, Julie. And Dr. Albert, do you want to? Just a quick one. I felt the sentiment of my Brazilian colleague, especially when she used the word fragmentation. That’s the reality. She’s speaking from a context. And I think I saw this when I was first appointed.

Albert Antwi Boasiako:
Again, it is a problem because you see, this is what I call ad hoc. Ad hoc happens even in the non-governmental space. Ad hoc activities are happening in government testing. I think that recognition is key for effective response. So institutionalization means essentially you are taking some systematic measures. What Ghana did, and I keep on my own experience from the development context, developmental context. Again, there’s a lot of things from our context. You are now starting to put the necessary structures in place. Champion is key. You need to have someone who drive to bring all this. In Ghana, I identify among the civil society institutions. I identify one of them which was quite active, very respected. And I use. So we did CSA and A institution. And we mobilized others around them. In government, we took that, that we had to carry gender, ministry, children, education along. That was deliberate, intentional. Other countries haven’t succeeded. I will mention, it’s a struggle. And the power concentration, and I’m saying those from the developmental context, they are real. And without being conscious and identify what I call champions in all sectors, it’s likely to be a little bit problematic. You may still have the law. And I think some of my Western colleagues keep on sometimes you are surprised. But you have this law. It’s been there, but nothing is working. In my context, the law is good, but frankly, getting people even to sit at the table can be a challenge. And that is why really I felt like in terms of the Brazilian situation, picking champions are wrong. Of course, the child online protection ecosystem is a collection of different players. And I think the first step is to look at those who are quite active. They will drive them. They are respected within the ecosystem. And able to mobilize ideas along there. Thank you.

Marija Manojlovic:
Thank you, Dr. Albert. And just one more note for Brazil. I know that you’re a member of the We Protect Global Alliance as well. Thinking about the model national response is a framework that one of the frameworks that you can use to start thinking and charting different areas of engagement and how that needs to be happening. But again, we’ll be happy to chat with you afterwards as well. I will excuse Dr. Albert and Julie who need to go to the next session. Yes, I know. Can I make a brief comment? Ambassador, sorry. Thank you so much. My name is… Sorry, I make a brief comment. I want to make this in front of you.

Henri Verdier:
So first, let’s recall that a large part of the issues we are speaking about is not on social networks. In the dark web, for example, if you want to buy a real-time video of a rape of a baby because it does occur, it’s not on business companies. It’s on the dark web. And here we need more policemen, more investment, more international cooperation. But this is not about company regulation. This is about fighting crime. Regarding company regulation, I understand the fact that it would be better to have a world with one common rules. But this is not what I call fragmentation. The fragmentation of internet is a fragmentation of the technical deep layer. We have to fight this. But we are democracies. We have the right to have proper rules. Or we are not democracies anymore. And we are not here to build a unique market for five companies. And I want to say there is another fragmentation, and that’s very important. That’s a fragmentation of the investment regarding trust and safety. Because most of those companies, and we can understand why they do this. They do invest regarding their sales. So they do invest a lot on big markets and much less on small markets. And of course, especially in Africa, for example, they don’t invest a lot. And we should ask them, we could do this in this framework of the UN, to equalize a bit the investment. And to take a small part of the investment in Europe or US to invest this in Africa, for example, or Brazil, why not?

Marija Manojlovic:
Thank you. Thank you.

Audience:
Maybe first in the room, and then we can do the online. But also, we will need to move to the next segment as well. But go ahead. Okay, thank you. My name is Peter King. I’m from Liberia. I would like to thank the NCH boss from Ghana. The question is an open question, but I would like for him to help in terms of suggestions that can be of best practice for other countries like Liberia, who are struggling to have cyber security strategy laws to protect online. What are some of the suggestions he can offer to African countries that are not at the level of how Ghana is moving with certain issues of tools that are used to create awareness on cyber security issues? The reason is because we look at inclusion, and then the level of progress, I’m thinking about uniformity. I just want him and other panel members who can share some of these best practices or advice on how to actually streamline online protection and cyber bullying in our African context. The European context may be different. Maybe four years in Europe, he has an idea of how to use a mobile phone. In our African context, he doesn’t even know that is it. Can we look at these dynamics and what are the best practices and suggestions for national level, civil society tools that he can use, and also at a level of maybe even the security sector to combat these kind of issues? Thank you so much. With permission of you, Dr. Albert, I will put you in touch with…

Albert Antwi Boasiako:
In fact, I brought a card and just a quick one because I’m being moved to another session. But a good thing is we are in touch with Liberia. I think a number of African countries have messages to tell us, and they keep on coming. We share the letter we’ve achieved. I think we’re sharing within the region. The only problem I’ve seen within the region is just fragmentation. So you have one ministry visiting you, others are left out. That’s why I was stressing the Brazilian situation. So there has been contact with the body in Liberia. Fortunately, we haven’t been able to really integrate the structures. But please, we will discuss. Thank you very much.

Marija Manojlovic:
Thank you so much. Nathalie, do you want to move ahead or do you want to ask a question? One brief question from online.

Moderator:
So from Omar Farouk, a 17-year-old from Bangladesh, passionate about child safety online. Started Project Omna and working with local and international organizations like UNICEF Bangladesh, UN Tech Envoy, to tackle digital issues like cybersecurity, bullying, and privacy, not only in their country but globally. Given the rise of cyberbullying and privacy concerns for children, how can we strike a balance between protecting kids online and fostering innovation and economic growth in digital space? What strategies can be developed to create strong partnerships between government, businesses, civil society, ensuring child safety is top priority? So just perhaps speaking to that balance between the economic growth and innovation piece along with child safety. So perhaps, Ambassador, if you don’t mind speaking a little bit to that.

Marija Manojlovic:
Ambassador Verdi, do you want to take that one? We’re kind of looking at you trying to avoid the look. That’s the eternal question, the big question.

Henri Verdier:
As a former entrepreneur myself, I just want to say two things. First, so sometimes if you forbid something, you forbid it. So it doesn’t, that’s not a problem of innovation. For example, I don’t know, a century ago when we did forbid child work, private sector said, ah, we cannot work like this, et cetera. But finally, we did all adapt. So that’s important. There is not always a contradiction between innovation and regulation. And some regulation can be tools for innovation, like, for example, a good standardization can be regulation and good for innovation. The second thing is that very often people oppose for security and privacy and innovation, for example. But very often, if we work a bit more, we can find solution. But you have to take in consideration that you are looking for three goals, for example, security and safety and innovation. So probably your first idea won’t be a good idea. You need to work a bit more, but you can still find solutions. And that’s why we need those multistakeholder, efficient multistakeholders to work all together and find solutions. I could, I won’t, but I could share dozens of examples on how we did a very fine tune, some good balances between all those goals. So it was not the first idea, but then we did find solutions. Thank you for that.

Marija Manojlovic:
And thank you for the question from Bangladesh. I think one of the things that I think neatly ties into the next segment that I want to open now is that sometimes innovation ecosystem is not inclusive of people who need to be part of it because various reasons, including safety. So women becoming part of the innovation ecosystem was for a long time not an option because they just didn’t feel welcome in certain environments. So making sure that innovation is not separate from ensuring safety in various environments. So now we want to move to a segment on gender based violence and image based abuse. One of the key things that we really want to unpack right here is how can online child safety be better positioned as crucial to inclusive gender balanced digitization. And another thing that we always struggle with is how can more be done in prevention work to address common narratives and perception of these issues grounded in gender norms and better center survivors. So with that, I will introduce Kaylin. Kaylin, you’re a senior advisor to the White House Gender Policy Council. You’re working on issues of technology facilitated abuse and harms. And you’ve been involved in the development of some of the landmark principles, guidance and coalitions in this space, including the Global Partnership of Action to Tackle Defacilitated GBV. So how do you see convergence of these various agendas from the White House perspective? And also from the perspective of the drivers of abuse, harassment and other harms? And how do they intersect with child safety and protection? Huge question, but over to you for five minutes. Thank you so much, Maria, for that question. And to Maria, Natalie, safe online for hosting this critical discussion that is really so important to be present at the Internet Governance Forum.

Cailin Crockett:
As Maria mentioned, I’m Kaylin Crockett. I am a senior advisor with the White House Gender Policy Council. I’m also director for military personnel and defense policy with the National Security Council. And for the past two plus years, I’ve coordinated the Biden-Harris administration’s efforts to address sexual violence in the military and also to counter online harassment and abuse as a feature of our domestic and foreign policy. These two portfolios might seem really quite distinct, but they actually share a lot in common and I think speak to the heart of our discussion today. And the first and foremost is that all forms of gender based violence and interpersonal violence across the life course share root causes and common risk and protective factors that perpetuate and are driven by harmful social and gender norms. And they are some of the most underreported crimes and abuses because survivors are too often shamed, silenced and made to feel invisible. This certainly has been true for survivors of sexual violence in the military, as well as for survivors of child sexual abuse. There are core values also that I think bind together the child online safety agenda with the ongoing work we must all do to promote a safe, secure and inclusive digital ecosystem for all people, but particularly for women and girls, children and LGBTQ plus people. This really means three things, I think, in particular, accountability, transparency, being survivor centered with a gender lens and, of course, prevention. I’m really fortunate to work for an administration led by a president and vice president that have been lifelong champions to address gender based violence and to stand with survivors. The administration really understands that the consequences and costs of gender based violence impact in addition to individual survivors, communities, and the ripple effects of gender based violence and all forms of abuse are felt across our communities, our economies and our countries. And it must be said in this conversation that women and girls from marginalized communities, including people of color, LGBTQ plus people and individuals with disabilities, among others, are disproportionately impacted. And it’s important to also be clear here, excuse me, that online violence is violence, and it can result in dire consequences for victims, ranging from psychological distress self censorship and decreased participation and political and civic life to economic losses, increased self harm, suicide, and forms of physical and sexual violence. In this campaign, President Biden made a commitment to convene a national task force to develop recommendations for federal and state governments, technology platforms, schools, and other public and private entities to prevent and address all forms of online harassment and abuse, with a particular focus on tech facilitators. gender-based violence. And in June of 2022, the president issued a memorandum that established a White House task force to address online harassment and abuse, which I’ve had the fortune to coordinate. This is an interagency effort that I think really speaks to that ecosystem approach that other colleagues have raised. It is co-led by the Gender Policy Council and the National Security Council, and involves many diverse government departments and agencies from USAID to the Justice Department to Health and Human Services, Homeland Security, and several others. And the senior representatives across the agencies that comprise the task force have met regularly with justice system practitioners, public health professionals, researchers, advocates, parents, youth, and importantly as well, partner governments to identify best and promising practices, gather recommendations, and learn from lived experiences to inform a blueprint for action. The initial actions of which were previewed in an executive summary that the White House released this past March, and will ultimately be fully captured in a public final report and blueprint of the task force that we’re working on to compile towards the end of this year. And again, most importantly, we’ve met with survivors, and especially youth, who shared how experiences of online violence have disrupted their lives, impacting their well-being, their health, relationships, careers, and career aspirations. And while each of their stories is unique, they share common threads and lessons that inform the work of the task force, and they have outlined concrete measurable actions, 60 and counting so far, that federal agencies have committed to address online harassment and abuse. And I know I’m already over time, so I’ll just briefly mention the four pillars of the blueprint that are inherently multisectoral. Those are prevention, survivor support, accountability for both platforms and individual perpetrators of harm, and research. And as an administration, we’re working truly across the whole of government. We’ve committed to updating and expanding resources to address gender-based violence online, and including child sexual exploitation. For example, the Justice Department is dedicating an unprecedented amount of resources to address cybercrimes that particularly impact women and girls, including image-based sexual abuse. And we’ve also really recognized the outsized impacts and harms of online harassment and abuse on children and youth, including in May, Surgeon General issuing an advisory on youth mental health and social media, which particularly emphasized the intersection of gender-based violence and child sexual exploitation online. So with that, I will look forward to sharing more in the Q&A. Thank you. Thank you so much, Kaylin, and thank you and the Biden administration for really

Marija Manojlovic:
taking such a strong lead and position on these issues, because we, you know, as everybody has been saying, a majority of the platforms and companies we speak about are based in the U.S., and what U.S. does is really going to matter for a lot of the other people across the world. So we are really looking into you for action on this. Particularly, thank you and the team and everybody else in the global partnership also for making sure that we are not siloing the work on child online protection, as well as the issues on gender-based abuse and violence. Unfortunately, Andrea Powell from the Panorama Image-Based Abuse Program has not made it in time from the airport, so we will, if she comes, we’ll just include her in the discussion, but if not, we will move ahead. I’ll just open the floor for one or two quick questions or comments on this, and then if there are none, we will move ahead with the next segment. I’ll wait for a little bit. Natalie, is there anything online coming in, or anybody in the room? Oh, there is. Please come on in.

Audience:
Hello, everyone. First, thanks for the great session. I’m really, really happy to be listening to you all. I’m Emanuela. I’m from Brazil as well. I work in Instituto Alona with child rights, and one question that I have when we talk about this theme of gender-based violence and also about child abuse and exploitation. In Brazil, we have high rates of abuse that happen online, so that happens at home, so I have two questions about this. The first is about supervision, parental supervision tools. How can we balance this complicated debate when we have the supervision, but we also know that the violence can come from the family, and this could be a risk for a child’s right to information, a child’s right to seek help, and how to do this in a practical way? This is my first question. The second is that we also have a very conservative country, and when we talk about prevention measures like sexual education, this can be a tough, tough debate that raises a lot of different issues. I would really like to hear you guys’ approaches on advocacy on this kind of prevention strategies that we can use because of the maybe taboo that this theme could evoke in more conservative countries. Thank you.

Marija Manojlovic:
Emanuela, over to you.

Cailin Crockett:
Thank you so much for that question, and as many of the experts in this room are aware, the United States, we are a federalist country, so we have 50 diverse states and territories as well as that, and so there are a multitude of approaches that have been coming up across the states on how to address these issues, and so for the administration’s perspective, we want to be really careful about balancing what you’ve said and recognizing those concerns given that parents may not always be inherently able to represent or willing to represent the best interests of their children, and we always want to maximize options and support for survivors of abuse at any age, so I think it’s a very timely question, and I think it’s really important that in line with your second question, we really take an evidence-informed approach and really focus on prevention as well. One of the areas that we’ve continued to invest in is through our Centers for Disease Control and Prevention really taking a public health approach to recognizing the shared causes of violence across the lifespan. We have an analysis that the CDC has done called Connecting the Dots that I quite like because what it really does is it connects the dots between multiple forms of interpersonal violence from sexual violence, intimate partner violence, child abuse and neglect, cyberbullying, and so youth violence, community-based violence, and so that’s one area where we’ve seen promise, but of course with everything, resources are so important too, and so the voice of civil society to really demand governments proportionally investing in these problems is so critical, so thank you for your work.

Henri Verdier:
A brief comment. Of course, there is a tendency everywhere, including in France, to say that those are questions for woke, decadent, and very liberal people, but actually, no, everything is connected, and I share your point. This is about violence, and maybe I can share two examples. First, within the crisis group, now we are speaking about algorithmic radicalization. Most of the terrorist attacks were done in Europe. I don’t mention Israel, which is a different situation, but in the EU, most of the terrorist attacks we had to face were done by very young people with the role of the social network in the radicalization process, and most of the terrorist attacks we had in Europe were coming from masculinist movements. It was not jihad or, I don’t know, it was masculinist people against LGBT or against, so everything, all those issues are connected, and if you pretend to avoid gender balance and gender protection, you will miss a large part of the other parts. Thank you so much. This is exactly why this session exists, to make these links and make

Marija Manojlovic:
them really clear in everybody’s mind, but also in our ability to create policy and otherwise responses to these phenomena. Andrea has made it. She literally just ran into the rooms, so she’s still in time for her intervention and the same segment, so perfect timing, Andrea. I hope you have had a time to take a breath. So, Andrea Powell is the director of the image-based abuse initiative at Panorama Global. In your work, you’re building partnerships and mobilizing efforts to ensure that no one experiences the enduring trauma that results from image-based abuse and other types of online harm. We are very proud to be part of this coalition and your work, and have been above all so impressed with how you’ve ensured that lived experiences, lived experience experts are an essential part of this coalition. What I want to ask you is, what opportunities do you see for better alignment between IPSA work, the image-based abuse, sexual abuse work, with online child protection, but also from your perspective, what have been some definitional and content-related issues, as well as in terms of practical tools and best practices we can build between these two fields. So, over to you. Thank you very much. My apologies for

Andrea Powell:
being late. Very happy to be here with all of you. Again, I’m Andrea Powell, and I am the director for the image-based sexual violence initiative housed at Panorama Global, and we most recently launched a new coalition, the Reclaim Coalition, that brings together over 50 stakeholders from 23 countries, most notably from civil society, academia, law policy, as well as lived experience experts, often called survivors. And what I mean in that context is not just individuals who’ve endured this ongoing trauma, but individuals who are active in the field of addressing image-based sexual violence. Image-based sexual violence as a threat are the act of creating and sharing intimate images without someone’s consent. It is a form of online sexual violence and a violation of privacy that disproportionately affects women and girls, LGBTQ+, and indigenous and BIPOC individuals. Anyone who deliberately views, shares, or recreates these non-consensual images is participating in a sex crime whose unique feature is that the abuse lives on long after it’s over, growing in magnitude for the whole of the world to see. When non-consensual imagery is shared over text messages, online forums, or posted in social media platforms, it can quickly reach a global audience via uploads to pornographic websites that do not or cannot reliably verify age or consent. Those who are victimized live in a state of constant trauma and fear when their victimization may happen again. Will their parents find out? Were their friends, co-workers, college admission counselors, or future employers? It is never post-traumatic stress disorder because the trauma lives on continuously. This type of technology-facilitated gender-based violence is growing in global prevalence. There are over 3,000 websites online that purely are designed to host this non-consensual intimate videos and images reflecting a vast enabling environment that facilitates this form of abuse. And what we know from the survivors in the Reclaim Coalition is that younger and younger children are ever more being exposed to this form of violence. Those who are impacted, whether they are adults or children, frequently experience elevated levels of psychological distress, trauma, extreme and prolonged anxiety, and suicidal ideation. In the early stages of the formation of the coalition, we uncovered over 40 cases of children who’ve ended their life as a result of image-based sexual violence, often within 24 hours of their abuse, leaving their parents little to no time to intervene. As a woman who was, as a child, a victim of sexual violence, I chose not to reach out for help. I chose to live in silence. And I never thought that silence was a privilege. Yet the survivors who bravely advocate in the Reclaim Coalition never got a chance to make that choice. Their sexual thoughts are there for all the world to see. And thus, this is a global problem, but there is hope and there are global solutions. Many child victims of image-based sexual violence are adults when they discover their victimization. And many survivors who are now adults and are part of the Reclaim Coalition experience reputed abuse online every time they dare to advocate publicly on this issue. As a matter of fact, as I boarded this plane, I was working with an individual who just came out publicly and had all of her images re-uploaded to a site called Pornhub. This trauma does not stop on their 18th birthday. The very real harms do not go away. And abusers continue to share and re-upload more content. Leading up to the launch of the Reclaim Coalition, we hosted a private summit with lived experience experts from eight countries. What I thought was going to be an informative, well-agended program became a witnessing session where survivors shared their stories and created the formation for 17 recommendations that we shared with our colleagues, most notably at the Gender Policy Council, as well as forming the basis for our first landscape report, I Didn’t Consent, focusing and centering this issue around privacy and consent in an innovative way that eliminates the question of why was the image put up there? What was the intent of the abuser? Because in all reality, we don’t ask domestic violence victims why their husband hit them. We don’t need to ask online survivors of image-based sexual violence why their abuser abused them. I came up with five core areas of intervention that I think we can take lessons from the area of child protection and build upon this to look at this issue, not as siloed intermittent interventions across children’s spaces and adult spaces, but things that we can do across those divides. First, we need to build knowledge. The public misunderstands and lacks awareness about online sexual violence. Without knowledge, survivors don’t know where to get help. Law enforcement don’t know how to intervene. And frankly, the public misunderstands and continues to shame victims instead of the abusers. We need to harmonize global regulation and policies. The policies to address good child and adult online sexual violence can and should be more harmonized. This includes removing the barrier of proof of intent of the abuser, as well as classifying this as a serious sex crime. In fact, we should ensure that across the globe, the online sexual violence of children and adults, most affecting women and girls, is taken seriously and given the serious type of criminal penalties that offline sexual violence endures. We also need to standardize global hotline support to ensure that hotlines that address the adult abuse image-based sexual violence receive the same standardized global standards as does in the child space. There is an allied network that may have been brought up today called the InHope Network. It is a phenomenal network of, I believe, over 80 hotlines across the globe looking at child online sexual abuse. We need to do this in the adult space as well. We also need more tech accountability. Those 3,000 websites that I mentioned earlier could simply be de-indexed and go away. So why haven’t they been? There needs to be an opportunity for tech to engage in a proactive intervention way with civil society, learning from lived experience experts, and this is quite possible. We also know that image removal is a critical piece of justice and healing for survivors. It’s very difficult to heal if your abuse is continuing to be placed online where anyone can Google your name, your address, your school, and learn everything about your exploitation. Image removal should not be different across different platforms and sites, but what we hear from survivors is they’re effectively left to create their own digital rape kit and clean up their own crime scene, and that is an unacceptable standard. In closing, I wanted to say that we have the will, we have the solutions, and our children depend on us. If we address image-based sexual violence for everyone today, there truly will be less child victims tomorrow. Thank you.

Marija Manojlovic:
Thank you so much, Andrea. There is so much I want to pick up on, but there is literally no time, so we’re going to just leave you to have conversations after this gets mic-dropped, literally. So, thank you for that. I will need to immediately move to the next speakers because there is not enough time. We have only 15 minutes left. What I’m going to move now on towards is the digital innovation ecosystem. That was a question we had from online, but also we just want to move to discuss a little bit more broadly this entire ecosystem. Salome, I want to go to you. You go from the German Development Agency, GIZ, and you are the director of the Digital Transformation Center in Kenya. GIZ is famous for investing in working in the field of digitization, innovation, cyber security, and skills. I’m very curious to hear from you a bit about challenges and opportunities of integrated child online safety into this work across all of these areas, whether that has been done successfully so far, or what are the plans for the future?

Salomé Eggler:
four minutes. So sorry. I’ll try my best, Maria. Thank you so much for that question. I’ll start with two disclaimers. I’m sitting here as a non-child online protection expert. I’m sitting here as a practitioner who has as a goal to mainstream, to bring in child online protection into our activities. And as you were saying, there are many fold, right? They range from digitizing government service to working with the tech ecosystem and tech entrepreneurs. SMEs going digital to working on a transition and everywhere there are angles around child online protection. And yet, and maybe that’s the entry point I’ll take. We have a twofold approach in GIZ, how we try to work around this question. So the first part of the approach is really that mainstreaming idea. And I like to use the image of a braid. When you braid the hair, we ideally want to braid in child online safety measures and considerations from the onset of a project and not, and I say, we’re also guilty of that as GIZ in the past, not adding it as a bow in the end, right? Of your braid, but you really have it as per design into all of our activities. And the second part of the approach is really genuine child online safety projects that focus not only on integrating and mainstreaming the topic into other activities, but generally trying to tackle a certain topic. For instance, one of our activities that we have jointly also with children, for children created is a set of online training nuggets where children starting from the age to 10 to 15 can explore how to navigate the online world safely in a very easygoing way. And this is one of the aspects as well, where we try to have that initiative and these trainings available in up to 10 languages by now. Also in Kiswahili, for instance, for Kenya, and we saw how important it is as well to translate all these phenomenal tools that we have by now, by ITU, by UNICEF, et cetera, to other languages as well, to make them accessible to all the children around the world and youth growing up. So that’s the twofold approach that we are pursuing with GIZ at the moment. And now maybe come to your questions around challenges, opportunities, I see. What struck me in your introduction was your point where we’re saying, oh, you talk to someone working on infrastructure, they’re saying, oh, that’s not about children. You talk about someone working on digital skills, and they say, oh, no, no, that’s not really what we’re doing. Reflecting on that, within GIZ, it might be a slightly different variation. I would more call it an attention economy, where I, as a practitioner on the ground, I’m interacting with my colleagues that are working in the child protection unit. And they tell me how it is important that we mainstream these activities and considerations and safeguards, et cetera. And I see the importance. And at the same time, I talk to my colleagues in the gender department, in the climate change department, in the disability and inclusion department, et cetera. So in the end, I end up with, I know that all these considerations are so important. And yet, my reality on the ground is, I have a highly political, highly dynamic political environment. I have technological debates that evolve so quickly. I have a limited budget and limited time. And often, and maybe that’s the lessons learned, also for myself, I end up maybe going with those that scream the loudest. Or, and maybe that’s the positive side, and maybe what could be helpful, what has been helpful for me, where I have resources that are easy to use off the shelf. And I don’t have to become half of a child protection expert in order to implement these activities, but it’s some tools that I can really use, take, implement. And that has been really helpful for some of the activities that we have been, for instance, on data protection, been able to do, take these, apply them without having to generally become an expert in itself, because we’re working at a very interdisciplinary level in the end. So that’s maybe one of the challenges slash opportunities that I see. And, okay, then I’ll come to the opportunity. That was the challenge. Let me talk about an opportunity that I see as well, is developing agencies, financial assistance organizations, et cetera, are setting up bigger and bigger digitalization projects. When I started in GIZ, we had a 3 million project, that was it. And now I don’t even know how much, only the Digital Transformation Center in Kenya is a 30 million project, right? So we are getting bigger and bigger. And I think it’s the opportunity as well for us to mainstream in our own frameworks, in our own tools, ways to include these considerations and something, maybe a best practice that I could mention here, we’ve developed what we call the digital rights check. It’s an online tool. It takes you 30 minutes as a practitioner to go through that tool, to assess your project, either at the onset of project design stage or implementation stage. And that check tells you, it’s a bit broader. It’s about human rights in general, but there’s a specific part on child online protection, tells you exactly, have you thought of XYZ? This is what you could do. These are further resources. These are people you can contact. And that has been highly valuable to me because I can cater towards all these needs that have, the importance is clearly there, but it kind of meets my daily environment in which I operate. So maybe that’s also an opportunity there to have these kinds of hands-on tools like the digital rights check to guide our activities on the ground. Thank you so much Salome,

Marija Manojlovic:
because I think you kind of brought back some reality into the context in terms of lack of resources or trying to align all of the resources across different agendas. And I think when you speak about how do you actually decide what you need to work on, I think I have a perfect answer for you because there is Matito and Ananya who are going to tell us a little bit more about USAID’s work. But let me just introduce that for a minute. So Matito, you are USAID’s lead on child protection within the child, children and adversity team. And most recently you have been leading on a cross-agency effort to define USAID’s approach and roadmap of digital harms. And I think you found yourself with a set of premises that you embarked on the process and then you switched everything around when you started involvement with young people. And I think that’s really what I thought was really the most thoughtful thing that USAID could have done is to engage at early stages with young people. So tell us more about that journey, what you’ve learned through that journey. You’ve established your digital council. Ananya is here who was part of the first cohort and became an advisor afterwards. So I’m going to give you both six minutes. Sorry to give us that. Please. Thank you so much. I’m so sorry that this is going to become like a

Mattito Watson:
running game. It’s first of all, thank you for everyone who is last one speaking. Good news is a lot of people have said things I’ve already wanted to say today so I can zip through my talking points. Bad news is that we’ve lost part of our crowd. So thank you for everybody who has stayed to the very end. You’re going to get the best part of the session right now. So I want to thank you for also saying that USAID was thoughtful. We’re not always called being thoughtful over here at USAID. We’re one of the largest development organizations. We’re the international branch of the U.S. government. Our job is to really save lives, reduce poverty, strengthen democracy, and get people out of assistance. And so to do that, we’ve got to be always looking towards the future. And so USAID came a little late to the party in terms of our digital strategy. It just came out in 2020. It’s very comprehensive. It’s very robust. I recommend people go online and read it. But when they were developing it, the question came up from our team, the child protection team, where are the children and youth? They are our future. They are going to be picking up whatever we lay down, and they’re going to be driving it as the next generation moving forward. So as the child protection person at USAID, or one of them, but leading in terms of our children and diversity team, they asked me to lead on our digital strategy. I am a field practitioner. I spent 25 years in Africa working with children and youth. I am not a digital person, and that ended up being a good thing for reasons I was going to talk about. We’ll skip over for the moment. But I said to myself, how do I get around my blind spot? How do I really understand what’s happening with you? How do I understand what’s going on with a 16-year-old girl online in Brazil? Or how do I understand what’s happening with a 12-year-old boy in Ukraine? And so my brilliant idea, which I somewhat stole from Microsoft, I’ll give them that nod of credit, was to develop a digital youth council. So at USAID, with our youth strategy, we want to work with youth, not for youth. And so that means bringing youth to the table, listening to what they have to say, and incorporating their viewpoint in our strategy and our implementation. So back two and a half years ago, I created a 12-member digital youth council consisting of seven girls and five young men, five young women, six, seven young women, five young men, to not only advise us in terms of, is our strategy on point? Where are we going? But also to build the next generation of changemakers, the next generation of leadership. We’ve had our second year happening now. Putting my money where my mouth is, also to make this last part go as quick as possible, I am going to turn the floor over to Ananya, the voice of the youth, to tell us what was it like working with USAID? What did you see us responding to your voice? And how did you feel in terms of the overall process?

Ananya Singh:
First of all, thank you very much for inviting me today. Not only is this topic very close to what I’m deeply passionate about, but I’ve relentlessly worked on this for the past three years. And this session provides us an opportunity to reflect on what some of the best practices in this area have been. And as the youth advisor to the USAID Digital Youth Council, I am very happy to have been invited to shed light on the success story that our council has been. And hence, as I speak, I hope that the story inspires more people to take action for the future, with the future. As a generation of young people born into the digital age, we understand how digital technologies impact or impair our aspirations and rights. All we need is a platform to be actually heard. Given that digital technology helps to enhance our capacity to engage with and empower the youth, there is no excuse anymore to not reach out and actually seek input from the youth in a more participatory way, treating them as the active and equal partners of digital development as they are. Recognizing this, the USAID, which has for long prioritized positive youth development, established the Digital Youth Council in 2021. I consider it to be my absolute privilege to have been a part of the Digital Youth Council since its very first day. Over the past 2.5 years, the council has not only served as an important voice in helping to guide the implementation of USAID’s digital strategy, but has also helped to raise awareness about digital harms in many countries and influence national leaders, the private sector, civil society, local communities, and other youth on how best to keep safe while learning, playing, exploring in the digital world. We have co-created and led sessions on innovation, emerging technologies, such as machine learning, artificial intelligence, large language models, chat GPT, and tried to establish a connection with digital harms which target young children, including our young council members. With the support, training, mentorship, resources, and encouragement that we provide through our extremely carefully designed program, our council members have been able to design apps that educate young people about digital harms through interactive games and other modern features. In fact, one of these apps is about to go live on the Google Play Store by the end of this year. We’re also very proud to have involved our young council members in planning and speaking at multiple sessions of the USAID at the Global Digital Development Forum in 2022 and 2023. Personally, I have had the opportunity to speak at the USAID Youth Policy Launch in 2022. The USAID enabled a young person like me to share the stage with and ask questions directly to the USAID administrator and the U.S. Congress representatives. I also had the opportunity to emcee the USAID’s International Youth Day event in 2022, where 1,200 people from across the globe joined us to celebrate young people and engage in a panel discussion on intergenerational solidarity, inclusion, protection, and mental well-being. But our magnum opus, the first virtual symposium on protecting children and youth from digital harms, attracted the attention of thousands of leaders in government, civil society, and private sector. And we organized this in collaboration with Save the Children and Tech Change. This even brought together influential policymakers and our young council members for panel discussions on themes including, but not limited to, online harms, hate speech, and cyberbullying. This symposium helped to further the U.S. government’s APCA strategy and USAID’s digital strategy. Thank you everyone for being with us this early afternoon here in beautiful Kyoto, Japan, and welcome all the remote participants that are following from all over the world, from my colleagues in Latin America, and must be extremely late at night, but I’m sure that some of us, some friends of us are there. And sure, there’s so much for me, and my name is Olga Cavalli, I am the National Cyber Security Director of Argentina, and also I chair the South School of Internet Governance here with me, my dear friend, Tracy Hawkshaw. Thank you.

Mattito Watson:
And as you can see, she makes my life a lot easier, because the voice of the youth are us being able to really provide that platform. We’re actually seeing that change start to happen, and we’re actually getting it right in terms of where we should be investing in U.S. government, in terms of dollars, in terms of protecting children and promoting employment for them. Thanks.

Marija Manojlovic:
Thank you so much, Matita, and Ananya, you’ve made my job also easier, because that was a really perfect closure to this discussion. I do want to thank all the participants. I’ll just take two minutes, or maybe like one minute, to try to sum up some of the main takeaways, but I think we all agree, and young people tell us, that the Internet is great. They love it. They like to be online. They like to engage online. It’s opening so many opportunities for them. But online and offline worlds are not, for them, separate. This is just the way that the world is. And we need to make sure that what the rules apply in the online world can be applicable, and offline can be applicable in the online world as well. And some things that are going to help us align across different agendas will be really much more rigorous and strong focus on participation of people who have lived experiences, people, young people who can tell us what the needs are, but also participation in terms of really using a vulnerability lens to understand the trends and threats online to make sure that we can, as we are building this great online world, that we can make sure that we are not exacerbating existing vulnerabilities, existing gender divide, existing issues around the gender norms and toxic masculinity, and issues around radicalization, extremism, and all other forms of expressions of violent behaviors and power dynamics that exist in the offline world, in the online world. And the last thing I want to say is that we have really seen and are calling for action in terms of increased investment in this particular field, because it is really sorely lacking investment, dedicated investment from both governments, but also industry and other players, whether it’s investment in foreign policy goals or investments domestically or investments in internal organizational infrastructure or in frontline services that we all need to have. So with that, I will thank you all for participating in this discussion. I have definitely been too ambitious in terms of the topics we want to cover and people we want to hear, but I’m really grateful that you are all here. I will run to my plane right now, but I will leave you all to chat a little bit more. Hopefully you go for drinks or something. Those who are online, please reach out. We will be happy to engage with you. Go on safeonline.global and follow us on social media, and we will be happy to engage with all of you, and thank you for the session. Thank you all for joining us and we look forward to seeing you again tonight. Again.

Cailin Crockett

Speech speed

149 words per minute

Speech length

1232 words

Speech time

497 secs

Albert Antwi Boasiako

Speech speed

156 words per minute

Speech length

1675 words

Speech time

644 secs

Ananya Singh

Speech speed

165 words per minute

Speech length

738 words

Speech time

268 secs

Andrea Powell

Speech speed

164 words per minute

Speech length

2426 words

Speech time

886 secs

Audience

Speech speed

162 words per minute

Speech length

608 words

Speech time

225 secs

Henri Verdier

Speech speed

166 words per minute

Speech length

731 words

Speech time

263 secs

Julie Inman Grant

Speech speed

166 words per minute

Speech length

1386 words

Speech time

500 secs

Marija Manojlovic

Speech speed

194 words per minute

Speech length

4378 words

Speech time

1352 secs

Mattito Watson

Speech speed

190 words per minute

Speech length

655 words

Speech time

207 secs

Moderator

Speech speed

179 words per minute

Speech length

131 words

Speech time

44 secs

Salomé Eggler

Speech speed

177 words per minute

Speech length

1057 words

Speech time

359 secs