WS #134 Data governance for children: EdTech, NeuroTech and FinTech

Session at a Glance

Summary

This discussion focused on data governance for children in the context of emerging technologies, specifically EdTech, FinTech, and NeuroTech. Experts explored the risks and benefits associated with processing children’s data in these domains, as well as governance models and regulatory frameworks.

The panel highlighted potential benefits of these technologies, such as personalized learning in EdTech and enhanced financial literacy through FinTech. However, they also emphasized risks like privacy concerns and potential exploitation of children’s data. The importance of multi-stakeholder governance models was stressed, with examples including regulatory sandboxes and public-private partnerships.

Participants discussed the challenges in implementing existing regulations and the need for better guidance for schools and teachers in choosing EdTech products. The conversation touched on the convergence of technologies and the difficulty in predicting future developments, particularly in NeuroTech.

The panel explored the global divide in both technology access and regulatory frameworks, emphasizing the need for a level playing field. They discussed the potential future implications of these technologies, including the possibility of cognitive enhancement and the integration of financial services into various digital platforms.

The discussion concluded by emphasizing the importance of maintaining a balance between innovation and protection in future regulatory approaches. Participants stressed the need for a holistic child rights approach when considering the future of technology and data governance for children.

Key Points

Major discussion points:

– Risks and benefits of data processing in emerging technologies like edtech, fintech, and neurotech for children

– Multi-stakeholder governance models and regulatory approaches for children’s data protection

– Implementation challenges and gaps in existing legal/regulatory frameworks

– Future trends and concerns regarding these technologies and their impact on children

Overall purpose:

The goal of the discussion was to explore data governance issues related to emerging technologies that impact children, identify challenges and promising practices, and consider future implications and regulatory needs.

Tone:

The tone was primarily analytical and forward-looking, with speakers offering expert insights on complex issues. There was a sense of cautious optimism about potential benefits balanced with concern about risks. The tone became more speculative and urgent when discussing future trends and the need for proactive governance approaches.

Speakers

– Jasmina Byrne: Chief of Foresight and Policy at UNICEF

– Sabine Witting: Assistant professor for law and digital technologies at Leiden University, co-founder of TechLegality

– Emma Day: Co-founder of TechLegality

– Melvin Breton: From UNICEF

– Aki Enkenberg: From Government of Finland

– Steven Vosloo: From UNICEF

Additional speakers:

– Jutta Croll: From the Digital Opportunities Foundation in Germany

Full session report

Data Governance for Children in Emerging Technologies: A Comprehensive Overview

This discussion brought together experts from various fields to explore the complex landscape of data governance for children in the context of emerging technologies, specifically focusing on EdTech, FinTech, and NeuroTech. The panel, which included representatives from UNICEF, academia, and government, aimed to identify key challenges, opportunities, and future implications of these technologies for children’s rights and well-being.

Benefits and Risks of Emerging Technologies

The discussion began by acknowledging the potential benefits of these technologies for children. Emma Day highlighted the personalised learning opportunities offered by EdTech, such as adaptive learning platforms. Melvin Breton emphasised the role of FinTech in enhancing financial literacy from a young age, including through gamified savings apps. Aki Enkenberg noted the potential benefits of neurotechnology in health and education sectors, such as early detection of learning difficulties.

However, these opportunities were balanced against significant risks. Jasmina Byrne raised concerns about privacy and security risks associated with data collection, particularly the potential for data breaches in educational settings. Melvin Breton warned of the potential for manipulation and exploitation in FinTech, particularly given children’s vulnerability to persuasive design techniques. Aki Enkenberg cautioned about the risk of unconscious influencing through neurotech, especially as it moves from medical to consumer spaces.

Governance Models and Implementation Challenges

A key theme that emerged was the need for multi-stakeholder governance approaches to address the complex challenges posed by these technologies. Sabine Witting and Emma Day both emphasised this point, highlighting the importance of involving diverse stakeholders in shaping governance frameworks.

Emma Day and Melvin Breton discussed the value of regulatory sandboxes as a means of fostering innovation while ensuring compliance with regulations. Day explained that these sandboxes allow companies to test new products or services in a controlled environment, under the supervision of regulators, helping to identify potential risks and regulatory issues before full market deployment.

The discussion highlighted significant implementation challenges, particularly in EdTech. Emma Day noted that the main issue was not necessarily gaps in the regulatory framework, but rather difficulties in implementing existing regulations, particularly at the school level. This emphasized the importance of capacity building and support for educators and administrators.

The cross-border nature of many of these technologies was identified as a particular challenge by Emma Day, highlighting the need for international cooperation in governance approaches. Additionally, the panel discussed the digital divide and its implications for data governance in different parts of the world, recognizing that approaches may need to be tailored to different contexts.

Regulatory Frameworks and Gaps

While Emma Day emphasised implementation challenges, Steven Vosloo suggested that existing laws may not fully cover new technologies, particularly in the realm of neurotechnology. This highlighted a tension in approaches to regulation, with some speakers focusing on better implementation of existing frameworks and others calling for new regulatory approaches.

Steven Vosloo recommended that countries conduct policy mapping exercises to identify regulatory gaps, particularly for neurotechnology. This proactive approach was seen as crucial given the rapid pace of technological development and the move of neurotechnology from medical to consumer spaces.

Aki Enkenberg highlighted the challenge of regulating converging technologies that cross traditional regulatory boundaries. He also provided insights into Finland’s approach to data governance for children, which includes strong protections for children’s data and efforts to promote digital literacy.

Jasmina Byrne raised the issue of global fragmentation in regulation, emphasising the need for more uniform safety standards across different jurisdictions. Emma Day noted different approaches to enforcement, with some regulators taking a more collaborative approach while others favored punitive measures.

Future Developments and Challenges

Looking to the future, the panel identified several key trends and challenges. Aki Enkenberg and Melvin Breton both highlighted the ongoing convergence of different technology domains, with FinTech expanding into new areas such as gaming, the metaverse, and NFTs.

Steven Vosloo raised the possibility of a future divide between “treated, enhanced and natural humans” as a result of neurotechnology, highlighting potential equity issues that may arise from cognitive enhancement technologies.

Emma Day noted the geopolitical influences on EdTech development, highlighting the dominance of American and Chinese companies and European efforts to develop alternatives. This geopolitical dimension was seen as a crucial factor shaping the future landscape of educational technologies.

Throughout the discussion, Jasmina Byrne emphasised the need to shape technology development with child rights in mind, calling for a holistic child rights approach when considering the future of technology and data governance for children.

Conclusions and Future Directions

The discussion concluded by emphasising the importance of maintaining a balance between innovation and protection in future regulatory approaches. The panel stressed the need for adaptive governance models that can respond to rapidly evolving technologies while ensuring robust protections for children’s rights.

Key takeaways included the need for multi-stakeholder governance models, the importance of addressing implementation gaps in existing regulations, and the value of proactive approaches such as regulatory sandboxes and policy mapping exercises.

The panel identified several unresolved issues, including how to effectively regulate converging technologies, address global fragmentation in regulation, and incorporate child rights principles into technology development.

Emma Day mentioned UNICEF’s ongoing work on case studies about innovations in data governance for children, demonstrating continued efforts to address these complex challenges.

In conclusion, the discussion highlighted the critical importance of developing comprehensive, rights-based approaches to data governance for children in the context of emerging technologies. As these technologies continue to evolve and converge, ongoing dialogue and collaboration between diverse stakeholders will be crucial to ensuring that children can benefit from technological innovations while being protected from potential harms.

Session Transcript

Sabine Witting: EdTech, FinTech and Neurotech. My name is Sabine Witting. I’m an assistant professor for law and digital technologies at Leiden University and the co-founder of TechLegality together with my colleague here, Emma Day. And we are joined today by a variety of speakers both online and offline. And I will ask the speakers to introduce themselves when I hand over to them. And I would really like to encourage participation both online and in the room here. Be critical, ask questions. We have brilliant people here who have possibly all the answers, we’ll see about that. But otherwise they will ask you more questions. So I think it will be an interesting session. So let’s get started and let me hand over straight away to Jasmina. She is online for introductory remarks and setting the scene. Jasmina, over to you.

Jasmina Byrne: Hello everyone and good afternoon. I hope you had a productive day of sessions today. Sabine, shall I just… Yeah, I’m Jasmina Byrne, Chief of Foresight and Policy at UNICEF.

Sabine Witting: We can’t hear the online speaker. Oh. Jasmina, just hold on a second. Okay, we can hear you now, please proceed.

Jasmina Byrne: Oh, good afternoon, everyone. I’m Jasmina Byrne. I’m Chief of Foresight and Policy at UNICEF. Shall I hand over to colleagues or proceed with my…

Sabine Witting: No, please go ahead. You’re welcome to set the scene.

Jasmina Byrne: Oh, okay, all right. Thank you so much, Sabine. Well, I hope you all had a productive day at IGF and I’m really sorry I’m not there in person. This is one of my favorite conferences, but you are in really good hands with Emma, Sabine, and Steve, my colleagues. And online, we have Melvin Breton, also from UNICEF, and Aki Enkenberg from Government of Finland, who is actually our key partner in the implementation of this initiative. And this session today is about rights-based data governance for children across three emerging domains: education technologies, neurotechnology, and financial technology. So we have been working with about 40 experts around the world to understand better how these frontier technologies impact children, and particularly how data used through these technologies can benefit children, but also if it can cause any risks and harm to children. We all know that globally EdTech has been at the forefront of innovation in education. It can help with personalized learning. We see that data sharing through education technologies can improve outcomes in education, facilitate teachers’ lesson planning, administration, and so many other things. Other innovative technologies like Neurotech are currently being tried in diverse settings, and they offer great opportunities for improving children’s health and optimizing education. Financial technologies as well allow children to take part in the digital economy through digital financial services. So all of these innovative technologies have also created data-related risks, particularly in relation to privacy, security, freedom of information, and freedom of expression. At the same time, as we see a rapid introduction of these technologies into children’s lives, the policy debate is lagging a little bit behind. 
So this is why we hope that this initiative and the partnership with Government of Finland will not only help us identify what are the benefits and risks for children through use of these technologies and data sharing through these technologies, but also help us formulate policy recommendations for responsible stakeholders. And in this case, these are ministries of education, finance, consumer protection authorities, data protection authorities and others. So I’ll hand over to Sabine now to moderate the session and I hope we are going to have a productive discussion. Thank you all.

Sabine Witting: Thanks so much, Jasmina, also for laying out the kind of three blocks that we will be discussing in the session today. So we will first look at the risks and benefits associated with processing and collection of children’s data in these three domains. Then we will look at the governance models and lastly at the regulatory and policy frameworks. So let’s dive right into the first block. And as I’ve mentioned, we want this to be an interactive session. So after each block, we will have a Q&A session. So Emma, maybe I can start with you. As Jasmina was saying, there are lots of risks and benefits associated with data processing in the context of these emerging technologies. And maybe let’s zoom into the first domain into edtech, which I think is the most obvious one when you think about data governance and children. And maybe you can tell us a little bit about the examples that you have where edtech may be used or the data governance may be used for good in the context of children. Thank you.

Emma Day: Yeah, thanks so much, Sabine. So I think you’re probably aware that there’s currently a lot of debate about the benefits that can be derived from edtech in general, including first the pedagogical benefits, so the benefits for teaching and learning. So when we think about data processing, any data that’s collected from children must be both necessary and proportionate for this to be lawful under data protection law. So for edtech to be necessary, it must first serve an educational purpose. And there’s still much debate about to what extent edtech products do serve an educational purpose, and where that purpose has been identified, it’s also not yet really clear what benefits can be derived from the data that’s processed by edtech. For example, by sharing those data with the school, with the government to analyse for more evidence-based kind of policymaking. I think there’s still a lack of clarity around exactly what data would be helpful. What are the questions that we’re seeking to answer with these data? There’s much debate about the potential for personalised learning. And this relies on algorithms which learn from individual children’s data and steer their learning to suit their personal learning needs. And data from these kinds of tools can also potentially be shared with teachers. And then perhaps their teachers can identify early which of their students are falling behind. Particularly if they have a very large class of students, they may miss a student, but with this, an algorithm can show them which students in their class are falling behind the rest. And it may also help them to look at equity to ensure that girls, children with disabilities, and children in rural areas are receiving the same opportunities as everyone else. 
And then finally, on this point, there’s some interesting projects looking at how children can have more agency, so they are actually benefiting themselves and they’re able to share their data for their own benefit in privacy preserving ways. So for example, in the UK, the ICO, which is the Information Commissioner’s Office, has just started a sandbox project with the Department of Education. And this is aiming to enable children to share their education data securely and easily with higher education providers once they reach the age of 16. So I will leave it there. I’m sure there are many other benefits and we’ll let the audience come in with more a little bit later.

Sabine Witting: Thanks so much, Emma, for laying out these benefits in the context of EdTech. And Melvin, if I can hand over to you and maybe you can tell us a little bit about the benefits and risks in relation to the FinTech sector. Melvin, over to you.

Melvin Breton: Thank you so much, Sabine. I think similarly to EdTech, you can really think about all these technologies that are enabling better data processing as sort of double-edged swords. In the application with FinTech, the most obvious way in which it benefits children is in enhancing financial literacy from a young age, right? The more data, the better the data collection that you carry out as some of these technologies are being used by children, you can learn about their money habits and perhaps have personalized nudges that alert them that they’re overspending in certain categories, that they need to save, or nudge them towards developing healthy saving habits and healthy spending patterns as well, right? So the better the processing using emerging technologies, the better this kind of ongoing, real-time feedback becomes, and it helps kids develop good money management skills. And you can also think, at the intersection of FinTech and EdTech, about using this data to develop purpose-built applications for education in financial literacy. So that’s on the positive side. There are many other applications. If you think about the intersection of public policy and FinTech, the Convention on the Rights of the Child establishes the right to social security and social protection. And there are a lot of applications of FinTech in handing out social security and social protection benefits and cash transfers in different contexts. And the better the data processing technologies become, the more efficient and agile the social protection applications of financial technology can become. We’re looking at, in different parts of the world, issues with the population, and you’re also looking at the future of labor markets, and people are talking about universal basic income. How about universal child benefits? 
Starting there and seeing how emerging technologies can enable us to make universal child benefits truly universal and much more efficient. So that’s on the benefits side, and there are many more. On the risks, there’s always the risk of exploitation: as with any technology, more information means more opportunities for bad actors to target their attacks at children. On the downside of better spending habits, you can also push children and young people, not just children, to overuse some of these financial technologies, sometimes to their detriment. And we’ve seen some alarming cases with, for example, stock trading apps causing mental health issues and harms to young people. And there’s also the potential for manipulation, for making children buy things that they don’t necessarily need, making it available for them to buy products and services that are harmful. And then there’s the whole issue of facilitating addictive behaviors through in-app purchases and things like that. So we can get into any of those more, but I’ll just leave it there for the time being. Over.

Sabine Witting: Thanks so much, Melvin, for that. I think that was really interesting to see also how a technology like FinTech that we maybe might not have thought about initially when you think about children’s data also has these risks and benefits. Thanks so much, Melvin, for laying these out. Aki, maybe you can share a few examples and your experience from Finland and this area around the risks and benefits across these frontier technologies. Aki, over to you.

Aki Enkenberg: Yes, absolutely. And I’m very happy to be here. Thanks, UNICEF, for inviting me to be part of the panel. It’s quite a timely issue that does require strong multistakeholder cooperation. And the IGF is a really good platform for taking the debate around these issues further. And we also have to keep in mind, and this is a broader point, that the recently approved Global Digital Compact puts issues around data governance for the first time firmly on the global development agenda. And we should be also mindful of systematically including a child lens in these discussions going forward. But from the Finnish standpoint, looking at what we’ve done nationally, a couple of remarks with a specific focus on the education system. We’ve long recognized that children and youth do need to be considered through specific perspectives in relation to digital technologies, AI and data. This kind of perspective has been part of our national thinking around AI policies, data policies, and we’ve also worked together with UNICEF on these issues, both on AI and data governance, with important benefits for our national policymaking. The tradition has also been that we’ve had strong multistakeholder cooperation in place at the national level to be able to uncover evidence, make informed choices, take informed action, et cetera.
And in national policymaking there’s often this tendency to really prioritise the potential and promotion of technology in national AI or data strategies, for example in education or health, but a lot less focus on safeguarding rights or child rights specifically. Children and youth are faced with quite complicated legal frameworks, insufficient understanding of their own rights, social pressures that make it difficult to opt out, etc. And of course when we’re talking about young children specifically, they’re not in a position to make these choices in the first place, so they have to rely on others to make them for them. But in terms of our measures, first I’ll bring up this priority of strengthening the agency of children and youth to kind of regard them as active agents in their own right when it comes to data governance, to support their capacity and competence to act. And this is also something we’ve considered quite important from the point of view of developing democratic citizenship also in Finland. So data and AI literacy as a first step has received special attention in our case. We’ve realised the need to update media literacy education for the data and AI age. There’s a number of research and development projects focusing on developing guidance and approaches for schools and teachers, etc. in this field. And the focus most often is on making sure that child rights are integrated in how schools adapt. and use tech or digital services in their daily operations. There are some flagship projects by several universities, also by Sitra, our national innovation fund, funded by the Academy of Finland, educational authorities, et cetera. For example, there’s a project called Gen-I, funded by the Council of Strategic Research, which focuses on exactly this evolving landscape of data and AI literacy and what it takes to be able to understand the implications on data governance as well. 
But secondly, besides this, there is the realization that this ongoing datafication of schools and educational settings calls for improved standards and certifications for technology. Because when you look at what’s going on in the private sector, there’s an increased focus on measuring cognitive processes, the emotional responses of children, and their behavior in different settings where they learn and are being taught. And of course, the key benefit there is that by automating learning analytics, teachers can then focus on student interaction and support individual learning better. But there is this tendency of growing and continuous data gathering, where neurotechnology is also increasingly part of the problem. It provides deeper insight into processing of information, learning by children, but also raises new questions around how that data is governed. So as a response, our Finnish National Agency for Education is preparing a comprehensive package of guidance at the moment, not only focusing on what children should learn and how they should learn in the digital age, but also what kinds of tools and services should be used by schools and teachers to ensure the quality and safety of digital content and services, and to engage in regular dialogue with the actors involved in producing these contents and services. And as I mentioned in the beginning, the belief really is that none of this can be done by the governments alone or our authorities alone, but through active cooperation with the research community, edtech companies, schools and parents.

Sabine Witting: Thank you. Thanks so much, Aki, for this intervention, for sharing the experience from Finland. And you provided me with the perfect segue into the kind of second block of the conversation, which is around governance models. You said that none of the stakeholders can do it alone, and I think that holds true for a lot of the topics we’re discussing at IGF, but specifically for these new forms of data governance. And you also mentioned the Global Digital Compact and how the Global Digital Compact is also encouraging this multi-stakeholder governance model. So maybe we can think a little bit about what data governance could look like for these three domains. And of course, when we think about data governance, we first think about the DPAs, the data protection authorities. But of course, this topic is much broader than only focusing on the DPAs. I would like to hear a little bit more about the multi-stakeholder models that can be deployed to govern these frontier technologies. And Melvin, maybe I can start with a question to you in the context of fintech. What are some of the multi-stakeholder governance models that are working in this particular space?

Melvin Breton: Yeah, thank you, Sabine. I think with fintech, it’s particularly complex, right, because financial services are a very established area of regulation. And fintech comes and adds the technological layer on top of that and creates intersections. I was mentioning before with edtech, but with social media and many other environments in which data is being processed. So it needs to be multi-stakeholder if we’re going to have effective governance. You can think about, there are some examples of public-private partnerships that allow companies to opt in to some sort of data, more advanced data protection regulations in the context of a regulatory sandbox to see how that might work. And there are other sort of frameworks like open banking conglomerates that allow better sharing of information between financial institutions and the government that you can also bring FinTechs into to make sure that all the information is transparent and complies with data governance regulations. So the challenge really is that as you develop these technologies, you’re creating new tools and you’re creating new data that may not be covered by existing either financial regulations or data protections and data governance regulation. And if you have a very wide ranging data governance regulation, but there’s the financial sector operating in a sort of separate environment where data is not flowing from financial systems to the broader government, then you run into a problem where you have, in principle, data regulation, but you don’t know what you don’t know, right? You don’t know what… information is being generated through the use of these fintechs necessarily that may be covered in principle by the data governance regulation but may not be visible to the regulators on the data governance side and maybe not even to the financial regulator, right? 
So the multi-stakeholder model since this is such an emerging and rapidly evolving area, we’re seeing the successful use of regulatory sandboxes as I was mentioning before where companies can opt in to see how these processes of sharing information and sharing data can balance issues like privacy, governance but also the efficiency and effectiveness of some of these services and when it comes to children right now we are seeing very little in terms of regulatory initiatives in fintech that take children into account specifically mostly that’s happening at the level of data governance regulations and that’s where children are protected but fintech per se is not yet perhaps because the regulatory landscape is still maturing it’s not taking steps to to protect data related to children specifically so that’s that’s something that we would like to see, open bank and conglomerates, public-private partnerships, regulatory sandboxes for fintech companies to opt in and work closely with the government to see the intersection of data governance regulations and financial regulations and fintech-generated information and data in the future. So I’ll leave it at that, over.

Sabine Witting: Thanks so much, Melvin. I think we all see this as a very complex issue, and the more we dive into it, the more complex it gets, and I think you highlighted the importance of regulatory sandboxes as an innovative data governance model, and also the importance of public-private partnerships in this context. Of course, one player that is very important, especially also at a forum as the IGF here, is the role of civil society. Traditionally, many contexts of society are upholding the importance of human rights and children’s rights in this context. And Emma, maybe you can tell us a little bit about more, what role do you see for civil society in these various multi-stakeholder models for data governance for children?

Emma Day: Yeah, great question. And before I get specifically to that, I just want to loop back to this issue of regulatory sandboxes, because I think these come from the fintech sector, as Melvin is describing, but as part of this project on data governance for children that UNICEF is leading at the moment, we’re producing a series of case studies on innovations in data governance for children. And one of those case studies is going to look specifically at the role of regulatory sandboxes in data governance for children. And I think these are a very promising model of multi-stakeholder governance that could have great potential for the education sector. Now, we see that they’re usually used a little bit more narrowly by regulators, so often data protection authorities will put out a call for applications to the private sector, and private sector companies will then work with the regulator on some of these kinds of frontier technologies like edtech or fintech, or perhaps even neurotech, where it’s not clear yet how the law or the regulation applies in practice, because this is such a new technology. And then there is a set period of time and there’s an exit report, which is publicized usually so that other people in the sector can learn, other companies can learn what are the boundaries of regulation, and the regulator can then learn how they maybe should change that regulation. and move as the tech moves also. But I think what’s most promising is what we’ve seen. There is an organization called the Datasphere Initiative, and they’re looking at the role of regulatory sandboxes much more from this multi-stakeholder perspective. So including also civil society is the missing piece in these sandboxes, working together with regulators and with the private sector on these big questions about how to govern these frontier technologies. What is still missing though is involving children. We haven’t seen an example yet of a regulatory sandbox. 
There are some which are about children, but there are not any which actually involve the participation of children. And the other, I think, innovative aspect of this multi-stakeholder regulatory sandbox that the Datasphere Initiative is promoting is they’re looking at cross-border sandboxes also. So many of these tools, like edtech tools in particular, are used across many different countries, often they’re multinational companies. And so it’s really not a question for one regulator. And in fact, it’s much better for everyone if these kinds of technologies are interoperable and regulators can come together and tackle these questions together as much as possible, and also involve civil society as much as possible from the regions where this edtech will be deployed. So I think this is not yet happening to our knowledge within the education sector, but it seems to be a very promising model for the future.

Sabine Witting: Thanks so much, Emma. Lots of potential, as you can hear, with the different data governance models. And maybe let us pause here for a second, because that was already a lot of content. And if you were listening to Emma and wondering the whole time what a regulatory sandbox actually is, you're not alone. So before we go into the first block of Q&A: Emma, maybe a quick explanation of what a regulatory sandbox is. Thank you.

Emma Day: Yeah, so a regulatory sandbox is an arrangement between a regulator — often a data protection authority, actually, because these sandboxes are usually about data processing — and the private sector, to explore how the regulation should be put into practice. So if you think about an example from EdTech: say there was a new kind of immersive technology that suddenly became available for education, where children could become avatars and put on a glove and feel things. There would be some risks and some benefits, and maybe the regulator would want to explore those with the company. And there's always this question of trust, right? The company is worried that the regulator is just going to bring an enforcement action against them. So the sandbox is a kind of protective framework where the companies can explain the technology they're exploring, and the regulator can interact with them and tell them if the direction they're going in is going to be lawful or if they're going to end up in a risky area. In most countries, regulators still will not allow the company to experiment with something that is not lawful or that is actually prohibited by regulation. But it's a way for a product that's usually still in the development phase to get guidance from the regulator on how to navigate that space forward. I hope that makes sense.

Sabine Witting: Thanks so much, Emma. Yeah, so essentially, before you unleash technology on lots of people, let's first ask from a compliance perspective what we can do to avoid the most severe adverse impacts — the idea being to then strengthen compliance once the product is on the market. So let me stop here. And you can also ask another question on regulatory sandboxes in case that wasn't clear. So let me give the opportunity to people in the room to ask any questions on these first two blocks — risks and benefits with regard to these technologies, governance models and multi-stakeholder models. Any questions from the floor at this point in time? Yes, there in the back. Do we have a running mic? Yeah. Sorry, can I take yours? Yeah, yeah. Thank you so much. Thank you. Yeah.

AUDIENCE: Thank you. This is for Emma. Emma, you mentioned regulatory sandboxes. Do you know which countries or which regulators are great examples to follow?

Emma Day: Thank you. So from what I've seen of this particular model of multi-stakeholder governance, which includes civil society, the focus has been in Africa, on health tech. There have been cross-border regulatory sandboxes that the Datasphere Initiative has been coordinating. The Datasphere Initiative is a third party, which maybe also makes it easier — it's not the regulator who is actually leading the sandbox, and they bring all of the different stakeholders together. The regulatory sandboxes that we see within Europe are generally just the regulator with the private sector, without that civil society piece so far. But if anyone has any examples they know of that they want to share, we'd also love to hear more about those.

Sabine Witting: Thanks so much, Emma. Jutta, please.

Jutta: Yes. Jutta from the Digital Opportunities Foundation in Germany. My question goes to Melvin. Probably it's also interesting for the person who was talking about edtech. I just think that the data of children in the fintech sector are of huge interest, because children will be the customers of the future. And we've been talking about privacy, but what about the security of these data? How do we make sure that these data are not exploited for any purpose that we don't want them to be? Thank you.

Sabine Witting: Thanks so much, Jutta. I think that's for Melvin. Melvin, maybe you want to start, and then Aki, if you want to add anything to that.

Melvin Breton: Sure. That's the million-dollar question, right? I think if we knew how to prevent these data from being exploited and used for nefarious purposes, we probably would be doing it already. There is a real tension between innovation — the development of new technologies and new applications in the fintech sector — and the protection of data related to children. It's also not clear-cut, because a lot of the use of financial applications is not necessarily happening in fintech apps: it's happening in social media apps that have payments enabled or where you can purchase certain items, and it's happening in games where you have in-app purchases, loot boxes, and all these things you can purchase from within the game that don't necessarily require multiple instances of approval from a parent. So you set it and forget it, in a way — the credit card data or whatever payment form you have is stored, and then you run with it. And then there are a lot of transactions being carried out by children on platforms and apps that hold the parent's information: think about online shopping platforms where children often have access to their parent's account to purchase this or that item. That's to say, the information that is generated and collected about and from children in financial applications and financial technologies is scattered. I think regulatory sandboxes for fintech applications are a good first step to see how we can develop ways of handling the dedicated information being generated in the context of fintech apps and services. We'll see how that develops. Then there are, as I was saying, the other financial applications of technologies that are not necessarily fintech apps, where the conversation is part of a broader conversation about the data being generated and used in those other applications. I mentioned games and I mentioned social media.
There's currently the debate about the Kids Online Safety Act in the US. I don't know that there's a lot of focus on the financial aspect within that legislation. How can we pay more attention to financial applications and financial transactions that kids are carrying out outside of dedicated FinTech apps, at the same time as we use regulatory sandboxes to try and regulate that within the dedicated FinTech apps? I think that's going to be a big question. And that's not even to mention crypto, blockchain, and decentralized finance, which is perhaps another can of worms. So I'll leave it at that for now.

Sabine Witting: Thanks so much, Melvin. More questions in a moment, but I think one point was very important, because some of you might have wondered: how often does a child actually make a bank transfer on an app? The aspect you mentioned about how FinTech is embedded in the typical digital environments where children are engaged — I think that was a very important point. And then, in a second step, to think about data processing, and also secondary data processing and all the problems that come with it. I had two hands up on both sides. Let me go to Steve first and then to Emma. No, you didn't want to? Oh, sorry. Okay. Emma, please.

Emma Day: Thanks. So I just wanted to come back on this point about cybersecurity, which I think is a really important one, and a big part of this discussion. We've been interviewing regulators around the world — data protection authorities — and it's clear that in really every country it's very common that there is a big security breach at the school level and children's data is leaked, and even at the level of ministries of education. So when we're talking about the benefits of sharing all of this data, the problem may not necessarily be with the edtech company — it may be with the school, or with the government, in terms of the cybersecurity they've put in place. So that's a big part of the picture: enabling a safe and trusted environment in which to implement these new technologies.

Sabine Witting: Yeah, and that comes with accountability for all of the stakeholders involved in the deployment of these technologies, and clear roles for who should be held accountable and how. So, any other questions on these topics at this point in time from the floor? Also online — I don't think we have any questions online. Any other questions from the floor? No? All right, wonderful. So then let's move on to the next two blocks. We spoke about the risks and benefits, we spoke about governance models, and of course we can't say governance without saying law and regulation. So let's look at that next. When we are looking at these kinds of emerging technologies, the classic conflict comes up: how does law and regulation keep up? Technology is changing all the time, and children's vulnerabilities in this context are changing all the time. So how can we address this? Maybe, Stephen, you can tell us a little bit more about what you see in the context of the legal and regulatory framework, and how it applies to the field of neurotech, which is the third domain that we haven't spoken about yet. But Stephen, before you go into the regulatory context, maybe explain quickly what neurotech is and how it impacts children.

Steven Vosloo: Thanks, Sabine. That's a great lineup. Thank you. And good point, because not everyone knows what it is. So very quickly: neurotechnology is any technology that looks at neural or brain signals and the functioning of the brain or the neural system. It could record those functions, it could monitor them, or it could modulate or even — I'm a computer scientist, so I'd say — write to the brain: write to brain data and make some neural changes. And so it could impact children in many ways. I'll talk a little later about, let's say, neurotechnology in the classroom to help monitor levels of concentration, for example — that's monitoring brain activity, and we've seen examples of this in some classrooms around the world. One other thing on that: the technology itself is generally either invasive or non-invasive. The invasive side is what you may have seen with very severe neural disorders — for example quadriplegics, who actually have a chip implanted in the skull, on the brain, so that with their thoughts they can move a mouse, communicate, or interact with computers. It gives an incredible amount of agency and autonomy to people who are otherwise physically paralyzed. The other side is non-invasive, and this is where the space is probably going to go more and impact children more. It's less accurate than the heavy medical, clinical, invasive side, but it's also less invasive — it could be a headband that you wear, so it's much easier to buy this technology, and again it could look at your levels of concentration and so forth. So, you asked about the laws and regulations. Neurotechnology is not advancing in a regulatory void or vacuum. We have existing regulations and existing laws, including the Convention on the Rights of the Child. The question is: do they apply to this frontier technology?
And so we see, for example, in the UK, that the ICO, which is the Data Protection Authority, has done some research into whether existing laws within the UK provide cover for neurotechnology; they're in the investigation phase. The same is happening in Australia, where the Australian Human Rights Commission has been investigating whether the existing regulatory framework covers neurotechnology. So then, what is the answer? We've been seeing two camps, and I'll give you some examples. In Europe, for example, the European Parliament also did an investigation and basically found that the existing laws and frameworks do provide enough cover. There's the EU Charter of Fundamental Rights and there's the European Convention on Human Rights. And then, in the context of data governance particularly, there's the GDPR, which probably broadly applies to neurodata — because, I should have said earlier, any kind of monitoring of brain functioning essentially translates into data; that's how you record it and that's how you analyze it. There's also the European AI Act coming into effect soon, which doesn't speak about neurotechnology directly but, for example, prohibits the use of emotion-detection AI in the workplace and in the classroom — and that would often be captured by a neurotechnology. And here we see — I should also have mentioned — a real convergence of technologies, which is what complicates the space more, because neurotech is not new. It's been around since the 70s, but it's only recently made real advances, in part due to advances in AI and the ability to process the large amounts of data being captured. Other countries have said no, the existing laws don't provide enough cover and changes are needed — and these especially come from Latin America.
In Chile, for example, there was a constitutional amendment in the last two years that really picked out the sensitivity of brain and neural data. And there was a world-first case fairly recently in Chile involving a commercial neurotech product. Somebody bought the product and said they were not happy with the terms and conditions, under which you don't quite know where your data is going, who is processing it, and who the third parties are. That went all the way to the Supreme Court, and the Supreme Court ruled that the neurotech company needed to cease operation until it addressed that issue. There is also a broad law being introduced that will result in 92 new articles and 35 amendments to existing laws — health laws and laws across a range of sectors — because, again, they didn't think the existing framework provided enough cover for the novel issues around neurotech. And then, lastly, in the US, two states — California and Colorado — have updated their personal data privacy and data protection regulation to specifically pick out neural data and brain data. There, the FTC, which is a consumer protection body, has also gone after some companies, actually more for misrepresentation: companies saying this product can read your brain data and help you do X, when it can't really — it's still too rudimentary, so it's misrepresentation. So I'll close there, just to say that some countries feel there's enough cover and others don't, and it seems to be landing in different ministries and being looked at through different lenses. Our recommendation is that all countries should do a policy-mapping exercise: look at what exists at the national level, look at the opportunities, risks, and emerging use cases from neurotech, and ask whether there is sufficient protection and cover in place.

Sabine Witting: Thanks so much for that explanation and the different examples. You spoke about convergence of technologies, and I think it's also a convergence of regulations that we see — how they can be applied and what gaps we have. And at a very practical level, what you said last: there are different ministries involved, so who is going to lead law reform, and also implementation of the laws? In the context of neurotech, is it the Ministry of Health? Is it a data protection authority? Is it a communications regulator? Are these three all working together? You mentioned the AI Act in the EU — how does the AI Act apply together with the GDPR in this context? So I think it's exactly that: it's the mapping exercise first, to really understand how these regulatory mechanisms all interact. Emma, maybe over to you: if we recognize there might be some gaps — even if we look at the convergence of different regulatory frameworks and pull everything we have together, we still have gaps — how are we going to fix this?

Emma Day: I think Stephen's recommendation is a very good one — this mapping exercise, first of all, to see where the gaps are. I would say that in terms of edtech it's actually less about gaps and more about implementation. You can have gaps in putting the frameworks in place, but there is maybe an even bigger gap in implementing the regulations we already have. In the context of edtech, the edtech that's being used at the moment generally still falls under data protection and perhaps AI regulation, of which we now have quite a lot around the world. Maybe, looking to the future, there will be neurotech embedded in the edtech, and then all of the issues that Stephen raised will arise. But I think where we need to do the work at the moment is on implementation. And if you think of edtech: education in many, many countries around the world is a devolved responsibility. When it comes to choosing edtech products to be used in schools, it's often teachers or the school management who will choose what products are going to be used at the school level. And they need guidance to be able to make these choices.
They have to think about: is this a good tool for education? What about data protection? What about cybersecurity? What about AI ethics? A little bit like Aki was talking about — they've been developing this kind of guidance in Finland — some of the key tools that can be used here are procurement rules, where governments decide that if schools are going to procure edtech for use in a school, it needs to meet certain requirements for data protection, cybersecurity, and even educational value. There can be, like Aki was mentioning, certification schemes, so that an edtech company has to be audited and is then certified as meeting these minimum standards. Industry can also create standards, and there can be guidance and codes of practice; we know that some regulators are starting to work on this for schools. But this is really an emerging area, and I think it's a gap everywhere. Maybe there's also room for regulators not to have to start from the beginning each time — there can be some common themes, and regulators can learn from each other. For example, the Global Privacy Assembly has been working with UNICEF on this project of data governance for edtech, and different regulators from around the world are coming together through the Global Privacy Assembly to look at what the common challenges are, and maybe what some of the common solutions could be as well.

Sabine Witting: Yeah, thanks so much, and I think that's a very important point. Usually we think: oh, there is a regulatory problem, we need law reform. But oftentimes more laws, and more specific laws — say, a dedicated neurotech law — are not going to solve the issue, because the issue usually lies in the implementation and application of the existing legal frameworks. And also what you said about procurement rules: looking at these different aspects of edtech, one of the things would be, for example, to require a data protection impact assessment as part of procurement, so that schools really actively think about the risks associated with edtech and are pointed towards them, because they might just not be thinking about that at all. And also, as you mentioned, the joint thinking through bodies like the Global Privacy Assembly, the IGF, and others — how we can really move forward in these kinds of spaces. Jutta, I see a question. Please come in. Sorry, can we have a microphone? Oh, yeah. Jutta is on the move. Thanks, Jutta. Stephen is on the move. Stephen, come to the rescue. There we go. Thank you.

Jutta: Yes, I just wanted to refer to Section 508 in US law, which was introduced, I think, 20 years ago, making accessibility a precondition for any procurement. If we had that for all the technology we've been talking about — making child rights assessments or child safety assessments a precondition in procurement — that would be a good recommendation. Thank you.

Sabine Witting: Thanks so much, Jutta. Yeah, because I think it brings the problem much closer to the people who actually deal with it, right? It is not just an abstract data protection issue; it becomes a procurement issue. And a procurement issue is what schools deal with — they know procurement and they know the rules around it. So if you bring the abstract issue of data protection down to that level, it's much more likely that people actually think about it. So thanks so much for that point. Any other questions on this particular block around regulation? What's your experience in your country? Do you see regulatory frameworks? Do you see implementation gaps? What might be required? Any points from the floor? Otherwise, any other examples? Yep. He's moving. Very good. Go ahead, please.

AUDIENCE: This isn't actually an example, but more a comment on how challenging the space is. I really like that point, Jutta, about bringing in a condition for procurement. And in the US, the government is such a massive buyer of edtech that this really has teeth and can move the needle. This is more of a challenge, on the earlier point about convergence: the thing that government ministries and regulators do so badly is work outside of their silo. We all do it badly, even within departments within UNICEF, so I'm not pointing fingers. I'm saying it's a real challenge to all of us when you get technologies, or issues like data governance, that touch on neurotechnology. Is it an education issue? Is it a health issue? Is it a data governance, data protection issue? It's really going to challenge all of us to think outside of the box — or outside of the silo — and work together. Yeah.

Emma Day: Just another challenge I see is that there are different challenges in different geographies of the world. There are some countries that are still struggling with access to the internet, so I think equity is a big challenge. In terms of edtech, you talk to some regulators and really they're just trying to make sure that every school has access to education and to the internet. And if you're talking about immersive technologies, the reality is that the infrastructure to support this doesn't exist in most schools in many, many parts of the world. Then many regulators are not financed; they don't have the resources for that kind of oversight — often over foreign companies deploying their products in their country, possibly financed by development aid as well. It becomes quite a complicated picture. So I think that's where we also need to look at this multi-stakeholder governance model and think about who all the actors are that we need to include — and remember that procurement may or may not happen at the national level in all countries; it may actually come from a donor as well. So there are different actors who need to be brought into these discussions, I think.

Sabine Witting: And I think what we also see in the global south context is that there are competing interests, right? From my experience, what I've heard from many schools is that they say: well, data protection issues — yes, there might be risks, but that's really not something we can prioritize, because a much more tangible issue here is access to education; that's what we need to deal with first. And I think that always loops back to the problem that children's data governance is an abstract issue — it's not something a lot of people really see or understand — and that's why it's easily pushed aside rather than really considered, within the CRC, as an equally competing right. Oh yes, yes — sorry, please interrupt me anytime. Go ahead.

Jasmina Byrne: Thank you so much, Sabine. I was just listening to this discussion about regulatory frameworks and various stakeholders, and I wanted to say that sometimes the policies or strategies that come from different divisions or departments in government could also help us advance potential work on data governance. I'm now thinking about digital public infrastructure, which is an approach being adopted by so many countries and which facilitates government services — a layer of platforms set up on this digital public infrastructure that includes financial payments, data sharing, and digital IDs. When different governments, in collaboration with ministries and the private sector, are developing these strategies, that is where we also need to be vigilant and think about how these data-sharing practices can impact children at all levels. There are currently about 54 such strategies in place, and there is a big push for the adoption of digital public infrastructure across the world. So, to answer your question, Sabine — where are the good examples? — I think we probably need to look much more closely at how to engage with the stakeholders who are advancing DPI in their countries and regions, so that they think about data governance as well, across different domains. Thank you.

Sabine Witting: Thanks so much, Jasmina, for that intervention. There’s another question in the back. I think we don’t have a microphone. Thank you.

AUDIENCE: Thank you. Just building on from what Jasmina said, and following on from what Emma said as well about where the best practices are — that's important. Another area I want to emphasize is operational activities, like skills and capacity building for educators: how do they know what good looks like? And then when we look at strategy, that's at a different level altogether that we need to think about. I don't have the answer; this is just an observation. And it differs in different parts of the world. I come from Australia — Australia has been strong enough to advocate for child rights and to stand strong against Meta, but not all countries can do that. So it's an interesting, or challenging, area, but one where we all have to collaborate. I think that collaboration piece plays a very strong role, as well as finding where the best practices are. Thank you.

Sabine Witting: Thank you so much for that intervention. And maybe, Emma, do you briefly want to speak about the case studies that are looking at these kinds of innovations?

Emma Day: Yeah, I think what you're saying is right, and again it comes back to this question of resources, really. In no country can a regulator like a data protection authority have oversight over every tech company operating in its country — it's just impossible, really. But that's why we're looking at innovations in data governance, to try to find examples of how you plug those gaps. So next year we will publish a UNICEF collection of innovations in data governance for children. Some examples: we had the regulatory sandboxes, but also certification schemes. Certification schemes are generally led by a non-profit, or even by a company themselves, and they're a way of, I suppose, outsourcing some of that oversight. You always have a tension there, because you can get commercialization of the certification schemes, so it has to be done properly, and we're trying to look at some examples — this case study will look at some of the considerations. It's quite difficult to find shiny-example best practices. We often start looking for those and end up looking at promising practices instead, taking a little bit of what seems good from different examples. So in these case studies that's what we'll be doing — looking around the world — and if anyone has ideas along these themes they want to contribute, we'd love to hear from them. The other case study we're looking at at the moment is on children's codes: there is a UK age-appropriate design code, Ireland has produced a similar code, and there are other codes developing in Indonesia and Australia. So this is kind of our way of looking for best practices, or promising practices, and getting those out there and sharing them.

Sabine Witting: Somebody said online that the captioning has stopped working.

Melvin Breton: I think it's back — it went out for a little bit and now it's back. Could I ask a question? On the theme of regulatory authority: we have all these different tech domains, and we have one issue that cuts across all of them, which is data governance and data regulation. I think something that could be explored is empowering the data governance authorities a lot more within government. Because if I'm thinking about fintech, you have very strong financial regulations and financial regulatory bodies in many countries, but it's not so clear that they take advice from data governance authorities — and those data governance authorities often have such a wide remit that it's very difficult for them to give direction that's tailor-made for areas like fintech. So encouraging collaboration between the financial regulatory bodies and the data protection authorities, to develop more tailor-made regulations on data governance for fintech, for neurotech, for edtech, or whatever the case may be, might be a good first step. And then, once those regulations are well established, making them more binding. Because it's one thing for the financial regulatory body to regulate fintech, but they may not be applying regulations directed at protecting children's data beyond what is now accepted as the norm — data needs to be encrypted, data needs to be anonymized. Beyond that, it's not super clear that data protection regulations are very specific to children's needs across all these domains.

Sabine Witting: Thanks so much, Melvin, for that. And I see lots of nods left and right here. You want to add something, Emma?

Emma Day: Yeah, maybe. I think the enforcement side of things is interesting, and different regulators have very different approaches to this. Some regulators see themselves as collaborators with the private sector: they're balancing promoting innovation in their own country, in their own tech ecosystem, with making sure that the tech companies don't overstep the mark too much. Often, from that perspective, the regulator will meet with the companies and warn them verbally first. In other countries, the regulators take a much more punitive approach, where it's more about bringing enforcement actions directly, and they're not very approachable. There are pros and cons to each. In other countries, particularly like we were discussing before, where it may even be a foreign company that's the problem in the country, there are few resources, and it's very difficult to know how technically this would happen: where would the jurisdiction be, and how would they hold this company accountable in their own country? So there are definitely issues related to enforcement and accountability as well, which probably deserve a whole other case study just to try and unpack.

Sabine Witting: Thanks so much, Emma. I think this was a very rich discussion, a very interesting block around laws and regulation. What does a gap actually look like? Do we have a gap around convergence of technologies, convergence of regulatory frameworks, implementation problems? And then, Emma, also what you said about best practices, promising practices, and maybe only practices. So we're changing the bar, I guess, as we go, but it's a learning space, and we all need to think outside the box. After looking at the risks and benefits, governance models, laws and regulations, which was very much looking at the status quo, maybe we can close the session by looking ahead a little, at the next 10 or 15 years of these different frontier technologies, edtech, neurotech and fintech, and really think about what might be the upcoming issues in terms of data governance. Because of course we already need to think ahead, predict things, and find solutions as we go forward. Maybe, Aki, I can start with you, just some concluding thoughts on that.

Aki Enkenberg: Yes, thank you. I think it's been a very interesting discussion so far, and already many of the issues related to the future of these fields, and how they should be or could be governed, have come up, so maybe we can build on those in this final segment. I do agree with Steven, who raised this issue of convergence earlier, which makes it quite difficult to predict where neurotech or edtech or fintech will go in the next five to ten years, because they interact with each other, right? They merge into each other, and out of these combinations different fields will emerge, different problems will emerge, and so on. That's definitely one key point to watch. Secondly, we can think about technology on its own, and often it's very useful to make these kinds of predictions, but we should also keep in mind that it doesn't evolve autonomously; it's governed and constantly being steered by governments and other stakeholders. So we should also think about whether we want the technology to evolve, how we can be part of that process, and what role governance plays. On neurotech, quite an interesting field, I think we'll see a lot of unexpected things even over the next five years. In addition to these leaps in measuring brain or neural activity, there will definitely be a growing focus on acting on humans, acting on the brain or stimulating the brain, and new interfaces for doing this. Many of us have heard about Neuralink, but that's only one example; I think there'll be a whole explosion of these kinds of interfaces through which humans and their brains will be acted on. In the clinical field there's a lot of proven potential for these technologies, but they will also trickle down to consumers eventually, in different kinds of contexts.
And one discussion point today, about the convergence of neurotech and edtech, will be quite important to follow: how these technologies eventually come to schools and classrooms to monitor learning or behavior, but also to stimulate learning and certain types of behavior. Quite interesting, but also quite controversial, I'm sure. The downsides from the interface of neurotech and AI include this risk of unconscious influencing for political purposes, for commercial purposes, for marketing, advertising, or changing people's minds, influencing them while their brains are still evolving in the case of children and youth; extremely important to keep in mind. As Steven mentioned, the EU AI Act already recognized this danger, and when it comes to regulation at this point in time, it seems wise to focus on the risks posed by specific uses of technology. It will be very difficult to govern or prohibit certain technologies, or allow others, per se, but it will be possible to govern how they're used and applied, and the EU AI Act's approach is a good example of this. On fintech, finally, in my mind at least there's this financialization of everything, the embedding of financial services, or a financial angle, into every other type of digital service we consume: games, entertainment, social media, and so on. We're definitely moving from a situation where we regard fintech primarily as a new means for making payments, saving and investing, and in the future more and more about lending, to a world where financial services will be part of everything else we do. And combined with the very likely scenario where everyone will be quite easily identified online through digital identity systems, this KYC, or Know Your Customer, problem will be less important than it is today.
People can be recognized online, their identity is known, and they're conducting financial transactions everywhere they go and through different means, not only through specific apps or banks. And then, finally, we'll move into a world where not only our visible behavior, choices and actions will be measured and tracked, but more and more our bodily activity and brain activity too. This will become a focus for data governance as well. When we think about how AI is developing, we're trying to create these independently acting AI agents that are currently learning from what exists, the data that is available online; but in the future these systems will also need to learn from humans directly, from their activities, behaviors, thoughts and so on. So our data, our bodily data, our brain data, will become commercially crucial for this endeavor, which really highlights the role of personal data and bodily data in future data governance. And then finally, I think it was Jasmina, or Emma, who mentioned the global divide. Whereas in the global north we're trying to keep pace with technology and also develop very advanced regulations to tackle some of the issues we see, we do have to keep in mind the need to develop a level playing field globally, and to address not only the technology divide but also the regulatory divide. So these are my thoughts. Thanks.

Sabine Witting: Thanks so much, Aki. Well, Steven, good luck following that. Maybe just some concluding thoughts after that very comprehensive analysis.

Steven Vosloo: Thank you, Aki, that was excellent. I don't have too much to add; Aki very eloquently highlighted the technological use cases but also the broader issues. Maybe I'll just pick out one quick thing. On neurotech, there's this move of neurotechnology from the medical space, which is highly regulated and has ethical oversight, into the consumer space. And in many countries, consumer electronics devices aren't subject to that level of oversight, so there's clearly a gap there, and from a data governance and protection perspective there's a huge area to focus on. In terms of where the space is going on the consumer side, we will definitely see it in the education space; that's come up a lot, and this isn't just me speaking, this is through consultations we've done with neurotech experts from around the world. So in the classroom, to support learning, with the opportunities and risks that come with that. But in the home space, cognitive enhancement is also an area to really watch. This is not where you have a neural disorder and get treated through neurotechnology; this is where you're healthy, but you can perform better. In our consultations, people from certain countries that are highly competitive in terms of getting into universities said that you already pull all the levers you can to advance your child, whether it's through tutors or through medication; you look at all your options. If neurotechnology promises that, it is something people will look at. And if it works, it comes back to the equity issue Aki raised: how do you compete in the global south against your peer in the global north who's just performing so much better? So that touches not just on treatment, but also on enhancement. In one of the consultations, one of the folks said something that was really great.
He's from Zimbabwe, and he said you may get a future world where you have the treated, who use neurotechnology for disorders, and the enhanced, who are healthy. And then we added in the group: the naturals. And this could be the future. Anyway, we'll leave you on a controversial note.

Sabine Witting: Very good. Thank you so much, Steven. Emma, your controversial note?

Emma Day: Well, I think mine might be controversial in a slightly different way. I'd like to go back to what Aki said: there's obviously a trajectory to the development of technology, but we are governing how that continues into the future. Sometimes there is a kind of inevitability that we hear about the direction technology will evolve in, that we're all going to end up with chips in our brains. But these are decisions that we make, and we can decide what's in the best interests of children for their education. We can put the guardrails in place and maximize some of the benefits that are being promised here. But we can also decide not to end up with chips in our brains if we don't want to, at the really extreme end of that. Focusing just on edtech, I think some of it is also to do with the geopolitics of how this develops. We're seeing at the moment quite a monopoly by American and Chinese tech companies. There are a couple of big tech companies who deploy their edtech infrastructure around the world, and then at a national level, in most countries in the world, you see an ecosystem growing of apps that plug into those big companies' platforms, for things like language and mathematics, and they're more culturally and linguistically appropriate. Maybe those ecosystems are going to grow more. You also see within Europe the Gaia-X project at EU level, which is being led by the German government, and the aim there is to try and find European-level solutions based on secure and trustworthy exchange of educational data, so that they don't have to use the big tech companies for edtech. So it depends how all of that plays out, and we don't really know what direction it's going to move in, but it's likely to have an influence on the kinds of technology we see, and the values that underpin those technologies as well, I think.
Sabine Witting: Thanks, Emma, for that very good point. Melvin?

Melvin Breton: Sabine, thank you.

Sabine Witting: Tell us, more problems.

Melvin Breton: More problems, no. I think it's useful maybe to think about it in terms of the extensive future and the intensive future of fintech. Aki already alluded to some of the extensive future, in the sense I'm using it here, where we're seeing fintech across an increasing range of domains. We started with just a web or app layer on top of financial services, and now we're seeing it getting into gaming and into social media, where there are obvious financial applications that are relevant for children, and that we haven't yet completely come to grips with in terms of regulation and data protection, beyond maybe encryption and anonymization, which are still not even applied across the board; but at least we know those two things are important. Then we're getting into other things like the metaverse, which is maybe an extension of games into social life in a parallel world, where there will also inevitably be transactions. We're already seeing things like NFTs and digital land that you can purchase, and what kind of implications does that have for children and data? You're also seeing in social media that financial transactions are becoming public, another source of information about the lives of children that is becoming more and more prevalent. So what are we going to do about that? I think those are very much open questions, not even to mention neurotech; the prospect of the intersection of neurotech and fintech is, I think, scary to think about, but something we need to keep in mind nonetheless. There is some good news: age detection through AI for purposes of age gating is getting a lot better. I think companies now say they can detect a person's age to plus or minus one year, roughly, just through the use of AI. But then that opens the question: what else does it know about you, in terms of your financial life and the transactions that
you're likely to make, and what potential does that open for manipulation and exploitation of children? On the AI and fintech intersection front, the algorithms are getting a lot better, for example, for deciding who to lend to, with banking services using them to process more applications for loans and things like that. That has consequences for financial inclusion: it enables more financial inclusion of families that previously maybe didn't have access to financial services. The technologies themselves allow people to be more integrated into the financial system, so that's a plus for financial inclusion. But those same tools, if you're thinking about AI or machine learning algorithms used to decide who does and doesn't get a loan, can also cut the other way: they can lead to financial exclusion, because it's a lot easier to see who is at risk of becoming non-compliant. So the inequality aspect here is important. Also worth mentioning: whatever applications require connectivity will just compound the digital divide that already exists, so something to think about there. On the positive side of applications, I think social protection and cash transfers for social protection are going to benefit immensely from these new technologies, which are becoming more efficient, with fewer data requirements and more points of entry. And as things like central bank digital currencies, stablecoins and so on become more prevalent, it's going to make it a lot easier to expand and scale up social protection systems and transfers, again with the caveat that we need to be conscious of the digital divide. And then on the education front, financial education is going to become, yes, just wrapping up.

Sabine Witting: Yes, thank you.

Melvin Breton: Yes, financial education is just going to allow for a longer financial life. Starting earlier in your financial journey and becoming more savvy is going to be beneficial for children. But again, a pinch of salt: we need to be careful about the risks. Over.

Sabine Witting: I also love how you just kindly brought in the metaverse and stablecoins. Yes, Jasmina, please answer all of our questions now in the last two minutes.

Jasmina Byrne: Thank you so much. It's been a great pleasure listening to all of you, and so many fantastic contributions and ideas. We talked about integration of technology and regulation, and multi-stakeholder approaches to these issues. And when we talk about the future, obviously we need to think about how some of these technologies are going to evolve. EdTech is already much more mature. The challenge is going to be the size of the market: how do we capture everyone who is introducing EdTech tools to the market? And with the piloting of new technologies that is happening, how do we work with those companies who are testing and piloting new approaches and new technologies? In the financial sector, we heard from Melvin that this also includes integrating blockchain, crypto and so on. And basically AI integration into everything is something we are going to see more and more in the future. I think what is going to be a big challenge for all of us is the global fragmentation of regulation, which can lead to uneven safety standards, and standards for children in particular. That fragmentation can potentially lead to a lack of trust in these technologies and their adoption and application for good; as we said in the beginning, there are so many benefits. So the question for those of us who are working for children and children's rights in the context of digital technologies is: how do we even shape the future of technology? How do we use this knowledge and this understanding of the implications for children to shape its development? And somebody, I think Jutta, was mentioning standards or recommendations for the procurement of some of these technologies, and maybe going even further back, towards the development of these technologies and the integration of child rights principles into their development. We also need to think about the future of regulation.
So the future of technologies is one thing, but what are going to be the future approaches to regulating technologies, and how do we strike that balance between innovation and protection? We talked a lot about benefits, we talked about risks, but we need to ensure that future regulation strategies and policies actually create and maintain that balance, and allow for innovation while at the same time safeguarding children. And I just want to end on the child rights note. We haven't mentioned children's rights so much, but many of you, particularly online, have worked over the past several years on really integrating child rights into any kind of tech policy. And we heard from Aki about the opportunities under the Global Digital Compact to integrate more effort in relation to children's data governance. So I would just like to remind everyone again that children's rights are comprehensive, and they need to be looked at from both the positive and the protection side. And when we think about the future of tech, that holistic child rights approach, I think, is the best way forward. Thank you so much.

Sabine Witting: Thank you so much, Jasmina, for wrapping up, and thanks so much to the audience here in the room and online, and to the speakers, for a fantastic panel. Enjoy the rest of your day, and good evening to the people here in Saudi Arabia. We will see you all tomorrow here at the IGF. Thank you.

Jasmina Byrne: Thank you.

Emma Day

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Personalized learning potential of EdTech

Explanation: EdTech has the potential to provide personalized learning experiences for students. This can be achieved through algorithms that learn from individual children's data and tailor their learning to suit their personal needs.
Evidence: Data from these tools can be shared with teachers to help identify students falling behind or ensure equity for different groups of students.
Major Discussion Point: Benefits and Risks of Emerging Technologies for Children

Need for multi-stakeholder governance approaches

Explanation: Data governance for emerging technologies requires a multi-stakeholder approach. This involves collaboration between regulators, private sector, and civil society to address complex issues in data governance for children.
Evidence: Example of the Datasphere Initiative looking at regulatory sandboxes from a multi-stakeholder perspective, including civil society.
Major Discussion Point: Data Governance Models and Implementation
Agreed with: Sabine Witting
Agreed on: Need for multi-stakeholder governance approaches

Importance of regulatory sandboxes for innovation

Explanation: Regulatory sandboxes provide a protected framework for companies to explore new technologies under regulatory guidance. This allows for innovation while ensuring compliance with data protection and other relevant regulations.
Evidence: UK ICO's sandbox project with the Department of Education to enable children to share their education data securely with higher education providers.
Major Discussion Point: Data Governance Models and Implementation
Agreed with: Melvin Breton
Agreed on: Importance of regulatory sandboxes

Implementation gaps in applying existing regulations

Explanation: The main challenge in edtech regulation is not necessarily gaps in the law, but rather implementation of existing regulations. This is particularly challenging at the school level, where decisions about edtech are often made.
Evidence: Example of teachers or school management choosing edtech products without sufficient guidance on data protection, cybersecurity, and AI ethics considerations.
Major Discussion Point: Regulatory Frameworks and Gaps
Differed with: Steven Vosloo
Differed on: Approach to regulation of emerging technologies

Geopolitical influences on EdTech development

Explanation: The future development of EdTech is influenced by geopolitical factors. This includes the current monopoly of American and Chinese tech companies and efforts in Europe to develop alternative solutions.
Evidence: Example of the Gaia-X project at EU level, led by the German government, aiming to find European-level solutions for secure and trustworthy exchange of educational data.
Major Discussion Point: Future Developments and Challenges

Melvin Breton

Speech speed: 111 words per minute
Speech length: 2374 words
Speech time: 1277 seconds

Financial literacy enhancement through FinTech

Explanation: FinTech can be used to enhance financial literacy from a young age. Better data collection and processing can provide personalized feedback to help children develop good money management skills.
Evidence: Examples of personalized nudges alerting children about overspending or encouraging healthy saving habits.
Major Discussion Point: Benefits and Risks of Emerging Technologies for Children

Potential for manipulation and exploitation in FinTech

Explanation: FinTech also presents risks of exploitation and manipulation for children. This includes the potential for bad actors to target children or promote overuse of financial technologies.
Evidence: Examples of alarming cases with stock trading apps linked to mental health issues and harms to young people.
Major Discussion Point: Benefits and Risks of Emerging Technologies for Children

Need for collaboration between financial and data regulators

Explanation: There is a need for increased collaboration between financial regulatory bodies and data protection authorities. This collaboration is necessary to develop tailored regulations for data governance in fintech, particularly concerning children's data.
Major Discussion Point: Data Governance Models and Implementation
Agreed with: Emma Day
Agreed on: Importance of regulatory sandboxes

Expansion of FinTech into new domains like gaming and metaverse

Explanation: FinTech is expanding into new domains such as gaming, social media, and the metaverse. This expansion raises new questions about data protection and regulation, particularly for children.
Evidence: Examples of financial transactions becoming public on social media and the emergence of NFTs and digital land purchases in the metaverse.
Major Discussion Point: Future Developments and Challenges
Agreed with: Aki Enkenberg
Agreed on: Convergence of technologies creating new challenges

Aki Enkenberg

Speech speed: 137 words per minute
Speech length: 1770 words
Speech time: 770 seconds

Neurotechnology benefits for health and education

Explanation: Neurotechnology offers potential benefits in the health and education sectors. It can be used to monitor learning or behavior in classrooms and to stimulate learning.
Major Discussion Point: Benefits and Risks of Emerging Technologies for Children

Risk of unconscious influencing through neurotech

Explanation: Neurotechnology presents risks of unconscious influencing for political or commercial purposes. This is particularly concerning for children, whose brains are still evolving.
Evidence: The EU AI Act's recognition of this danger and its focus on governing specific uses of technology rather than prohibiting technologies per se.
Major Discussion Point: Benefits and Risks of Emerging Technologies for Children

Convergence of different technology domains

Explanation: There is an increasing convergence of different technology domains, such as EdTech, FinTech, and NeuroTech. This convergence makes it difficult to predict future developments and creates new challenges for regulation.
Major Discussion Point: Future Developments and Challenges
Agreed with: Melvin Breton
Agreed on: Convergence of technologies creating new challenges

Jasmina Byrne

Speech speed: 136 words per minute
Speech length: 1180 words
Speech time: 519 seconds

Privacy and security risks of data collection

Explanation: The collection and processing of children's data through emerging technologies pose risks to privacy and security. These risks need to be balanced against the potential benefits of these technologies.
Major Discussion Point: Benefits and Risks of Emerging Technologies for Children

Global fragmentation of regulation as a challenge

Explanation: The global fragmentation of regulation poses a significant challenge for ensuring consistent safety standards for children. This fragmentation can lead to uneven protection and potentially undermine trust in these technologies.
Major Discussion Point: Regulatory Frameworks and Gaps

Need to shape technology development with child rights in mind

Explanation: There is a need to shape the future development of technology with children's rights in mind. This involves integrating child rights principles into the development of technologies and into future regulatory approaches.
Evidence: Mention of the opportunity under the Global Digital Compact to integrate more effort in relation to children's data governance.
Major Discussion Point: Future Developments and Challenges

Steven Vosloo

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Existing laws may not fully cover new technologies

Explanation: Current laws and regulations may not provide sufficient coverage for emerging technologies like neurotechnology. Some countries are investigating whether existing frameworks are adequate, while others are introducing new laws.
Evidence: Examples of investigations by the UK ICO and Australian Human Rights Commission, and new laws in Chile and Brazil specifically addressing neurodata.
Major Discussion Point: Regulatory Frameworks and Gaps
Differed with: Emma Day
Differed on: Approach to regulation of emerging technologies

Need for policy mapping to identify regulatory gaps

Explanation: Countries should conduct policy mapping exercises to identify gaps in their regulatory frameworks regarding emerging technologies. This would help determine whether sufficient protection is in place for children's data.
Major Discussion Point: Regulatory Frameworks and Gaps

Potential divide between treated, enhanced and natural humans

Explanation: The advancement of neurotechnology could lead to a future divide between those treated with neurotechnology for disorders, those enhanced for better performance, and those who remain 'natural'. This raises significant ethical and societal concerns.
Evidence: Quote from a participant from Zimbabwe during consultations on the future of neurotechnology.
Major Discussion Point: Future Developments and Challenges

Jutta Croll

Speech speed: 149 words per minute
Speech length: 150 words
Speech time: 60 seconds

Role of procurement rules in ensuring standards

Explanation: Procurement rules can play a crucial role in ensuring standards for child safety and rights in technology. Making child rights assessments or child safety assessments a precondition for procurement could be an effective approach.
Evidence: Reference to Section 508 in US law, which made accessibility a precondition for procurement 20 years ago.
Major Discussion Point: Data Governance Models and Implementation

Sabine Witting

Speech speed: 176 words per minute
Speech length: 2497 words
Speech time: 847 seconds

Need for multi-stakeholder governance approaches

Explanation: Data governance for emerging technologies requires involvement from multiple stakeholders. This is particularly important for complex issues surrounding children's data in new technological domains.
Evidence: Reference to the Global Digital Compact encouraging multi-stakeholder governance models.
Major Discussion Point: Data Governance Models and Implementation
Agreed with: Emma Day
Agreed on: Need for multi-stakeholder governance approaches

Agreements

Agreement Points

Need for multi-stakeholder governance approaches
Speakers: Emma Day, Sabine Witting
Arguments: Need for multi-stakeholder governance approaches (both speakers)
Both speakers emphasized the importance of involving multiple stakeholders in data governance for emerging technologies, particularly for complex issues surrounding children's data.

Importance of regulatory sandboxes
Speakers: Emma Day, Melvin Breton
Arguments: Importance of regulatory sandboxes for innovation; Need for collaboration between financial and data regulators
Both speakers highlighted the value of regulatory sandboxes in fostering innovation while ensuring compliance with regulations, particularly in the context of emerging technologies.

Convergence of technologies creating new challenges
Speakers: Aki Enkenberg, Melvin Breton
Arguments: Convergence of different technology domains; Expansion of FinTech into new domains like gaming and metaverse
Both speakers noted that the convergence of different technology domains creates new challenges for regulation and for predicting future developments.

Similar Viewpoints

Speakers: Emma Day, Jasmina Byrne
Arguments: Implementation gaps in applying existing regulations; Global fragmentation of regulation as a challenge
Both speakers highlighted challenges in implementing and enforcing regulations, with Emma focusing on implementation gaps at the school level and Jasmina emphasizing the global fragmentation of regulation.

Speakers: Steven Vosloo, Jasmina Byrne
Arguments: Need for policy mapping to identify regulatory gaps; Need to shape technology development with child rights in mind
Both speakers emphasized the importance of proactively addressing regulatory challenges, with Steven suggesting policy mapping exercises and Jasmina advocating for integrating child rights principles into technology development.

Unexpected Consensus

Importance of procurement rules in ensuring standards

Jutta Croll

Emma Day

Role of procurement rules in ensuring standards

Implementation gaps in applying existing regulations

While not explicitly stated by Emma, her discussion of implementation challenges aligns with Jutta’s suggestion of using procurement rules to ensure standards. This unexpected consensus highlights a practical approach to addressing implementation gaps.

Overall Assessment

Summary

The speakers generally agreed on the need for multi-stakeholder approaches, the importance of regulatory innovation (such as sandboxes), and the challenges posed by the convergence of technologies. There was also consensus on the need to address implementation gaps and shape future technology development with children’s rights in mind.

Consensus level

Moderate to high consensus on key issues, with speakers often approaching similar concerns from different angles. This level of agreement suggests a shared understanding of the complex challenges in data governance for children in emerging technologies, which could facilitate more coordinated efforts in addressing these issues.

Differences

Different Viewpoints

Approach to regulation of emerging technologies

Emma Day

Steven Vosloo

Implementation gaps in applying existing regulations

Existing laws may not fully cover new technologies

Emma Day argues that the main challenge in edtech regulation is implementation of existing regulations, while Steven Vosloo suggests that current laws may not provide sufficient coverage for emerging technologies like neurotechnology.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the adequacy of existing regulatory frameworks and the specific approaches to governance for emerging technologies.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of addressing data governance for children in emerging technologies, but have slightly different perspectives on how to approach regulation and implementation. These differences do not significantly impede the overall discussion on improving data governance for children, but rather highlight the complexity of the issue and the need for comprehensive, multi-faceted solutions.

Partial Agreements

Both speakers agree on the need for collaboration in governance, but Emma Day emphasizes a broader multi-stakeholder approach including civil society, while Melvin Breton focuses specifically on collaboration between financial and data regulators.

Emma Day

Melvin Breton

Need for multi-stakeholder governance approaches

Need for collaboration between financial and data regulators

Takeaways

Key Takeaways

Emerging technologies like EdTech, FinTech and Neurotech offer both benefits and risks for children’s data governance

Multi-stakeholder governance models are needed to address the complex challenges of regulating these technologies

There are gaps in existing regulatory frameworks to fully address new and converging technologies

Implementation of existing regulations is a major challenge, especially in resource-constrained settings

Future developments will likely see further convergence of technologies and expansion into new domains, requiring adaptive governance approaches

A holistic child rights approach is important when shaping future technology development and regulation

Resolutions and Action Items

UNICEF to publish a collection of case studies on innovations in data governance for children next year

Recommendation for countries to conduct policy mapping exercises to identify regulatory gaps for neurotechnology

Unresolved Issues

How to effectively regulate converging technologies that cross traditional regulatory boundaries

How to address the global fragmentation of regulation and create more uniform safety standards

How to balance innovation with protection in future regulatory approaches

How to incorporate child rights principles into the development of new technologies

How to address the digital divide and ensure equitable access to benefits of new technologies

Suggested Compromises

Use of regulatory sandboxes to allow innovation while exploring appropriate governance models

Development of certification schemes as a way to outsource some regulatory oversight

Incorporation of child rights assessments into procurement processes for new technologies

Thought Provoking Comments

EdTech, FinTech and Neurotech. My name is Sabine Witting. I’m an assistant professor for law and digital technologies at Leiden University and the co-founder of TechLegality together with my colleague here, Emma Day. And we are joined today by a variety of speakers both online and offline.

speaker

Sabine Witting

reason

This opening comment sets the stage for the entire discussion by introducing the three key technology domains that will be explored: EdTech, FinTech, and Neurotech. It establishes the interdisciplinary nature of the panel and the focus on legal and technological aspects.

impact

This framing shaped the entire flow of the discussion, providing a structure for exploring data governance issues across these three domains throughout the session.

So we have been working with about 40 experts around the world to understand better how these frontier technologies impact children, and particularly how data used through these technologies can benefit children, but also if it can cause any risks and harm to children.

speaker

Jasmina Byrne

reason

This comment highlights the global, collaborative nature of the research being discussed and frames the key tension between benefits and risks of these technologies for children.

impact

It set up the discussion to explore both positive and negative impacts, leading to a more balanced and nuanced conversation throughout.

I think there’s still a lack of clarity around exactly what data would be helpful. What are the questions that we’re seeking to answer with these data?

speaker

Emma Day

reason

This comment cuts to a core issue in data governance – the need to clearly define the purpose and value of data collection, especially for children.

impact

It shifted the conversation from general benefits to more specific considerations about data utility and necessity, encouraging more critical thinking about data practices.

You can think about, in the application with FinTech, ways in the most obvious way in which it benefits children is in enhancing financial literacy from a young age, right?

speaker

Melvin Breton

reason

This comment introduces a concrete benefit of FinTech for children that may not have been immediately obvious, broadening the scope of the discussion.

impact

It opened up exploration of specific use cases and benefits of FinTech for children, leading to a more detailed discussion of both opportunities and risks in this domain.

We’ve long recognized that children and youth do need to be considered through specific perspectives in relation to digital technologies, AI and data.

speaker

Aki Enkenberg

reason

This comment emphasizes the importance of child-specific considerations in technology governance, highlighting Finland’s proactive approach.

impact

It shifted the discussion towards more child-centric policy approaches and the need for tailored governance frameworks.

Neurotechnology is not advancing in a regulatory void or vacuum. We have existing regulations, existing laws, including the Convention on the Rights of the Child. The question is, do they apply to this frontier technology?

speaker

Steven Vosloo

reason

This comment raises a crucial question about the applicability of existing legal frameworks to emerging technologies.

impact

It prompted a deeper exploration of regulatory gaps and the need for adaptive governance approaches for frontier technologies.

I think it’s less about gaps, actually, and more about implementation. And so I think you can have gaps in putting the frameworks in place, but definitely maybe even a bigger gap in terms of implementing the regulations we already have.

speaker

Emma Day

reason

This insight shifts focus from creating new regulations to the challenges of implementing existing ones, especially in the education sector.

impact

It led to a discussion about practical challenges in governance, such as procurement rules and guidance for schools, rather than just focusing on regulatory frameworks.

So I would just like to remind everyone again that children’s rights are comprehensive, but also they need to be looked at both from the positive and protection side. And when we think about the future of tech and future of technology, that holistic child rights approach, I think is the best way forward.

speaker

Jasmina Byrne

reason

This concluding comment brings the discussion full circle, emphasizing the need for a holistic, rights-based approach to technology governance for children.

impact

It provided a unifying framework for the diverse topics discussed and reinforced the importance of balancing innovation with protection in future governance approaches.

Overall Assessment

These key comments shaped the discussion by progressively deepening the analysis of data governance issues for children across EdTech, FinTech, and Neurotech domains. They moved the conversation from general benefits and risks to specific implementation challenges, regulatory gaps, and the need for child-centric, rights-based approaches. The comments highlighted the complexity of governing frontier technologies, emphasizing the importance of multi-stakeholder collaboration, practical implementation strategies, and the need to balance innovation with protection. Throughout the discussion, there was a consistent focus on the unique considerations required for children’s data, which culminated in a call for a holistic, rights-based approach to technology governance for children.

Follow-up Questions

How can regulatory sandboxes be adapted to include civil society and children’s participation?

speaker

Emma Day

explanation

This is important to ensure a more comprehensive multi-stakeholder approach in developing and regulating new technologies affecting children.

What are some examples of successful cross-border regulatory sandboxes?

speaker

Emma Day

explanation

This could provide insights into how to regulate multinational edtech companies more effectively across different jurisdictions.

How can data protection authorities be empowered to provide more tailored regulations for specific tech domains like fintech, edtech, and neurotech?

speaker

Melvin Breton

explanation

This could lead to more effective and specific data governance regulations for children across different technology sectors.

What are the best practices for implementing existing data protection regulations, particularly in the education sector?

speaker

Emma Day

explanation

This is crucial for addressing the gap between existing regulations and their practical implementation in schools.

How can we address the equity issues arising from the potential use of neurotechnology for cognitive enhancement?

speaker

Steven Vosloo

explanation

This is important to prevent widening global inequalities in education and cognitive performance.

How can we integrate child rights principles into the development of new technologies?

speaker

Jasmina Byrne

explanation

This is crucial for shaping future technologies in a way that respects and promotes children’s rights from the outset.

What approaches can be developed to balance innovation and child protection in future regulation strategies?

speaker

Jasmina Byrne

explanation

This is important to ensure that future regulations allow for technological innovation while safeguarding children’s rights and safety.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #15 Digital cooperation: the road ahead

Session at a Glance

Summary

This discussion focused on implementing the Global Digital Compact (GDC) and fostering partnerships to address digital challenges worldwide. Participants shared experiences and insights on collaborative efforts to close digital divides, promote digital literacy, and ensure secure and inclusive digital spaces.


Key themes included the importance of cross-sector partnerships, cultural adaptation of digital initiatives, and addressing common challenges across different regions. Examples were given of projects like Finland’s work on AI strategies in African countries and efforts to connect post offices globally to expand digital access. Participants emphasized the need for secure-by-design approaches in digital infrastructure and the importance of energy efficiency in connectivity projects.


Funding emerged as a persistent challenge, with many noting the difficulty of financing good ideas and initiatives. The discussion highlighted the value of platforms like the IGF for connecting actors and increasing visibility for digital cooperation efforts. Participants also stressed the importance of data governance and cybersecurity frameworks that protect all nations, not just developed economies.


The session underscored the complexity of digital cooperation, with issues ranging from cultural translation of initiatives to aligning incentives for partnerships. Ultimately, there was optimism that while challenges exist, they are not insurmountable if stakeholders work together effectively. The discussion concluded with a call for continued collaboration and implementation of agreed-upon digital cooperation principles.


Keypoints

Major discussion points:


– The importance of partnerships and collaboration in implementing the Global Digital Compact (GDC) objectives


– Challenges in finding and forming effective partnerships, including funding, cultural translation, and connecting relevant actors


– The need for platforms to increase visibility and connect potential partners


– Data governance and cybersecurity as key areas requiring global cooperation


– The complementary relationship between the GDC and existing frameworks like WSIS


Overall purpose:


The discussion aimed to explore concrete implementation challenges for the GDC objectives and gather insights on effective partnerships and collaboration strategies from various stakeholders.


Tone:


The tone was generally constructive and solution-oriented. Participants shared examples of successful partnerships and initiatives while also highlighting ongoing challenges. There was an underlying sense of optimism about the potential for collaboration to address digital development issues, even as speakers acknowledged the complexities involved.


Speakers

– Filippo Pierozzi: Moderator


– Isabel De Sola: From the Office of the Tech Envoy


– Roy Eriksson: Ambassador for Global Gateway in Finland


– Kevin Hernandez: From the Universal Postal Union


Additional speakers:


– Nandipha Ntshalbu: Online participant


– Shamsher Mavin Chowdhury: From Bangladesh, online participant


– Alisa Heaver: From the Ministry of Economic Affairs in the Netherlands


– Patricia Ainembabazi: From CIPESA (Collaboration on International ICT Policy for East and Southern Africa)


– Damilare Oyedele: From Library Aid Africa


– Guilherme Duarte: From the Brazilian Association of Internet Service Providers


Full session report

Expanded Summary of Discussion on Implementing the Global Digital Compact


Introduction:


This discussion focused on implementing the Global Digital Compact (GDC) and fostering partnerships to address digital challenges worldwide. Participants from various sectors and regions shared experiences and insights on collaborative efforts to close digital divides, promote digital literacy, and ensure secure and inclusive digital spaces.


Key Themes and Discussion Points:


1. GDC Objectives and Implementation:


Isabel De Sola from the Office of the Tech Envoy opened the session with a poll on GDC objectives, highlighting the need for stakeholder-driven implementation through partnerships. She later mentioned that the UN would provide an implementation map for the GDC in the coming months, encouraging organizations to endorse the GDC vision and principles online.


2. Importance of Partnerships and Collaboration:


The discussion emphasized the crucial role of partnerships in implementing the GDC objectives. Roy Eriksson, Ambassador for Global Gateway in Finland, shared examples of knowledge sharing and capacity building across countries, particularly Finland’s work on AI strategies in African countries. He highlighted the Global Gateway initiative, which focuses on infrastructure investments and digital development projects in Africa.


3. Infrastructure Development and Connectivity:


Kevin Hernandez from the Universal Postal Union introduced the Connect.Post programme, which aims to connect all post offices globally to the internet by 2030, transforming them into hubs for digital services. Guilherme Duarte from the Brazilian Association of Internet Service Providers highlighted the role of small ISPs in connecting underserved areas.


4. Cultural Adaptation and Relevance:


Speakers agreed on the importance of ensuring digital initiatives are culturally relevant and inclusive. Audience members stressed the need for culturally relevant digital literacy programmes, while Patricia Ainembabazi from CIPESA highlighted learning opportunities across regions with similar challenges.


5. Data Governance and Cybersecurity:


Shamsher Mavin Chowdhury, an online participant from Bangladesh, raised concerns about data governance and cybersecurity frameworks for developing countries. An audience member discussed challenges in procuring secure ICTs and addressing education gaps in cybersecurity. The discussion highlighted the need for inclusive frameworks that protect developing countries’ interests in the digital space.


6. Energy Efficiency and Sustainability:


Nandipha Ntshalbu brought attention to the often-overlooked issue of energy efficiency and sufficiency in digital infrastructure development. This focus highlighted the need to consider sustainability in digital development projects and the intersection of digital and green transitions.


7. Funding and Resource Allocation:


Financing emerged as a persistent challenge. Roy Eriksson shared an example of Finland outsourcing expertise to support AI strategy development in Zambia, demonstrating innovative approaches to resource allocation in international development cooperation.


8. Platforms for Collaboration:


The discussion highlighted the value of platforms like the Internet Governance Forum (IGF) for connecting actors and increasing visibility for digital cooperation efforts. Patricia Ainembabazi mentioned the Forum on Internet Freedom in Africa (FIFAfrica) and the African Parliamentary Network on Internet Governance (APNIG) as examples of regional platforms fostering cooperation and knowledge sharing.


9. Alignment with Existing Frameworks:


Alisa Heaver from the Ministry of Economic Affairs in the Netherlands raised questions about aligning the GDC with existing frameworks like the World Summit on the Information Society (WSIS) action lines. Isabel De Sola responded, emphasizing the importance of building on existing work and avoiding duplication.


10. Innovative Projects and Initiatives:


Several innovative projects were mentioned during the discussion:


– The Library Tracker project by Library Aid Africa, presented by Damilare Oyedele, which aims to map and support libraries across Africa.


– The SYNC digital well-being program, focusing on developing preventative interventions for high schoolers in Saudi Arabia.


– The Dynamic Coalition and Cyber Security Hub video, presented by an audience member, showcasing efforts in cybersecurity education.


Agreements and Consensus:


There was broad agreement on the importance of partnerships, culturally relevant initiatives, and addressing common digital challenges across regions. Speakers from diverse backgrounds found unexpected consensus on the similarities of digital challenges across different geographical areas.


Differences and Unresolved Issues:


While there was general agreement on overarching goals, differences emerged in the focus of digital development efforts. The discussion revealed unresolved issues, such as ensuring fair and transparent data governance, addressing power imbalances created by data monopolies, and fostering global cooperation on cybersecurity for developing countries.


Conclusion and Next Steps:


The discussion concluded with a call for continued collaboration and implementation of agreed-upon digital cooperation principles. Key takeaways included the crucial role of partnerships, the importance of cultural relevance, and the need to address energy efficiency in digital infrastructure. Stakeholders are encouraged to engage with the GDC implementation process and contribute to ongoing efforts in digital cooperation.


The session underscored the complexity of digital cooperation, with issues ranging from cultural translation of initiatives to aligning incentives for partnerships. While challenges exist, there was optimism that they are not insurmountable if stakeholders work together effectively. The discussion highlighted the need for further dialogue on specific implementation strategies, prioritization of actions, and allocation of resources to ensure the successful realization of the Global Digital Compact’s objectives.


Session Transcript

Filippo Pierozzi: over to Isabel.


Isabel De Sola: Thanks, Filippo. I’m Isabel De Sola from the Office of the Tech Envoy, and I think what we can do, since we’re warmed up and rolling into the next session, is focus our thoughts now on answering some of those concrete implementation challenges. So I would like to invite Ambassador Roy Eriksson of Finland to join me on the stage, and also Thelma Kwei of Smart Africa, if you’re with us here in the room. Thelma? Okay, she’s on her way. And on the note, from our colleague who was just online asking about connectivity, I wanted to take a little poll, if you’ll bear with me, here in the room. So if you’re familiar with the GDC objectives, you know that GDC objective one is about closing all digital divides and working on connectivity. I’m thinking about those who are unconnected, either physically, because of lack of infrastructure, or because of the skill set and the affordability of infrastructure. So I want to take a little poll of the organizations that are here in the room. Could you stand up if you envision that you’re contributing to closing digital divides implementation of GDC objective one? You’re working on infrastructure, you’re working on digital skills. Anybody in the room, can you stand? Okay, excellent, nice. Now, stay standing if you’re working on objective two, the inclusive digital economy, or stand up if you’re working on tech transfers to the developing world, if you’re working on connecting businesses to the internet, if you’re selling services online. No, don’t be shy, don’t be shy, stand, stand. Okay, so slightly less, slightly less. Objective three, we’re thinking about open, safe online spaces. We’re thinking about women and girls safety online, gender-based violence. Okay, excellent. You’re thinking about misinformation, disinformation. This is your concern. Wonderful. Okay, there’s a lot of us here. Objective four, you are a company that has a lot of data, and you’re governing the data, or you have data for development. 
You’re thinking about how to apply the data to development challenges. Okay, one person at the back of the room, or you’re thinking about interoperability, crossing borders with data. No? Okay, this is the best student. Now, who’s worried about AI, governance of AI? Who’s working on that here, or concerned? Great. Okay, wonderful. Thank you so much for participating in that exercise. You are in the right session. You have come to the right session, and if you still haven’t made a decision, or still haven’t clarified how you’d like to participate in GDC implementation, this is also the right session. So, forgive me for some of the abstract thinking, or sorry, abstract comments from the UN at this stage. I mentioned before that GDC implementation is going to be primarily conducted by the stakeholders, so by governments, by businesses, civil society, academia, scientists, children as well. And the wonderful thing about GDC implementation is that it’s already happening. The UN will play a role by providing and opening up a platform, by convening the stakeholders, and allowing information about implementation to circulate, and we’re working on that in the form of an implementation map that more news should be coming in the month of January about how you can all get involved in that. But we do want to hear your thoughts on the design, and we’ve gotten a lot of comments in this, in the previous session on the design, and we do want to hear your thoughts about how working across sectors is going to make a real difference for that. So, we’ve invited a couple of guests, just two voices this morning, to tell us their thoughts on how some of those partnerships are assisting them, or will be assisting them, to take GDC objectives forward. I’d like to turn over to Ambassador Eriksson first, and then Thelma from Smart Africa, and then, unfortunately, there’s no free breakfast at this session. I’m going to come into the audience. 
I want to hear about what you are doing, and see if that can inform the UN’s design, or the next steps that we take forward on this road to digital cooperation. So, over to Ambassador Eriksson first. Thank you so much for being here at such an early start this morning, and tell us your thoughts.


Roy Eriksson: Thank you. Maybe I should first introduce myself. I am the Ambassador for Global Gateway in Finland, and Global Gateway is an EU initiative to have big infrastructure investments in the global south, or new emerging markets. And it’s interesting, because I had structured my intervention exactly the same way as you said. So, the first three goals, closing the digital divides in order to achieve the sustainable development goals, and expanding inclusion in the benefits of digital economy, and then foster an inclusive, open, safe, and secure digital space that respects and protects and promotes human rights. All these are what we are taking into consideration when we’re doing projects under the umbrella of Global Gateway. The GDC also mentions gender equality and the empowerment of all women and girls, and the full and equal participation in the digital space, which is also very important for Finland, as well as accessible and affordable data and digital technologies and services, because it’s all right to have connectivity, but you need to have access as well. So, meaningful connectivity is important. In Finland, access to the internet is considered a human right, and that’s why we are promoting through the Global Gateway connectivity issues. Global Gateway has five sectors, but digitalization is at least one of them, and we have chosen that as our focus. We work mostly in Africa, half of our investments will go to Africa, a quarter to Asia, and another quarter to Latin America. But we are not only bringing connectivity, building, for example, submarine cables, or building in the last mile connectivity. We are also looking into not only the hard infrastructure, but also the soft infrastructure, meaning capacity building and increasing digital literacy skills and capacity. And I actually have a couple of good examples what Finland is doing. Just a second, I’ll have to find it, because the page has now… There it is. 
We have one project that is coming to an end, but it’s continuing under a different name, but it’s African digital and green transition. And in this project, for example, we sent an expert for six months into Zambia, and they wrote the artificial intelligence strategy for the country. So, this is some sort of capacity building that we do hands-on. This partner of ours found out that there’s a lot of demand for this kind of service, so they actually wrote a book on ethics for digitalization. So, these are concrete examples of how we can help and share our knowledge with other partners. Well, maybe it’s best to give the audience the possibility to ask clarifying questions, but we want to provide the whole package to our partners. So, we build the connectivity and help with digitalization, but we also emphasize schooling, education, and skills, so that our partners have the whole package, and they can manage what the challenges are with the digital economy. Thank you.


Isabel De Sola: Thank you so much. And tell me, so you represent the government of Finland. And was it the government of Finland, the example that you gave, that went to some African countries and found the partners there? Was it an expert from within the Ministry of Foreign Affairs that helped to write this book, or how did this work get done? Because it sounds like you were working through partnerships.


Roy Eriksson: Yes, yes. We actually, we outsourced this. We found somebody who would be able to send an expert of theirs and we paid the costs for having that expert residing in Zambia and writing this strategy.


Isabel De Sola: So what I’m hearing in your story is that actually the organization where you worked act as a sort of broker of different actors who wanted to collaborate on the ground to bring ethical AI ideas to a certain African context and translate these into the local context.


Roy Eriksson: Yes, that’s correct. My work is actually like a facilitator. I find out what kind of projects there are and then reach out to companies and academia in Finland if they would be able or interested in participating in it, as well as trying to find financing for these projects. Financing seems to be a crucial point. There are lots of really, really good ideas, but finding financing for those, that is the crucial thing.


Isabel De Sola: Indeed. And just one more question and then we’ll go to the audience. Did it all work out well? Were there any challenges or bumps along the way? What did you learn from the experience that can help others who are in similar positions of trying to connect the actors in order to get things done?


Roy Eriksson: Well, this specific project that I mentioned, it’s quite surprising that the challenges are the same. You’re from north or south, east or west. It is the same challenges that you have to deal with. So there’s a lot of benefits from having this kind of technological diplomacy, sharing your experience, so that the wheel doesn’t have to be invented everywhere from scratch. You can help and give some advice, and this is something that is important. Another issue that has come up a lot is, I participated in a big conference in Latin America, and there cyber security issues are ones that need to be tackled. And there you can have a lot, because we might be a little bit more advanced, but we have the same challenges. So in order for having a secure digital space, it is to share our experience and help others to raise the standards so that they can also fight against cybercrime.


Isabel De Sola: Thank you for this reflection.


Kevin Hernandez: Hi, everyone. My name is Kevin Hernandez. I'm from the Universal Postal Union, which is the UN organization that focuses on the postal sector, and we're here to talk about the challenges we face when it comes to connecting the actors in order to get things done. In the postal sector, we have a program called Connect.Post that aims to connect all the post offices in the world to the Internet by 2030, and then transform them into one-stop shops where citizens can access government services and digital financial services, and also leverage them as hubs for community networks. This implies partnerships across governments, international donors, and private companies, and it's been quite interesting. We've gotten some projects off the ground in several countries already. We've partnered with UN organizations, private companies, and of course governments, across different industries and at different levels of government. Partnerships are key — there's no other way to do this than through partnerships; the program can't function without them. Usually there is a designated postal operator in each country, and they need to be given the authority to deliver other types of services for this to work, and the legal authority to operate a community network. So you need to facilitate a lot of discussions, introduce them to a lot of people, and help them frame the way they want to go about enabling change, in a way they're not used to doing. So there are a lot of challenges. But anyway, we will have a session later today. If anyone is interested in what I'm saying, I think we're in workshop 10 out of 335, but the name of the session is Connect Our Posts: Connecting Communities to the Postal Network.


Isabel De Sola: I'll just note the name of the program again: Connect.Post.


AUDIENCE: This is about finding the right cooperation and finding the right people to work with. I'll give two examples of the work we've been doing in the past two years and the reports we've produced. The first is that we looked at whether governments procure their ICTs secure by design, and we found that the answer is almost zero. If industry doesn't get the incentive to produce secure ICTs, then we will always remain insecure. We've come up with ideas for how to build capacity on that topic and develop procurement rules, but somebody has to start listening to what we have done, and that's already a major challenge. The second is education and skills, and what we found on both topics, as Ambassador Erickson was saying, is that there is not that big a difference across the whole world. Almost no one is procuring secure by design. In education and skills, whether you live in Papua New Guinea or in the Netherlands, where I live, there's about a 20-year gap between what industry demands from tertiary security education and what is on offer — and what the best practices in the world are is something we want to find out. We're going to present that at 12.30 at the Dynamic Coalition booth. We made a great video on that, called the Cyber Security Hub, and it's something we want to build, creating exactly the sort of programs the digital compact will be about, and the sort of input we want to deliver there. We need partners to do that, and that's why I'm advocating for ourselves here as well — but we're delivering, and that is what a Dynamic Coalition in the IGF is capable of: we can deliver on our promises.


Kevin Hernandez: So that is something I invite you to join — is3c.org is where all the information is. Thank you for this opportunity, and we're looking forward to working with you.


Isabel De Sola: Yes, thank you. And before you go: we heard from our colleague from the Postal Union about the importance of discussions — of bringing the actors together and discussing when there's new information or there need to be new ways of working. And you're saying that you have the great ideas, the good content, and you need a catalyst or a boost for visibility, to connect these great ideas with the procurers — that's the partnership you're looking for. Is that right? So the IGF is a place for visibility, I imagine, and you're looking for other platforms where you can get more visibility for these ideas?


AUDIENCE: Yes, I think that’s a great question, and I think it’s important to be able to fund the people who actually do the work, because that is the other challenge, that we need to find funding for the experts to pay them, and we have the experts, I can tell you that also.


Isabel De Sola: Thank you so much. So, funding, again. We'll go to one more in the room and then online as well, working my way to the front of the room. Discussions, platforms for visibility — tell us about your partnerships.


AUDIENCE: Okay, so I'm with the SYNC digital well-being program, based in the King Abdulaziz Center for World Culture. One of our projects would come under the heading of digital literacy, but it's not just how to use technology — more how to use technology in a way that is safe and health-promoting. It's a preventative intervention for high schoolers across the Kingdom of Saudi Arabia, so that they can engage with technology in ways that foster well-being and aren't damaging to their health. That's in partnership with the Johns Hopkins Bloomberg School of Public Health, to develop the content, pilot the intervention, and also make sure that it's culturally resonant. I think that's another huge issue in moving into other territories and in skills transfer: that it is culturally attuned and not dissonant with local values, things like that. So that's been some of our experience with partnerships.


Isabel De Sola: That’s a great example. And so you found a partner that’s based in the US, and your organization is based here in Saudi, and your beneficiaries or stakeholders are Saudi youth. And so the translation from one culture to another has been part of the dynamic of your partnership. How have you made the most of the connection in the US? And then how have you landed it here in Saudi Arabia in a culturally relevant way?


AUDIENCE: So I think one, there’s many strands to ensuring it was culturally relevant. One of the partners at Hopkins is a Saudi national who grew up in KSA and studied in the US. So he’s one of the primary investigators, one of the project leads. But also we’ve done extensive stakeholder groups, stakeholder mapping with people in Saudi Arabia, teachers, parents, students.


Filippo Pierozzi: Thank you so much for that example. And if my mic… Yes? This one will go online. Let’s… Hello? Can you hear me? Okay. Yes. We’ll take one more example online and then start wrapping up. If you could introduce yourself from online, hopefully we can hear you. And I’ll ask the IT team to put you up on the screen. Thank you. I hope you can hear me.


Nandipha Ntshalbu: Can you hear me? We can hear you. Thank you. Probably I wouldn't want to go to an example; I think I would rather engage with the beautiful presentation from Inbal. And I want to highlight one area that I find missing in the discourse. With everything we have been dealing with, both in the IGF and even in the compact itself — even in its objectives — we seem not to be visibly addressing the issue of energy efficiency and sufficiency. If we don't address that in an objective, we will not be intentional about it. And if we look at fiat and non-fiat currency, all developments have pointed not only to challenges with connectivity; they are a function of the availability of energy. And if one looks at the issues of energy, it becomes important that we address this. So that's the first thing. The second thing is: if it could be possible for those of us online to be given the contacts of the colleagues who have just presented, and the reports — because one would be interested in knowing which states in Africa have been engaged beyond Zambia. I'm asking this from the angle of data localization and data sovereignty, as you're looking at ethical AI deployment. So one would be interested in how one can participate in the initiatives from Finland, and, in terms of the report, which African countries have been contacted. Thank you.


Roy Erikkson: Okay, thank you. I would like to comment also on secure by design and connectivity, because that is something Finland especially has taken up in our discussions. For example, in Latin America there is a digital alliance between the European Union and the Latin America and Caribbean region. At the conference I was referring to, we discussed the importance of security by design, because the digital economy will be based on connections, and the connections need to be secure in order to increase public trust in digital services — but also for businesses, so that they know their data is secure and doesn't leak anywhere. So it is an issue we are tackling and taking into consideration when we design projects. And it is true that energy and connectivity go hand in hand. In many places in Africa, for example, the communications towers use energy provided by diesel generators, and of course, if we want to achieve our climate goals, we should try to find ways of using less and less fossil fuel. So one of our projects has been to provide solar panels for these communication towers, so that they are independent and have sufficient energy, and the connectivity is actually better, because it doesn't cut out, and so forth. So we are looking into that. That's why it's called the digital and green transformation: we need to look at both climate issues and digital and connectivity issues. Data storage is an excellent question. We have mentioned at other conferences in Africa that we are also building data centers. We want to provide, as I was saying, the whole package — the skills and the hard infrastructure. But it's also a question of who is in control of the data. And in Africa, I see that there is so much really good talent. I would say that, for example, in the financial sector, the applications you have invented there far exceed the applications we are using in Europe.
So if you can have ownership of the data in the data centers, that could help you provide new applications using the data that governments are gathering there. And the best way of participating in these projects is to make an inquiry to the local EU delegation and say that you would be interested in Global Gateway projects — and especially if you are interested in digitalization, say so. Or, because we don't have embassies in all countries in Africa, you can also contact the Finnish embassy, because we are all part of Team Europe. So we work together. Thank you.


Isabel De Sola: Thank you for those inputs. And I think there’s one more person online. Let’s raise their hand. Please introduce yourself and the tech team will put your screen up so we can see you.


Shamsher Mavin Chowdhury: Hello, everyone. My name is Shamsher Mavin Chowdhury, and I'm from Bangladesh. So I have a concern. May I start?


Isabel De Sola: Yes, please.


Shamsher Mavin Chowdhury: The Global Digital Compact presents an opportunity to address these challenges through fair global governance of digital technologies. However, for the Compact to be effective, it must ensure that countries like Bangladesh are not left behind. It must prioritize inclusive data governance and cybersecurity frameworks that protect all, not just a privileged few. With that in mind, I would like to pose the following questions to this distinguished assembly. How will the Global Digital Compact ensure fair and transparent data governance that protects user privacy and enables countries like Bangladesh to retain control over their national data assets? Will the Compact address the power imbalance created by data monopolies, where global tech giants dominate developing economies' digital ecosystems? And on cybersecurity, what steps are being taken to foster global cooperation so that developing countries like Bangladesh can access resources, expertise, and frameworks to combat cyber threats? Thank you.


Isabel De Sola: Thank you for those questions online. Actually, when we did our poll here in the room for objective four, on data governance, our audience didn't stand up — so there were only a few of us in the room working on data governance, which is perhaps the most ambitious of the GDC objectives. To the person who asked this important question: the GDC already has provisions on data governance, in two strands. One is to enhance data for development — the data we can use to spur and catalyze progress on the SDGs. The second strand is on interoperability and governance of data across borders. On that note, you'll be happy to hear that a working group on data governance, tasked to develop principles for data governance over the next two years, is already getting started. Can you hear me? Sorry, it looks like somebody can't hear me. It's already getting started, based out of the Commission on Science and Technology for Development in Geneva. I believe the working group will be composed and its members named in January or February of next year, and then they'll have a year and a half to work on principles. So that's good news — rapid GDC implementation. On the question of cybersecurity, I'll just mention very quickly, before we go back to the audience, that a convention on cybercrime was recently agreed, building on the European Budapest Convention, which has been, for the last 10 or 15 years, I think, a bedrock of cybercrime work. A country like Bangladesh will ideally have participated in shaping that framework and can then implement it at the local level going forward. So let me summarize where we are, and then maybe go back to the audience for comments. We've been talking about the road ahead and partnerships, and a couple of things have popped up.
One is the need for lots of discussion across partners to understand each other; the need for translation between different cultures; the difficulty of finding partners when one is based far away or, for example, participating online today — it's much more difficult to find what you need; and the utility of platforms like the IGF for connecting the actors or for getting visibility. So, the supply and demand of partnerships: I have great content, but where are the clients who will use my great content? And the ever-persistent question of financing for these initiatives. I wanted to throw out to the audience a question about incentives to partner. But I also see that there's a hand up — two hands up — so let me hand you a microphone to help us keep thinking about these ideas. And if you could introduce yourself. Thank you.


Alisa Heaver: Good morning. My name is Alisa Heaver. I'm from the Ministry of Economic Affairs in the Netherlands. I actually wanted to circle back to the question Henriette asked in the previous session, so I won't give a long introduction. It was basically: why doesn't the GDC link to the WSIS action lines, but does link to the SDGs? Thanks.


Patricia Ainembabazi: Hi, everyone. I am Patricia Ainembabazi from Uganda. I work in civil society with CIPESA — the Collaboration on International ICT Policy in East and Southern Africa. I first wanted to talk about partnerships and the work we do around this topic. We do trainings — all things advocacy — for journalists, other CSOs, as well as parliamentarians. At the moment we have a parliamentary track; I don't know if any of you knows APNIG, the African Parliamentary Network on Internet Governance. We work with these groups of people to advance internet governance on the continent. You've talked about partnerships: we have one with the EU, which someone mentioned earlier, and we also work with Smart Africa — I was waiting to see Thelma here. We've had trainings on data governance, around harmonizing or aligning the EU data policy framework with the different policies in the different countries in Africa. So I would say partnerships do work. Obviously the money helps, but it's also about aligning the goals that the different countries or organizations want. At CIPESA, we found that the issues we address in East and Southern Africa are not limited to these regions; they are found across sub-Saharan Africa, and actually in many places in Europe. The context matters, but the issues remain the same. So partnerships do work, and we welcome organizations that want to work with CIPESA towards the goals we all want. Thank you.


Isabel De Sola: Thank you for that. Just one comment on your intervention — you mentioned the hybrid setup, and it takes some skills training to do it correctly, the hybridness of our session. The point in your comments that struck me is that when you looked across countries and regions, you found many similarities: some of the problems are the same, and the desire for partnerships is the same. So maybe, if I can rephrase what you were saying, there were actually learning opportunities across the region. Looking out there and finding others with a similar objective or frame of mind was useful for your organization. Is that correct?


Patricia Ainembabazi: Yes. We have FIFAfrica — and this has nothing to do with soccer; it is the Forum on Internet Freedom in Africa. We hold it every year, and this year we were in Dakar, Senegal, with almost 500 participants, not only from Africa but also from abroad. We always have different streams, topics, and sessions, and at the end, when you look at the reports and the submissions from all the different groups, it's the same problems — the same appetite for open internet access and all the principles of the GDC.


Filippo Pierozzi: Thank you for that. And there was one last comment over here, two more comments. Okay. You could just pass the mic. And thank you for introducing yourself.


Damilare Oydele: Thank you so much. My name is Damilare Oydele. I work with Library Aid Africa. We leverage data, technology, and community-centered approaches to transform libraries into vibrant spaces. As was said about data infrastructure earlier, libraries are access points to digital connectivity and access. Over time, as an organization, we've worked collaboratively with libraries across African countries on transforming these libraries into vibrant spaces. More recently, we've been working on what we call Library Tracker. We're tracking libraries across African countries to understand what they are doing, the impact they're making, and, more importantly, how many of these libraries are connected. We use this data to engage policymakers and partners to understand areas of need for libraries, and, for the users of the platform, to let them see what libraries are around them and assess what those libraries have to offer. We're also working on libraries with data facilities — a focus on how we work with libraries to transform them into data and tech hubs. The reason? The needs of our communities are changing and evolving over time, which means our libraries also need to change along that trajectory. We're also working on upskilling librarians in African countries in data skills and tech skills, so that libraries become more vibrant, viable, and thriving. Over time, we've worked with library partners across African countries to implement our innovations. And we're always looking at how we can tap into the ecosystem of data-economy and data-governance partners, to cross-pollinate ideas and innovation and, beyond that, to bring investment into the library ecosystem — to make libraries connected.
Because if libraries are connected, economies are also connected, and that transforms the societies where these libraries are located. Thank you so much.


Guilherme Duarte: Hello, good morning, everyone. My name is Guilherme Duarte. I'm from the Brazilian Association of Internet Service Providers, a membership organization of small ISPs that work in Brazil. We've been attending the IGF for a few years. Our members do a lot of work connecting schools in Brazil, and we have some good experiences with public-private partnerships for building infrastructure in the Amazon and other under-assisted areas of Brazil. We also have good knowledge of how these small companies have been building up infrastructure in Brazil by themselves — so private investment in public infrastructure as well. So mine is more of a question: how can we be more a part of the work that is being done here?


Isabel De Sola: Thank you for those last comments. No, that's okay. I'll go back to the start of our session to respond to the last question: how to get involved? The first window to get involved is the endorsement of the GDC — can you still hear me? There is a window online that allows an organization to signal that it would like to endorse the vision and the principles; that's one thing. But there is also a path where you don't endorse the vision and the principles, but you provide information on what your organization is doing and which of the five pillars of the GDC are of greatest interest to you. That's a way to get started. What's coming up in the next few months is an implementation map of the GDC, which the UN agencies are currently designing as part of our role — to provide a platform and a space, in a sense, to convene all the actors and make it easier for them to find each other. The implementation map has been under design since September; there should be more news about it in January, and a way for your organization — or our friend from the libraries, or our child rights advocates, and all the different actors — to voluntarily signal, if they would like to, what they're up to in GDC implementation. The utility of the map is hopefully not only for the cartographers. A cartographer is a map designer; the SG is the map designer here, but the map isn't meant to help him. It's really meant to help the actors, so that you could come and say, okay — Liberia, objective three, an open, safe and secure online space — and see the different actors there. So watch this space; hopefully we will have more news soon. I also wanted to make sure to address the question from Henriette. I actually don't know why the GDC objectives weren't mapped in the text against the WSIS action lines.
However, that exercise has already taken place, and it's available online from UNGIS — the UN Group on the Information Society, at ungis.org, I think — which has developed a map where you can see how the GDC is connected to the WSIS action lines. It has been difficult to describe in what ways the WSIS and the GDC work together, and part of the task of the WSIS review is to describe how they interact. The WSIS was the starting point, and it has been the primary framework for digital cooperation over 20 years. After 20 years, the GDC has provided, in a sense, a refresher — a little icing on the cake. The WSIS tackles the fundamental starting points of digital cooperation — connectivity, access to information, connecting businesses to the internet — and it also talks about how ICTs could be used for sustainability. That 20-year agenda is still very relevant: we still haven't connected the entire planet; not everybody has the digital literacy and access to capacity building they need to use the internet; and we still aren't making the most of ICTs for the environment. The WSIS agenda is still relevant. What the GDC does is add some new challenges and opportunities to this agenda after 20 years, which the member states felt was a timely moment to do so: it adds data, DPIs, misinformation, artificial intelligence, et cetera. The two agendas are very complementary. I hope that goes some part of the way to answering your question. It gets very technical, and the audience might not be that familiar with Action Line B4, but I think that's the one that speaks about the ethics of ICTs. And today we speak about human rights online. So language matters, and language has changed.
In 2003 and 2005 — 2006, forgive me, 2005 — we were thinking about the ethics of ICTs, but over 20 years there have been so many risks to human rights from the use of ICTs, and from the lack of use of ICTs, that the conversation has shifted, and the GDC reflects this evolution in the language, I think. I hope that goes some way to answering your question. I believe everybody needs a coffee break, so we might wrap up with one or two ideas that I'm taking away from this conversation — and I invite Ambassador Erickson to do the same. On the road ahead, partnerships are going to be key; in fact, they have been all throughout these years, as many in the audience pointed out. It sounds like there's an appetite here for finding partners, for learning from others in similar situations across borders, and for recognizing the similarities of the challenges we're facing. I may be in El Salvador, but I can share with somebody in Denmark the same challenge about misinformation, for example, and learning from each other is very valuable. Partnerships, however, take time. They take discussion. They take going out there and beating the pavement, looking for the people you need. And they take funding, and an alignment of interests so that there are incentives to collaborate. Those are some of the things I take away. Thank you all for your participation and attention this morning. Ambassador Erickson, the final word is with you.


Roy Erikkson: Thank you. Yeah, my takeaway from this is that I think we more or less know what the challenges are. It is now just a matter of finding the best ways of building partnerships, doing things together, and implementing what we have agreed on under the Digital Compact. I'm quite positive and optimistic that the challenges are not insurmountable. We can do it, and we do it together.


I

Isabel De Sola

Speech speed

144 words per minute

Speech length

2669 words

Speech time

1109 seconds

Stakeholder-driven implementation through partnerships

Explanation

Isabel De Sola emphasizes that GDC implementation will be primarily conducted by stakeholders, including governments, businesses, civil society, academia, and scientists. The UN’s role is to provide a platform and convene stakeholders to facilitate information sharing about implementation.


Evidence

Mention of an implementation map that will be available in January to allow stakeholders to get involved.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Agreed with

Roy Erikkson


Kevin Hernandez


Patricia Ainembabazi


Agreed on

Importance of partnerships in digital development


R

Roy Erikkson

Speech speed

121 words per minute

Speech length

1275 words

Speech time

627 seconds

Finland’s Global Gateway initiative for infrastructure investments

Explanation

Roy Erikkson discusses Finland’s involvement in the EU’s Global Gateway initiative, which focuses on infrastructure investments in the global south and emerging markets. The initiative emphasizes digitalization and connectivity issues, along with capacity building and digital literacy skills.


Evidence

Example of sending an expert to Zambia for six months to write the country’s artificial intelligence strategy.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Differed with

Shamsher Mavin Chowdhury


Differed on

Focus of digital development efforts


Knowledge sharing and capacity building across countries

Explanation

Roy Erikkson highlights the importance of sharing experiences and knowledge across countries to address common challenges. He emphasizes that technological diplomacy can help countries avoid reinventing the wheel and benefit from others’ experiences.


Evidence

Mention of participating in a conference in Latin America where cybersecurity issues were discussed, noting that sharing experiences can help raise standards to fight cybercrime.


Major Discussion Point

Benefits of Partnerships in Digital Development


Agreed with

AUDIENCE


Patricia Ainembabazi


Agreed on

Need for culturally relevant and inclusive digital initiatives


K

Kevin Hernandez

Speech speed

128 words per minute

Speech length

362 words

Speech time

169 seconds

Connect.Post program to connect post offices to the internet

Explanation

Kevin Hernandez presents the Universal Postal Union’s Connect.Post program, which aims to connect all post offices worldwide to the internet by 2030. The program seeks to transform post offices into one-stop shops for government services, digital financial services, and community network hubs.


Evidence

Mention of partnerships across governments, international donors, private companies, and different industries to implement the program.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Agreed with

Isabel De Sola


Roy Erikkson


Patricia Ainembabazi


Agreed on

Importance of partnerships in digital development


A

AUDIENCE

Speech speed

142 words per minute

Speech length

580 words

Speech time

243 seconds

Need for secure-by-design ICT procurement

Explanation

An audience member highlights the lack of secure-by-design ICT procurement by governments. They argue that without incentives for industry to produce secure ICTs, digital systems will remain insecure.


Evidence

Mention of a report finding that almost no governments procure ICTs secure by design.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Importance of culturally relevant digital literacy programs

Explanation

An audience member discusses the development of a digital literacy program for high schoolers in Saudi Arabia. The program focuses on safe and health-promoting technology use, emphasizing the importance of cultural relevance in skills transfer.


Evidence

Partnership with Johns Hopkins Bloomberg School of Public Health to develop culturally resonant content and pilot the intervention.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Agreed with

Roy Erikkson


Patricia Ainembabazi


Agreed on

Need for culturally relevant and inclusive digital initiatives


Finding financing for projects and experts

Explanation

An audience member emphasizes the challenge of finding funding for digital development projects and experts. They stress the importance of being able to fund people who actually do the work.


Major Discussion Point

Challenges in Digital Cooperation



Nandipha Ntshalbu

Speech speed

134 words per minute

Speech length

255 words

Speech time

113 seconds

Energy efficiency and sufficiency in digital infrastructure

Explanation

Nandipha Ntshalbu points out the need to address energy efficiency and sufficiency in digital infrastructure development. They argue that connectivity challenges are often a function of energy availability, which should be explicitly addressed in the GDC objectives.


Major Discussion Point

Challenges in Digital Cooperation


Data localization and sovereignty concerns

Explanation

Nandipha Ntshalbu raises concerns about data localization and data sovereignty in the context of AI deployment in African countries. They express interest in understanding which African countries have been involved in initiatives related to ethical AI deployment.


Major Discussion Point

Challenges in Digital Cooperation



Shamsher Mavin Chowdhury

Speech speed

117 words per minute

Speech length

172 words

Speech time

87 seconds

Data governance and cybersecurity frameworks for developing countries

Explanation

Shamsher Mavin Chowdhury emphasizes the need for inclusive data governance and cybersecurity frameworks that protect all countries, not just privileged ones. He questions how the Global Digital Compact will ensure fair and transparent data governance for countries like Bangladesh.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Differed with

Roy Erikkson


Differed on

Focus of digital development efforts


Power imbalance created by data monopolies

Explanation

Shamsher Mavin Chowdhury raises concerns about the power imbalance created by data monopolies, where global tech giants dominate developing economies’ digital ecosystems. He questions how the GDC will address this issue.


Major Discussion Point

Challenges in Digital Cooperation



Alisa Heaver

Speech speed

152 words per minute

Speech length

68 words

Speech time

26 seconds

Aligning GDC with existing frameworks like WSIS

Explanation

Alisa Heaver questions why the GDC objectives are not linked to the WSIS action lines, while they are linked to the SDGs. This raises the issue of aligning the GDC with existing digital development frameworks.


Major Discussion Point

Challenges in Digital Cooperation



Patricia Ainembabazi

Speech speed

136 words per minute

Speech length

369 words

Speech time

161 seconds

Collaboration on internet governance policies in Africa

Explanation

Patricia Ainembabazi discusses CIPESA’s work on internet governance in Eastern and Southern Africa. They collaborate with various stakeholders, including journalists, CSOs, and parliamentarians, to promote internet governance issues.


Evidence

Mention of the African Parliamentary Network on Internet Governance (APNIG) and partnerships with the EU and Smart Africa.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Agreed with

Isabel De Sola


Roy Erikkson


Kevin Hernandez


Agreed on

Importance of partnerships in digital development


Learning opportunities across regions with similar challenges

Explanation

Patricia Ainembabazi highlights that the issues addressed by CIPESA in Eastern and Southern Africa are not limited to these regions but are found across sub-Saharan Africa and even in Europe. This presents opportunities for cross-regional learning and collaboration.


Evidence

Mention of the Forum on Internet Freedom in Africa (FIFAfrica) event, which attracts participants from Africa and abroad to discuss common internet-related challenges.


Major Discussion Point

Benefits of Partnerships in Digital Development


Agreed with

Roy Erikkson


AUDIENCE


Agreed on

Need for culturally relevant and inclusive digital initiatives



Damilare Oydele

Speech speed

189 words per minute

Speech length

332 words

Speech time

104 seconds

Transforming libraries into digital connectivity hubs

Explanation

Damilare Oydele discusses Library Aid Africa’s work in transforming libraries into vibrant digital spaces. They are developing a Library Tracker to understand the impact of libraries and their connectivity status, aiming to turn libraries into data tech hubs.


Evidence

Mention of working with libraries across African countries and developing the Library Tracker tool.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)



Guilherme Duarte

Speech speed

196 words per minute

Speech length

185 words

Speech time

56 seconds

Small ISPs connecting underserved areas in Brazil

Explanation

Guilherme Duarte discusses the role of small Internet Service Providers (ISPs) in connecting underserved areas in Brazil. These ISPs are involved in connecting schools and building infrastructure in remote regions like the Amazon.


Evidence

Mention of public-private partnerships for building infrastructure and private investment in public infrastructure.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Public-private partnerships for infrastructure development

Explanation

Guilherme Duarte highlights the importance of public-private partnerships in developing digital infrastructure in Brazil. Small ISPs are involved in both public-private partnerships and private investments in public infrastructure.


Evidence

Examples of connecting schools and building infrastructure in under-assisted areas like the Amazon.


Major Discussion Point

Benefits of Partnerships in Digital Development


Agreements

Agreement Points

Importance of partnerships in digital development

speakers

Isabel De Sola


Roy Erikkson


Kevin Hernandez


Patricia Ainembabazi


arguments

Stakeholder-driven implementation through partnerships


Knowledge sharing and capacity building across countries


Connect.Post program to connect post offices to the internet


Collaboration on internet governance policies in Africa


summary

Multiple speakers emphasized the crucial role of partnerships in implementing digital development initiatives, sharing knowledge, and addressing common challenges across different regions and sectors.


Need for culturally relevant and inclusive digital initiatives

speakers

Roy Erikkson


AUDIENCE


Patricia Ainembabazi


arguments

Knowledge sharing and capacity building across countries


Importance of culturally relevant digital literacy programs


Learning opportunities across regions with similar challenges


summary

Speakers agreed on the importance of ensuring digital initiatives are culturally relevant and inclusive, taking into account local contexts while addressing common challenges.


Similar Viewpoints

Both speakers highlighted the importance of infrastructure investments and public-private partnerships in connecting underserved areas and promoting digital development.

speakers

Roy Erikkson


Guilherme Duarte


arguments

Finland’s Global Gateway initiative for infrastructure investments


Small ISPs connecting underserved areas in Brazil


Public-private partnerships for infrastructure development


Both speakers expressed concerns about data governance, sovereignty, and the need for inclusive frameworks that protect developing countries’ interests in the digital space.

speakers

Shamsher Mavin Chowdhury


Nandipha Ntshalbu


arguments

Data governance and cybersecurity frameworks for developing countries


Data localization and sovereignty concerns


Unexpected Consensus

Similarities in digital challenges across diverse regions

speakers

Roy Erikkson


Patricia Ainembabazi


arguments

Knowledge sharing and capacity building across countries


Learning opportunities across regions with similar challenges


explanation

Despite representing different regions (Finland and Africa), both speakers emphasized that digital challenges are often similar across diverse geographical areas, suggesting unexpected commonalities in global digital development issues.


Overall Assessment

Summary

The main areas of agreement centered around the importance of partnerships, culturally relevant initiatives, infrastructure development, and addressing common digital challenges across regions.


Consensus level

Moderate consensus was observed among speakers on key issues. This suggests a shared understanding of the importance of collaboration and inclusive approaches in digital development, which could facilitate more effective implementation of the Global Digital Compact. However, some divergent views on specific implementation strategies and priorities indicate the need for continued dialogue and negotiation.


Differences

Different Viewpoints

Focus of digital development efforts

speakers

Roy Erikkson


Shamsher Mavin Chowdhury


arguments

Finland’s Global Gateway initiative for infrastructure investments


Data governance and cybersecurity frameworks for developing countries


summary

Roy Erikkson emphasizes infrastructure investments and capacity building, while Shamsher Mavin Chowdhury focuses on data governance and cybersecurity frameworks for developing countries.


Unexpected Differences

Energy efficiency in digital infrastructure

speakers

Nandipha Ntshalbu


Other speakers


arguments

Energy efficiency and sufficiency in digital infrastructure


explanation

Nandipha Ntshalbu raised the issue of energy efficiency and sufficiency in digital infrastructure, which was not prominently discussed by other speakers. This unexpected focus highlights an often overlooked aspect of digital development.


Overall Assessment

summary

The main areas of disagreement revolve around priorities in digital development, approaches to cybersecurity, and the scope of issues to be addressed in the Global Digital Compact.


difference_level

The level of disagreement among speakers is moderate. While there are differing focuses and priorities, most speakers agree on the overall goals of digital development and cooperation. These differences in perspective can contribute to a more comprehensive approach to implementing the Global Digital Compact, but may also present challenges in prioritizing specific actions and allocating resources.


Partial Agreements


Both speakers agree on the importance of improving digital security, but Roy Erikkson focuses on knowledge sharing, while the audience member emphasizes the need for secure-by-design procurement.

speakers

Roy Erikkson


AUDIENCE


arguments

Knowledge sharing and capacity building across countries


Need for secure-by-design ICT procurement




Takeaways

Key Takeaways

Partnerships are crucial for implementing the Global Digital Compact (GDC)


There are similarities in digital challenges across different regions and countries


Cultural relevance and local context are important when implementing digital initiatives


Financing remains a persistent challenge for digital development projects


Data governance and cybersecurity are key concerns, especially for developing countries


Existing platforms like IGF are valuable for connecting actors and increasing visibility


Energy efficiency and sufficiency are important considerations in digital infrastructure development


Resolutions and Action Items

UN to provide an implementation map for GDC in the coming months


Working group on data governance to develop principles in the next two years


Organizations encouraged to endorse GDC vision and principles online


Stakeholders invited to provide information on their GDC-related activities


Unresolved Issues

How to ensure fair and transparent data governance that protects user privacy in developing countries


Addressing the power imbalance created by data monopolies in developing economies


Specific steps for fostering global cooperation on cybersecurity for developing countries


How to fully integrate small ISPs and local initiatives into global digital cooperation efforts


Detailed explanation of how GDC and WSIS action lines are interconnected and complementary


Suggested Compromises

Balancing global standards with local cultural contexts in digital literacy programs


Combining hard infrastructure development with soft skills and capacity building


Using existing institutions like libraries and post offices as hubs for digital connectivity


Thought Provoking Comments

We actually, we outsourced this. We found somebody who would be able to send an expert of theirs and we paid the costs for having that expert residing in Zambia and writing this strategy.

speaker

Roy Erikkson


reason

This comment reveals an innovative approach to international development cooperation, where a government (Finland) acts as a facilitator and broker to connect expertise with local needs.


impact

It sparked a discussion about the role of governments in facilitating partnerships and the importance of finding the right experts for specific projects. It also highlighted the need for cultural translation in such collaborations.


We have one project that is coming to an end, but it’s continuing under a different name, but it’s African digital and green transition. And in this project, for example, we sent an expert for six months into Zambia, and they wrote the artificial intelligence strategy for the country.

speaker

Roy Erikkson


reason

This comment provides a concrete example of how international cooperation can contribute to building digital capacity in developing countries, particularly in emerging technologies like AI.


impact

It led to further discussion about the importance of capacity building and knowledge transfer in digital development projects. It also raised questions about data sovereignty and localization in AI development.


We have a program called Connect.Post that aims to connect all the post offices in the world to the Internet by 2030, and then transform them into one-stop shops where citizens can access government services, digital financial services, and also leverage them as hubs for community networks.

speaker

Kevin Hernandez


reason

This comment introduces an innovative approach to leveraging existing infrastructure (post offices) to bridge digital divides and provide digital services.


impact

It broadened the discussion to include the role of traditional institutions in digital transformation and sparked interest in multi-stakeholder partnerships for digital inclusion projects.


With everything that we have been dealing with, both in IGF and even the compact itself, even the objectives, we seem not to want to be visible addressing the issue of energy efficiency and sufficiency.

speaker

Nandipha Ntshalbu


reason

This comment highlights an often overlooked aspect of digital development – the energy requirements and environmental impact of digital infrastructure.


impact

It shifted the conversation to include sustainability considerations in digital development projects and led to a discussion about the intersection of digital and green transitions.


We do have FIFA Africa, and this has nothing to do with soccer. It is the Forum for Internet Freedoms Africa. We have this every year. This year we’re in Dakar, Senegal. So we had almost 500 participants, and not only from Africa, but also from abroad.

speaker

Patricia Ainembabazi


reason

This comment introduces a significant regional initiative for internet governance and digital rights in Africa, highlighting the importance of regional cooperation and knowledge sharing.


impact

It emphasized the value of regional platforms for addressing shared challenges and learning from diverse experiences. It also underscored the global nature of digital governance issues.


Overall Assessment

These key comments shaped the discussion by highlighting the importance of multi-stakeholder partnerships, knowledge transfer, and capacity building in digital development. They broadened the conversation to include considerations of sustainability, cultural relevance, and regional cooperation. The discussion evolved from abstract concepts to concrete examples of implementation, emphasizing the need for practical, context-specific approaches to digital cooperation. The comments also underscored the global nature of digital challenges while recognizing the importance of local and regional initiatives.


Follow-up Questions

How can we address energy efficiency and sufficiency in digital development?

speaker

Nandipha Ntshalbu


explanation

This is important because energy availability is crucial for connectivity and digital development, but it’s not explicitly addressed in the current objectives.


Which African countries beyond Zambia have been involved in Finland’s digital and green transition projects?

speaker

Nandipha Ntshalbu


explanation

This information is important for understanding the scope of Finland’s involvement in Africa and potential opportunities for collaboration.


How will the Global Digital Compact ensure fair and transparent data governance that protects user privacy and enables countries like Bangladesh to retain control over their national data assets?

speaker

Shamsher Mavin Chowdhury


explanation

This is crucial for ensuring that developing countries are not left behind in the global digital landscape and can protect their citizens’ data.


How will the Global Digital Compact address the power imbalance created by data monopolies where global tech giants dominate developing economies’ digital ecosystems?

speaker

Shamsher Mavin Chowdhury


explanation

This is important for ensuring fair competition and preventing the exploitation of developing economies by large tech companies.


What steps are being taken to foster global cooperation on cybersecurity so that developing countries like Bangladesh can access resources, expertise, and frameworks to combat cyber threats?

speaker

Shamsher Mavin Chowdhury


explanation

This is essential for building a secure global digital ecosystem that includes and protects all countries, not just developed nations.


Why doesn’t the Global Digital Compact link to the WSIS action lines, but does link to the SDGs?

speaker

Alisa Heaver


explanation

Understanding the relationship between different global frameworks is important for coherent policy-making and implementation.


How can small ISPs be more involved in the work being done on digital cooperation and connectivity?

speaker

Guilherme Duarte


explanation

Small ISPs play a crucial role in connecting underserved areas and their involvement is important for achieving universal connectivity.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Networking Session #60 Risk & impact assessment of AI on human rights & democracy


Session at a Glance

Summary

This panel discussion focused on assessing AI risks and impacts, with an emphasis on safeguarding human rights and democracy in the digital age. The speakers represented various organizations involved in AI governance, including government agencies, standards bodies, research institutions, and advocacy groups.


David Leslie introduced the Human Rights, Democracy and Rule of Law Impact Assessment (HUDERIA) methodology recently adopted by the Council of Europe. This approach aims to provide a structured framework for evaluating AI systems’ impacts on human rights and democratic values. Several speakers highlighted the importance of flexible, context-aware approaches to AI risk management that can be tailored to specific use cases.


Representatives from standards organizations like ISO/IEC and IEEE discussed their work on developing AI standards and certification processes to promote responsible AI development. Government officials from Japan and the US shared insights on their national AI governance initiatives and how these align with international frameworks. The importance of stakeholder engagement, skills development, and ecosystem building was emphasized by multiple speakers.


Industry perspectives were provided by LG AI Research, which outlined its approach to implementing AI ethics principles throughout the AI lifecycle. The role of NGOs in advocating for strong AI governance and bringing public voices into policy discussions was highlighted by the Center for AI and Digital Policy.


Overall, the discussion underscored the need for collaborative, multi-stakeholder efforts to develop effective AI governance frameworks that protect human rights and democratic values while fostering innovation. The speakers agreed on the importance of proactive approaches to identifying and mitigating AI risks as the technology continues to advance rapidly.


Keypoints

Major discussion points:


– The development and adoption of AI governance frameworks and risk assessment methodologies, like the Council of Europe’s HUDERIA


– The role of standards organizations and governments in creating AI governance guidelines and policies


– The importance of stakeholder engagement, skills development, and ecosystem building in AI governance


– Approaches to operationalizing human rights considerations in AI development and deployment


– The contributions of NGOs and civil society in advocating for responsible AI and human rights protections


Overall purpose:


The goal of this discussion was to explore various international and organizational approaches to AI governance, risk assessment, and human rights protection in the context of AI development and use. Speakers shared insights from government, industry, standards bodies, and NGOs on frameworks and best practices for responsible AI.


Tone:


The tone was largely collaborative and optimistic, with speakers building on each other’s points and emphasizing the importance of working together across sectors and borders. There was a sense of urgency about the need to develop robust governance frameworks, but also confidence in the progress being made. The tone remained consistent throughout, focusing on constructive approaches and shared goals.


Speakers

– David Leslie: Director of Ethics and Responsible Research Innovation at the Alan Turing Institute, Professor of Ethics, Technology, and Society at Queen Mary University of London


– Wael William Diab: Chair of the ISO-IEC JTC1 SC42 (AI standardization)


– Tetsushi Hirano: Deputy Director of the Iraq Digital Policy Office at the Japanese Ministry of Internal Affairs and Communications


– Matt O’Shaughnessy: Senior Advisor at the U.S. Department of State’s Bureau of Democracy, Human Rights, and Labor


– Clara Neppel: Senior Director at IEEE


– Myoung Shin Kim: Principal Policy Officer at LG AI Research, IEEE Certified AI Professor


– Heramb Podar: Center for AI and Digital Policy (CAIDP), Executive Director of ENCODE India


Additional speakers:


– Samara Jaideva: Researcher at the Alan Turing Institute


Full session report

AI Governance and Human Rights: A Multi-Stakeholder Approach


This panel discussion brought together experts from various sectors to explore approaches to AI governance, risk assessment, and human rights protection in the context of AI development and deployment. The speakers represented government agencies, standards bodies, research institutions, and advocacy groups, providing a comprehensive overview of current efforts and challenges in responsible AI development.


Key Frameworks and Methodologies


The discussion began with David Leslie introducing the Human Rights, Democracy and Rule of Law Impact Assessment (HUDERIA) methodology recently adopted by the Council of Europe. This framework aims to provide a structured approach for evaluating AI systems’ impacts on human rights and democratic values. Leslie described HUDERIA as “a unique anticipatory approach to the governance of the design, development, and deployment of AI systems” anchored in four fundamental elements. He also noted the Japanese government’s support for the Council of Europe’s work on an AI convention.


Other speakers presented complementary frameworks and standards:


1. Wael William Diab discussed ISO/IEC standards for AI systems, emphasising the importance of third-party certification and audits to ensure responsible adoption.


2. Tetsushi Hirano outlined the Japanese AI Guidelines for Business, which differentiates aspects from the perspective of AI actors.


3. Matt O’Shaughnessy highlighted the NIST AI Risk Management Framework, emphasising its flexible and context-aware application. He also discussed the White House Office of Management and Budget Memorandum, which provides guidance on AI use in the federal government.


4. Clara Neppel presented IEEE standards for ethically aligned AI design, focusing on building ecosystems to implement these standards. She also mentioned IEEE’s work on environmental impact assessment of AI.


5. Myoung Shin Kim shared LG AI Research’s approach to AI ethics and risk governance, which includes internal processes and education. She discussed their EXAONE generative AI model and detailed their AI ethics implementation process.


6. Heramb Podar presented CAIDP’s advocacy work, including their Universal Guidelines on AI and efforts to promote ratification of AI treaties.


Human Rights Considerations


A significant portion of the discussion focused on incorporating human rights considerations into AI development and governance. Key points included:


1. The importance of stakeholder engagement in AI impact assessments, with multiple speakers emphasising the need to involve affected communities.


2. Data quality standards for AI systems, as highlighted by Wael William Diab.


3. The need for detailed analysis of rights holders in impact assessments, as mentioned by Tetsushi Hirano.


4. Human rights impact assessments for government AI use, discussed by Matt O’Shaughnessy.


5. Incorporating human rights principles in AI standards, as emphasised by Clara Neppel.


6. Educating data workers on human rights, a focus for LG AI Research according to Myoung Shin Kim.


7. The role of NGOs in advocating for human rights in AI governance, highlighted by Heramb Podar.


International Cooperation and Implementation


The speakers agreed on the importance of international cooperation and interoperability between different AI governance frameworks. This was evident in discussions about:


1. The Council of Europe’s work on an AI convention, mentioned by David Leslie.


2. Efforts to ensure interoperability between AI frameworks, highlighted by Tetsushi Hirano.


3. How U.S. domestic AI policies inform international work, discussed by Matt O’Shaughnessy.


4. IEEE’s global network of AI ethics assessors, presented by Clara Neppel.


5. LG AI Research’s collaboration with UNESCO, shared by Myoung Shin Kim.


6. CAIDP’s advocacy for ratification of AI treaties, mentioned by Heramb Podar.


Practical Implementation Challenges


The discussion also addressed the practical challenges of implementing AI ethics principles:


1. Matt O’Shaughnessy emphasised the need for context-aware application of risk management frameworks.


2. Clara Neppel discussed the importance of building ecosystems to implement ethical AI standards.


3. Myoung Shin Kim outlined LG’s AI ethics impact assessment process and mentioned their upcoming annual report on AI ethics implementation.


4. Heramb Podar highlighted the need for clear prohibitions on high-risk AI use cases.


5. Several speakers noted the challenge of balancing innovation with responsible AI development.


Education and Public Engagement


Myoung Shin Kim from LG AI Research emphasised the importance of education in AI ethics implementation. She discussed initiatives to educate data workers on human rights and improve citizens’ AI literacy. While other speakers touched on stakeholder engagement, Kim’s presentation provided the most detailed discussion of education efforts.


Conclusion


The discussion underscored the need for collaborative, multi-stakeholder efforts to develop effective AI governance frameworks that protect human rights and democratic values while fostering innovation. The speakers presented a range of approaches and methodologies for responsible AI development, highlighting both progress and ongoing challenges in the field. As David Leslie noted in his closing remarks, the conversation demonstrated the complexity of the issues and the importance of continued dialogue and cooperation among diverse stakeholders in shaping the future of AI governance.


Session Transcript

David Leslie: Can everyone hear me? Samara, can you hear me? Hello? Hello? Yes? Yeah? Okay. Perfect. If everyone’s ready, we can get started. I believe everyone’s joined us online. Perfect. Good evening. Thank you so much for joining us here today, this evening. We know it’s the last session, but I can promise you we have another networking session on assessing AI risks and impacts, safeguarding human rights and democracy in the digital age. It will be moderated by Professor David Leslie, who is the Director of Ethics and Responsible Research Innovation at the Alan Turing Institute, and Professor of Ethics, Technology, and Society at Queen Mary University of London. He will be introducing the rest of us, but to everyone who has joined here today and online, my name is Samara Jaideva. I’m a researcher at the Alan Turing Institute, and I’m very proud to say I have helped publish and develop this human rights impact assessment framework that we’ve done with the Council of Europe. So now I’ll turn it to David to introduce us to this panel. Great. Samara, can you hear me? Am I… Just give me an acknowledgement and I’ll keep going. Good? Okay. Okay, so thank you so much, Samara. I am very thrilled to be with you. Just to say, our team at the Turing has been really involved with this process dating back to 2020, when the Ad Hoc Committee on Artificial Intelligence was taking the initial steps toward building a feasibility study that would come to inform what is now the framework convention, the treaty that is aligning human rights, democracy, and the rule of law with AI. And I’ll just also say that really this, the adoption of the Huderia methodology, which has just happened this past month, is really a kind of historic moment in a time of change where so much of the activity in the international AI governance ecosystem is yet to be decided. And so this is really a kind of path-breaking outcome, I would say.
And I was just thinking about it, over the years, in being at the Council of Europe plenaries, where we’ve really talked through governance measures. It was early 2021, I want to say, when we first took up a question about foundation models and frontier AI. I mean, you can just imagine that rich conversation about governance challenges has been going on at the Council of Europe’s venue in Strasbourg for a number of years now. So I’ll also just quickly say the Huderia itself, which has been developed through the activities of the Committee on Artificial Intelligence and all the member states and observer states, really is a unique anticipatory approach to the governance of the design, development, and deployment of AI systems that anchors itself in basically four fundamental elements. We’ve got a context-based risk analysis, which provides a kind of structured approach initially to collecting the information that’s needed to understand the risks of AI systems, in particular the risks they pose to human rights, democracy, and the rule of law. It really focuses in on what we call the socio-technical context, so the environments, the social environments in which the technology is embedded. It also allows for an initial determination of whether the system is the right approach at all, and it provides a mechanism for triaging more or less involved governance processes in light of the risks of the systems. There is also a stakeholder engagement process, which proposes an approach to enable engagement as appropriate for relevant stakeholders, so impacted communities, in order to sort of amplify the voices of those who are affected and to gain information regarding how they might view the impacts, and in particular contextualize and corroborate potential harms.
Then there's the third module, if you will, a full risk and impact assessment, which is a more full-blown process to assess the risks and impacts related to human rights, democracy, and the rule of law in ways that both integrate stakeholder consultation and really ask the "how" questions, thinking through downstream effects in a much more thorough way. And finally, there's a mitigation planning element, which provides steps for mitigation and remedial measures that allow for access to remedy and iterative review. As a whole, the Huderia also stresses the need for iterative revisitation of all of these processes and elements, insofar as both the innovation environment, that is, the way systems are designed, developed, and deployed, and the broader social, legal, economic, and political contexts are always changing. Those changes mean that we need to be flexible and continually revisit how we're looking at the governance process for any given system. So with that, let me now introduce our first panel speaker, and that is Mr. William Diab, who is chair of ISO/IEC JTC 1/SC 42, a wonderful set of standards development groups doing great work on AI standards. He'll address the role of AI standardization in safeguarding human rights and democracy, as well as cover some existing and upcoming standards on these issues. So I'll turn it over to you, Will. Go ahead.


Wael William Diab: Thank you, David, and thank you for the warm introduction. I'd like to thank you also for the invitation to present on this panel. My name is Will, and as David mentioned, I chair the Joint Committee of ISO and IEC on Artificial Intelligence. I'm going to give you a brief flavor of what we do. Just to quickly acknowledge, it's not just me that does this; we have a pretty large management team. We'll make all of these slides available, but in the interest of time, I'm going to jump straight into what it is that we do. We take a look at the full ecosystem when it comes to AI. We start by looking at non-technical trends and requirements, whether it's application domains, regulatory policy, or, perhaps most relevant here, emerging societal requirements. Through that, we assimilate the context of use of the technologies we cover, and then we provide what we call horizontal and foundational projects on artificial intelligence. I'll talk a little bit more about examples, but I want to point out that the story doesn't stop there. We have lots of sister committees in IEC and ISO that focus on the application domains themselves and leverage our standards, and we work with open source communities and others. So we are part of the ISO and IEC families. Our scope is to be the focal point for the IT standardization of AI, and we help sister committees on the application side. We've been growing quite a bit: we've published over 30 standards and have about 50 that are active. We have 68 countries, and we develop our standards on a one country, one vote principle, with about 800 unique experts in our system. I would also note that we work extensively with others: we have about 80 liaison relationships, both internal and external, and I'll show a slide at the end. We also run a biannual workshop.
The way we're structured is we currently have 10 major subgroups, five of which are joint with other committees, and I'll show what we do. The first thing that's important for understanding AI and being able to work with different stakeholders that have different needs is to have some foundational standards, and this area covers everything from common terminology and concepts (and by the way, that is a freely available standard) to work on using AI. A lot of the work in this area has also been around enabling what we call certification and third-party certification, and I'll show a slide on that at the end. The second thing that's important is third-party audit of AI systems; we believe it's important to enable this to ensure that we have broad, responsible adoption of AI. Another big area for us is data. Data, as many people know, is the cornerstone of a responsible, quality AI system. This work originally started by looking at big data, and we completed all those projects and then expanded the scope to look at anything related to data and AI. So we're in the process of publishing a six-part series on data quality for analytics and machine learning in the AI space; the first three parts have been published, and the next three should be published in this coming year. Some of the more recent work is around synthetic data and data profiles for AI. Trustworthiness, which is very relevant to the topic at hand, as well as enabling responsible systems, is probably our largest area of work. The slide is a bit of an eye chart to try and read, and the reason is that we start from the fact that AI systems are IT systems themselves, yet with some differences from a traditional IT system, for example in terms of learning. What this allows us to do is build on the large portfolio of standards that IEC and ISO have developed, and then extend it for the areas that are specific to AI.
One example of the work here is our AI risk management framework. This was built on the ISO 31000 series as an AI-specific implementation. Other things that you might see bolded on this chart are concepts you might hear every day, such as making something controllable, explainable, or transparent, and what we do is take those concepts and translate them into technical requirements. A colleague of mine put together a slide to indicate where societal and ethical issues lie in terms of direct impact versus things that are further away, and I thought it was a great slide because everything in yellow really maps onto what we're doing today. We address societal issues in two ways. The first is through dedicated projects directly in this area, again using use cases to translate non-technical requirements into technical requirements and prescriptive guidance on how to address them; the second is by integrating these considerations across our entire portfolio. For instance, when we look at use cases, we ask what the ethical and societal issues are. We don't do this alone; we work with a number of international organizations. In terms of use cases and applications, it's important for us to be able to provide horizontal standards, and as I mentioned, we've collected over 185 use cases and are constantly updating this document. We also look at the work from the point of view of an application developer, whether on the technical development side or the deployment side, and we have standards in this area. We've also started to look at the environmental sustainability aspects as well as the beneficial aspects of AI and human-machine teaming. Computational methods are at the heart of AI systems, and we have a large portfolio of work here. Our more recent work has focused on more efficient training and modeling mechanisms.
Governance implications of AI: this looks at AI from the point of view of a decision maker, whether a board or an organization, and answers some of the questions that might come up. Testing of AI-based systems: this is another joint effort for us, and we have a multi-part series focused on testing, verification, and validation. In addition to the existing work, we're looking at new ideas around things like red teaming. Health informatics is a joint effort with ISO TC 215, and this is really taking us into the healthcare space, trying to assist them in building out their roadmap. In addition to the foundational project that we've got, we are also looking at extending the terminology and concepts for the sector, which may serve as a model for other sectors, as well as at enabling certification for the healthcare space. Functional safety: this is the work around enabling functional safety, which is essential for safety-critical sectors, and it is being done jointly with IEC SC 65A. Natural language processing covers everything to do with language, going beyond just text, and this is becoming increasingly important in new deployments. Last but not least, we have started a new joint working group with the ISO CASCO group, which does certification and conformity assessment, to look at conformity assessment schemes. Sustainability is a big area for us, both in terms of the sustainability of AI itself and how AI can be applied to sustainability. I'm going to skip to just this slide. One of the important things is to enable third-party certification and audit in order to ensure broad, responsible adoption. This picture shows how a lot of our standards come together. ISO/IEC 42001, which, if you're familiar with ISO/IEC 27001 for cybersecurity or ISO 9001, is built around the same management system concepts, allows us to do this.
Just quickly wrapping up, to allow time for my co-speakers: to sum up, we're looking at the entire ecosystem, we're growing very rapidly, we work with a lot of other organizations, and it's easy to join. We also run a biannual workshop that typically has four tracks: applications (one of our recent workshops looked at transportation), beneficial AI, emerging standards, and emerging technology and requirements. With that, I hand it back over to the moderator.


David Leslie: Thank you very much. Thanks so much, Wael. That was a brilliant presentation. It just shows how much work is happening on the concrete side, where the devil's in the details, and how much work remains for us. I would say the Huderia that we've just adopted is the methodology; as we move on in the next year or so, we'll be working on what we call the model, which really gets into the trenches and explores some of those areas that you just presented, thinking also about the importance of alignment and ensuring that standards align with the way we're approaching this at the international governance level. So, our next speaker is Tetsushi Hirano, Deputy Director of the Digital Policy Office at the Japanese Ministry of Internal Affairs and Communications. Hirano-sensei will offer us his perspective on AI and its impacts on human rights and governance, both in Japan and internationally. Tetsushi, the floor is yours.


Tetsushi Hirano: Thank you, David. I'm very pleased to participate in this important session following the successful adoption of the Huderia methodology. And I sincerely hope that this pioneering work will promote this new type of approach and facilitate the accession of interested countries to the AI Convention. Speaking of Japan, Japan has been developing its own AI risk management framework since 2016, and this year we released the AI Guidelines for Business, which took into account the results of the Hiroshima AI Process for advanced AI systems as well. I see some similarities and differences between the Japanese guidelines and Huderia. As for the similarities: both are based on common human-centered values, and both pay attention to the different contexts of AI life cycles. While Huderia provides a model of risk analysis across the application, design and development, and deployment contexts, the Japanese guidelines differentiate these aspects from the perspective of AI actors. Namely, the guidelines provide a detailed list of what developers, deployers, and users are recommended to do with respect to risk analysis. This is one of the features of our guidelines compared to other frameworks. But despite this formal difference, Huderia and the Japanese guidelines go in the same direction in their analysis, so we are hoping to contribute to the further development of the Huderia technical document planned for 2025. Next, the differences, and this is also a strong point of Huderia as far as I can see: Huderia offers a detailed analysis of rights holders and the effects on them. Some Japanese experts evaluate COBRA, the context-based risk analysis, very highly, especially as it can be seen as a threshold mechanism. Huderia also provides a step-by-step analysis of stakeholder involvement, and I have to admit that the stakeholder involvement process presented there is demanding if some of the steps are to be implemented precisely.
But this can serve as a kind of benchmark for continuous development. The Japanese government is considering a future framework for domestic AI regulations, and I'm sure that Huderia will be one of the key documents to look at, especially when developing public procurement rules, for example, where the protection of citizens' rights is at the core of the issue. I would also like to mention interoperability, a document on which is also planned for 2025. As we all know, there are many AI risk management frameworks under development: for example, the reporting framework based on the Hiroshima Process code of conduct, or the risk management framework in the EU AI Act itself, to name but a few. The interoperability document may highlight the commonalities of these frameworks, as well as their respective strengths, which can facilitate mutual learning between them. In particular, there are documents that only address advanced AI systems, and we will have to think about what kind of impact synthetic content created by generative AI, for example, can have on democracy, also in future meetings of the AI Convention. Finally, I would like to address the future role of the Conference of the Parties to the AI Convention. As a pioneering work in this field, Huderia is expected to become a benchmark. However, it is also important to share knowledge and best practices with concrete examples, as this type of risk and impact assessment is not yet well known. This, together with the interoperability document, will help interested countries to join this convention.


David Leslie: Thank you. Thank you so much, Tetsushi. And I'll just say that the support of the Japanese government across this process has been absolutely essential to the innovative nature and the success of the instrument. So, just a real deep thank you there. Speaking of which, I now have the pleasure of introducing Matt O'Shaughnessy, who is Senior Advisor at the U.S. Department of State's Bureau of Democracy, Human Rights, and Labor. And I'll just say that the past few years have really marked major strides, one might even say quantum leaps, in the approaches that the U.S. has developed in AI risk management and governance, with key initiatives ranging from NIST's AI Risk Management Framework to the recent White House Office of Management and Budget memorandum on advancing governance, innovation, and risk management of artificial intelligence. So, there's been a lot of really excellent work coming out of the public sector in the U.S. Matt, I wanted to ask if you could talk a little bit more about these national initiatives and speak a bit about how they reflect and contribute to emerging global frameworks and shared principles for AI development and use.


Matt O’Shaughnessy: Thank you so much, David. And it's great to be here, even just virtually. So, you asked about the NIST AI Risk Management Framework and the White House Office of Management and Budget memorandum on government use of AI. Maybe I'll say a few words giving an overview of each of those, and then talk about how they interact and inform our international approach to AI. Both of these documents take a similar approach. They're both flexible and very context-aware, directed specifically at how particular AI systems are designed and used in particular contexts. And they both aim to promote innovation, of course, while also setting out concrete steps that can help effectively manage risks. So let me start with the NIST AI Risk Management Framework. This is our general risk management framework that sets out steps applicable to all organizations, whether private entities or government agencies, that are developing or using AI. The AI Risk Management Framework describes different actions that organizations can take to manage the risks of all of their AI activities. A lot of those are relevant to respect for human rights. For instance, it describes both technical and organizational steps that can help manage harmful bias and discrimination and mitigate risks to privacy. But it also describes a lot of more general actions: things like how to establish processes for documenting the outcomes of AI systems, processes for deciding whether an AI system should be commissioned or deployed in the first place, or policies and procedures that improve accountability or increase knowledge about the risks and impacts an application of AI has. A lot of these governance-oriented actions address many of the concepts that are set out in the Council of Europe's framework, and they help lay the groundwork for organizations to better consider the risks to human rights that their AI activities pose, and also to address and mitigate them.
As I mentioned before, the Risk Management Framework is really designed to be applied in a flexible and context-aware manner. And that's really important. It helps ensure that the risk management steps are both well-tailored and proportionate to the specific context of use, and also that they're effective and target the most salient risks posed by a particular system in the particular context of its use. David, you mentioned the Huderia taking a socio-technical approach, considering the social context that an AI system is developed and deployed in, and that's really core to the NIST Risk Management Framework too. I think it's really important to making sure that AI risk management, more generally, is effective and targets the most important risks. The Risk Management Framework sets out a lot of these general steps that organizations can take to manage various risks, but as I said before, it's most effective when it's deployed in a very context-aware manner. To do that more effectively, NIST supported the development of what it calls "profiles," which describe how the framework can be used in specific sectors, for specific AI technologies, or for specific types of end-use organizations, whether a government agency or a specific private sector entity. One example of that, which the Department of State has developed, is a risk management profile for AI and human rights. It describes specific potential human rights impacts of AI systems, and it can help developers of AI systems better anticipate the specific human rights impacts their systems could have and tailor the actions described in the Risk Management Framework to the specific end-use. And this is also where tools like the Council of Europe's Huderia, the Human Rights, Democracy, and Rule of Law Impact Assessment tool, can contribute and be most effective.
So, a lot of the key risk management steps that the Huderia sets out are similar to those in the NIST AI Risk Management Framework. But the Huderia provides more detail on actions that are particularly relevant to human rights and democracy: things like engaging stakeholders to make sure that organizations are aware of the human rights impacts their systems may have, or establishing mechanisms for remedy. So, as Tetsushi mentioned, the detailed resources that will be negotiated and developed next year will be particularly helpful in offering this insight for organizations that are applying risk management tools that already exist, but are looking for more detailed references or resources to help them specifically look at human rights impacts in contexts where those are particularly salient. Okay, so that's our framework, which again applies to all organizations and is a very flexible, context-oriented tool. You also asked about our White House Office of Management and Budget memorandum on governance, innovation, and risk management for agency use of AI. This is a set of binding rules for covered government agencies that use AI, and it similarly sets out key risk management actions that government agencies developing or using AI systems must follow in their AI activities. This memo was released in March of 2024; you can look it up online, it's M-24-10. It was in fulfillment of the AI in Government Act of 2020, and even though it was developed by this administration, it builds on work that was started in the previous administration, such as a December 2020 executive order called Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. So it sets out a lot of bipartisan priorities. This memo, again, reflects our broader approach in the United States to AI governance.
It's meant to be tailored to advance innovation and make sure that we're using AI in ways that benefit citizens and the public at large, but also to make sure that we lead by example in managing and addressing the risks of AI. This guidance aligns with a lot of the provisions set out in the Council of Europe's AI Convention, and I'll just give you a quick overview of some of its key aspects. It establishes AI governance structures in federal agencies, like chief AI officers or governance boards, that promote accountability, documentation, and transparency. It sets out key risk management practices, especially for AI systems that are determined to be what we call safety-impacting or rights-impacting. Those include steps for things like risk evaluation, assessments of the quality of an AI dataset used for training or testing, ongoing testing and monitoring, training and oversight for human operators, assessments and mitigations of harmful bias, and engagement with affected communities for rights-impacting AI systems. So, again, just some key risk management steps that are mandated for government AI systems. We see those as really instrumental for managing impacts on human rights. Things like AI systems used in law enforcement contexts or related to critical government services, such as determining whether someone is eligible for benefits, we would label as rights-impacting and apply these key risk management steps set out in the memorandum. So those are our two key domestic policies that set out AI risk management practices. In terms of their international implications, both were informed by international best practices, looking to work done by other countries and international organizations. The NIST AI Risk Management Framework had extensive international multi-stakeholder consultations.
The framework is at version 1.0 right now and is intended to be updated over the years, so there'll be a continuing conversation between these domestic efforts and the best practices being developed internationally. And in turn, both of these domestic products inform our international work. Both the Council of Europe's Huderia and recent OECD projects have drawn from the AI Risk Management Framework, and it's informed the work of standards developing organizations like ISO and IEC. Other countries are continuing to work with NIST to develop crosswalks of their own domestic guidelines with the RMF, which helps ease compliance and aid interoperability. So both of these things lay the groundwork for all of our international work on safe, secure, and trustworthy AI, whether it's in the Council of Europe's AI Convention, our UN General Assembly resolution on AI, or our Freedom Online Coalition joint statement on responsible government practices for AI. We're looking forward, over the next couple of years, to continuing this work as the conversation on AI risk management continues to develop. I'll turn it back over to you, David. Thanks again.


David Leslie: Thanks, Matt. And also just to say, Matt's presence in Strasbourg has been a huge boon as we've tried to develop the Huderia over the months and years. So thank you for that continuing commitment to the process; I think it's been really important to have everybody speak and share insights in the room at the Council of Europe. I'd like to now introduce Clara Neppel, who is a Senior Director at IEEE. IEEE is at the very forefront of driving initiatives that address the ethical and societal implications of emerging technologies. It is one of the world's largest technical organizations and has been instrumental in developing frameworks and standards for responsible use for a number of years now, always with a strong focus on risk management. IEEE's work on risk management provides practical tools and methodologies to ensure that the AI systems being developed are robust, fair, and aligned with societal values. So Clara will share with us insights into this work and into how it's contributing to the broader AI governance ecosystem. And I think you're there, Clara, in person. So go ahead. Yes, yes.


Clara Neppel: Thank you. Thank you, David. Thank you also for the kind introduction. Yes, we were also very active in the Council of Europe as well as in the OECD and other international organizations. And maybe one of the critical aspects here is that IEEE is not only a standards-setting organization but also, as you mentioned, an association of technologists, which permits us to be quite early in identifying risks. Maybe this is also the reason why we were among the first to start working on what we call ethically aligned design, in 2016, which permitted us to come up with concrete instruments like standards and certifications quite early. What I would like to share with you now are some practical lessons learned, which I think are important for implementing human rights in technical systems, in AI systems. The first lesson learned is that we need time and we need the stakeholders. Even if we think that some concepts like transparency or fairness are already quite well defined, you might be surprised. I'm also co-chair of the OECD expert group on AI and privacy, and both ecosystems have a very clear understanding of what transparency means or what fairness means, but their understandings are very different. For privacy professionals, for instance, transparency is about the transparency of data collection; on the AI expert side, it's really about how the decisions of the systems are made understandable. So this is just one example. One of our most used standards right now, IEEE 7000, took this time: it took five years to develop, and the standard was published in 2021. Since then, there are a lot of lessons learned that we would like to share, because it was deployed worldwide. The second lesson that I would like to share with you is that we need skills, and the skills we need relate not only to technology but also to ethics.
And we were investing in this right from the beginning. We have not only systems certification but also personal certification of assessors, and we can say now that we have more than 200 assessors worldwide that are certified by IEEE. We have a training program which reaches from Dubai, as I just heard today, to South Korea, and obviously across Europe. So we have this worldwide network of assessors that also have a certified, shared understanding of what human rights and ethics mean. And third, and I think this is the most important point: once we have these standards instruments and the skills and the people that can implement them, we can build very strong ecosystems. Without that, you are still working in isolation. You need these ecosystems. I can give you the example of Austria, because our European office is based in Vienna. We now have, starting from the city of Vienna, public services through to data hubs in Tirol, for instance, that are built on this basis, which means that the data governance is already aligned with ethical principles. And then all the applications that run on such a data hub are also required to fulfill the same requirements. This permits us to have these ecosystems, which, in the end, are the foundation of what we want to achieve with human rights. As far as the Huderia methodology is concerned, the standard took a human-rights-first approach. This was also acknowledged by the Joint Research Centre of the European Commission, which analyzed existing standards with respect to human rights and acknowledged that IEEE standards are very close to what is being required. It is about stakeholder engagement, if you want; it's the recipe for how to engage stakeholders and how to understand the values of the stakeholders. And I would like maybe to bring in here an aspect which I think is very often overlooked.
Very often we are focusing on transparency, on fairness, and so on. But there are human rights that are not in the existing frameworks, like dignity. In IEEE 7000 we have all these aspects, all these values, analyzed, because it's a risk-based approach. Then there is a clear methodology on how to mitigate those risks by translating them into concrete system requirements or organizational measures. So this is about the design phase, and it is complemented by a certification method, which looks at existing systems and assesses them along the different aspects of transparency, accountability, and so on. Last but not least, I would like to mention that we are now also in the process of scaling the certification system. We are working with VDE from Germany and Positive AI from France to develop an AI trust label, which would include the seven aspects of human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity; accountability; and social and environmental well-being. Just on the last one, environmental well-being: we just started a working group on the environmental impact of AI to clearly define the metrics to be used for environmental impact, including inference cost, and looking not only at energy but also, for instance, at data usage. We are doing this together with the OECD. So I think that's a first overview of what we're doing. Thank you.


David Leslie: Thanks, Clara. And it's really important to note here as well that making these approaches usable for people is such a priority. One of the things that lies ahead of us is really making the range of human rights accessible to people and translating them so that people can actually pick up the various approaches to risk management and, if you will, operationalize a concrete approach to understanding and assessing the impacts on those rights. So I'll now introduce Mr. Myung-Shin Kim, who is Principal Policy Officer at LG AI Research and an IEEE certified AI professional. LG AI Research really focuses on innovation in AI that is responsible and that's developed and deployed safely and ethically. An important dimension of that is risk governance, addressing bias mitigation, and ensuring transparency and accountability. So, Mr. Kim, I'm wondering if you could share LG AI Research's perspective specifically on AI risk governance. How does your organization approach managing these risks? And what do you believe an ideal framework for AI risk governance should look like? Right.


Myoung Shin Kim: Thank you very much for inviting me to this meaningful discussion. Today, I will share how LG AI Research is translating our AI ethics principles into tangible action, focusing on AI risk governance. …about LG AI Research. Established four years ago, our mission is to provide advanced AI technologies and capabilities to LG affiliates, such as LG Electronics and LG Chemical. One of our landmark achievements is the development of XR1, a generative AI model capable of understanding and creating content in both Korean and English. XR1 has achieved performance on par with global benchmarks, demonstrating its competitive edge in the international AI landscape. Just last week, we released XR1 3.5 as an open-source language model, contributing to the development of the AI research ecosystem. Beyond AI technology, LG AI Research places a strong emphasis on adhering to AI ethics throughout the entire lifecycle of the AI system. With XR1, LG officially announced its AI ethics principles with five core values: humanity, fairness, safety, accountability, and transparency. But more important than principles is putting them into practice. So we employ three strategic pillars to ensure adherence to our AI ethics principles, namely governance, research, and engagement. Let me explain each in detail. First of all, we conduct an AI ethical impact assessment for every project to identify and address potential risks across the AI lifecycle. It consists of three steps: analyzing project characteristics, setting problem-solving practices, and verifying research and documentation. When risks or problems are identified, we establish specific solutions, assign responsibilities to designated personnel, and set deadlines for resolving the issues. The entire AI ethical impact assessment process and its outcome are attached to the final report when the project closes in our project management system.
A unique aspect of our approach is the involvement of a cross-functional task force. This brings together researchers in charge of technology, business, and AI ethics, each contributing their specialized knowledge and diverse perspectives. From a human rights perspective, we pay special attention to some key questions during the AI ethical impact assessment: for example, which groups are included among the stakeholders affected, and whether there is any possibility of intentional or unintentional misuse of the AI system by users. Additionally, we educate data workers about the Universal Declaration of Human Rights and the Sustainable Development Goals, providing guidelines to respect, protect, and promote human rights during data production and processing. As you know, generative AI models sometimes produce inaccurate information, known as hallucinations, which can spread misinformation. To address this issue, we have developed AI models that generate answers based on factual information and evidence. Additionally, we are continually researching unlearning techniques to selectively delete personal information that was unintentionally used during the training process. Considering that AI is ultimately created by humans, I think it is also important to assess the level of human rights sensitivity among our researchers. For this reason, every spring, LG AI Research conducts an AI ethics awareness survey to assess and improve adherence to our AI ethics principles. I am personally pleased to see that the gap between awareness and practice has narrowed this spring compared to last year. Additionally, we hold an AI ethics seminar biweekly to boost interest and participation in AI ethics. For AI ethics to take root in our society, I believe citizens’ AI literacy must improve. Moreover, if high-quality AI education is not evenly provided, existing economic and social disparities may widen.
To address this issue, we provide a customized AI education program to over 40,000 youth, college students, and workers annually. Our curriculum includes AI ethics to help citizens grow into more mature users and critical watchdogs in the AI market. And our efforts are expanding beyond Korea to the global level. We are collaborating with UNESCO to develop online educational content for AI ethics targeting researchers, developers, and policy makers. The final MOOC will be held worldwide by early 2026. Lastly, every January we publish a report compiling all the outcomes and lessons learned from implementing our AI ethics principles. These reports illustrate how we are implementing not only our own AI ethics principles but also UNESCO’s Recommendation on the Ethics of AI and South Korea’s national AI ethics guidelines. We hope this can serve as a reference for others’ AI ethics implementation approaches. The next report is scheduled to be published at the end of January, next month, and will be available on our homepage, so if you are interested, please take a look. Thank you for your attention. Thanks so much for all that great information, Dr. Kim. Now, in the interest of time,


David Leslie: I’m just going to go right to introducing Heramb Podar, who is at the Center for AI and Digital Policy, CAIDP, and is also Executive Director of Encode India. Now, in particular, CAIDP has been a vocal advocate for the development and implementation of strong governance frameworks that prioritize transparency, accountability, and fairness in the production and use of AI systems. It’s an organization that’s also deeply engaged in policy analysis and stakeholder collaboration to safeguard human rights and democratic principles in the face of rapid technological transformation. So, Heramb, given your work with CAIDP, could you share some thoughts on how NGOs can contribute to creating good governance guardrails for AI? In particular, what do you see as the critical steps for ensuring that AI systems are designed and deployed in ways that uphold societal values and human rights? And you are there in the room, if I’m not mistaken.


Heramb Podar: Yes, I am. I hope you can hear me. Thank you for the opportunity to speak. CAIDP has been, indeed, a very vocal advocate. All of the work we do is grounded in policies to uphold human rights, democracy, and the rule of law. Ultimately, for NGOs, it’s all about advocacy through engagement with due process, in terms of public-voice opportunities which might come up, and bringing in as much of the public voice as possible. Just a few minutes ago, my co-speaker was talking about how all rights are not often covered; sometimes there are contexts which are overlooked, unfortunately. So CSOs and NGOs can really be that bridge between the on-the-ground risks and how the public is feeling, and the policies that are being developed, whether at the Council of Europe or in the NIST frameworks and so on. To highlight specific actions CAIDP has taken: we have been very vocal in advocating for the ratification of the Council of Europe AI Treaty. We think it prevents global fragmentation and aligns everyone’s national policies to global standards, and we have recently released statements to the South African presidency of the G20 and to the U.S. Senate calling on them to ratify the treaty. And we bring in voices, as I was talking about earlier. One of the key members in our global academic network is Encode Justice, a youth organization focused on AI risks, making sure that AI works for everyone and that AI is safe, so that future generations do not inherit malicious AI that might impact human rights. Quickly jumping on to specific actions in terms of design and development, which was a very interesting question: at CAIDP, we have something called the Universal Guidelines on AI.
We just recently celebrated the sixth anniversary of the UGAI principles, as we like to call them, and what we would like to see most is clear red lines in whatever policies governments put out: prohibiting use cases that are not based on scientific validity, or use cases that might adversely impact certain groups or impact human rights. We see some early examples of high-risk use cases, for example, in the EU AI Act, things like biometric surveillance or social scoring and so on. What would be exciting to see is ex-ante impact assessments and proper transparency and explainability across the AI life cycle, from design to decommissioning. Ultimately, whistleblower protections: we’re seeing an increasing race to turn out better AI systems, and we find it very necessary for there to be certain guardrails and certain whistleblower protections so that people can speak their minds. And in specific use cases, like autonomous weapon systems, having termination obligations, which is another one of the cornerstones of our UGAI principles, so having human oversight. We also release something called the Artificial Intelligence and Democratic Values Report on an annual basis, which is the world’s most comprehensive coverage of national AI policies, and we rank countries according to these metrics. Something we saw very interestingly concerns the UNESCO Recommendation on AI Ethics, where countries are really slow in implementing it, and this also brings to light the global digital divide. A lot of the global south countries are particularly playing catch-up. Countries are not getting to submit their readiness assessment methodologies to UNESCO, which is our key indicator for implementation.
So, again, coming back to the original question, NGOs have a role to play in making sure that countries, companies, and other sectors not only make these commitments but also follow through with action, not just in words, which might have differences in interpretation, but with some sort of grounded principles or grounded metrics. Yeah, and I’ll end it here. Thank you so much, Heramb. That’s really great to hear that this needs to be a multilateral effort, and that NGOs need to play a central role as we develop the governance instruments. So, I’ll just say that it’s been amazing to hear


David Leslie: about all of this innovative work that’s been done in standards development organizations and at the state level. The work of the Council of Europe, I think, has been out ahead on many things, and hearing about all this innovative work really reminds me that we talk a lot about move fast and break things, right? But I think, on our end of things, it’s more about moving fast and saving things: we need to be out in front of some of the ways these technologies are developing. So, to close here, I want to turn back to Smera and ask if you had any closing observations. Yes, all I would say is it’s so fantastic to hear from everyone who’s joined us here today. There were so many excellent points about stakeholder engagement, the role of civil society, being ahead of the curve and identifying some of those risks, and skills development as well, which was mentioned. All of this develops a really good and strong ecosystem, and when you use tools like the Huderia methodology in this space to identify risks and introduce impact mitigation measures, you know, as you said, David, move fast and save things. So, on that note, I’ll hand back to you. Okay, wonderful. So, just again, one more thank you to all of our speakers. We are striving to finish on time, and thank you so much for all the important comments and information that were shared today. I wish you well from the southeast of England, and I hope those of you who are physically in Riyadh have a nice time at the rest of IGF. Take care. Thank you.


D

David Leslie

Speech speed

138 words per minute

Speech length

2145 words

Speech time

928 seconds

Huderia methodology for AI risk assessment

Explanation

The Huderia methodology is a unique anticipatory approach to AI governance. It focuses on four fundamental elements: context-based risk analysis, stakeholder engagement, risk and impact assessment, and mitigation planning.


Evidence

Adopted by the Council of Europe, includes modules for risk analysis, stakeholder engagement, impact assessment, and mitigation planning


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Stakeholder engagement in AI impact assessment

Explanation

The Huderia methodology emphasizes the importance of stakeholder engagement in AI impact assessment. It proposes an approach to enable engagement with relevant stakeholders, including impacted communities.


Evidence

Aims to amplify voices of affected communities and gain information on how they view potential impacts


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment


W

Wael William Diab

Speech speed

138 words per minute

Speech length

1441 words

Speech time

624 seconds

ISO/IEC standards for AI systems

Explanation

ISO/IEC JTC1 SC42 is developing standards for the full AI ecosystem. These standards cover various aspects including non-technical trends, requirements, and horizontal and foundational projects on artificial intelligence.


Evidence

Over 30 published standards, about 50 active projects, 68 participating countries, and 800 unique experts involved


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Data quality standards for AI systems

Explanation

ISO/IEC is developing standards for data quality in AI systems. This includes a six-part multi-series on data quality for analytics in the AI space.


Evidence

First three parts of the data quality series have been published, with the next three scheduled for publication in the coming year


Major Discussion Point

Human Rights Considerations in AI Development


T

Tetsushi Hirano

Speech speed

131 words per minute

Speech length

576 words

Speech time

262 seconds

Japanese AI Guidelines for Business

Explanation

Japan has developed AI Guidelines for Business, taking into account the results of the Hiroshima AI process for advanced AI systems. The guidelines differentiate aspects of AI from the perspective of AI actors, providing detailed recommendations for developers, deployers, and users.


Evidence

Guidelines provide a detailed list of recommendations for developers, deployers, and users


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Differed with

Matt O’Shaughnessy


Differed on

Approach to AI risk assessment frameworks


Detailed analysis of rights holders in Huderia

Explanation

The Huderia methodology offers a detailed analysis of rights holders and effects on them. It provides a step-by-step analysis of stakeholder involvement, which is seen as a benchmark for continuous development.


Evidence

Japanese experts evaluate COBRA (part of Huderia) highly, especially as a threshold mechanism


Major Discussion Point

Human Rights Considerations in AI Development


Interoperability between AI frameworks

Explanation

There is a need for interoperability between different AI risk management frameworks. An interoperability document planned for 2025 may highlight commonalities of these frameworks and their respective strengths.


Evidence

Mentions various frameworks like the Hiroshima process code of conduct and EU AI Act


Major Discussion Point

International Cooperation on AI Governance


M

Matt O’Shaughnessy

Speech speed

163 words per minute

Speech length

1461 words

Speech time

536 seconds

NIST AI Risk Management Framework

Explanation

The NIST AI Risk Management Framework is a general risk management framework applicable to all organizations developing or using AI. It describes actions organizations can take to manage risks of their AI activities, including those relevant to human rights.


Evidence

Framework describes technical steps to manage harmful bias, discrimination, mitigate privacy risks, and improve accountability


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Differed with

Tetsushi Hirano


Differed on

Approach to AI risk assessment frameworks


Human rights impact assessments for government AI use

Explanation

The White House Office of Management and Budget memorandum sets out binding rules for government agencies using AI. It mandates key risk management actions, particularly for AI systems determined to be safety-impacting or rights-impacting.


Evidence

Includes steps for risk evaluation, data quality assessment, ongoing testing and monitoring, and engagement with affected communities


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Clara Neppel


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment


U.S. domestic AI policies informing international work

Explanation

U.S. domestic AI policies, such as the NIST AI Risk Management Framework, inform international work on AI governance. These domestic products have influenced international initiatives and standards.


Evidence

Council of Europe’s Huderia and OECD projects have drawn from the AI Risk Management Framework


Major Discussion Point

International Cooperation on AI Governance


Context-aware application of risk management frameworks

Explanation

The NIST AI Risk Management Framework is designed to be applied in a flexible and context-aware manner. This approach ensures that risk management steps are well-tailored and proportionate to the specific context of use.


Evidence

Framework supported by ‘profiles’ that describe how it can be used in specific sectors, for specific AI technologies, or for specific types of end-use organizations


Major Discussion Point

Practical Implementation of AI Ethics


C

Clara Neppel

Speech speed

133 words per minute

Speech length

932 words

Speech time

419 seconds

IEEE standards for ethically aligned AI design

Explanation

IEEE has been developing standards for responsible use of AI with a strong focus on risk management. Their work provides practical tools and methodologies to ensure AI systems are robust, fair, and aligned with societal values.


Evidence

IEEE 7000 standard took five years to develop and has been widely deployed


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Incorporating human rights principles in AI standards

Explanation

IEEE standards take a human rights first approach in AI development. Their standards are acknowledged to be very close to what is required with respect to human rights.


Evidence

Acknowledgment by the Joint Research Center of the European Commission


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Matt O’Shaughnessy


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment


IEEE’s global network of AI ethics assessors

Explanation

IEEE has developed a global network of certified AI ethics assessors. This network helps in implementing and assessing adherence to AI ethics principles worldwide.


Evidence

More than 200 certified assessors worldwide, training programs from Dubai to South Korea


Major Discussion Point

International Cooperation on AI Governance


Building ecosystems to implement ethical AI standards

Explanation

IEEE emphasizes the importance of building strong ecosystems to implement ethical AI standards. These ecosystems involve various stakeholders and ensure that AI systems adhere to ethical principles from data governance to application development.


Evidence

Example of ecosystem in Austria, from city of Vienna public services to data hubs in Tirol


Major Discussion Point

Practical Implementation of AI Ethics


M

Myoung Shin Kim

Speech speed

111 words per minute

Speech length

774 words

Speech time

416 seconds

LG AI Research’s approach to AI ethics and risk governance

Explanation

LG AI Research has developed an approach to AI ethics and risk governance based on five core values: humanity, fairness, safety, accountability, and transparency. They employ three strategic pillars: governance, research, and engagement.


Evidence

Development of XR1, a generative AI model, and implementation of AI ethics principles


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Agreed on

Importance of AI risk management frameworks


Educating data workers on human rights

Explanation

LG AI Research educates data workers about the Universal Declaration of Human Rights and the Sustainable Development Goals. They provide guidelines to respect, protect, and promote human rights during data production and processing.


Major Discussion Point

Human Rights Considerations in AI Development


LG AI Research’s collaboration with UNESCO

Explanation

LG AI Research is collaborating with UNESCO to develop online educational content for AI ethics. This initiative targets researchers, developers, and policymakers globally.


Evidence

Final MOOC planned to be held worldwide by early 2026


Major Discussion Point

International Cooperation on AI Governance


LG’s AI ethics impact assessment process

Explanation

LG AI Research conducts an AI ethics impact assessment for every project to identify and address potential risks across the AI lifecycle. This process involves a cross-functional task force bringing together researchers from technology, business, and AI ethics.


Evidence

Three-step process: analyzing project characteristics, setting problem-solving practices, and verifying research and documentation


Major Discussion Point

Practical Implementation of AI Ethics


Agreed with

David Leslie


Matt O’Shaughnessy


Clara Neppel


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment


H

Heramb Podar

Speech speed

154 words per minute

Speech length

753 words

Speech time

292 seconds

NGO advocacy for human rights in AI governance

Explanation

NGOs like CAIDP play a crucial role in advocating for human rights in AI governance. They act as a bridge between on-ground risks, public sentiment, and policy development.


Evidence

CAIDP’s advocacy for the ratification of the Council of Europe AI Treaty


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Stakeholder engagement in AI impact assessment


CAIDP’s advocacy for ratification of AI treaties

Explanation

CAIDP advocates for the ratification of international AI treaties to prevent global fragmentation and align national policies with global standards. They have released statements urging various countries and organizations to ratify the Council of Europe AI Treaty.


Evidence

Statements released to the South African presidency for the G20 and to the U.S. Senate


Major Discussion Point

International Cooperation on AI Governance


Need for clear prohibitions on high-risk AI use cases

Explanation

CAIDP advocates for clear red lines in AI policies, prohibiting use cases that are not based on scientific validity or that might adversely impact certain groups or human rights. They call for ex-ante impact assessments and proper transparency across the AI lifecycle.


Evidence

Examples of high-risk use cases in the EU AI Act, such as biometric surveillance or social scoring


Major Discussion Point

Practical Implementation of AI Ethics


Agreements

Agreement Points

Importance of AI risk management frameworks

speakers

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


arguments

Huderia methodology for AI risk assessment


ISO/IEC standards for AI systems


Japanese AI Guidelines for Business


NIST AI Risk Management Framework


IEEE standards for ethically aligned AI design


LG AI Research’s approach to AI ethics and risk governance


summary

All speakers emphasized the importance of developing and implementing comprehensive AI risk management frameworks to ensure responsible AI development and deployment.


Stakeholder engagement in AI impact assessment

speakers

David Leslie


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Heramb Podar


arguments

Stakeholder engagement in AI impact assessment


Human rights impact assessments for government AI use


Incorporating human rights principles in AI standards


LG’s AI ethics impact assessment process


NGO advocacy for human rights in AI governance


summary

Multiple speakers highlighted the importance of involving stakeholders, including affected communities, in AI impact assessments to ensure comprehensive consideration of potential risks and impacts.


Similar Viewpoints

Both speakers emphasized the importance of applying AI risk management frameworks in a context-specific manner, taking into account the unique ecosystems and environments in which AI systems are deployed.

speakers

Matt O’Shaughnessy


Clara Neppel


arguments

Context-aware application of risk management frameworks


Building ecosystems to implement ethical AI standards


These speakers highlighted the importance of aligning national and international AI governance efforts to ensure consistency and prevent fragmentation in global AI governance.

speakers

Tetsushi Hirano


Matt O’Shaughnessy


Heramb Podar


arguments

Interoperability between AI frameworks


U.S. domestic AI policies informing international work


CAIDP’s advocacy for ratification of AI treaties


Unexpected Consensus

Education and skill development for AI ethics

speakers

Clara Neppel


Myoung Shin Kim


arguments

IEEE’s global network of AI ethics assessors


Educating data workers on human rights


explanation

Both speakers from different sectors (standards organization and private company) emphasized the importance of education and skill development in AI ethics, which was an unexpected area of focus given the primarily policy-oriented discussion.


Overall Assessment

Summary

The speakers showed strong agreement on the need for comprehensive AI risk management frameworks, stakeholder engagement in impact assessments, and the importance of aligning national and international AI governance efforts.


Consensus level

High level of consensus among speakers, indicating a shared understanding of key challenges and approaches in AI governance. This consensus suggests potential for collaborative efforts in developing and implementing AI governance frameworks across different sectors and jurisdictions.


Differences

Different Viewpoints

Approach to AI risk assessment frameworks

speakers

Tetsushi Hirano


Matt O’Shaughnessy


arguments

Japanese AI Guidelines for Business


NIST AI Risk Management Framework


summary

While both speakers discuss AI risk assessment frameworks, they present different approaches. Hirano focuses on the Japanese AI Guidelines for Business, which differentiates aspects from the perspective of AI actors, while O’Shaughnessy emphasizes the NIST framework’s flexible and context-aware application.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches and frameworks for AI risk assessment and governance, with different organizations and countries presenting their own methodologies.


difference_level

The level of disagreement among the speakers is relatively low. Most speakers present complementary rather than conflicting views, focusing on their respective organizations’ or countries’ approaches to AI governance. This suggests a general alignment in recognizing the importance of AI ethics and risk management, but with variations in implementation strategies. The implications are that while there is a shared goal of responsible AI development, there may be challenges in creating a unified global approach due to these differing methodologies.


Partial Agreements

Partial Agreements

Both speakers agree on the importance of implementing ethical AI standards, but they differ in their approaches. Neppel emphasizes building ecosystems and a global network of assessors, while Kim focuses on internal processes and education within LG AI Research.

speakers

Clara Neppel


Myoung Shin Kim


arguments

IEEE standards for ethically aligned AI design


LG AI Research’s approach to AI ethics and risk governance


Similar Viewpoints

Both speakers emphasized the importance of applying AI risk management frameworks in a context-specific manner, taking into account the unique ecosystems and environments in which AI systems are deployed.

speakers

Matt O’Shaughnessy


Clara Neppel


arguments

Context-aware application of risk management frameworks


Building ecosystems to implement ethical AI standards


These speakers highlighted the importance of aligning national and international AI governance efforts to ensure consistency and prevent fragmentation in global AI governance.

speakers

Tetsushi Hirano


Matt O’Shaughnessy


Heramb Podar


arguments

Interoperability between AI frameworks


U.S. domestic AI policies informing international work


CAIDP’s advocacy for ratification of AI treaties


Takeaways

Key Takeaways

Multiple AI governance frameworks and standards are being developed by different organizations globally, including Huderia, ISO/IEC, NIST, IEEE, and country-specific guidelines.


Human rights considerations are becoming increasingly important in AI development and governance, with a focus on stakeholder engagement, impact assessments, and data quality.


International cooperation and interoperability between different AI governance frameworks is crucial for effective global AI governance.


Practical implementation of AI ethics requires context-aware application of risk management frameworks, ecosystem building, and clear prohibitions on high-risk AI use cases.


NGOs and civil society organizations play a vital role in advocating for human rights in AI governance and bridging the gap between policy development and on-the-ground risks.


Resolutions and Action Items

Continue development of the Huderia technical document plan for 2025


Develop interoperability document for AI risk management frameworks by 2025


LG AI Research to publish annual report on AI ethics implementation in January


UNESCO and LG AI Research to develop online educational content for AI ethics by early 2026


Unresolved Issues

How to effectively address the global digital divide in AI governance implementation


Balancing innovation with responsible AI development and use


Addressing potential impacts of synthetic content created by generative AI on democracy


Ensuring consistent implementation of AI ethics recommendations across different countries


Suggested Compromises

Flexible and context-aware application of AI risk management frameworks to balance innovation and risk mitigation


Collaboration between public and private sectors in developing AI governance approaches


Incorporating diverse stakeholder perspectives in AI impact assessments to address varied concerns


Thought Provoking Comments

The Huderia itself that has been developed through the activities of the Committee on Artificial Intelligence and all the member states and observer states, it really is a unique anticipatory approach to the governance of the design, development, and deployment of AI systems that anchors itself in basically four fundamental elements.

speaker

David Leslie


reason

This comment introduces the core structure of the Huderia methodology, highlighting its comprehensive and forward-looking approach to AI governance.


impact

It set the stage for the entire discussion by outlining the key elements of Huderia, providing a framework for subsequent speakers to relate their work and perspectives to.


One of the important things is to allow this idea of a third-party certification and audit in order to ensure broad responsible adoption.

speaker

Wael William Diab


reason

This insight emphasizes the critical role of independent verification in ensuring responsible AI adoption, introducing a key governance mechanism.


impact

It shifted the conversation towards the importance of standardization and certification in AI governance, prompting discussion on practical implementation of ethical principles.


As a pioneering work in this field, Huderia is expected to become a benchmark. However, it is also important to share knowledge and the best practices with concrete examples as this type of risk and impact assessment is not yet well known.

speaker

Tetsushi Hirano


reason

This comment highlights both the potential of Huderia and the need for practical implementation guidance, addressing a crucial gap in current AI governance efforts.


impact

It prompted consideration of how to make abstract governance principles more concrete and actionable, influencing subsequent discussions on implementation and best practices.


We need the time and we need the stakeholders. We need for even if we think that some of the concepts like transparency or fairness are already quite defined, you might be surprised.

speaker

Clara Neppel


reason

This insight underscores the complexity of defining and implementing ethical AI concepts, emphasizing the need for diverse stakeholder engagement and iterative development.


impact

It deepened the conversation by highlighting the challenges in operationalizing ethical principles, leading to discussions on the importance of multi-stakeholder collaboration and ongoing refinement of governance approaches.


For AI ethics to take root in our society, I believe citizens’ AI literacy must improve. Additionally, if high-quality AI education is not evenly provided, existing economic and social disparities may widen.

speaker

Myoung Shin Kim


reason

This comment introduces the crucial aspect of public education and literacy in AI ethics, linking it to broader societal issues of equality and fairness.


impact

It broadened the scope of the discussion to include the role of public education in AI governance, prompting consideration of how to engage and empower the general public in AI ethics discussions.


Overall Assessment

These key comments shaped the discussion by progressively expanding the scope of AI governance considerations. Starting from the structural framework of Huderia, the conversation evolved to cover practical implementation challenges, the need for standardization and certification, the importance of stakeholder engagement, and the role of public education. This progression highlighted the multifaceted nature of AI governance, emphasizing the need for comprehensive, collaborative, and adaptable approaches that consider both technical and societal aspects of AI development and deployment.


Follow-up Questions

How can the Huderia methodology be further developed and refined?

speaker

David Leslie


explanation

David mentioned that as they move forward in the next year, they will be working on what they call ‘the model’, which will explore some areas in more detail. This suggests a need for further development of the Huderia methodology.


How can interoperability between different AI risk management frameworks be improved?

speaker

Tetsushi Hirano


explanation

Tetsushi mentioned the need for an interoperability document that highlights commonalities between different frameworks and their respective strengths. This is important for facilitating mutual learning and potentially easing compliance across different standards.


How can knowledge and best practices of AI risk and impact assessment be shared more effectively?

speaker

Tetsushi Hirano


explanation

Tetsushi emphasized the importance of sharing knowledge and best practices with concrete examples, as this type of risk and impact assessment is not yet well known. This is crucial for helping interested parties join the AI Convention.


How can we better address the impacts of synthetic content created by generative AI on democracy?

speaker

Tetsushi Hirano


explanation

Tetsushi highlighted the need to consider the impact of synthetic content created by generative AI on democracy in future meetings of the AI Convention. This is an emerging area of concern that requires further research and discussion.


How can we improve the implementation of AI ethics recommendations globally, particularly in Global South countries?

speaker

Heramb Podar


explanation

Heramb noted that many countries, especially in the Global South, are slow in implementing AI ethics recommendations. This highlights a need for research into effective implementation strategies and addressing the global digital divide in AI governance.


How can we develop more effective metrics for assessing countries’ implementation of AI ethics and governance frameworks?

speaker

Heramb Podar


explanation

Heramb mentioned the need for grounded principles or metrics to assess countries’ follow-through on AI ethics commitments. This suggests a need for research into developing more robust assessment methodologies.


How can we improve AI literacy among citizens to ensure they can be mature users and critical watchdogs in the AI market?

speaker

Myoung Shin Kim


explanation

Myoung Shin emphasized the importance of improving citizens’ AI literacy to help AI ethics take root in society. This suggests a need for research into effective AI education strategies for the general public.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #43 States and Digital Sovereignty: Infrastructural Challenges


Session at a Glance

Summary

This workshop focused on digital sovereignty and infrastructure challenges in the context of global digital transformation. Speakers from various countries and organizations discussed different perspectives on digital sovereignty, ranging from state-centric approaches to more inclusive, multi-stakeholder models. The discussion highlighted the importance of digital public infrastructure (DPI) in enabling countries to exercise greater control over their digital assets and services.

Key topics included the development of sovereign digital infrastructures, the role of open-source technologies, and the importance of data localization and protection. Speakers emphasized the need for countries to balance autonomy with international cooperation, particularly in regions facing infrastructure limitations. The Brazilian AI plan was presented as an example of national efforts to boost technological capabilities and reduce dependencies.

Challenges such as meaningful connectivity, especially in Global South countries, were identified as crucial factors affecting the success of digital sovereignty initiatives. The debate also touched on the role of private sector involvement in DPI development and the need for regulatory frameworks to ensure accountability and ethical use of technologies.

Participants discussed the potential for regional cooperation in building digital infrastructures, while also addressing concerns about how digital sovereignty might be used as a geopolitical tool. The importance of interoperability and cross-border collaboration was stressed, particularly in the context of emerging technologies like AI.

Overall, the workshop underscored the complex nature of digital sovereignty, highlighting the need for nuanced approaches that consider diverse national contexts while fostering international cooperation and inclusive development in the digital realm.

Keypoints

Major discussion points:

– Different conceptions and layers of digital sovereignty, from state-level to personal and common sovereignty

– The role of digital public infrastructure (DPI) in enabling digital sovereignty for countries

– Challenges around connectivity, data localization, and infrastructure development, especially for Global South countries

– Balancing national sovereignty with regional/international cooperation and interoperability

– The importance of open source technologies and multistakeholder governance models

The overall purpose of the discussion was to explore how different countries and regions are approaching digital sovereignty and digital public infrastructure development, examining both the challenges and opportunities. The speakers aimed to share perspectives from different parts of the world on these issues.

The tone of the discussion was largely analytical and informative, with speakers presenting research and case studies from their areas of expertise. There was a sense of urgency around addressing digital divides and asymmetries between countries, but also optimism about the potential for DPI and regional cooperation to enable greater digital sovereignty. The Q&A portion introduced some more critical perspectives, particularly around connectivity challenges, but the overall tone remained constructive.

Speakers

– Rodolfo Avelino: Counselor of the Brazilian Internet Steering Committee, moderator of the session

– Min Jiang: Professor of Journalism Studies at the University of North Carolina at Charlotte, CyberBRICS fellow

– Ekaterine Imedadze: Commissioner of the Georgia National Communication Commission

– Korstiaan Wapenaar: Principal at the center of digital excellence in Johannesburg, develops digital economy strategies for Africa

– Ritul Gaur: Policy Advisor at the Digital Impact Alliance, worked on DPI negotiations at G20

– Renata Mielli:

Additional speakers:

– Luca Belli: Professor at FGV Law School

– Jose Renato: Researcher at the University of Bonn Sustainable AI Lab, co-founder of LAPIN

Full session report

Digital Sovereignty and Infrastructure Challenges in Global Digital Transformation

This workshop explored the complex landscape of digital sovereignty and infrastructure challenges in the context of global digital transformation. Speakers from various countries and organisations shared diverse perspectives on digital sovereignty, ranging from state-centric approaches to more inclusive, multi-stakeholder models.

Conceptualising Digital Sovereignty and Digital Public Infrastructure (DPI)

Digital sovereignty was presented as a multifaceted concept extending beyond nation-states. Min Jiang, Professor at the University of North Carolina at Charlotte, emphasized forms of digital sovereignty beyond the state, including supranational, corporate, personal, and common digital sovereignty. This broader view complements the multistakeholder model by addressing underlying power issues.

Ritul Gaur, Policy Advisor at the Digital Impact Alliance, focused on how digital sovereignty enables countries to exercise more control over critical digital assets. Gaur explained that DPI governance can vary from state-controlled to private sector-driven, highlighting the flexibility in approaches. He described DPI as “laying out the most common drill, but then allowing others to build a market economy around it,” positioning it as a foundation for broader digital development.

Renata Mielli stressed the importance of viewing digital sovereignty as complementary to cooperation between countries, arguing that cooperation is fundamental to achieving sovereignty given the different realities each country faces in digital areas.

Infrastructure and Connectivity Challenges

The workshop highlighted significant challenges in developing digital infrastructure and ensuring meaningful connectivity, especially for countries in the Global South. Ekaterine Imedadze, Commissioner of the Georgia National Communication Commission, discussed Georgia’s challenges in developing data centres and connectivity infrastructure. Imedadze also mentioned Georgia’s green energy production potential, which could support data center development.

Korstiaan Wapenaar, Principal at the center of digital excellence in Johannesburg, noted that African countries struggle with fiscal and capacity constraints for digital infrastructure. He explained that DPI enables governments to deliver services at scale and reach people in need.

Luca Belli, Professor at FGV Law School, raised a critical point about the lack of meaningful connectivity in Brazil, defining it as stable, fast enough internet access on an appropriate device with enough data. Belli stated that only 22% of the Brazilian population has meaningful connectivity, challenging the effectiveness of current digital sovereignty efforts.

In response, Renata Mielli outlined Brazil’s plans to address connectivity challenges through the PAC (Growth Facilitation Program), which aims to invest 23 billion reais (around $4 billion) over the next four years in digital infrastructure and AI development. Mielli emphasized that these efforts must be guided by reducing inequalities from the outset.

AI Development and Data Sovereignty

The discussion highlighted the importance of AI development and data sovereignty. Mielli stressed that data sovereignty is central to AI development and self-determination. She also mentioned ongoing G20 discussions on AI and the digital economy.

Min Jiang emphasized the importance of open source technologies and free software for AI sovereignty in developing countries, while Ritul Gaur advocated for DPI to be designed for cross-border interoperability.

Balancing Sovereignty and Cooperation

A key theme throughout the discussion was the need to balance national digital sovereignty efforts with regional and international cooperation. Min Jiang pointed out that small countries need to cooperate and build alliances to achieve digital sovereignty. In response to a question about regime types, Jiang noted that while democracies might be more inclined to collaborate, authoritarian regimes also engage in digital cooperation when it serves their interests.

The discussion also touched on international infrastructure projects, such as the Peace Cable that Meta is investing in, highlighting the complex interplay between corporate interests and national digital sovereignty efforts.

Unresolved Issues and Future Considerations

The workshop underscored several unresolved issues and areas for future consideration:

1. Balancing national digital sovereignty with cross-border interoperability

2. Addressing lack of meaningful connectivity while investing in advanced technologies

3. Defining the scope and governance models of Digital Public Infrastructure

4. Ensuring stability and productive management of regional digital infrastructure projects

5. Preventing the use of digital sovereignty on infrastructure with regional impact as a weapon against other countries

Conclusion

The workshop highlighted the complex nature of digital sovereignty, emphasizing the need for nuanced approaches that consider diverse national contexts while fostering international cooperation. The discussion evolved from theoretical concepts to practical challenges and potential solutions, underscoring the importance of context-specific strategies that balance national autonomy with international cooperation and equitable access. There was a general consensus on the critical role of DPI in enhancing digital sovereignty, the importance of open technologies and interoperability, and the need for both national efforts and international cooperation in achieving digital sovereignty in an increasingly interconnected world.

Session Transcript

Rodolfo Avelino: Aloha. Aloha, okay. We have a test. Hello. Good afternoon. One, two, three. Yes. Welcome, everyone, to the workshop States and Digital Sovereignty: Infrastructural Challenges. I am Rodolfo Avelino, Counselor of the Brazilian Internet Steering Committee, and I will be moderating this session. We also have here Juliana Ons as online moderator and Ramon Costa as rapporteur; both are technical advisors for CGI.br. First, I would like to thank the IGF organization and everyone present today, and a special thanks to our speakers, who will contribute to our debate. The development of the Internet has been marked by the consolidation of large digital platforms and the growing use of artificial intelligence. This has caused significant changes not only in social processes but also in public services such as health, education, and communications, and in the state's capacities in general. Despite technological advances, problems of opacity, national security, surveillance, and the autonomy to implement digital policies arise. These issues can be framed under the concept of digital sovereignty, a notion that can have multiple meanings and purposes and that raises themes such as the security of digital infrastructures, the security of strategic data, innovation, and the state's capacity to guarantee fundamental rights. This session aims to discuss policies and initiatives to implement digital infrastructures in different regions and countries in light of different approaches to digital sovereignty. I hope we have a great conversation and that each experience may serve as inspiration for others. Now, I would like to give the floor to our speakers. Our first presentation will be delivered by Dr. Min Jiang, a professor of Journalism Studies at the University of North Carolina at Charlotte and a CyberBRICS fellow. She will start our conversation by presenting the different conceptions of digital sovereignty. Dr. Jiang, you have the floor for eight minutes, please.

Min Jiang: Thank you. Thank you, colleagues at CGI Brazil, for convening the session and for inviting me to join. Can you hear me all right? Just double checking. Okay, great. Thank you so much. And greetings also to participants from around the world. My contribution to the panel is based largely on a book I co-edited with Dr. Luca Belli of FGV Law School. And can I have the slides up, please? Thank you. Can you move to the next slide, please? Hello. Can we move to the next slide, please? Yes. Thank you so much. So the book is titled Digital Sovereignty in the BRICS Countries: How the Global South and Emerging Power Alliances are Reshaping Digital Governance, coming out in two weeks through Cambridge University Press. I will develop my remarks today in two parts. First, I will trace the development of digital sovereignty and explain why digital sovereignty is gaining currency. Second, I will offer a broad framework for conceptualizing digital sovereignty beyond a traditional normative definition of sovereignty centered around nation-states. In fact, I would argue digital sovereignty is not something that belongs to states alone. Instead, digital sovereignty as broadly conceptualized complements the multistakeholder model by foregrounding the underlying power issues that have prevented multistakeholderism from being more widely adopted. So to start…next slide, please. Given the IGF…sorry, one slide back. To start, given the IGF is a global forum under the auspices of the UN, it's appropriate to recognize that the UN is a post-World War II creation based on national independence and sovereignty. The idea of sovereignty, which can be traced back to the French philosopher Jean Bodin in the 16th century, as well as the 1648 Peace of Westphalia, is foundational to the modern system of nation-states. As such, states are thought to enjoy territorial integrity, legal equality, and non-interference in international affairs.
However, all of us also recognize that such normative and very idealistic notions of sovereignty are often good on paper, but not so good in practice. Sovereignty is frequently a function of power. Strong states, for example, can invade other states: think the Iraq War and the current war in Ukraine. Weaker states often lack the power to exert influence. Next slide, please. The problem of power imbalance is especially evident in the digital era, where much of the world's digital infrastructure, data, services, and increasingly AI depends on a handful of Silicon Valley firms. Snowden's revelation of the NSA's global surveillance program in 2013 made it clear that the U.S. government cannot be trusted. Facebook's Cambridge Analytica scandal and the general failure of U.S. Big Tech in the 2016 U.S. presidential election also made it clear that U.S. Big Tech cannot be trusted. While internet sovereignty was once thought to be an authoritarian product shipped out of China, post-Snowden, it's not surprising why more and more countries, including the EU, as a post-national coalition, are picking up the banner of digital sovereignty. And in fact, what the EU means by digital sovereignty is self-determination, a way to voice its dissatisfaction. It's also not surprising why ICANN was pressured to move out of the U.S. Commerce Department in 2014. Whether the effort came from individual states, groups of states, or multi-stakeholder fora, they share one thing in common: as the global consensus around the U.S.-centered global digital order is breaking down, national and international actors are in search of alternatives to build a new digital order. Next slide, please. The moment we're in is not unlike the New World Information and Communication Order (NWICO) debate in the 1970s. Next slide, please. Hello, next slide, please.

Rodolfo Avelino: One minute, one minute, please.

Min Jiang: Thank you. The NWICO debate in the 1970s culminated in the MacBride report published by UNESCO in 1980. At the time, information sovereignty and cultural sovereignty (these were the exact phrases from the report) were of concern to Global South countries, which were critical of the free flow of information agenda championed by the US and UK, seen as an instrument for information colonization and cultural imperialism. In the end, the US and UK pulled out of the NWICO debate, arguing Global South countries used information sovereignty and cultural sovereignty as a pretext for censorship and control at home. Next slide, please. The power asymmetry then mirrors the power asymmetry today. While the state-centric perspective remains essential to understanding digital sovereignty, many researchers, including myself, also recognize that digital sovereignty is a signifier, a term with many different meanings used by different actors to express their aspirations and also to assert control and power. Thus, in the book volume we adopted a more generative definition of digital sovereignty as the exercise of agency, power, and control in shaping
infrastructure, data, services, and protocols. The book project's bottom-up efforts also led us to develop a broader framework of digital sovereignty, mapping the following seven perspectives. Next slide, please. In the state digital sovereignty perspective, nation-states exert control over digital architecture, data, protocols, and services. It can be both positive and negative. While Brazil, for example, built the Pix digital payment system during the pandemic, and India built the UPI digital payment system as digital financial infrastructure to increase independence and inclusion, Russia, on the other hand, built the RuNet for digital isolation. In the supranational digital sovereignty perspective, regional alliances like the EU develop unified digital policies as well as legal and digital infrastructure.
Former German Chancellor Angela Merkel, in fact, gave a speech at the 2019 IGF defining EU's digital sovereignty as a form of self-determination. So for EU countries, being sovereign doesn't mean working alone, but working together. Network digital sovereignty, something ICANN as an organization might endorse, emphasizes decentralized control, network interoperability, freedom from nation-states, and global coalitions. Corporate digital sovereignty, on the other hand, tends to endorse the freedom of tech giants in driving digital economies and shaping digital norms, something scholars have critiqued as a form of surveillance capitalism or data colonialism. Personal digital sovereignty, on the other hand, emphasizes the empowerment of individuals to control their digital identities and personhood. Post-colonial digital sovereignty highlights efforts by formerly colonized nations to reclaim autonomy, access, ownership, and control in the digital space. Finally, common digital sovereignty emphasizes community-driven governance of shared digital resources and the production of public digital goods. This is embodied in the open source free software movement, as well as international digital solidarity and labor movements. So for us, digital sovereignty is not something that belongs to nation-states alone. Broadly conceptualized, digital sovereignty does not replace the multistakeholder model. On the contrary, it complements it by placing it in a wider discursive field and foregrounding the underlying power issues. Global digital sovereignty shouldn't and cannot be determined by national governments alone, but needs a global coalition to address the pressing issues of the global divide in digital infrastructure, the entrenched challenges of digital surveillance and censorship, as well as structural monopoly and digital inequality. Thank you for allowing me to share my perspective. I look forward to further exchanges and discussions.

Rodolfo Avelino: Thank you. Thank you, Dr. Jiang. Now I introduce Ekaterine Imedadze, who has been a commissioner of the Georgia National Communication Commission since 2021. The commissioner has 15 years of professional experience in the telecommunications field. Mrs. Imedadze, could you comment on how the Georgian government is adapting to the ongoing digital transformation and how these projects relate to the broader European context?

Ekaterine Imedadze: Thank you very much. I think my presentation will be put on now. First of all, I want to thank the hosts of this very interesting panel, the Brazilian Internet Steering Committee, for a very important and very relevant workshop topic. Also, I think everybody will join me in thanking the host country, Saudi Arabia, and Riyadh, for this amazing venue for the IGF. And it's a great pleasure to share a perspective from another part of the world about how a state, a very small, tiny state in the South Caucasus, sees the challenges related to digital sovereignty and can overcome them. The previous presenter outlined in the best manner the layers of digital sovereignty and how it has evolved, and I will be speaking specifically about the telecom layer, the most upstream perspective of the infrastructure, and the challenges related to my country, Georgia. It is also related to the region itself, the South Caucasus. So my perspective, as a representative of the telecom regulator and a state representative, will be specific, as I said, to the very upstream layer of the infrastructural challenges. I'm trying to move on now. Thank you. So my friend from the South Caucasus is helping me, as we usually do. Thank you. Thank you so much. So, we all know that most of the data traffic, as we speak about digital, goes through submarine cables, you know that, and how important submarine cable resilience levels are.
We see that this information was kindly provided by one of our important partners, the World Bank, so I am allowed to show it, but actually this first slide is publicly available information you can find on TeleGeography, updated almost daily. You see, this is the most upstream layer of the Internet, and this is a value chain and a growing market. This is the very basis of our connectivity, which enables us to exchange data and to protect data, and these many cables are also built to ensure that data exchange is resilient, so that sovereignty over information is maintained at the infrastructure level. If we can go to the next slide. It's a bit of a challenge. Thank you. So, now let's speak about a very tiny segment of this big map of worldwide connectivity, the South Caucasus, where my country Georgia is located. You can see the map, you can see the geography: of these almost 500 infrastructure connectivity routes, only one route connects Georgia and the South Caucasus with Europe. This direct connection is very important. You understand how important it is if we speak about resilience at the infrastructure level and the sovereignty of data based on that. And there are aspects of resilience, such as providing direct international access, not through some other jurisdictions, but through the sea. This is what makes subsea cables so important nowadays. The development of more inter-regional networks is an absolute necessity for our region. So this is the challenge we are facing now, and fortunately we have partners who are supporting us in making this connectivity and resilience really work. I will speak later on that. What else is happening in our region is that there is a project of trans-Caspian cables under discussion, which will connect us further to East Asia or Central Asia.
We know that digital is interconnected, so we need to be part of global resilient connectivity paths. With this, not only infrastructure-level resilience and direct connectivity corridors come to mind, but also the other layers. So, what else comes to mind after seeing that there is a definite need to expand the infrastructure-level independence of the region? There is also the layer of services, software, and data protection. On the data-related resilience layer, we will speak about the digital hub concept, which is also crucial for our region. Hello? Hello? Check? Some kind of check. Next slide, please. Thank you. Maybe we need some kind of inter-regional digital hub. If we speak about the concept of protecting our data, then, concerning the geopolitical situation around the South Caucasus, you can see on the map how important it is to have some kind of inter-regional connectivity hub that will enable us to have transparent data protection frameworks, aligned with the EU GDPR and related EU data protection legislation, and that will allow us some kind of protected, sovereign data transfers throughout the region. Another important aspect of why we need data hubs in the region is that upcoming technological demands related to AI definitely require that information be brought closer to the customers. So this is another challenge and another important precondition for building the inter-regional connectivity hubs, which will involve regional countries and will create an alternative to overcome choke points like those you see in the Red Sea region. And challenges usually bring opportunity. So my presentation here was meant to show you that we are trying to turn these challenges into opportunities for our region. And as I mentioned in the beginning, if we can… move to the next slide. Thank you, Nini.
There are articles about the challenges, about how the big techs and geopolitics are reshaping the internet’s plumbing and what is going on around the world. This is very relevant to our region. As I mentioned, this concept of a South Caucasus Digital Hub to make our data more resilient is a kind of answer to the question of how we can build more robust digital layers, from the upstream infrastructure up to the software and data protection layers. You can see the Baltic Highway project, which is supported by the European Union. We are likewise supported by the World Bank and the European Union to build similar regional connectivity corridors that will enable countries in the region to be connected safely with the rest of the world, to bring the data closer to our subscribers, and to ensure that the policies and regulations adopted in the European Union are, in a way, transposed to the regional data hubs. This is how we see answering the challenge. Of course, this is now at the projection stage, meaning we have a concept of how this regional data center hub should work. We really hope it will be continued, because the adoption of AI and the growing demand for machine learning and for bringing more content into the digital space make it clear that this project should be elaborated as soon as possible. This is what I wanted to share with you, and I am happy to answer questions later. Thank you.

Rodolfo Avelino: Thank you. Now let me introduce our next speaker. Korstiaan is a principal at the Centre of Digital Excellence in Johannesburg. He develops digital economy strategies to address Africa’s developmental challenges. Korstiaan will comment on the capacity limitations in Africa and the different dependencies on local, private and international players, connecting these to the trade-offs of sovereignty. You have the floor.

Korstiaan Wapenaar: Thank you very much, colleagues, for having me and for the opportunity to participate. Can I ask? Okay, great. Thank you very much. There have been a couple of version changes this morning, so there might be a couple of edits that did not come through, but we can run with it. The point of departure is that digital transformation of the public sector is a prerequisite for socioeconomic development in Africa. African states have struggled to deliver services to people and organizations at scale, and these technologies allow them to reach people in need, at scale, when they need it. Unfortunately, despite this prerequisite, African countries have largely struggled to deliver on the opportunities of digital transformation. The e-Government Development Index is a useful proxy for that, and we see that only four African countries have managed to score above the global average. There are some critical underlying drivers of this underperformance. One is acknowledging that many African countries face significant fiscal constraints and significant capacity constraints in terms of expertise, and that this has impacted the rollout and quality of both hard infrastructure, if we think about data centers and the like, and soft infrastructure, being the services and technologies used to deliver services through that hard infrastructure. Next slide, please. If we look at data centers as a proxy for the availability of infrastructure across the continent, we see rapidly growing demand for more physical infrastructure. The estimate on screen, and apologies that there are no axes on this graphic, is that African countries will need to more than double their data center hosting capacity by 2030.
At present, a number of these countries are underdeveloped and there is not a lot of digital activity, so localization requirements are hard to meet through a local data center sector, because it is economically infeasible to host a center domestically just to meet those requirements. Subsequently, in a number of markets across the continent, governments have started, or have experimented with, deploying their own data centers to manage their own data and operate their own technologies and infrastructures. In many cases, though, due to the capacity limitations, these are poorly managed and underutilized, and they have become what is termed economic drains; one might call them white elephants or the like. So it leaves a bit of a quandary for African countries that are trying to meet localization requirements independently and autonomously. Next slide, please. What this means is that there is an inherent dependency in Africa, or perhaps inherent is too strong a term; there is a mutual benefit between the state and the private sector in delivering this hard infrastructure, where in many cases private sector players such as the hyperscalers are supporting governments in the operation of their own technologies. Next slide, please. And so, as the value of digital public infrastructure is better understood and gathers steam across the globe, we likewise see increasing adoption in Africa, although, as we saw before with the e-Government Development Index, or EGDI, this adoption is slower across the continent. These principles, Dr. Min was talking about FOSS, open source and the like, are arguably key mechanisms that will allow service delivery at scale by allowing governments to adopt these technologies, lead with their own interests, and operate them independently and autonomously. Next slide, please.
If we start to unpack the debates within the DPI realm around what is “public”, we see that there is room to explore the role of the private sector in supporting the design, delivery and operation of services through technology in government. In Africa, the private sector has a key role to play in service delivery in many countries, creating a question around the P in DPI and whether it needs to be a big P or a small P, for those following the debate. We know that financial services players, telecoms, retailers, vendors and communities are all supporting or bolstering government in its delivery of services. Arguably, if we think about sovereignty, and this is maybe bending the definition a little, there is sovereignty in terms of a government’s ability to deliver services independently versus its need to engage the support of the private sector. We see that in Africa, participation of the private sector may be a requirement, but it is not inherently detrimental; it may simply be a necessity given current limitations. Next slide, please. Likewise, when we look at the emerging DPI ecosystem, we see a wide variety of non-government players offering technology and support; a couple of them are on screen. These entities help governments identify which technologies to use, and help them roll the technology out and optimize it for the local environment. Again, this runs contrary to a hard-line view of an independent, autonomous state, by drawing in the participation of these entities. So these non-government role players are arguably critical to catalyzing digital transformation across Africa, just as the participation of private sector players is. Thank you very much.

Rodolfo Avelino: Thank you, Korstiaan. Now, Ritul Gaur is a Policy Advisor at the Digital Impact Alliance. His work includes research and advocacy around digital public infrastructure. In his previous role at the Ministry of Electronics and IT of the Government of India, he worked on DPI negotiations at the G20, tackling the why, what and how of DPI. Ritul, given your experience in this field, could you share with us your thoughts on the connections between digital

Ritul Gaur: sovereignty and DPI? Hi, thank you so much, and a big thank you to the organizers and everyone attending. I wish I was there in person, but you’ll see two gentlemen in the room, Ibrahim and Talha, who are my colleagues from the Digital Impact Alliance; so if I say anything controversial, they are my lawyers. I also have the great job of explaining something I spend a lot of time theorizing about, which is digital public infrastructure. Think of society in the digital age. What is absolutely required? An identity system which is secure, which can be authenticated against, and which can truly prove that you are you, in a unique way. So identity is an important component. Then a fast payment system, which allows you to transact both person to person and person to business. And then data, which allows you to both store and share your data across public and private services in order to access different services. It is not restricted to this, because DPI is still an evolving concept, and there are already new DPIs in climate, in commerce and so on, such as ONDC. But broadly, the reason we refer to this as infrastructure is that it lays down just the common minimum rails, as roads and rails did in the 19th and 20th centuries, and then it is for others to come in, innovate, and build on top of it to develop many other services. Now, you could ask: isn’t this just how digitization happens? What makes it new? Why are we calling it an approach? A simple answer is to think of DPI through three common aspects: tech, governance and community. When you think of the technology, it is an amalgamation of open source technologies, using open standards and open specs, to build the tech that is required.
So essentially, for your critical national digital infrastructure, you are not going for a big vendor contract; you are actually building something from scratch, using a lot of open source tools and open standards rather than proprietary standards and big vendors. So first is the tech; second is the governance. The governance of DPI is multilayered. There is governance embedded in the protocol itself, which is safety by design and security by design. There is governance of the specific aspect of DPI: say, if it is an ID, there will be ID regulation or ID legislation. And of course your broader umbrella data protection regulation, of the GDPR kind, also applies. So that is the tech, then the governance, and then the most important part: DPI is nothing without its community. To borrow a phrase from a professor, David Eaves says that DPI gives you shared means to many ends, because essentially it lays out the common minimum rails but then allows others to build a market economy around them: allowing others to use that ID to do a KYC and then provide services, allowing others to build that payment service app and then offer other things. So that is the amalgamation of these three things. And as Korstiaan mentioned, the two most important properties of digital public infrastructure are that it has to be open for all to access and it needs to be interoperable, interoperable across different systems in the country and beyond. Now, on the element of sovereignty and the role DPI plays in it: I believe DPI does empower governments and countries to exercise more sovereign control over their critical national digital assets. And we’ve seen this in the case of India. India gradually moved away from Visa and Mastercard.
Now, 80% of our digital financial transactions go through our national payment infrastructure, called UPI, not through Visa and Mastercard. Our national ID data, which includes our biometrics, runs on homegrown code, and the data sets stay within India. So in many senses, in India, Brazil, Singapore and Togo, we’ve seen how DPI has been a critical enabler of sovereignty. But at this stage I think we need to take a step back and actually analyze what digital sovereignty is and what it means in this context. I am going to break it down into three aspects: the data part, the hardware part, and the software part. Software is the easiest part. How does DPI enable sovereignty? A lot of DPIs are built on open source software. So essentially, you are taking something from GitHub and contextualizing it, making it feasible for your population; it essentially becomes your own source code, with the modifications that are required. We’ve seen this in the case of MOSIP, which is an ID provider; OpenG2P, which is a government-to-person payment service provider; Inji, which is a wallet; Mojaloop, again a fast payment system; OpenCRVS for civil registry; and so on. A lot of this software, available out there as open digital assets, is adapted by a country and contextualized to its own economy, and then the software is housed and hosted within the premises of that country and owned by that country. So software is one aspect of DPI sovereignty. The second is hardware. A lot of DPI-related work requires biometric scanners, cameras, card printers, and so on.
In this case, I think sovereignty is a bit more malleable, because you still require domestic and international vendors for procurement; there are still very few companies that make these kinds of standardized hardware needed to enroll large swathes of the population. And of course there are ingenious solutions required based on your context: in India’s case, we have something called a voice box, which sounds every time you make a payment, essentially building trust. So a lot of vendor management and procurement happens on the hardware side, both domestic and international. And then finally, the data aspect. Data cuts both ways in terms of DPI. It stays within the country if you have data localization norms, if you have data housed within the premises of particular ministries, and so on. But many countries also go for cloud-based data, because it is cheaper and easier to manage, and you can also switch clouds. So, to summarize how DPI can enable sovereignty: of course, use open technologies; I think that is the most important thing. But also, through regulation, use data localization norms, and get better deals with vendors; make sure, particularly for poorer countries, that a vendor does not come and harass you. If you are going with a vendor, make sure there is a high degree of vendor interoperability, so that if you want to move your data from Google Cloud to AWS or to Oracle, you can do it. Choices made at the very stage of conceptualization, design choices, matter a lot. Go open by design, use open source and open standards, and pick domestic vendors as much as you can. Don’t rely on the big vendors, because they have a lot of customers to service and you will be last on the list to be serviced. And I think the final and most important thing is the funding.
Try to get neutral donors who do not push a certain kind of technology on you. Try to find partners who are invested in the longevity of the system, not in a constituency back home that wants to sell you a certain kind of software whose servicing will then be super expensive. With that I will conclude. There is a big link, and we’ve seen in countries like India, South Africa and Brazil that DPI is enabling a high degree of sovereignty, but there are multiple facets to that sovereignty that still need to be figured out, tweaked and better managed. Thank you.
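[Editor’s note: Ritul’s point about vendor interoperability, being able to move your data from Google Cloud to AWS or Oracle, is ultimately a design choice made at conceptualization time: code against a provider-neutral interface rather than any one vendor’s SDK. The sketch below is a minimal, hypothetical illustration of that idea in Python; the interface, the in-memory backend, and the `migrate` helper are illustrative assumptions, not any real cloud API.]

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Provider-neutral storage interface: DPI services code against
    this, not against any one vendor's SDK."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

    @abstractmethod
    def keys(self) -> list: ...


class InMemoryStore(ObjectStore):
    """Stand-in for a real cloud backend (S3, GCS, OCI, ...)."""

    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def keys(self) -> list:
        return list(self._objects)


def migrate(src: ObjectStore, dst: ObjectStore) -> int:
    """Copy every object from one provider to another.
    Returns the number of objects moved."""
    count = 0
    for key in src.keys():
        dst.put(key, src.get(key))
        count += 1
    return count


# Switching vendors is then a bulk copy plus a one-line change
# of which backend the services are constructed with.
old_cloud, new_cloud = InMemoryStore(), InMemoryStore()
old_cloud.put("citizen/123/record", b'{"id": "123"}')
moved = migrate(old_cloud, new_cloud)  # moved == 1
```

Because every backend satisfies the same interface, the switch never touches application code, which is the practical meaning of "a high degree of vendor interoperability" in this talk.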

Rodolfo Avelino: Thank you. So, ladies and gentlemen, let me first introduce Dr. Mouamad Alsour, the founder and president of Sustainability Professionals of Saudi Arabia, whose groundbreaking work and initiatives in sustainability, sustainable design and green certifications have set benchmarks in the region and beyond. Renata, can you comment on how the Brazilian plans for artificial intelligence relate to today’s challenges around digital sovereignty? Thanks Rodolfo,

Renata Mielli: thanks Jeff for this workshop, and I thank Dr. Ming, Catherine and Korstiaan for setting the ground with a very broad perspective about sovereignty and infrastructure. Dr. Ming brought some concepts, Catherine brought the aspects of connectivity, and Korstiaan brought another aspect, telling us about DPI. I will answer your question, but as I am the last to speak, I am going to summarize how the Brazilian government sees this broader challenge regarding sovereignty in infrastructure. I will start by pointing out something very obvious, but nowadays we need to state the obvious: we live in a world in transition where, every day, all human activities and socioeconomic and cultural relations are mediated by information and communication technologies, by a broad digital ecosystem. The mastery of these new technologies reshapes the international geopolitical board and redefines the groups of countries that are producers and consumers of digital technologies, and this is our main concern these days. We brought this debate to our G20 presidency under the AI priority we led in the Digital Economy Working Group this year. Besides AI, we had three other priority issues, meaningful connectivity, DPI and information integrity, all starting from the perspective of how we can address the challenge of the asymmetries we have in terms of technology and emerging technologies. These asymmetries between and within countries exist in many areas and have been present for a long time. This scenario has deepened significantly with the emergence of large digital platforms which, in a way, determine the current economic model of society and set new forms of capital accumulation.
A few large companies, the big techs, operate in various areas of the economy but have platforms at the core of their operations, platforms that mediate commercial transactions, the flow of information, the provision of services, and, at this very moment, all the infrastructure and knowledge around the development and deployment of AI in the world. So, regarding AI and other emerging technologies, we are facing, at least for the moment, a scenario of deepening divides and inequalities, particularly in the Global South. In this sense, the debate about digital sovereignty is growing. This term refers, as Dr. Ming said, among other things, to a nation’s strategic autonomy, or its capacity to develop digital tools and artificial intelligence using its own infrastructure, data sets, workforce and businesses. Furthermore, it involves the ability to independently regulate and decide on its own digital and AI path, in a quest to ensure inclusive growth and sustainable development. Digital sovereignty refers to the ability of states to control their own infrastructure, emphasizing the position of each country in controlling ICTs, that is, a greater or lesser degree of autonomy to make choices and decisions in the field of technology and, of course, in the field of cooperation with other countries, which is also very important. So, for Brazil, there are some key perspectives. First, the role of the state: emphasizing the importance of government action through public policies that support the development of technological infrastructure, science and technology initiatives, and industrial policies to foster innovation and reduce dependency. This includes encouraging and promoting the use of national technologies and regulating the use of foreign technological tools.
I will speak next precisely about the AI plan, but we have other public policies regarding industrial and economic development, and other initiatives that compose a very large umbrella of public policies on investment in technology. The second point is the development of sovereign digital infrastructures: maintaining independent digital infrastructures that ensure national control and security. Then, meaningful connectivity. We also face profound challenges in the field of meaningful connectivity and access, such as how to reduce the prices of equipment, for example cell phones and computers, for the population. We cannot look only at the access aspect; we need to take a broader view when we talk about meaningful connectivity. After all, if we are talking about leaving no one behind, we need to develop capabilities locally to offer better services to society. For this, there must be meaningful connectivity, and we need to think strategically about how to build a permanent digital training process for the entire population, from young people to the elderly, for people living in both urban and rural areas. In addition to connectivity, we need to think about the availability of equipment, especially cell phones, which are the devices most used to access services, with the quality and minimal capacity to run applications and tools that use AI. Then, governance and regulation. In Brazil, we are now discussing a bill on AI regulation, and for us it is crucial to create frameworks for data governance, and platform regulations as well, that ensure accountability, transparency and the ethical use of technology.
In this scenario, in a country where the platforms and AI tools that make up the digital ecosystem are mostly international, it is necessary to discuss regulatory mechanisms that establish rules for the operation of these companies in the country, with transparency obligations about their systems, granular information about aspects that have economic, social and political impacts, conduct adjustment mechanisms, and many other regulatory aspects related to social and human rights. Then, security and privacy: developing sovereign security and privacy technologies. For AI to have a positive impact in catalyzing innovation aimed at reducing inequalities and other social issues, its development must be guided by this purpose from the outset. This includes its conception, production, programming, and the training data set structures used to enable AI to achieve its goals with accuracy and with linguistic, cultural and geographical diversity. Otherwise, AI could become yet another driver of inequality. This is why data sovereignty is central to the development, implementation and use of AI by countries that aspire to any degree of self-determination. These were the main focuses that Brazil brought to the Digital Economy Working Group this year. As the Brazilian presidency’s contribution, we produced a toolkit for AI readiness assessment, in partnership with UNESCO, our knowledge partner, with insights to leverage the potential of a holistic and inclusive approach to the ethical and responsible development, deployment and use of AI technologies; and also a mapping of AI adoption for enhanced public services, with insights into systematic monitoring and relevant opportunities and challenges, supporting ethical AI applications within and by governments. We consider that DPI is, as Ritul said, a very strong and potent tool for inclusive and sovereign digital development for countries.
In terms of public policies, the perspective I bring to this debate reflects our Brazilian AI plan, as Rodolfo said, led by the Ministry of Science, Technology and Innovation this year. The plan forecasts an investment of 23 billion reais, around 4 billion dollars, over the next four years, which for Brazil is a very large amount of money. In terms of infrastructure and sovereignty, I would highlight some investments from the Brazilian artificial intelligence plan: a national infrastructure program for AI of around 105 million dollars; a sustainability and renewable energies program for AI of around 83 million dollars; a data and software ecosystem structuring program for AI of 165 million dollars; a research and development program in AI of 144 million dollars; and the goal of an AI supercomputer that would put Brazil among the top five supercomputers in the world. These are just some highlights of our AI plan, which has five axes covering governance, private sector investment, re-skilling and capacity-building of the workforce, and infrastructure investment. That’s it for now. Thank you very much.

Rodolfo Avelino: Thank you. Thanks a lot for the very relevant points. Now we are going to open the floor to questions of the audience inside and online. First we can start inside.

Luca Belli: Hello, good afternoon. Luca Belli, professor at FGV Law School. Very happy to hear that the research I have been conducting with Professor Min Jiang has been presented here, and sorry if I was late; I was in another session. I was also very happy to see that a lot of the points we raise in the research are now well integrated, but maybe some of them are not so well integrated. Let me give you a very good example, because we have been doing a lot of research on AI sovereignty over the past couple of years, and of course connectivity is one of the points we stress as essential to achieving AI sovereignty. Let me give you a very concrete example that also speaks to the debate on DPI that was brought up here. Most Global South countries, including Brazil, do not have meaningful connectivity. Most of the population is connected through zero-rating plans, so basically to a very small selection of apps, mainly the Meta family of apps. To give you concrete details that friends from CETIC here can confirm, thanks to a very good study on meaningful connectivity they have done this year, 78% of the population in Brazil does not have meaningful connectivity; only 22% are meaningfully connected. What does this mean concretely? The Brazilian government is putting a lot of money, primarily into software and data, with the AI plan. But even if we have the best possible language models trained with Brazilian data, if all Brazilians only access Meta AI through WhatsApp, which is zero-rated, and no one is able to access the fantastic new domestic models created thanks to the plan, that is not the best way of directing public investment. This is because access is an incredibly relevant variable in this context.
As you were saying, and as we have been demonstrating with research, the fact that 78% of the population simply access Meta AI, and will never access Brazilian technology because they will keep not having money to pay for full internet connectivity and will keep being directed only to Meta, Facebook and WhatsApp, is an enormous impediment to national innovation. It frustrates the very good logic of putting public money into improving national research and development, because at the end of the day consumers will not use it, and will not only keep using non-Brazilian technology but also train it for free. So the entire logic here is a little bit frustrated. And let me give you a very good example of an institution in Brazil that has understood this logic very well: the Brazilian Central Bank. When they introduced PIX, our UPI, our Brazilian digital public infrastructure for payments, WhatsApp wanted to launch WhatsApp Payments, but the Central Bank blocked and suspended it, and the rationale was precisely that if it had launched before PIX, everyone in Brazil would have used only WhatsApp Payments. We would not be here today praising PIX as a success story if the Brazilian Central Bank had not suspended WhatsApp Payments until PIX entered into force; otherwise everyone in Brazil would be using only WhatsApp Payments, and nobody would even know what PIX is. So these are points that must be considered. I know very well that the Brazilian AI plan does not consider connectivity, but I think that is a mistake, and, as you were saying, it is an essential point and should be brought into the picture; otherwise we risk spending a lot of public money for nothing. Thank you very much.

Rodolfo Avelino: Thank you for the question, Luca. Now let’s go to the online questions. Okay, thank you very much.

Jose Renato: My name is Jose Renato. I am a researcher at the University of Bonn Sustainable AI Lab and also a co-founder of LAPIN, a non-profit organization in Brazil. Well, thank you so much, amazing insights. I actually wanted to ask the presenter and speaker from Georgia. I’m sorry, I didn’t get your name; I really apologize for that.

Ekaterine Imedadze: No worries, it’s Eka. You can call me Eka.

Jose Renato: Okay, nice to meet you. I was wondering if you could talk a little more about the data center related initiatives in Georgia, and also share how you are thinking about embedding this within energy infrastructure and water infrastructure, as this has been a very wide and hot topic in the last few months, I would say. So if you could share some thoughts about that, and also how the Brazilian government is thinking about this kind of thing, I think it would be interesting to hear. Thank you very much indeed.

Ekaterine Imedadze: Thank you so much for an amazing question. You actually pointed to topics that I missed and wanted to share. On the data center topic, what we have under discussion now is, first of all, as a regulator and a state representative, working a lot on enabling access to the existing internet infrastructure, opening the market, and building an IXP, a neutral internet exchange point. This is the first step; it is an ongoing process and almost done, and it is the first step toward enabling a real data center. Another topic is the resilience of infrastructure and, given that we are quite a small country, finding the geography where a data center is best located, also from the energy point of view. The good side of the story is that Georgia is a green-energy-producing country, so we can ensure that locally produced energy will be green energy, which is a very important component of building the right data center and attracting investors to these kinds of projects. Related to this, one thing is producing green energy; another is having geographic locations where energy efficiency will be best. Amazon actually did some research showing that such energy-efficient locations are present in Georgia. So this is at a projected level, but there is a lot of work still to be done. Physical security aspects are very important and still need to be resolved. Another part is energy prices: we have quite competitive electricity prices. So those are the different components of the project we need to solve and put together.
Yes, and most importantly, we need to understand which financing model will work best, to make this not only a Georgia-specific project but a regional one. Investment options are on the table: whether the state should be part of it, whether it should be totally public, or whether it should be a public-private partnership, et cetera. Those are points still to be resolved. Thank you so much.

Rodolfo Avelino: Thank you. Let’s do a round of two more questions, and then the speakers will answer.

Oms Juliana: Yes, I’ll just read the online questions, and I think I have one more from the audience here, and then we’ll do a round of the speakers answering, OK? From the Zoom questions, we have Azeem asking that he would like to learn about the Peace Cable that Meta is investing in. So maybe again to Eka, about platforms investing in cables. Another question from Van: digital public infrastructure, if translated into other languages, can be translated as state infrastructure, which would then be controlled or owned by the state. Is there a clear and widely accepted understanding of what DPI is? And finally, a question to Dr. Min: examining digital sovereignty as a supranational issue, how does regime type influence collaboration? Are democratic regimes more likely to cooperate than authoritarian ones, or is this an outdated assessment? I think we can take one more here? OK, I think this is the last one because of time.

Audience: No, it was the same. So I had two questions there. We have discussed, it was my question that you have read. Yes, it is. Do you hear me? OK, so I have two questions here. We have discussed several activities for infrastructure development, but they were almost all about regional connectivity. So the question is: can digital sovereignty over digital infrastructure that has a regional impact be used as a weapon against other countries? And if yes, how, and how can that be prevented? And a small comment: many regional projects require several states to get engaged. So how can we ensure proper stability and productive management of the infrastructure, taking into account the challenges we have discussed today about digital sovereignty and digital public infrastructures? That was the question.

Rodolfo Avelino: Thank you for the questions. And when answering, please also give me your closing lines and final comments. Can we start in the same order, with Dr. Min?

Min Jiang: Sure. Thank you for the great question, Nada. I think it’s a tricky question, and I will make two points in relation to it. First of all, traditional notions or definitions of sovereignty are usually predicated on nation states having a form of autonomy or self-determination, but do not take into account, in reality, the very notion of power. Small nations and small states know this very well, especially in the digital age. Big tech has power, including financial power, that can easily eclipse that of small nation states. In fact, if one examines the TeleGeography map of global undersea cables that our previous speaker referred to early on, companies like Amazon, Google, and Facebook all have their own dedicated infrastructure at that level. So small countries, not only in the global South but also, for example, in the EU, recognize that in order to be sovereign they must also cooperate and build alliances. That is an important point to recognize. Also, in a previous speaker Ritul’s account of digital public infrastructure, he makes the case that nation states’ digital development, especially in the global South, has to draw upon open source and free software, which are very important notions for common digital sovereignty. So I think we need to disrupt how we think about sovereignty to begin with. Second, the question is about regime types. That is a very important notion, for sure. But we also need to recognize that regime types are labels we attach to nations, and nations change and evolve. The political system in the United States, my own country, has evolved a lot, as we have seen. We just elected Donald Trump for a second term, right? So how we label countries, and what type of regime they are, is becoming more and more challenging. And the United States is a country with great power, and with great power comes great responsibility.
And what the NSA, for example, implemented for a long time, and what big tech are doing, perhaps challenge this very notion of what it means to be democratic. I think we are at an age where the older conceptualizations, infrastructures, and legal regimes for thinking about democracy are somewhat breaking down. That is why we are seeing this resurgence of claims to digital sovereignty, with different actors, national or international, hoping to gain more independence, autonomy, and self-determination. So yes, I’m happy to carry on the conversation through some other means, but I will restrict my comments to the BoF for now. Thank you.

Rodolfo Avelino: Thank you very much, Min. Ekaterine?

Ekaterine Imedadze: Yes, very challenging questions, let me put it this way. They go exactly to the underlying challenges of sovereignty, from the infrastructure level up to the service and data protection levels. What I wanted to outline is that, on the one hand, sovereignty can be used as a kind of weapon, a kind of strength: a country with totally sovereign infrastructure can withhold access and, conceptually, isolate another country. On the other hand, it requires a lot of effort when we speak about the regional perspective, about building a regional concept of sovereignty, because countries with very different political views have to sit together and agree on the major terms. But I think this is why the debate on digital sovereignty is an open and evolving one: countries are still trying to understand the basic and minimal concepts of, on the one hand, independence of infrastructure and data, and, at the same time, a shared framework for data independence and protection. Without these touch points among countries with very different political views or geopolitical locations, it is impossible to make this very interconnected world work. We will need more and more interconnected data centers, otherwise it will not work at all. But at the same time, countries and regions are required to protect themselves by owning some of the infrastructure. So I think this is the thin line where we all need to agree and where we need to introduce some kind of framework. For Georgia, what I can answer is that we decided to go with the existing EU framework: on the data level, the GDPR, the legal framework for data protection provided by the EU, embodies the sovereignty concept that is acceptable for us.
And we think that this is the best model we can introduce, and it should work for our region as well in the current situation. This is my answer.

Rodolfo Avelino: Thank you. Korstiaan, please.

Korstiaan Wapenaar: I’ll keep my closing remarks very short. Firstly, just to say thank you to everyone, to the organizers, and to my fellow panelists for the interesting discussion. Without stepping ahead of the questions passed to yourself, maybe just a couple of thought-provokers on the regional considerations for sovereignty: how does one manage the aspirations of the AU to develop a continental identity system, how would that be governed and managed, to what extent is that a risk, and how do we prevent exclusion across different markets? And then I’m curious to hear from the fellow panelist following me his view on the multiple definitions of DPI and what that big and small ‘P’ looks like, as we think about our colleague, Mr. Yves. Thank you very much, everyone.

Ritul Gaur: Thanks, Korstiaan. To answer the first question, which I’m going to take on: how do we ensure stability and management of DPI? I don’t have a regional answer to that, but in a geographical context, what we need to do is ensure that you have the highest-grade data centers, that you have security assessments, and that you have regular audits. And you could do similar things in a cross-border context. If you have an ID, payment, or data-sharing system in a regional context, and we don’t have that in India’s case, but as we build, I think these are the three technical metrics that will follow, and there will also be some non-technical ones, the governance side, which will also follow. Now, answering the perplexing puzzle of DPI, which is: what is the ‘P’ about? Should DPIs be controlled or owned by the state? To start off with, there is no clear definition of DPI; it is at a very evolving stage. The G20 definition is as confusing as it is clarifying, and I take some blame for that. But if you think about it, it has to be understood on a graded basis. Something like identity is a very sovereign function: saying ‘you are you’ can be trusted to a sovereign state more than to any other entity. So in India’s case, the ID system sits under the Ministry of Electronics and IT. It is a statutory organization backed by law, and the entity is staffed with civil servants, et cetera. So it is a very state-driven function. On the contrary, the payment system is rather fluid. It has a non-profit structure, a Section 8 company, which is a non-profit in India’s case. It is a conglomeration of different banks and the central bank coming together to build just the protocol; the rest is actually run by the different banks who come and participate on top of it. The role of the state in that case is the regulator.
The state is only the regulator in India’s payment scenario. Similarly, with the document and data-sharing wallet, again, a Section 8 company created it, and the state only regulates how you can share your credentials, et cetera. So I think it will differ on a country-to-country basis. I remember some time back I was in Ghana, talking to a bureaucrat there, and he said, in our country, everything is a very private-sector-driven phenomenon, so how do we do it? So it will be a very country-to-country phenomenon. But in my limited experience, most ID systems, and I think Korstiaan would agree, we met last week in Bangalore, most ID systems sit with either the home ministry or the IT ministry, et cetera. The identity function, which is so central to any targeted beneficiary delivery, essentially establishing your relationship with the state, is done by the state. But other DPI functions can be performed by different partners. In fact, SingPass, PayNow, PromptPay, et cetera, are payment and other systems created by the private sector in conglomeration with the state. The state’s role, at least in those cases, is to be a regulator, to be an observer ensuring that nobody creates a disproportionate monopoly and that everybody plays by the rules: to set the broad rules of the game and then let the players come and build, based on what the purpose itself demands. If something requires a high degree of trust, authenticity, et cetera, the state is the best entity to do it. If it can be created by different market players coming together, the state can be an observer or regulator. That’s my view. Finally, on DPI and sovereignty, I think there is a very important link to be made there.
My only concern is that as most countries go on the quest to build their DPIs, we should not lose sight of cross-border interoperability between those different DPIs. As we all go toward making our own payment systems, our own ID systems, et cetera, we also need to be cognizant that building our own isn’t enough; we should also be thinking of regional blocs, of cross-border interoperability, et cetera. So do not lose that. Otherwise, in the broader scheme of things, DPI is a big-time enabler of sovereignty. Thank you.

Rodolfo Avelino: Thank you very much, Ritul. Renata, your final answer.

Renata Mielli: Yes, thank you. Thank you very much for this interesting panel. I will start by saying that we need to see digital sovereignty as complementary to cooperation; they are not simply different things. Since each country faces different realities in the digital sphere, cooperation will be fundamental; without cooperation, we are not going to achieve sovereignty. Establishing mechanisms for regional cooperation that create complementary strategies based on each country’s capabilities may be a more effective and faster path toward reducing inequalities and achieving greater autonomy for nations. I think we have to keep this in mind. Regarding Luca’s question about connectivity in Brazil: he knows I am profoundly and deeply critical of zero rating. But it is important to say that in Brazil we have a huge public policy of strategic government investment called the PAC, the Growth Facilitation Program, and the connectivity policies are part of the PAC. With 28 billion reais, around 5 billion dollars, it invests in building connectivity technologies, 5G, 4G, backhauls, backbones, school connectivity, and health system connectivity. So there is a public policy being carried out inside the Ministry of Communications. And as I see it, and as the government and my Minister of Science, Technology and Innovation see it, we cannot wait until the connectivity problems are solved. I completely agree with you: Brazil does not have meaningful connectivity for its whole population. But we need to start building expertise and investment in infrastructure and in the whole economic chain of AI, because we need to start from some point. So these are two policies that need to move together: the Ministry of Communications is dealing with connectivity and making the investments.
And we, as the Ministry of Science, Technology and Innovation, together with other ministries, are focused on how to build capabilities in terms of re-skilling, in terms of infrastructure, and in building AI applications. So that’s my point: we need to do both things together if we want to achieve some autonomy, some sovereignty, in Brazil regarding digital technology and AI. Thank you very much for the opportunity. And that’s it.

Rodolfo Avelino: Thank you to our speakers for their great contributions and to everyone in the audience. This has been a very good workshop. We thank the IGF organizers for facilitating this valuable discussion. Thank you all.

M

Min Jiang

Speech speed

137 words per minute

Speech length

1598 words

Speech time

699 seconds

Digital sovereignty has multiple meanings and perspectives beyond just nation-states

Explanation

Digital sovereignty is not limited to nation-states but encompasses various perspectives including supranational, network, corporate, personal, post-colonial, and common digital sovereignty. This broader conceptualization complements the multistakeholder model by highlighting underlying power issues.

Evidence

The speaker references a book she co-edited titled ‘Digital Sovereignty in the BRICS Countries’ which explores these different perspectives.

Major Discussion Point

Digital Sovereignty Concepts and Frameworks

Agreed with

Ritul Gaur

Renata Mielli

Agreed on

Digital sovereignty is multifaceted and goes beyond nation-states

Differed with

Ritul Gaur

Differed on

Role of state in digital sovereignty

Small countries need to cooperate and build alliances to achieve digital sovereignty

Explanation

Traditional notions of sovereignty based on nation-state autonomy do not account for power dynamics in the digital age. Small nations and states recognize the need to cooperate and form alliances to achieve digital sovereignty, especially in the face of big tech companies’ power.

Evidence

The speaker mentions that EU countries recognize the need to cooperate to be sovereign, and that small countries in the global South also need to build alliances.

Major Discussion Point

Digital Sovereignty Concepts and Frameworks

Open source technologies are important for AI sovereignty in developing countries

Explanation

The speaker emphasizes the importance of open source and free software for digital sovereignty, especially for developing countries. These technologies allow nations to develop their digital infrastructure independently and adapt it to their local context.

Evidence

The speaker references Ritul’s account of public digital infrastructure and the need for global South countries to draw upon open source and free software.

Major Discussion Point

AI Development and Sovereignty

E

Ekaterine Imedadze

Speech speed

112 words per minute

Speech length

1948 words

Speech time

1038 seconds

Georgia faces challenges in developing data centers and connectivity infrastructure

Explanation

Georgia is working on enabling access to existing internet infrastructure, opening markets, and building neutral exchange points. The country is also considering factors such as energy efficiency, green energy production, and physical security for data center development.

Evidence

The speaker mentions ongoing projects to build IXPs and neutral exchange points, as well as research on energy-efficient locations for data centers in Georgia.

Major Discussion Point

Digital Infrastructure and Connectivity Challenges

K

Korstiaan Wapenaar

Speech speed

139 words per minute

Speech length

1062 words

Speech time

457 seconds

African countries struggle with fiscal and capacity constraints for digital infrastructure

Explanation

Many African countries face significant fiscal constraints and lack of expertise, which impacts the rollout of both hard and soft digital infrastructure. This has led to underperformance in digital transformation and e-government development.

Evidence

The speaker cites the e-Government Development Index, showing that only four African countries have scored above the global average.

Major Discussion Point

Digital Infrastructure and Connectivity Challenges

DPI enables governments to deliver services at scale and reach people in need

Explanation

Digital Public Infrastructure (DPI) allows governments to deliver services to people and organizations at scale. This is particularly important for African states that have struggled to deliver services effectively in the past.

Evidence

The speaker mentions that digital transformation of the public sector is a prerequisite for socioeconomic development in Africa.

Major Discussion Point

Role of Digital Public Infrastructure (DPI)

Agreed with

Ritul Gaur

Renata Mielli

Agreed on

Importance of Digital Public Infrastructure (DPI) for sovereignty and development

R

Ritul Gaur

Speech speed

168 words per minute

Speech length

2274 words

Speech time

808 seconds

Digital sovereignty enables countries to exercise more control over critical digital assets

Explanation

Digital sovereignty allows countries to have more control over their critical national digital assets. This includes the ability to develop and operate their own technologies and infrastructure independently.

Evidence

The speaker cites India’s example, where 80% of digital financial transactions now go through the national payment infrastructure (UPI) instead of Visa or Mastercard.

Major Discussion Point

Digital Sovereignty Concepts and Frameworks

DPI components like digital ID and payment systems can enhance sovereignty

Explanation

Digital Public Infrastructure components such as digital identity systems and payment systems can enhance a country’s digital sovereignty. These systems allow countries to have more control over critical digital functions and reduce dependence on foreign technologies.

Evidence

The speaker mentions India’s national ID system (Aadhaar) and payment system (UPI) as examples of DPI enhancing sovereignty.

Major Discussion Point

Role of Digital Public Infrastructure (DPI)

Agreed with

Korstiaan Wapenaar

Renata Mielli

Agreed on

Importance of Digital Public Infrastructure (DPI) for sovereignty and development

The governance of DPI can vary from state-controlled to private sector-driven

Explanation

The governance of Digital Public Infrastructure can vary depending on the specific component and country context. Some DPI components, like identity systems, are often state-controlled, while others, like payment systems, may involve more private sector participation.

Evidence

The speaker contrasts India’s ID system (state-controlled) with its payment system (involving private banks but regulated by the state).

Major Discussion Point

Role of Digital Public Infrastructure (DPI)

Agreed with

Min Jiang

Renata Mielli

Agreed on

Digital sovereignty is multifaceted and goes beyond nation-states

Differed with

Min Jiang

Differed on

Role of state in digital sovereignty

DPI should be designed for cross-border interoperability

Explanation

As countries develop their own Digital Public Infrastructure, it’s important to consider cross-border interoperability. This ensures that different national systems can work together and facilitates regional cooperation.

Evidence

The speaker warns against losing sight of cross-border interoperability while countries focus on building their own DPIs.

Major Discussion Point

Role of Digital Public Infrastructure (DPI)

R

Renata Mielli

Speech speed

107 words per minute

Speech length

1651 words

Speech time

922 seconds

Digital sovereignty should be seen as complementary to cooperation between countries

Explanation

Digital sovereignty and cooperation between countries are not mutually exclusive but complementary. Given the different realities faced by each country in the digital realm, cooperation is fundamental to achieving sovereignty.

Evidence

The speaker suggests that establishing mechanisms for regional cooperation based on each country’s capabilities may be a more effective path toward reducing inequalities and achieving greater autonomy.

Major Discussion Point

Digital Sovereignty Concepts and Frameworks

Agreed with

Min Jiang

Ritul Gaur

Agreed on

Digital sovereignty is multifaceted and goes beyond nation-states

Brazil is investing in connectivity infrastructure alongside AI development

Explanation

Brazil is implementing public policies for strategic investment in connectivity infrastructure through the Growth Facilitation Program (PAC). This includes investments in 5G, 4G, backhauls, backbones, and connectivity for schools and health systems.

Evidence

The speaker mentions a 28 billion reals (around $5 billion) investment in building connectivity technologies and infrastructure.

Major Discussion Point

Digital Infrastructure and Connectivity Challenges

Brazil is investing significantly in AI development and infrastructure

Explanation

Brazil has developed an AI plan that includes substantial investments in various aspects of AI development and infrastructure. This plan aims to build expertise and invest in the entire economic chain of AI.

Evidence

The speaker mentions a planned investment of 23 billion reais (around $4 billion) over the next four years for AI development in Brazil.

Major Discussion Point

AI Development and Sovereignty

Agreed with

Korstiaan Wapenaar

Ritul Gaur

Agreed on

Importance of Digital Public Infrastructure (DPI) for sovereignty and development

AI development must be guided by reducing inequalities from the outset

Explanation

The development of AI should be guided by the goal of reducing inequalities and addressing social issues from the very beginning. This includes considerations of accuracy, linguistic, cultural, and geographical diversity in AI development.

Major Discussion Point

AI Development and Sovereignty

Data sovereignty is central to AI development and self-determination

Explanation

Data sovereignty is crucial for countries aspiring to any degree of self-determination in AI development and implementation. Control over data is seen as a key aspect of digital sovereignty in the context of AI.

Major Discussion Point

AI Development and Sovereignty

L

Luca Belli

Speech speed

162 words per minute

Speech length

652 words

Speech time

241 seconds

Lack of meaningful connectivity in Brazil limits access to domestic AI technologies

Explanation

Despite Brazil’s investments in AI development, the lack of meaningful connectivity for a large portion of the population limits access to domestic AI technologies. This situation may lead to most Brazilians only accessing foreign AI technologies through zero-rated apps.

Evidence

The speaker cites a study showing that 78% of the population in Brazil does not have meaningful connectivity, with many relying on zero-rating plans that primarily include Meta’s family of apps.

Major Discussion Point

Digital Infrastructure and Connectivity Challenges

Agreements

Agreement Points

Digital sovereignty is multifaceted and goes beyond nation-states

Min Jiang

Ritul Gaur

Renata Mielli

Digital sovereignty has multiple meanings and perspectives beyond just nation-states

The governance of DPI can vary from state-controlled to private sector-driven

Digital sovereignty should be seen as complementary to cooperation between countries

Speakers agree that digital sovereignty is a complex concept that involves various actors and perspectives, not just nation-states. It can include different governance models and requires cooperation between countries.

Importance of Digital Public Infrastructure (DPI) for sovereignty and development

Korstiaan Wapenaar

Ritul Gaur

Renata Mielli

DPI enables governments to deliver services at scale and reach people in need

DPI components like digital ID and payment systems can enhance sovereignty

Brazil is investing significantly in AI development and infrastructure

Speakers emphasize the crucial role of Digital Public Infrastructure in enhancing digital sovereignty and enabling governments to deliver services effectively, particularly in developing countries.

Similar Viewpoints

Developing countries and smaller nations face significant challenges in achieving digital sovereignty and building digital infrastructure, often requiring cooperation and support.

Min Jiang

Ekaterine Imedadze

Korstiaan Wapenaar

Small countries need to cooperate and build alliances to achieve digital sovereignty

Georgia faces challenges in developing data centers and connectivity infrastructure

African countries struggle with fiscal and capacity constraints for digital infrastructure

Unexpected Consensus

Importance of open technologies and interoperability

Min Jiang

Ritul Gaur

Open source technologies are important for AI sovereignty in developing countries

DPI should be designed for cross-border interoperability

Despite coming from different perspectives, both speakers emphasize the importance of open technologies and interoperability in achieving digital sovereignty, which is somewhat unexpected given the potential tension between sovereignty and openness.

Overall Assessment

Summary

The speakers generally agree on the multifaceted nature of digital sovereignty, the importance of Digital Public Infrastructure, and the need for cooperation and open technologies in achieving sovereignty. They also recognize the challenges faced by developing countries in building digital infrastructure.

Consensus level

There is a moderate to high level of consensus among the speakers on the main themes. This suggests a growing understanding of the complexities of digital sovereignty and the need for nuanced approaches that balance national interests with international cooperation and open technologies. The implications of this consensus could lead to more collaborative efforts in developing digital infrastructure and policies that support both sovereignty and global interoperability.

Differences

Different Viewpoints

Role of state in digital sovereignty

Min Jiang

Ritul Gaur

Digital sovereignty has multiple meanings and perspectives beyond just nation-states

The governance of DPI can vary from state-controlled to private sector-driven

Min Jiang emphasizes a broader conceptualization of digital sovereignty beyond nation-states, while Ritul Gaur focuses more on the varying degrees of state control in DPI governance.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the role of the state in digital sovereignty, the balance between national autonomy and international cooperation, and the effectiveness of current connectivity initiatives in developing countries.

Difference level

The level of disagreement among speakers is moderate. While there are some differing perspectives on specific aspects of digital sovereignty and infrastructure development, there is a general consensus on the importance of these issues for national development and the need for some form of cooperation. These differences highlight the complexity of implementing digital sovereignty in practice, especially for developing countries balancing national interests with global technological trends.

Partial Agreements

Partial Agreements

Both speakers acknowledge the importance of connectivity for AI development in Brazil, but disagree on the effectiveness of current approaches. Mielli emphasizes ongoing investments, while Belli argues that these efforts are insufficient to address the lack of meaningful connectivity.

Renata Mielli

Luca Belli

Brazil is investing in connectivity infrastructure alongside AI development

Lack of meaningful connectivity in Brazil limits access to domestic AI technologies

Similar Viewpoints

Developing countries and smaller nations face significant challenges in achieving digital sovereignty and building digital infrastructure, often requiring cooperation and support.

Min Jiang

Ekaterine Imedadze

Korstiaan Wapenaar

Small countries need to cooperate and build alliances to achieve digital sovereignty

Georgia faces challenges in developing data centers and connectivity infrastructure

African countries struggle with fiscal and capacity constraints for digital infrastructure

Takeaways

Key Takeaways

Digital sovereignty has multiple meanings and perspectives beyond just nation-states, including supranational, corporate, personal, and common digital sovereignty.

Digital infrastructure and meaningful connectivity remain major challenges for many countries, especially in the Global South.

Digital Public Infrastructure (DPI) is seen as an important tool for enhancing digital sovereignty and delivering services at scale.

AI development and data sovereignty are increasingly important for countries seeking technological autonomy.

Cooperation between countries and open technologies are crucial for achieving digital sovereignty, especially for smaller nations.

Resolutions and Action Items

Brazil plans to invest 23 billion reais (around $4 billion) in AI development over the next four years

Georgia is working on enabling access to existing internet infrastructure and building neutral exchange points as steps towards data center development

Unresolved Issues

How to balance national digital sovereignty efforts with the need for cross-border interoperability

How to address the lack of meaningful connectivity in many countries while simultaneously investing in advanced technologies like AI

The exact definition and scope of Digital Public Infrastructure (DPI) and its governance models

How to ensure proper stability and productive management of regional digital infrastructure projects

Suggested Compromises

Adopting a broader framework of digital sovereignty that includes multiple perspectives beyond just nation-states

Using open source technologies and open standards to build critical national digital infrastructure

Balancing state control and private sector involvement in DPI development based on the specific function and country context

Pursuing digital sovereignty efforts alongside regional cooperation and alliance-building

Thought Provoking Comments

Digital sovereignty as broadly conceptualized complements the multistakeholder model by foregrounding the underlying power issues that have prevented multistakeholderism from being more widely adopted.

Speaker: Min Jiang

Reason: This comment reframes digital sovereignty not as opposed to multistakeholderism, but as complementary to it. It suggests that digital sovereignty can address power imbalances that have limited multistakeholder approaches.

Impact: This set the tone for considering digital sovereignty as a nuanced concept that goes beyond just state control, influencing subsequent speakers to discuss various dimensions and stakeholders involved in digital sovereignty.

DPI allows you to have shared means to many ends because essentially it’s laying out the most common rails, but then allowing others to build a market economy around it, allowing others to use that ID to do a KYC to then provide services, allowing others to build that payment service app to then offer other things.

Speaker: Ritul Gaur

Reason: This comment provides a clear explanation of how Digital Public Infrastructure (DPI) can enable both public and private sector innovation, highlighting its role in fostering a digital ecosystem.

Impact: It shifted the discussion towards considering DPI as a foundation for broader digital development, rather than just a government-controlled system. This influenced later comments on the role of private sector and community in DPI.

78% of the population in Brazil does not have meaningful connectivity. It means that only 22% are meaningfully connected. What does it mean concretely? I think the Brazilian government is putting a lot of money. Actually, we are analyzing this primarily in software and data with the AI plan. But even if we have the best possible language models trained with Brazilian data, if all Brazilians only access Meta AI through WhatsApp that is zero rated, whereas no one else will be able to access the new fantastic domestic models created thanks to the plan, that is not the very best way of directing the public investment.

Speaker: Luca Belli

Reason: This comment highlights a critical gap between infrastructure development and actual access, challenging the effectiveness of current digital sovereignty efforts.

Impact: It prompted a response from the Brazilian representative about ongoing connectivity efforts and sparked a discussion about the need to address both infrastructure and access simultaneously in digital sovereignty initiatives.

We need to see digital sovereignty as complementary with cooperation. It’s not just different things. Since each country faces different realities in these areas, in digital areas, cooperation will be fundamental. Without cooperation, we are not going to achieve sovereignty.

Speaker: Renata Mielli

Reason: This comment synthesizes the discussion by emphasizing that sovereignty and cooperation are not mutually exclusive, but rather interdependent in the digital realm.

Impact: It provided a concluding perspective that tied together various threads of the discussion, emphasizing the need for both national efforts and international cooperation in achieving digital sovereignty.

Overall Assessment

These key comments shaped the discussion by expanding the concept of digital sovereignty beyond state control to include multistakeholder approaches, the role of digital public infrastructure, the importance of meaningful connectivity, and the need for international cooperation. They challenged simplistic notions of sovereignty and highlighted the complex interplay between national interests, private sector involvement, and global collaboration in the digital realm. The discussion evolved from theoretical concepts to practical challenges and potential solutions, emphasizing the need for nuanced, context-specific approaches to digital sovereignty that balance national autonomy with international cooperation and equitable access.

Follow-up Questions

How can meaningful connectivity be improved to ensure wider access to national AI technologies?

Speaker: Luca Belli

Explanation: This is important because without meaningful connectivity, investments in national AI technologies may not reach the majority of the population, who may only have access to foreign technologies through zero-rating plans.

How are data center initiatives in Georgia being integrated with energy and water infrastructure?

Speaker: Jose Renato

Explanation: This is important for understanding the holistic approach to infrastructure development and its environmental impact.

What are the details of the Peace Cable that Meta is investing in?

Speaker: Azeem

Explanation: This information is relevant to understanding private sector investments in digital infrastructure.

Is there a widely accepted understanding of what Digital Public Infrastructure (DPI) is?

Speaker: Van

Explanation: A clear definition is important for consistent policy-making and implementation across different countries.

How does regime type influence collaboration on digital sovereignty issues?

Speaker: Unnamed participant

Explanation: Understanding this could provide insights into international cooperation patterns in digital governance.

How can digital sovereignty on infrastructure with regional impact be used as a weapon against other countries, and how can this be prevented?

Speaker: Audience member

Explanation: This is important for understanding potential geopolitical implications of digital infrastructure development.

How can proper stability and productive management of regional digital infrastructure projects be ensured, given the challenges of digital sovereignty?

Speaker: Audience member

Explanation: This is crucial for successful implementation of cross-border digital infrastructure initiatives.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #75 An Open and Democratic Internet in the Digitization Era

Session at a Glance

Summary

This discussion focused on fostering an open and democratic internet in the digitalization era, with an emphasis on improving digital governance. Speakers from various backgrounds explored how to maintain transparency, accountability, and user privacy while preventing monopolistic control by tech giants and internet fragmentation.

Key points included the importance of multi-stakeholder governance models and personal digital sovereignty. Speakers emphasized the need for raising awareness about internet infrastructure and open standards among users. The role of regulation was discussed, with examples like GDPR highlighted as attempts to protect user privacy, though potential unintended consequences were noted.

Participants stressed the importance of international cooperation, particularly between developed and developing countries, to harmonize standards and practices. The need for flexible, adaptable regulations that can keep pace with technological change was emphasized. Speakers also discussed the importance of impact assessments and dialogue between technical communities and policymakers.

Digital literacy programs, especially those targeting girls and women, were proposed as crucial for empowering citizens to participate in discussions about internet governance. The importance of protecting and enforcing open standards and digital commons was highlighted, with suggestions for government support and financing of such initiatives.

Overall, the discussion underscored the complex challenges in maintaining an open, fair, and accessible digital ecosystem, emphasizing the need for collaboration between various stakeholders to address these issues effectively.

Keypoints

Major discussion points:

– The importance of open standards and a multi-stakeholder model for internet governance

– Balancing innovation with regulation and privacy protection

– Challenges of fragmentation and monopolistic control by tech giants

– Need for digital literacy and awareness among users

– Role of different stakeholders in supporting an open and fair digital ecosystem

The overall purpose of the discussion was to explore how to foster open digital architectures that support transparency and accountability while addressing challenges like privacy erosion and internet fragmentation. The speakers aimed to identify actionable steps different stakeholders can take to promote a democratic and accessible internet.

The tone of the discussion was largely collaborative and solution-oriented. Speakers built on each other’s points and offered perspectives from their diverse backgrounds in technology, law, policy, and government. There was a sense of urgency about the need to protect open standards and digital commons, but also optimism about potential solutions if stakeholders work together. The tone became more action-focused towards the end as speakers proposed concrete steps different groups could take.

Speakers

– MODERATOR: Session moderator

– Edmon Chung: CEO of DotAsia organization

– Henry Verdier: Ambassador for Digital Affairs, French Ministry of Defense and Foreign Affairs

– Paola Galvez: Civil Society, Latin American and Caribbean Group (GRULAC)

– Amrita Choudhury: Director of CCEI

– Nur Adlin Hanisah Shahul Ikram: Data privacy specialist at the National Islamic University, Malaysia

– Barkha Manral: Online moderator

Full session report

Fostering an Open and Democratic Internet in the Digital Era

This discussion brought together experts from various backgrounds to explore how to maintain an open and democratic internet in the face of rapid digitalisation. The speakers focused on improving digital governance while addressing challenges such as transparency, accountability, user privacy, monopolistic control by tech giants, and internet fragmentation.

Key Themes and Discussions

1. Importance of Open Standards and Historical Context

There was strong consensus among speakers on the critical role of open standards in fostering innovation and interoperability on the internet. Henry Verdier emphasised that open standards are foundational to technological innovation, providing historical context with the example of the telegraph and the creation of the ITU. He highlighted specific examples of open standards such as TCP/IP, the web, Wi-Fi, Bluetooth, Linux, MySQL, and Apache. Paola Galvez noted their importance in preventing lock-in to proprietary systems. Edmon Chung stressed the need to protect open standards from neglect in favour of closed ecosystems, while Amrita Choudhury added that open standards should incorporate human rights and privacy considerations.

However, challenges to open digital architecture were identified. These included monopolistic control by tech giants, the risk of internet fragmentation into isolated ecosystems, security concerns compared to proprietary technologies, and a lack of funding and incentives for open systems development.

2. Multi-stakeholder Governance Model

Speakers advocated for a multi-stakeholder approach to internet governance. Edmon Chung reframed the concept of democracy in this context, moving away from traditional voting models to a more inclusive, participatory approach. He emphasised that the multi-stakeholder model allows diverse groups to participate equally, specifically highlighting the importance of including youth and the technical community. Amrita Choudhury highlighted the need for dialogue between technical communities and policymakers, while Paola Galvez stressed the importance of public participation in regulatory processes.

3. Personal Digital Sovereignty

Edmon Chung emphasized the importance of personal digital sovereignty in his opening remarks. This concept underscores the need for individuals to have control over their digital identities and data, which is crucial in maintaining an open and democratic internet.

4. Balancing Regulation and Innovation

The discussion acknowledged the complex challenge of balancing innovation with regulation and privacy protection. Nur Adlin called for flexible, adaptable regulations to keep pace with technological change, mentioning the OECD recommendation for agile regulation governance. Amrita Choudhury emphasised the importance of impact assessments to avoid unintended consequences of regulation. Paola Galvez argued for technology-neutral and future-proof regulations.

Henry Verdier suggested imposing data portability and interoperability by default in public policies. Edmon Chung noted that the IETF now includes human rights and privacy considerations in protocol discussions, demonstrating a shift towards incorporating these concerns into technical standards.

5. Digital Literacy and Awareness

Speakers unanimously agreed on the critical need for improved digital literacy and awareness among users. Henry Verdier stressed the importance of raising awareness about how internet infrastructure works, including the need to educate friends and family about the distinction between internet infrastructure and specific companies or platforms. Paola Galvez emphasised the need for digital literacy programmes, especially for underrepresented groups such as girls and women. Edmon Chung argued that users need a better understanding of underlying technologies, while Nur Adlin highlighted academia’s role in researching ethical frameworks and offering digital literacy programmes.

Amrita Choudhury made a thought-provoking comment about the practical aspects of accessibility and usability, especially in developing countries. She emphasised the need for services to be easy to use, available in multiple languages, and mobile-friendly.

6. International Cooperation and Global Perspectives

The discussion underscored the importance of international cooperation in harmonising standards and practices. Paola Galvez mentioned the Council of Europe AI Convention and the UNESCO recommendation on the ethics of AI as examples of international standards. She also emphasized the importance of fostering international cooperation between developing and developed countries.

Nur Adlin provided concrete examples of how data privacy laws are evolving globally, including in non-Western countries. She mentioned the Kingdom of Saudi Arabia’s personal data protection law and Malaysia’s recent amendment to its data privacy law. This highlighted the dynamic nature of digital governance across different regions and the need for flexible, adaptable regulations that can accommodate diverse global contexts.

Edmon Chung identified improving collaboration between global multi-stakeholder models and local multilateral systems in internet governance as a critical issue for the coming years. This would help prevent unintended consequences of local legislation on global internet standards.

Unresolved Issues and Future Considerations

Despite the productive discussion, several issues remained unresolved:

1. How to effectively balance open standards with security concerns.

2. Specific ways to prevent fragmentation of the internet into isolated ecosystems.

3. How to increase funding and incentives for open systems development.

4. Methods to harmonise global multi-stakeholder models with local/regional regulations.

The speakers suggested some approaches to address these challenges, including flexible regulations that can keep pace with technological change while still protecting user rights, technology-neutral regulatory frameworks, and creating digital public infrastructure and goods with government support to complement market-driven development. Henry Verdier proposed creating a foundation to finance open standards, digital commons, and public goods as a potential solution to support these initiatives.

In conclusion, the discussion highlighted the complex challenges in maintaining an open, fair, and accessible digital ecosystem. It emphasised the need for collaboration between various stakeholders to address these issues effectively, while also recognising the importance of adapting approaches to diverse global contexts and rapidly evolving technologies.

Session Transcript

MODERATOR: As we are running about eight minutes behind, hello, everyone, and thank you for joining. Welcome to the "An Open and Democratic Internet in the Digitalization Era" session, which is organized by NetMission.Asia. This session is about the open nature of the Internet. As we may know, the Internet is the foundation of emerging technologies; it is open to all and interoperable, and people rely on its open protocols. In short, we are going to focus on preserving and upholding the foundational principles of the Internet by maintaining user-centric practices and advocating for the continued influence of open standards. Our goal is to prevent the transformation of the Internet into closed ecosystems. In this discussion, we are going to raise two policy questions examining the crucial issues that can impact the open nature of the Internet. We are going to begin by addressing the challenges posed to open standards in a rapidly evolving technological landscape. In this session we have as speakers Henry Verdier, Ambassador for Digital Affairs of France; Paola Galvez, founder and director of the Indoneia Lab; Amrita Choudhury, Director of CCEI; Edmon Chung, CEO of the DotAsia organization; and Nur Adlin Hanisah, data privacy specialist at the National Islamic University, Malaysia. So, first of all, I would like to welcome the speakers and put to them the very first question for their opening remarks.
So, the first question will be: how can we foster the deployment of open digital architectures that support transparency and accountability, while also preventing the erosion of user privacy, monopolistic control by tech giants, and the fragmentation of the internet into isolated ecosystems? I would like to invite the speakers to respond to this question based on their expertise. So, Edmon, would you like to go first and give your opening remarks in response to that question?

Edmon Chung: Sorry, is it? If you’re asking for me, sure. I guess it’s me, because the audio is coming through a little bit shaky. Hello, everyone. This is Edmon. First of all, I think the topic itself is very timely. In fact, maybe slightly overdue. This is something that is very important in terms of how we look at democratizing the governance of different platforms and how we utilize the internet in an open and interoperable way. So I was just going to give a little bit of an introduction and then come back to Pio’s question about the first policy question. First of all, I guess one of the things that I find quite encouraging, especially in the development of the internet governance ecosystem, especially on the protocol side, is the IETF, or the Internet Engineering Task Force. In the last couple of years, I kind of reconnected with the IETF. Before that, I was participating all the way through about 2014, and human rights considerations and privacy considerations were almost unheard of. Last year, in 2023, I started re-engaging in the IETF discussion, and to my surprise, and actually pleasantly surprised, when we talk about protocols these days, beyond what we call the security considerations or even internationalization considerations, human rights and privacy now feature prominently in protocol discussions as well, and I think that’s a very healthy development. And when we talk about, the way that this session frames it, a democratic approach, we’re really not talking about what many people point to as democracy in terms of voting and a somewhat antagonistic kind of campaigning and voting; a democratic approach for the internet governance aspect, in my mind, is much more participatory. It is also what we have come to treasure and call a multi-stakeholder model.
And when we talk about multi-stakeholder, of course, stakeholders include youth and the technical community, which makes the biggest difference, because even in multilateral forums there would be multi-stakeholder kinds of consultation, but a lot of times it’s much more focused on civil society and the industry. When we talk about a multi-stakeholder and democratic model, we’re talking about youth and the technical community being able to participate on an equal footing. And I think that’s the major difference here. Now, back to Pio’s opening question about the issue of privacy, platforms, and fragmentation of the ecosystems. I think they kind of come hand in hand. In essence, an open digital architecture, I think, is not only built on interoperability between systems and between jurisdictions. One of the high-interest topics these days is digital sovereignty. A lot of times when we talk about digital sovereignty, countries or governments like to talk about data localization, much more in terms of a national digital sovereignty. But I think when we think about digital architecture, and we really want to address privacy and really want to address issues about multinational platforms, we need to deal with digital sovereignty at a personal level, whether we have personal digital sovereignty. And I think for countries and governments who really want to support privacy and support data, quote unquote, localization, you have to take it to another level, for persons to be able to have ownership, the ability to move data, and the ability to withdraw consent about their own personal data. And that, I think, is the key aspect, because privacy by design doesn’t mean confidentiality of the data. It means that the platforms do not keep data at all from the start to begin with. And that’s what I think personal digital sovereignty is about.
I will stop and pause here because I’ve taken enough time, and I understand that there are a couple of other points, but I want to end with the note that the multi-stakeholder model really comes hand in hand with a number of the issues we have today, and that for digital sovereignty, we need to dig down to the level of personal digital sovereignty.

MODERATOR: Thank you, Edmon. As you mentioned, the multi-stakeholder model is quite important, even for an open digital architecture that can support transparency and accountability. So I would like to ask Mr. Henry: from the government perspective, how do you see fostering the deployment of open digital architectures that can support transparency and accountability?

Henry Verdier: Thank you for the invitation. Thank you for this very important topic. That’s the question, I feel. So I’m very happy to be here. Maybe I could start with a very funny story that I discovered recently. In 1865, that’s a while ago, the French emperor, Napoleon III, discovered this new technology, the telegraph. And they thought, wow, that’s a very promising technology. How can we be sure that it will be a resource for peace and prosperity and commerce? For them, commerce was a source of peace. And they said, we should find a way to be sure that we can send international telegrams. So they did convene an international conference in Paris, 1865, and they did decide to develop together open standards for telegrams. And to enforce this, they did install the first ever international organization, the ITU. At that time, it was the International Telegraph Union; it became the International Telecommunication Union. So that’s a long story, and I want to share it with you; maybe we know this, but we have to recall it. This story of open standards is the story of the Internet and everything good that did happen. You could not conceive the Internet revolution, and now the AI revolution, without TCP/IP, the web, Wi-Fi, Bluetooth, Linux, MySQL, Apache, and whatsoever. The real story of the Internet revolution is this one: the story of open standards. The question is not, should we protect them or do they matter? The question is, why do other actors not recognize this importance? Why do most of our co-citizens make a confusion between big tech companies and public actors like states and the Internet itself? I totally agree with what Edmon said: the multi-stakeholder governance is of the utmost importance, but that’s not enough. For me, the question is a good balance between the common and enclosure, and how to protect the common and the commons.
For this, I just share with you a few ideas, but the open standard, they are not directly attacked because everyone is using them, so they are not directly attacked, they are just neglected. They are neglected in a time of intense competition and a movement on re-enclosure because to find a business model, the most easy is to capture your customers and to constrain them to remain in your small enclosure. So the question is, how to do politics without being politicized? Because the question is really politic, that’s about how do we want to live together, and it’s not politicized because it’s not right or left or this party or this party. And that’s an important question. Sorry, sadly, I don’t have the answer, but I just have a few ideas. First, because you are the youth of the world, and I’m an old veteran of this, I started my first internet company in 1995. I remember at this time, more people knew how those… stuff did work. And now most of our contemporaries, my daughters for example, they don’t pay attention. They say I’m in internet when they are in TikTok or Facebook. So first we have to raise the awareness of our friends and families and to re-explain that there is something named internet, there is something named the web, there is something else that is a company, etc. Probably we can afford to have different policies. I think that most people make the confusion. For example, they told me the GDPR, so the European regulation on privacy, is fragmentizing the internet. I said no, the internet is an infrastructure, like roads, and I try to regulate companies like cars. So I can on one hand protect the open, decentralized, free, distributed, unique internet, and on the other hand ask for some accountability and responsibility to companies. You have to understand this. We have to say to our friends and colleagues, don’t be passive consumers. Pay attention, be skeptical, try to understand how it works. So I will finish with this because we have three minutes. 
But my point is that this is about politics. We have to raise the level of awareness, we have to explain again and again, and we have to have a clear view that this revolution would not have been possible without an important set of open standards, and that the power of this time did just use it. Maybe they did hack this. They are not the owner of this, and we have the right to reclaim and to protest and to say, no, you are just using our infrastructure. Please respect it.

MODERATOR: Thank you, Mr. Henry, for giving us lots of ideas about what we, as young people, have to think about in navigating the open internet and its other challenges. So I would like to ask Paola: how do you see, as a young person, the accountability and transparency of open architectures? Maybe we can think about it from the privacy perspective. Yeah, the floor is yours.

Paola Galvez: Thanks so much. Well, let me first give why I do believe open standards are so key, and this comes from the perspective of a lawyer; my background is in law. I’m happy to hear diverse perspectives from Edmon and Mr. Henry, who are experts on the technical part, but I’ve been working in technology policy for the past 12 years in the private and government sectors, and now as an independent consultant with civil society organizations. And I’ve always seen open standards as key and core to an interoperable internet. So these two ideas: I really believe that it allows for innovation without exclusivity, ensuring that no one is locked into proprietary systems. And on the other side, I can truly see the power and the potential on transparency. I’ve worked on a public procurement solution in my country and with Colombia, and later I can explain this use case, but I truly believe that using open standards, open source tools, and standardized data really helps enhance transparency in governments and also reduces barriers for small businesses, for instance. The other part is how it promotes inclusion. My whole career I’ve tried to bridge the digital gender gap. Today during the opening ceremony we heard from the different excellencies and authorities how important this is and how this gap is increasing rather than closing. And I do believe that open standards make it easier to create tools that are accessible to everyone and that can help girls and women get into this digital era, promoting open standards with a gender lens. I can speak more about this later. Now, going to your question, Fiyo, about privacy and how we can foster the development of open digital structures supporting these principles: this may not come as a surprise, but as a lawyer I truly believe that we need to approve regulation or regulatory frameworks that can really be implemented.
And that means having multi-stakeholder discussions that produce legitimate regulation, because I’ve seen many cases in Latin America; I am from Peru, and I can tell how sometimes these regulations are approved without the appropriate discussion in Congress, and when it’s time to implement, it’s very hard. This is one thing. Second, to encourage the adoption of privacy-by-design principles. Many times I’ve heard of countries that do not have data protection laws, and that’s a problem and an issue that should be tackled, because data protection regulation is a must to prevent the erosion of users’ privacy. But even where we do not have this, I truly believe that the private sector and civil society can work hand in hand so that this principle of privacy by design can be implemented from the very beginning of any development of technology: ensuring that all the companies handling data really embed strong safeguards to protect user data. I may be running out of time, so last but not least, the only point I would like to add is the importance of fostering international cooperation between developing countries and developed countries. It is really important to collaborate across borders, to harmonize standards and practices, to ensure the global flow of information without compromising local privacy norms, and also to set international standards that can help, because we want our economies to prosper. An idea to follow, for instance, is the Council of Europe AI Convention, which is the first of its kind, and the UNESCO recommendation on the ethics of AI. So that’s it for now. Thank you for the invitation, and sorry for saying this last.

MODERATOR: Thank you, Paola, for your intervention. You highlighted the importance of international cooperation. Yesterday there was a session talking about privacy and data, especially data coming from the Global South, and about how people from the Global South are using the internet and how data from the Global South is also being used to develop AI and related technologies. So we can see that we also need to foster international cooperation to make sure that people around the world have their data and privacy respected. With that, I will call on our last speaker. Amrita, how do you see accountability and transparency in this context?

Amrita Choudhury: Thank you, and thank you for having me. Let me tell you, I'm not a technologist; I just work on policy, so I will be looking at this from a socio-political lens. If you want to actually foster an open digital architecture that supports transparency and accountability, which is what most governments and even civil society are asking from companies today, I think even open standards have to, I would not say work hard, but at least look at certain aspects. For example, at times the security of the systems is a concern, and that's where many of the monopolistic technologies get an edge: they have their security standards upgraded, et cetera. But I agree with most of the panelists that concepts like human rights by design and privacy by design should be enshrined in any kind of technology, open standard or even proprietary, for it to work, because those are fundamental things which any platform of any kind should have. In terms of erosion of privacy, that's a huge concern globally. We see the number of data breaches; we see the antitrust issues coming up daily in different countries; children's data, et cetera, is used without consent. So for any kind of platform, and obviously when you have an open architecture where people build upon it with software and other technologies, these things should be considered. I think there should be more discussion, and it should not be just technical people there. The other relevant stakeholders, I would not say multi-stakeholder, but the actors who are important, need to be there, not for tokenism, but so that when things are being built they can give their perspective: look, have you considered these issues, that the systems don't have biases? We have been talking about AI. We do talk about data sets from the Global South, but there are biases, there are racial biases.
Are we taking those into consideration? How transparent and accountable are those systems about how they are used? So I think those things are important. The second aspect you raised is monopolistic control by tech giants. I think first we have to agree that those systems work: they are easy for everyone to use, and they understand the pulse of people. We cannot deny that a Google or a Facebook or a Meta gives services which everyone can use. So if you want those kinds of services given to people through open systems, they have to be easy to use. They have to be in different languages so that different people can use them, not only English. And they have to be very easily usable; for example, if you are in a developing country, they have to be mobile-friendly. If 90% of the people use the internet on mobile but you're building systems for laptops, it's not going to work. So you have to look at the practicality, and for that you need funding. I think if governments or even foundations can put in a lot more money, or give incentives to the people building upon open data or open systems, even the technologists working on them, it can help. These are my perspectives; they may differ, but I think having regulations to encourage them would help. And if you're talking about fragmentation of the internet into isolated ecosystems, again, not all fragmentation is bad. One may argue that even IPv6 has fragmented the internet, but it is also a different technology, right? Because when you want to go to IPv6, you have to change your infrastructure and your equipment, and that's why many ISPs are not investing in it. And Henry mentioned that GDPR is also considered a fragmenter, but was it necessary to protect the data privacy of Europeans? I guess so. So not all fragmentation is bad. And countries and nations will obviously want to protect their interests. We've seen a lot of things, right?
We’ve seen the Snowden revelations, we have seen other things, and there are countries who are snooping on the others too, nation states and bad actors. So one may want to protect their interest, but you have to see the cost at which you are protecting. Is it really going to help you and the others in the long run, or is it going to harm? So I think it’s a very thin line. I may be saying a controversial statement, but it needs to be seen what kind of fragmentation are we talking about. So I’ll end it at that.

MODERATOR: Thank you, Amrita. When we talk about fragmentation, that's exactly why we have various definitions of fragmentation as well, right? You also highlighted how we can see fragmentation in a nuanced way, how we can ensure accountability and transparency, and you even touched on accessing platforms like Google services and others like them. Actually, Amrita is not the last speaker. My apology. We also have a data specialist, Nur Adlin, so I would like to give the floor to her. Please stick to five minutes, as we are about to run out of time. So, Nur Adlin, how do you see this matter from your perspective? The floor is yours. Can you unmute yourself? Technician, could you please help unmute her on the link? Hello, Technician. Hi. Hi, everyone.

Nur Adlin: Can you hear me? Okay. Assalamualaikum warahmatullahi wabarakatuh. Good day and good afternoon, ladies and gentlemen. It's an honor to be here today among such distinguished experts and visionaries. I am Dr. Nur Adelina Hanissa, and my academic and professional journey has focused on the intricate relationship between law, technology, and innovation. As a legal scholar specializing in data privacy, I have dedicated my career to exploring how we can leverage the transformative power of digital technologies while safeguarding fundamental principles such as privacy, fairness, and inclusivity. In my work, I aim to bridge the gap between technological advancement and regulatory frameworks, with an emphasis on fostering trust and accountability in our rapidly evolving digital ecosystem. Today, I am excited to discuss a topic that is central to these efforts: an open and democratic internet in the digitization era, and improving digital governance for the internet we want. The digital age presents us with immense opportunities but also challenges that require thoughtful and collaborative solutions. By balancing innovation and responsibility, I believe we can build a digital future that is fair, inclusive and resilient for everyone. We can foster an open digital architecture while addressing these pressing challenges using a multifaceted approach and multi-stakeholder collaboration, including governments, the private sector, academia and civil society. I would like to emphasize the significant role of regulation in ensuring accountability, creating uniform standards, curbing monopolies and keeping a balanced approach. The EU GDPR is a good example of comprehensive data protection regulation. According to the European Commission, the GDPR aims to give citizens back control over their data and simplify the regulatory environment for businesses. Moreover, GDPR has established itself as a benchmark for other countries to follow.
GDPR enhances transparency, safeguards the privacy rights of EU citizens, and aligns with open standards like ISO 27001 and W3C standards that promote principles like data portability and interoperability, which help mitigate monopolistic control. Such harmonized regulations help reduce the complexity of compliance and prevent fragmentation. Data privacy laws are emerging and being amended as we speak. For example, the Kingdom of Saudi Arabia's Personal Data Protection Law came into force last year and became fully enforceable in September this year. Another example is my country, Malaysia, which just amended its data privacy law this year, introducing updates including mandatory data breach notification and a right to data portability. UN Trade and Development reported that 137 out of 194 countries have data privacy laws. Regulations need to be flexible and updated to reflect technological advancement. This adaptability ensures that regulation will not become obsolete in the face of rapid technological change. Robust regulation must be accompanied by effective enforcement to ensure organizations' compliance. When it comes to compliance, there is no one-size-fits-all solution; each organization must address its unique circumstances. Even in a country without data privacy regulations, since privacy practices mostly follow similar templates, tech companies can voluntarily adopt self-imposed best practices, implementing privacy by design through encryption, anonymization, and data minimization, to foster trust and innovation. A common misconception is that strong regulations stifle innovation; however, research has proven otherwise. It is overly rigid or outdated regulation that can hinder innovation, particularly for smaller players.
The OECD, in its Recommendation for Agile Regulatory Governance to Harness Innovation, has provided guidance to countries on how to adapt regulatory frameworks and institutions to the challenges and opportunities of innovation, to enable better governance outcomes. So the key is balancing innovation and regulation. Thank you.

MODERATOR: Thank you, Nur Adlin. You mentioned current examples, and even though the GDPR comes from the Global North, the core issues you mentioned can be a reference that we can put into practice in the Global South as well, by having flexible regulation and adopting policy with reference to the standards. Thank you for mentioning this. Now I have to pass the floor to our online moderator for the next part of our session. Barkha, the floor is yours.

Barkha Manral: Thank you, thank you for passing the floor. We got very good answers from the speakers. We have a connected question that can go around the panel, but I would like to request every speaker to stick to two minutes because we are short on time. The question is: how can open standards be enhanced to better accommodate the pace of technological change and foster agility and responsiveness in addressing emerging challenges and opportunities? As Nur was the last one to speak, I would like to pass the floor to her first, if she can quickly sum it up in two minutes.

Nur Adlin: Yeah, thank you very much, Barkha. Actually, as I mentioned before, in order to ensure the agility of regulation, it must consider all the factors, and it must not be rigidly applied. It must be updated from time to time. That's all from me, thank you.

Barkha Manral: Thank you for such a quick and concise answer; you completed it in just a few seconds. Then I will request Edmon to highlight his answer to this question.

Edmon Chung: Sure, thank you. Well, I guess I touched on that a little bit, but I would add that open standards development itself, whether in the IETF or other parts of the internet governance ecosystem, needs to improve by evolving its governance processes towards a more agile ability to respond. But one of the things that is really critical in the next few years is how the global multistakeholder model works with the local multilateral systems that have legislation and so on. I agree with Henry very much that whereas the standards are not under threat right now, they are largely neglected, and that is reflected in some of the local or regional legislation as well. When we look at GDPR, I don't think that on its own it creates any kind of fragmentation. It's actually, in my mind, a genuine attempt to bring privacy to the forefront. But what it did unintentionally was that what was legislated for a higher level of user privacy had an impact on domain registrations, for example, where the WHOIS information, the registration information, suddenly disappeared. And that is the kind of threat that fragmentation brings. That comes back to one of the key issues: how does local legislation work to complement the global multistakeholder model, whereby the technical community, civil society and academia all participate in agenda setting as well as the decision-making process, and then inform local legislation, so that the two don't step over each other? I think that is what the internet governance ecosystem really needs to figure out in the next few years, and that addresses the issue of the pace of technical change and agility in the standards development process. Okay, thank you.

Barkha Manral: Thank you, Edmon. I would ask the same question of Amrita. Although she already mentioned that she's not a technical person, when we talk about the challenges and opportunities in any emerging technology, we always consider the policymakers as well. So I would like Amrita to answer this.

Amrita Choudhury: Thanks, Barkha. If you're asking how open standards can be enhanced to better accommodate the pace of technology, I think we need more dialogue between the technical communities who are into standard-making and those who are using these standards to build things within countries, who may not have the same technical expertise. So, more dialogue on that among the different actors, or the people who would be impacted. Impact assessment is important, and I'm taking this from the example Edmon cited, that many times the unintended impact of a regulation is not part of the process. So better impact assessment would be one thing. Obviously, when you are using open standards, the scalability of the technologies being built needs to be looked at. Security as well: many times when you scale to something else, there may be hidden costs for developers, and I'm talking about those who use these standards later. And the compliance and enforcement parts later on. Those are the things at the top of my mind that need to be looked at. Thank you.

Barkha Manral: Thank you, Amrita. Now I would like Paola to answer.

Paola Galvez: So, yes, well, I think, first, technology-neutral frameworks, and I could cite as an example the Council of Europe Framework Convention on AI, human rights, democracy and the rule of law. Yesterday in his session, Ambassador Baron Barrett mentioned how difficult it was to find a regulatory framework that is technology-neutral, that can be future-proof; that's the word he used. And I found that very, very important, because how do you design standards that apply to technologies that keep changing? I mean, we saw AI performing certain actions in 2022, and now, well, I don't know what will happen next year, right? So flexibility in implementation is a must, in my opinion: allow for variability in how these standards are applied. But this has to come with dynamic and periodic, if not annual, feedback loops. For instance, for this Council of Europe Convention, they have created a group that will review the document over time, because if it must be updated, it will be. So I truly believe these could be good solutions, and I cannot avoid mentioning digital literacy, because we cannot forget the citizens, who are the ones being impacted by these technologies. When I was working in the government of Peru, I created a program called Digital Girls Peru, and once again I need to insist on the gender gap. Creating digital literacy programs is a must, and it's possible, and that's why I mention it, because I know people from governments and the private sector are listening. We need to invest in programs targeted at girls and women, because we need them to understand how these open standards and these technologies are working.
They are part of this discussion as well. And I see this question as one of enhancing democratic and citizen engagement: nowadays, especially in Peru, all regulation must go through a public participation process. But how can this public participation process be effective if our citizens do not understand what is being regulated, discussed, or even created in the standards? So this is a leg, I would say, that is fundamental for the future of the internet. Thank you.

Barkha Manral: Thank you, Paola, for being brief and for answering the previous question at the same time. Coming to Henry, whose work is linked with the digital world, as his portfolio is digital affairs, I would like to ask: how can digital diplomacy contribute to managing the balance between open standards and privacy, and better accommodate the technological pace and changes we are facing as the years pass?

Henry Verdier: Thank you. I think one conclusion of this exchange is that we have to enforce open standards and digital commons; developing them and promoting them is not enough, we need to do more. And that's for diplomats, but also for ministries, for civil society, for companies, and for other ministries, for example. It's very important that we use them and contribute to them. When I was in charge of the French IT department, I passed an executive order, if I may call it that, to say that public servants have the right and the duty to contribute to open source, because they were not sure they had the right. Then we have to pay attention to protection when we regulate or legislate. I give the point to Edmon: I know the WHOIS controversy very well, as I also hold the French seat at ICANN. I can tell you that we could have found fine solutions, but a lot of people wanted to sell those data and didn't really look for solutions. Take French law, for example: we have had a very old GDPR-like law for 40 years, but a lot of very personal data has to be public. If you run for an election, for example, you have to declare it publicly. So we could have decided that some important data necessary for global security has to stay online. It was really possible, but you have to think about this when you prepare the law. Probably we should, in every public policy, impose data portability and interoperability by default; that would be a great service to open standards. We have to go further and provide some financing, and with Jonas here, we are trying to convince Europe to launch a foundation to finance open standards, digital commons, and public goods. And that's my last point, inspired by the Indian example: at some point you have to contribute, and digital public infrastructure and public goods matter. We cannot just wait and see and expect that the market will fix everything. We have to inject some resources into this ecosystem. That's all.

Barkha Manral: Okay, thank you for the answers. We will now open the floor for the Q&A part. If anyone online has a question, please raise your hand in Zoom; for those on-site, please let us know if there are any questions and we will take them. Otherwise, I will ask the Zoom participants to ask their questions, with the help of the technical team there.

MODERATOR: I think there’s a comment on that, so I’m sure we can go through it. Okay.

Barkha Manral: In that case, there's a question from Aviral Kanduria. It's an open question for all the speakers, whoever wants to answer it. The question states: what actionable steps can different stakeholder groups implement right away to support an open, fair, and accessible digital ecosystem? He would appreciate it if each of the speakers could address this from the perspective of their respective stakeholder group. Since he is asking the same question of every speaker, we would like each of the speakers to get the chance to speak on it. So let's start with Edmon.

MODERATOR: Sure. Sorry.

Barkha Manral: We only have five minutes, so let’s stick to one minute.

Edmon Chung: Yeah, I'll be very quick. In response to the question, from the technical community's side, I actually agree very much with what Henry mentioned earlier. Nowadays, users don't know enough about the underlying technology, like the domain name system or how email or HTTP works. People need to be a little more aware in order to address issues like the barriers to entry created by walled gardens, what we call privately owned public spaces like Facebook. How do we deal with that and redefine how things are implemented in a more open manner, to address the interoperability of the digital ecosystem? From the technical community's perspective, a lot of the platforms, in the drive to quote-unquote make things easier for newcomers, are actually trapping us in walled gardens where barriers to entry are thrown up, and that needs to be reversed. I believe in a future where people's digital literacy increases and they are able to operate the internet more in the way that we want.

Barkha Manral: Fiyo, if any of the other speakers would like to answer it, please go ahead; we are short on time and want to cover it.

Henry Verdier: Do you hear me? And that's my conclusion. I saw another question; I don't see it anymore, but it was about how to implement human rights, et cetera, within protocols. I just want to say that you won't find a technical answer to a political problem. So we need also to do politics. We need to stand for human rights, free speech, et cetera, everywhere, and not expect technology alone to deliver them, because that would be the technosolutionist mistake.

Nur Adlin: From my perspective, thank you for the question, Aviral. I really love this question because it really needs a collective effort to make it work. From the perspective of government, they need to implement and enforce inclusive policies and laws. The private sector needs to adopt best practices for human rights and privacy by design; even though it depends on the uniqueness of their circumstances, it is more of an art than a science, so it depends on their own creativity. Civil society needs to advocate for users' rights, raise awareness of inclusion, and collaborate with policymakers to shape equitable governance. And academia needs to research ethical frameworks and offer accessible digital literacy programs to empower marginalized communities. I think together we can build the digital future and the internet that we want. Thank you.

Barkha Manral: Okay, then I will request Averil to take a group photo, and from there, Fiyu can help us.

MODERATOR: Thank you, everyone. We would like to have a group photo, so please stay tuned to our moderator and speaker, and also organizer. Thank you. Averil, have you taken the picture? Oh, thank you. Thank you, everyone. Thank you, speakers, for joining the session

Barkha Manral: and putting up your points. Thank you.

MODERATOR: Thank you, everyone.


Henry Verdier

Speech speed: 141 words per minute
Speech length: 1200 words
Speech time: 507 seconds

Open standards are foundational to the Internet and technological innovation

Explanation

Henry Verdier emphasizes that open standards have been crucial for the development of the Internet and subsequent technological advancements. He argues that the success of the Internet revolution is fundamentally tied to the story of open standards.

Evidence

Examples of open standards mentioned include TCP/IP, the web, Wi-Fi, Bluetooth, Linux, MySQL, and Apache.

Major Discussion Point

Importance of Open Standards for the Internet

Agreed with: Paola Galvez, Edmon Chung, Amrita Choudhury
Agreed on: Importance of open standards for the Internet

Government role in enforcing open standards and digital commons

Explanation

Henry Verdier argues for a proactive government role in enforcing open standards and promoting digital commons. He suggests that governments should not only develop and promote open standards but also actively contribute to and use them.

Evidence

Verdier mentions his experience in the French IT department where he passed an executive order giving public servants the right and duty to contribute to open source projects.

Major Discussion Point

Multi-stakeholder Governance Model

Need to raise awareness about how Internet infrastructure works

Explanation

Henry Verdier emphasizes the importance of raising awareness about how Internet infrastructure works. He argues that there is a need to re-explain to people the difference between the Internet as infrastructure and specific companies or platforms.

Evidence

Verdier mentions that many people confuse being on the Internet with being on specific platforms like TikTok or Facebook.

Major Discussion Point

Digital Literacy and Awareness

Agreed with: Paola Galvez, Edmon Chung, Nur Adlin
Agreed on: Importance of digital literacy and awareness


Paola Galvez

Speech speed: 142 words per minute
Speech length: 1002 words
Speech time: 421 seconds

Open standards promote interoperability and prevent lock-in to proprietary systems

Explanation

Paola Galvez highlights that open standards are key to an interoperable internet. She argues that they allow for innovation without exclusivity and ensure that users are not locked into proprietary systems.

Evidence

Galvez mentions her experience working on public procurement solutions in her country and Colombia as an example of how open standards can enhance transparency in governments and reduce barriers for small businesses.

Major Discussion Point

Importance of Open Standards for the Internet

Agreed with: Henry Verdier, Edmon Chung, Amrita Choudhury
Agreed on: Importance of open standards for the Internet

Regulations should be technology-neutral and future-proof

Explanation

Paola Galvez argues for the importance of creating regulations that are technology-neutral and future-proof. She emphasizes the need for regulatory frameworks that can adapt to rapidly changing technologies without becoming quickly obsolete.

Evidence

Galvez cites the example of the Council of Europe AI Convention on AI, rule of law and human rights as an attempt to create a technology-neutral regulatory framework.

Major Discussion Point

Balancing Regulation and Innovation

Agreed with: Nur Adlin, Amrita Choudhury
Agreed on: Need for flexible and adaptable regulations

Importance of public participation in regulatory processes

Explanation

Paola Galvez stresses the significance of public participation in regulatory processes. She argues that effective public engagement is crucial for creating regulations that truly serve the needs of citizens and reflect their understanding of the technologies being regulated.

Evidence

Galvez mentions her experience in Peru where all regulations must undergo a public participation process.

Major Discussion Point

Multi-stakeholder Governance Model

Importance of digital literacy programs, especially for underrepresented groups

Explanation

Paola Galvez stresses the importance of digital literacy programs, particularly those targeting underrepresented groups like girls and women. She argues that these programs are crucial for bridging the digital gender gap and empowering marginalized communities.

Evidence

Galvez mentions her experience creating the ‘Digital Girls Peru’ program when working in the Peruvian government.

Major Discussion Point

Digital Literacy and Awareness

Agreed with: Henry Verdier, Edmon Chung, Nur Adlin
Agreed on: Importance of digital literacy and awareness


Edmon Chung

Speech speed: 118 words per minute
Speech length: 1220 words
Speech time: 617 seconds

Open standards need to be protected and not neglected in favor of closed ecosystems

Explanation

Edmon Chung argues that while open standards are not directly under attack, they are being neglected. He emphasizes the need to protect and promote open standards in the face of increasing competition and the trend towards closed ecosystems.

Evidence

Chung mentions the positive development in the Internet Engineering Task Force (IETF) where human rights and privacy considerations are now prominently featured in protocol discussions.

Major Discussion Point

Importance of Open Standards for the Internet

Agreed with: Henry Verdier, Paola Galvez, Amrita Choudhury
Agreed on: Importance of open standards for the Internet

Multi-stakeholder model allows diverse groups to participate equally

Explanation

Edmon Chung advocates for a multi-stakeholder model in internet governance. He argues that this approach allows for equal participation from diverse groups, including youth and the technical community, which is crucial for democratic decision-making in the digital realm.

Major Discussion Point

Multi-stakeholder Governance Model

Users need better understanding of underlying technologies

Explanation

Edmon Chung argues that users need a better understanding of underlying Internet technologies. He suggests that increased digital literacy is necessary for users to navigate the Internet effectively and avoid being trapped in ‘walled gardens’ created by large platforms.

Major Discussion Point

Digital Literacy and Awareness

Agreed with: Henry Verdier, Paola Galvez, Nur Adlin
Agreed on: Importance of digital literacy and awareness


Amrita Choudhury

Speech speed: 169 words per minute
Speech length: 1008 words
Speech time: 357 seconds

Open standards should incorporate human rights and privacy considerations

Explanation

Amrita Choudhury argues that open standards should embed concepts like human rights by design and privacy by design. She emphasizes that these are fundamental principles that any platform or technology should incorporate.

Major Discussion Point

Importance of Open Standards for the Internet

Agreed with: Henry Verdier, Paola Galvez, Edmon Chung
Agreed on: Importance of open standards for the Internet

Security of open systems can be a challenge compared to proprietary technologies

Explanation

Choudhury points out that security can be a concern for open systems. She notes that proprietary technologies often have an advantage in terms of security standards and upgrades.

Major Discussion Point

Challenges to Open Digital Architecture

Lack of funding and incentives for open systems development

Explanation

Choudhury highlights the need for funding and incentives to support the development of open systems. She suggests that governments or foundations could provide more financial support or incentives to encourage work on open data and open systems.

Major Discussion Point

Challenges to Open Digital Architecture

Importance of impact assessments to avoid unintended consequences of regulation

Explanation

Amrita Choudhury stresses the importance of conducting thorough impact assessments when implementing regulations. She argues that this is crucial to avoid unintended negative consequences that might arise from well-intentioned policies.

Evidence

Choudhury references Edmon’s earlier example of how GDPR unintentionally impacted domain registrations and WHOIS information availability.

Major Discussion Point

Balancing Regulation and Innovation

Agreed with

Nur Adlin

Paola Galvez

Agreed on

Need for flexible and adaptable regulations

Need for dialogue between technical communities and policymakers

Explanation

Amrita Choudhury emphasizes the importance of fostering dialogue between technical communities involved in standard-making and policymakers. She argues that this communication is crucial for developing effective and practical open standards and policies.

Major Discussion Point

Multi-stakeholder Governance Model

M

MODERATOR

Speech speed

116 words per minute

Speech length

969 words

Speech time

500 seconds

Monopolistic control by tech giants threatens open systems

Explanation

The moderator raises the concern that monopolistic control by large technology companies poses a threat to open systems. This implies that the dominance of a few major players in the tech industry could undermine the principles of openness and interoperability.

Major Discussion Point

Challenges to Open Digital Architecture

Fragmentation of the Internet into isolated ecosystems is a concern

Explanation

The moderator expresses concern about the potential fragmentation of the Internet into isolated ecosystems. This suggests a worry that the global, interconnected nature of the Internet could be compromised by the development of closed, separate systems.

Major Discussion Point

Challenges to Open Digital Architecture

N

Nur Adlin

Speech speed

116 words per minute

Speech length

775 words

Speech time

399 seconds

Need for flexible, adaptable regulations to keep pace with technological change

Explanation

Nur Adlin emphasizes the importance of having regulations that are flexible and adaptable to keep up with rapid technological advancements. She argues that this adaptability is crucial to ensure that regulations remain relevant and effective in the face of constant change.

Evidence

Adlin mentions the example of the UN Trade and Development report stating that 137 out of 194 countries have data privacy laws, indicating a global trend towards regulation in this area.

Major Discussion Point

Balancing Regulation and Innovation

Agreed with

Paola Galvez

Amrita Choudhury

Agreed on

Need for flexible and adaptable regulations

Regulations like GDPR can serve as benchmarks for data protection

Explanation

Adlin suggests that comprehensive regulations like the EU’s General Data Protection Regulation (GDPR) can serve as benchmarks for data protection globally. She argues that such regulations enhance transparency and safeguard privacy rights while aligning with open standards.

Evidence

Adlin cites the GDPR as an example, mentioning its aims to give citizens control over their data and simplify the regulatory environment for businesses.

Major Discussion Point

Balancing Regulation and Innovation

Differed with

Henry Verdier

Amrita Choudhury

Differed on

Impact of regulations on internet fragmentation

Academia’s role in researching ethical frameworks and offering digital literacy programs

Explanation

Nur Adlin highlights the role of academia in researching ethical frameworks for technology and offering digital literacy programs. She argues that these efforts are crucial for empowering marginalized communities and shaping a more inclusive digital future.

Major Discussion Point

Digital Literacy and Awareness

Agreed with

Henry Verdier

Paola Galvez

Edmon Chung

Agreed on

Importance of digital literacy and awareness

Agreements

Agreement Points

Importance of open standards for the Internet

Henry Verdier

Paola Galvez

Edmon Chung

Amrita Choudhury

Open standards are foundational to the Internet and technological innovation

Open standards promote interoperability and prevent lock-in to proprietary systems

Open standards need to be protected and not neglected in favor of closed ecosystems

Open standards should incorporate human rights and privacy considerations

All speakers emphasized the crucial role of open standards in fostering innovation, interoperability, and protecting user rights in the digital ecosystem.

Need for flexible and adaptable regulations

Nur Adlin

Paola Galvez

Amrita Choudhury

Need for flexible, adaptable regulations to keep pace with technological change

Regulations should be technology-neutral and future-proof

Importance of impact assessments to avoid unintended consequences of regulation

Speakers agreed on the necessity of creating flexible, technology-neutral regulations that can adapt to rapid technological changes while avoiding unintended negative consequences.

Importance of digital literacy and awareness

Henry Verdier

Paola Galvez

Edmon Chung

Nur Adlin

Need to raise awareness about how Internet infrastructure works

Importance of digital literacy programs, especially for underrepresented groups

Users need better understanding of underlying technologies

Academia’s role in researching ethical frameworks and offering digital literacy programs

Speakers collectively emphasized the critical need for improved digital literacy and awareness among users, particularly focusing on underrepresented groups and the role of various stakeholders in promoting this understanding.

Similar Viewpoints

These speakers advocated for a multi-stakeholder approach in internet governance, emphasizing the importance of inclusive dialogue and participation from diverse groups in shaping policies and standards.

Edmon Chung

Amrita Choudhury

Paola Galvez

Multi-stakeholder model allows diverse groups to participate equally

Need for dialogue between technical communities and policymakers

Importance of public participation in regulatory processes

Unexpected Consensus

Government’s active role in promoting open standards

Henry Verdier

Nur Adlin

Government role in enforcing open standards and digital commons

Regulations like GDPR can serve as benchmarks for data protection

Despite coming from different perspectives (government and academia), both speakers agreed on the positive role governments can play in promoting and enforcing open standards and data protection, which is somewhat unexpected given the often-criticized role of government intervention in technology.

Overall Assessment

Summary

The speakers generally agreed on the importance of open standards, the need for flexible and adaptive regulations, the significance of digital literacy, and the value of multi-stakeholder governance in the digital ecosystem.

Consensus level

There was a high level of consensus among the speakers on core principles, suggesting a shared vision for an open, inclusive, and user-centric digital future. This consensus implies a strong foundation for collaborative efforts in addressing challenges in internet governance and digital policy-making.

Differences

Different Viewpoints

Impact of regulations on internet fragmentation

Henry Verdier

Amrita Choudhury

Henry mentioned GDPR is also considered a fragmenter, but was it necessary to protect the data privacy of Europeans? I guess so.

Regulations like GDPR can serve as benchmarks for data protection

While Henry Verdier views GDPR as potentially fragmenting the internet, Amrita Choudhury sees it as a positive benchmark for data protection.

Unexpected Differences

Perception of internet fragmentation

MODERATOR

Amrita Choudhury

Fragmentation of the Internet into isolated ecosystems is a concern

Not all fragmentation is bad. You may argue that even IPv6 has fragmented, but it is also a different technology, right?

While the moderator presents fragmentation as a concern, Amrita Choudhury unexpectedly argues that not all fragmentation is negative, citing technological advancements like IPv6 as an example of beneficial fragmentation.

Overall Assessment

Summary

The main areas of disagreement revolve around the impact of regulations on internet fragmentation, the approach to incorporating human rights and privacy into digital systems, and the perception of internet fragmentation itself.

Difference level

The level of disagreement among speakers is moderate. While there are some differing viewpoints, particularly on the effects of regulation and the nature of internet fragmentation, there is general agreement on the importance of open standards, privacy protection, and multi-stakeholder governance. These differences highlight the complexity of balancing various interests in internet governance and the need for continued dialogue among stakeholders.

Partial Agreements

Partial Agreements

Both speakers agree on the importance of incorporating human rights and privacy into digital systems, but differ on the approach. Choudhury emphasizes embedding these principles directly into open standards, while Adlin focuses on flexible regulations to achieve the same goal.

Amrita Choudhury

Nur Adlin

Open standards should incorporate human rights and privacy considerations

Need for flexible, adaptable regulations to keep pace with technological change

Takeaways

Key Takeaways

Open standards are foundational to the Internet and technological innovation, promoting interoperability and preventing lock-in to proprietary systems

There are challenges to open digital architecture including monopolistic control by tech giants and potential fragmentation of the Internet

Regulations need to balance innovation with protection of user rights and privacy

A multi-stakeholder governance model is important for inclusive Internet governance

Digital literacy and awareness programs are needed to help users understand Internet infrastructure and technologies

Resolutions and Action Items

Governments should implement and enforce inclusive policies and laws related to digital governance

Private sector companies should adopt best practices for human rights and privacy by design

Civil society groups should advocate for users’ rights and collaborate with policymakers

Academia should research ethical frameworks and offer digital literacy programs

Unresolved Issues

How to effectively balance open standards with security concerns

Specific ways to prevent fragmentation of the Internet into isolated ecosystems

How to increase funding and incentives for open systems development

Methods to harmonize global multi-stakeholder models with local/regional regulations

Suggested Compromises

Flexible, adaptable regulations that can keep pace with technological change while still protecting user rights

Technology-neutral regulatory frameworks that can be future-proofed

Balancing innovation with regulation through impact assessments and stakeholder dialogue

Creating digital public infrastructure and goods with government support to complement market-driven development

Thought Provoking Comments

When we talk about, as this session frames it, a democratic approach, we're really not talking about what many people point to as democracy in terms of voting and a more antagonistic kind of campaigning and voting; a democratic approach for internet governance, in my mind, is much more participatory, and also what we have come to treasure and call a multi-stakeholder model.

speaker

Edmon Chung

reason

This comment reframes the concept of democracy in internet governance, moving away from traditional voting models to a more inclusive, participatory approach. It introduces the multi-stakeholder model as a key concept.

impact

This set the tone for the discussion, emphasizing the importance of diverse stakeholder participation in internet governance. It led to further exploration of how different groups can contribute to an open and fair digital ecosystem.

The real story of the Internet revolution is this one: the story of open standards. The question is not, should we protect them or do they matter? The question is, why do other actors not recognize this importance?

speaker

Henry Verdier

reason

This comment shifts the focus from whether open standards are important to why their importance is not widely recognized. It challenges participants to think about the broader context and perception of open standards.

impact

This comment deepened the discussion by highlighting the need for greater awareness and recognition of open standards’ role in the internet’s development. It led to conversations about raising awareness and educating the public about internet infrastructure.

So if you want to have those kinds of services given to people, it has to be easy to use. It has to be in different languages so that different people can use it, not only English. And it has to be very easily usable. For example, if you are in a developing country, it has to be mobile friendly.

speaker

Amrita Choudhury

reason

This comment brings attention to the practical aspects of accessibility and usability, especially in developing countries. It highlights the importance of considering diverse user needs in technology development.

impact

This comment broadened the discussion to include considerations of accessibility and inclusivity in technology design. It led to further exploration of how to make open standards and technologies more accessible to diverse global users.

Data privacy laws are emerging and being amended as we speak. For example, the Kingdom of Saudi Arabia's personal data protection law came into force last year and became fully enforceable in September this year. Another example is my country, Malaysia, which just amended its data privacy law this year.

speaker

Nur Adlin

reason

This comment provides concrete examples of how data privacy laws are evolving globally, including in non-Western countries. It illustrates the dynamic nature of digital governance across different regions.

impact

This comment added depth to the discussion by providing specific examples of how different countries are addressing data privacy. It led to a more nuanced conversation about the global landscape of digital governance and the need for flexible, adaptable regulations.

Overall Assessment

These key comments shaped the discussion by broadening its scope from technical aspects of open standards to include considerations of governance models, public awareness, accessibility, and global regulatory trends. They encouraged a more holistic view of digital governance that considers diverse stakeholders, practical implementation challenges, and the need for ongoing adaptation to technological changes. The discussion evolved from focusing solely on the importance of open standards to exploring how to make them more widely recognized, accessible, and adaptable to diverse global contexts.

Follow-up Questions

How can we improve digital literacy and awareness about the underlying technologies of the internet?

speaker

Edmon Chung and Henry Verdier

explanation

Both speakers emphasized the importance of users understanding how internet technologies work to address issues like walled gardens and barriers to entry.

How can we balance innovation and regulation in the rapidly evolving technological landscape?

speaker

Nur Adlin Hanissa

explanation

The speaker highlighted the need for flexible regulations that can adapt to technological advancements while still protecting user rights.

How can we better integrate human rights considerations into internet protocols and standards?

speaker

Henry Verdier

explanation

The speaker mentioned this as an important area that requires both technical and political approaches.

How can we address the digital gender gap through open standards and digital literacy programs?

speaker

Paola Galvez

explanation

The speaker emphasized the need for targeted programs to increase digital literacy among girls and women.

How can we improve the collaboration between global multistakeholder models and local multilateral systems in internet governance?

speaker

Edmon Chung

explanation

The speaker identified this as a critical issue for the next few years to prevent unintended consequences of local legislation on global internet standards.

How can we better assess and mitigate the unintended impacts of regulations on open standards and internet technologies?

speaker

Amrita Choudhury

explanation

The speaker suggested that better impact assessments are needed to understand the full effects of new regulations on the internet ecosystem.

How can we develop and implement technology-neutral regulatory frameworks that are future-proof?

speaker

Paola Galvez

explanation

The speaker highlighted the challenge of creating regulations that can apply to rapidly changing technologies.

How can we create and fund a foundation to finance open standards, digital commons, and public goods?

speaker

Henry Verdier

explanation

The speaker suggested this as a potential solution to support and enforce open standards and digital commons.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #227 Sustainability and Data Protection in ESG Enhancement

WS #227 Sustainability and Data Protection in ESG Enhancement

Session at a Glance

Summary

This discussion focused on the intersection of Environmental, Social, and Governance (ESG) principles with cybersecurity and internet governance. Panelists explored how ESG frameworks can address sustainability and cybersecurity challenges in the digital age. A key point was the significant environmental impact of data centers and digital infrastructure, with speakers noting the high energy consumption and carbon footprint of these technologies. The need for more sustainable practices in the tech industry was emphasized, including the use of renewable energy sources for data centers.

The conversation also touched on data protection as a crucial aspect of ESG, with panelists stressing the importance of treating data security as a fundamental pillar rather than an afterthought. The potential use of blockchain technology for enhancing transparency in ESG reporting was discussed, though concerns about its energy consumption were raised. Participants highlighted the need for more specific ESG standards tailored to different regional realities, particularly in the Global South.

The discussion emphasized the importance of multi-stakeholder collaboration in developing effective ESG policies and regulations. Panelists suggested expanding the ESG framework to include cybersecurity explicitly, proposing the acronym ESGC. The need for stronger regulatory frameworks and accountability measures for big tech companies was also discussed. The session concluded with calls for more inclusive global conversations on ESG, ensuring representation from diverse regions, particularly Africa and other developing areas. Overall, the discussion underscored the complex interplay between sustainability, data protection, and cybersecurity in the context of ESG and internet governance.

Keypoints

Major discussion points:

– The importance of discussing ESG (Environmental, Social, Governance) issues in relation to internet governance and cybersecurity

– Challenges around energy consumption and environmental impacts of data centers and internet infrastructure

– The need for more specific ESG standards and regulations related to internet/technology issues, especially for developing countries

– Balancing data protection and privacy with ESG reporting and transparency goals

– Expanding ESG to include cybersecurity (ESGC) as a key consideration

The overall purpose of the discussion was to explore the intersection of ESG principles with internet governance and cybersecurity practices, and to consider how to enhance sustainability and accountability in the tech sector.

The tone of the discussion was generally constructive and forward-looking. Panelists shared insights from their areas of expertise while acknowledging challenges and areas for improvement. The conversation became more action-oriented towards the end, with participants and panelists suggesting concrete next steps and areas for further collaboration and research.

Speakers

– Moderator: Session moderator

– Thais Aguiar: Lawyer and researcher in digital rights from Brazil

– Jasmine Ko: Convener of Hong Kong IGF, Certified ESG analyst, Researcher on eco-internet

– Alina Ustinova: Head of youth Russian IGF, Representative of Center for Global IT Cooperation, Specialist in emerging technologies regulation

– Marko Paloski: Coordinator of IGF Macedonia, Part of Youth Coalition on Internet Governance

– Denise Leal: Part of Youth Coalition on Internet Governance, Brazilian youth program participant

– Osei Manu Kagyah: The Institute for ICT Professionals Ghana, Session rapporteur

Additional speakers:

– Peter Zanga Jackson, Jr.: From Liberia, works for regulator

– Chris Odu: From Nigeria, EC Web Technology

– Nicolas Fiumarelli: No specific role/expertise mentioned

Full session report

The discussion explored the intersection of Environmental, Social, and Governance (ESG) principles with cybersecurity and internet governance. Panelists examined how ESG frameworks can address sustainability and cybersecurity challenges in the digital age, emphasizing the need for more comprehensive and tailored approaches to these issues.

Environmental Impacts of Digital Infrastructure

A central theme of the discussion was the significant environmental impact of data centers and digital infrastructure. Jasmine Ko highlighted the high energy consumption and carbon footprint of these technologies, while Denise Leal stressed the importance of considering the location and community impacts of data centers. Marko Paloski advocated for the use of renewable energy and efficiency measures to mitigate these environmental challenges. The panelists agreed on the urgent need for more sustainable practices in the tech industry, with a recurring emphasis on transitioning to renewable energy sources and improving energy efficiency in data centers.

ESG Standards and Implementation

The discussion revealed both agreements and differences in approaches to ESG implementation. Alina Ustinova proposed expanding ESG to ESGC, explicitly including cybersecurity as a key consideration. Jasmine Ko mentioned specific ESG standards such as GRI and SASB, highlighting the need for alignment across different frameworks. Denise Leal emphasized the need for ESG standards tailored to the realities of the Global South and called for more specific internet-related ESG standards. This difference in focus reflects the complexity of applying ESG principles globally.

Thais Aguiar argued that ESG reporting should go beyond mere compliance to foster trust, while Marko Paloski stressed the need for government regulation to enforce ESG standards. These viewpoints suggest a shared recognition of the need for more robust ESG implementation, albeit with different emphases on voluntary versus regulatory approaches.

Cybersecurity and Data Protection

The discussion highlighted data protection as a crucial aspect of ESG, with panelists stressing the importance of treating data security as a fundamental pillar rather than an afterthought. Alina Ustinova, drawing from her experience in “the most attacked country in the world in terms of cyber attacks”, proposed implementing laws with criminal liability for data breaches. This suggestion aligns with the broader call for stronger regulatory frameworks and accountability measures for big tech companies.

Multi-stakeholder Collaboration and Inclusivity

Speakers emphasized the importance of multi-stakeholder collaboration in developing effective ESG policies and regulations. Jasmine Ko stressed the need to align expectations across stakeholders on ESG reporting. Denise Leal advocated for including marginalized communities in creating ESG standards, particularly those tailored for Global South realities. This focus on inclusivity was echoed by audience members who called for stronger representation from diverse regions, particularly Africa and other developing areas, in global conversations on ESG.

Actionable Steps and Future Directions

The session concluded with several suggested action items:

1. Expanding research on eco-internet impacts across more regions

2. Pushing to change ESG to ESGC, emphasizing cybersecurity as a key component

3. Developing ESG standards specific to internet governance issues

4. Creating regulations with criminal liability for data breaches

5. Implementing a more human-centric approach to ESG and internet governance

The moderator also noted an upcoming session on e-waste solutions, highlighting the interconnected nature of these sustainability challenges.

In summary, the discussion underscored the complex interplay between sustainability, data protection, and cybersecurity in the context of ESG and internet governance. It highlighted the need for more nuanced, inclusive approaches that consider regional contexts, leverage technology responsibly, and balance voluntary initiatives with regulatory frameworks to drive meaningful progress in this critical area. The session also revealed the need for further education on ESG concepts, particularly in relation to developing countries, and emphasized the importance of diverse global representation in shaping future ESG standards and practices.

Session Transcript

Moderator: Please proceed and welcome the speakers. Hi, everyone. Yes. We can hear you. You need to unmute your mic.

Thais Aguiar: Hi, I'm Thais, a lawyer and researcher in digital rights from Brazil. It's a pleasure to be here with you today, and also with my dear friends and fellows on the panel. I'm hoping to have a great discussion today. We're talking about ESG and privacy and data protection, and I hope you enjoy this panel with us.

Moderator: Thank you very much so allow me to please introduce my speakers. I’ll start with you Jasmine, please introduce yourself. And then when we are done with our side speakers will move to online.

Jasmine Ko: Hi, everyone. This is Jasmine Ko, based in Hong Kong. I'm a convener of Hong Kong IGF, and I'm affiliated with ISOC Hong Kong and also Asia. I'm also a certified ESG analyst, a certification recognized for doing ESG governance and analysis work. And I'm a researcher and project lead on the Eco-Internet Index, and that's how I found the relevance between sustainability and the IGF.

Alina Ustinova: Thank you. Hi everyone. My name is Alina Ustinova, and I am based in Moscow. I'm the head of the youth Russian IGF, and I also represent the Center for Global IT Cooperation. We do research on different topics covering IT, and especially emerging technologies. So I specialize in the regulation of emerging technologies and in topics connected with new technologies. I also try to bring these topics to the youth and to let their opinions be heard among Russian legislators and different experts. And we try to cover ESG as well. It's not so popular in Russia, though, but still there are many opinions about it, and we'll try to bring them up today.

Marko Paloski: Thank you. Hello everyone, I'm Marko Paloski, coming from Macedonia. I'm the coordinator of IGF Macedonia and also part of the Youth Coalition on Internet Governance. I would say I only got into this topic this year. As previously mentioned, in Macedonia this topic is not talked about that much, but now the private sector, especially the international organizations and corporations that are here, are starting to implement this or request this. That's why I'm also getting interested and trying to get involved in this topic, because it's a must for the future, and we should implement it as well as possible. So, yeah.

Denise Leal: Thank you very much. Hello everyone, I hope you are hearing me well. It's a pleasure to be here with you today. We are talking about this important topic, ESG, and you might ask yourselves: what does it mean? Why are we here talking about this? You will soon discover it. I am part of the Youth Coalition on Internet Governance. I am Brazilian and I was part of the Brazilian youth program. I am happy to be here. I am also part of the YouthLock IGF organization team. Well, we've been diving into so many discussions, and it's really important and really nice that we have sustainability as a topic at this IGF. I am looking forward to our discussions, and I am also happy to have everyone in our room. Welcome, everyone. It's so important that we have an inclusive and also sustainable session.

Moderator: Thank you very much, my dear panelists. We're also joined by our rapporteur. His name is Osei, so he's here; he's going to take notes on whatever we're going to be discussing. In this session, we're going to explore two main critical fields. First, we're going to see how cybersecurity can enhance transparency but also safeguard personal data, and how cybersecurity can enhance sustainable practices. We have three policy questions that are going to guide the discussion today. I'll just mention them now, but when I go to a specific speaker, I'll ask specific questions to each one of them. The first policy questions we're going to consider are: why do we need to discuss ESG in an internet governance forum? What are the sustainability and cybersecurity challenges in ESG systems, and how can technology verify and check the accuracy of information in reports? So moving directly to my first speaker, and I'll start with you, Alina. My question to you is: why are we discussing ESG in governance? Why is it important for us to discuss ESG in governance?

Alina Ustinova: Well, I'll try to be brief, because I would like to share more details later, but as you see, if you look at the names of the topics, we barely discuss ESG. It's usually connected with some kind of ecological issue, like infrastructure that is damaging parts of the ecosystem. For example, in Russia we have a "digital north" initiative, where we try to put our data centers in the north because it is cold and we can protect them there. But still, we should understand that this could be really damaging for the ecological system there, and we should also consider everything that goes with it, because sometimes we do not. Sometimes we throw away the phone we use and never look back at what happens to it and where it ends up in some kind of storage. As you know, there is a big, big technological dumpster in one African country, and unfortunately there are a lot of broken and forgotten things there that we do not consider. Five years from now, we could be in a very dangerous, risky situation. This is why we should talk about ESG in the first place. And the second reason is, of course, cybersecurity, but we'll talk about that a little bit later. Thank you.

Moderator: Okay, thank you so much, Alina. Jasmine, you mentioned that you’re ESG certified, right? So how do you see the current ESG frameworks addressing sustainability and cybersecurity challenges?

Jasmine Cole: Right, thank you very much. Different people have different levels of understanding of ESG, but just to be very brief: within the ESG framework, under the social pillar, the second letter, when it comes to social equity and also the cybersecurity level, how safe people feel and how inclusive the internet sector and the services they are using are, is actually part of the many, many indices within the ESG framework. But the ESG framework itself has its limitations as well, because, as you know, we have different ESG frameworks in use, such as the GRI, the Global Reporting Initiative; SASB, the Sustainability Accounting Standards Board; and also the TCFD, the Task Force on Climate-related Financial Disclosures; et cetera, et cetera, so many standards that people are using. I’m just mentioning the more common ones. And actually, if you don’t mind, I want to jump back a little to what Alina mentioned about the damage to the ecology from the internet sector itself. Because it involves data centers and their operation, and the energy consumption is soaring: we need cooling, we need heating, the system itself, the infrastructure to manage the data center. And this is actually part of the research that I’ve been doing, the Eco-Internet Index. We’ve been measuring the carbon footprint of data centers across 14 Asia-Pacific jurisdictions. So we’re looking to continue the research and to expand it to more Pacific islands and further into Asia, because mostly now we cover East and Southeast Asia, but not yet West Asia. So we’re looking forward to expanding our research scope. One last thing to add: why do we talk about ESG? It sounds like a very commercial term at the IGF. But the important point is, if you have noticed, the reason people started to do ESG is because we care about the environment.
And we need to acknowledge that there is always a carbon footprint when we are in this internet sector. Everyone here, we’re using a laptop, we’re using the screens here, we’re using the lighting here. Actually, a lot of things, even just the event itself, consume a lot of energy. So it is an unavoidable topic, with real impacts on the environment, and it sits under the big umbrella of sustainability. And I just want to wrap up by noting that work on sustainability has been done, and is being done, by ICANN, the IETF, the ITU, and of course my organization as well, DotAsia. So I just want to recognize that there has been work done on sustainability related to the IGF community.
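
A note on the arithmetic behind the carbon-footprint research mentioned above: at its simplest, a data center’s footprint is the energy it consumes multiplied by the carbon intensity of the local grid, which is why the same workload has very different footprints across jurisdictions. A minimal sketch in Python; the grid-intensity figures are illustrative assumptions for comparison, not measured values:

```python
def carbon_footprint_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float) -> float:
    """CO2e in kg = energy consumed (kWh) x carbon intensity of the grid (kg CO2e/kWh)."""
    return energy_kwh * grid_intensity_kg_per_kwh

# Illustrative grid intensities (kg CO2e/kWh) -- hypothetical, for comparison only
grids = {
    "mostly-coal grid": 0.9,
    "mixed grid": 0.45,
    "mostly-renewable grid": 0.05,
}

annual_kwh = 1_000_000  # hypothetical yearly consumption of one small data center
for name, intensity in grids.items():
    print(f"{name}: {carbon_footprint_kg(annual_kwh, intensity):,.0f} kg CO2e/year")
```

The same megawatt-hours can differ by more than an order of magnitude in CO2e depending on where the data center is sited, which is one reason siting and grid mix matter as much as efficiency.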

Moderator: Thank you very much, Jasmine, that was really insightful. So when we’re speaking about ESG, another important aspect is data, like data protection and all that. So now to you, Marko. In what ways can technology verify and enhance accuracy in sustainability reporting while protecting personal data?

Marko Paloski: Hello, thank you for the question. It’s a good question, and I’ll give a few examples of how we can use technology for better accuracy and also for verification. The first, which is still under testing and development, is using blockchain for transparency: this technology can provide good records of ESG-related data, such as supply chain compliance or emissions reporting. Another is IoT sensors, because for this kind of data we sometimes use IoT, as we mentioned, in the data centers, in the rooms we are using, or in the hardware we are using, because of the energy consumption and other factors. So it can be used to monitor and obtain precise, accurate data. We can also use AI for pattern detection, because we can see whether something has changed over a long period, or changed drastically, and check whether the data is accurate or not. About protecting personal data: as in the previous examples, we can use encryption, because this data is very crucial. Some of it might be publicly available, so it should be anonymized and protected in terms of data and privacy. And there are tools like differential privacy or federated learning for analyzing sustainability data without exposing individual-level details, as I mentioned, because sometimes this information cannot be publicly shared with all the details. I would also like to come back to the first question and what Alina mentioned. I was reading a research analysis about how, five or ten years ago, cloud services were promoted: you want to save, so don’t buy a CD.
You can have a cloud service where you can watch whenever you want, and avoid the environmental cost of producing that CD. But now I think that’s the smaller issue. If you buy a CD or DVD, you are maybe saving the earth more than by using streaming services, because on streaming services someone is always watching, or you don’t even need to watch, just scroll. Every click, every sent email on the internet costs energy. Maybe not for you, because maybe I’ll use a phone, which consumes less than a laptop, but the service in the background consumes more than what we were doing in the past. So I think it’s a crucial thing, and it’s getting bigger and bigger, especially in how data centers are managed and built. I don’t say we need to stop building them, but we need to find some way, because even where there is no data center, of course, there are a lot of servers; but data centers are the one big black hole for energy, so to speak. So I just wanted to point out that times are literally changing, and not everything we do in the cloud means, “oh, I’m not using that laptop or TV or DVD, so I’m saving energy.” Maybe you are, but the data center is spending much more energy and harming the environment more, I would say. Yeah, thank you.
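
As a rough illustration of the techniques Marko lists, tamper-evident records plus pattern detection, here is a minimal sketch. The hash-chained ledger and the threshold rule are simplifications of real blockchain and anomaly-detection systems, and the meter readings are hypothetical:

```python
import hashlib
import json
import statistics

def make_entry(prev_hash: str, reading: float) -> dict:
    """Create a tamper-evident ledger entry: changing any earlier
    reading invalidates every hash that follows it."""
    payload = json.dumps({"prev": prev_hash, "reading": reading}, sort_keys=True)
    return {"prev": prev_hash, "reading": reading,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def build_ledger(readings):
    ledger, prev = [], "genesis"
    for r in readings:
        entry = make_entry(prev, r)
        ledger.append(entry)
        prev = entry["hash"]
    return ledger

def verify_ledger(ledger) -> bool:
    prev = "genesis"
    for entry in ledger:
        if entry["hash"] != make_entry(prev, entry["reading"])["hash"]:
            return False
        prev = entry["hash"]
    return True

def flag_anomalies(readings, threshold: float = 2.0):
    """Very simple 'pattern detection': flag readings more than
    `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [r for r in readings if stdev and abs(r - mean) > threshold * stdev]

# Hypothetical hourly kWh readings from a data-center meter
readings = [120.0, 118.5, 121.2, 119.8, 410.0, 120.4]
ledger = build_ledger(readings)
assert verify_ledger(ledger)
assert flag_anomalies(readings) == [410.0]   # the 410 kWh spike stands out
ledger[4]["reading"] = 120.0                 # tamper with one record
assert not verify_ledger(ledger)             # tampering is detected
```

The point of the chain is that a supplier cannot quietly rewrite last month’s emissions figure without breaking every subsequent hash, while the anomaly check surfaces implausible jumps for human review.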

Moderator: Well, thank you so much, Marko. Picking up from the same discussion, I want us to talk more about the energy consumption of these data centers. Denise, now coming to you: how can we address the environmental challenges associated with the energy consumption of data centers and communication networks? We have seen Jasmine and Marko speak about the energy consumption of these data centers. So what would be your view on that?

Denise Leal: Hey everyone. When it comes to the ESG topic, we have to understand that ESG is a standard for sustainability, but not only environmental sustainability: also governance and social sustainability. When we talk about technology, what ESG is and why we discuss it, we need to discuss the standards of social, environmental and governance sustainability, because technology has a huge impact in all of these categories; it has changed how we live in society, how we work and what we work with, and it also has a huge impact on the environment. And then we come to this question related to energy consumption and data centers. We have lots of important data-related topics here, the reports and other aspects, but specifically on energy consumption we need to consider, as Marko said very well, that maybe it was more sustainable to use CDs or DVDs than the way we store data and information now, because we really use not only energy but also a lot of physical space. Many people don’t really understand that when they keep their photos online, they are actually using space located in another part of the world. And a very important aspect is where these data centers are located, what the environmental impacts are, and on which communities those impacts fall. Because sometimes we think that the way we are impacting the world, with the exploitation of the environment for technology, is not seen in other places, but it is seen very clearly, especially in marginalized countries, where we mine to get the materials we need to build our technology, and where we host the places that build the technology and this internet that we have now.
So it’s important to consider in ESG that the impacts of the internet are not so obvious, and they need to be considered in these standards. What I think when it comes to internet governance, and what I’ve seen in reports, is that it’s easy to say beautiful things in sustainability reports, but we need to pay attention, because when it comes to the internet it’s not so obvious that there are other kinds of impacts, and we specifically need more standards related to internet use and internet aspects, because we don’t have enough standards that address this kind of impact. So the call to action I would like to leave in this talk about energy consumption of data centers and communication networks is that we need to pay attention to how we build technology and how we spend and use energy, and we need more specific standards on internet issues and aspects related to ESG. Because ESG is a tool to, how can I say, secure, verify and check whether enterprises are working well and in a sustainable way. But it’s very easy to look like you’re working well when no one verifies internet issues and other aspects of cybersecurity, how you actually make the reports, and what you really cared about when you made them. Thanks. I think I talked a lot; I hope it was clear.

Moderator: Thank you so much for that. Jasmine, what ethical considerations should organizations prioritize when aligning cybersecurity practices with ESG goals?

Jasmine Cole: Thank you. It’s not an easy question to answer, because, a little bit similar to what Denise mentioned, when it comes to reporting, it always has to look good. And the consultant you pay for doing the ESG report gets your money, so you can imagine what their incentives could be; sometimes there’s a tricky dynamic in between when it comes to data accuracy and transparency. As for the considerations in linking cybersecurity and ESG, I think it’s about the organization itself. The leadership has to have a clear alignment with their stakeholders, including their employees, the community they serve, their customers, and the supply chain, upstream and downstream. They have to use more of a multi-stakeholder approach, like the IGF does, and come to an alignment of expectations. Second, when it comes to buy-in, it’s important to think about what the pain point could be for each stakeholder when they have to report on their cybersecurity standards and check the boxes. And after the pain point, it’s about the incentive: how can you motivate and encourage people to do the extra work to measure the data, measure the performance, track it and trace it? A lot of work happens behind the scenes, actually, a lot of cost and a lot of time. So it’s a long, and potentially painful, but rewarding process in the long term for the sustainability of the organization; sustainability not just in environmental terms, but also business sustainability. Because nowadays, in the business sector itself, it’s a big trend to ride on ESG. But of course, we have to remember that there is also risk and concern about greenwashing.
So it’s another topic. I’m not going to talk more about it. But it’s good that we acknowledge there are risks and concerns. This is how we move forward constructively.

Moderator: Thank you very much, Jasmine. Now I want to open the floor to the participants. If you have a contribution on anything that has been discussed up to this point, before I get deeper into other questions. Can you help me pass the mic back there?

Audience: Firstly, my name is Peter Zanga Jackson, Jr. I’m from Liberia, a developing country. During the opening session, we were told that at the IGF no one should be left out. And when the first orator started her deliberations, she spoke about EGS. And you said you would tell us what EGS is, and she started to talk about EGS, but she didn’t tell us what EGS was. Okay, I’m from the regulator. How do I use the concept of ESG, excuse me, ESG, such that it benefits the society in which I serve? So this is where my little frustration or confusion is. I thought I’d let you know.

Moderator: Okay, so first, before I leave it to my panelists to elaborate more: I think I mentioned earlier that ESG stands for Environment, Sustainability, and Governance; sorry, Environment, Social, and Governance. So the discussion we are having today is about that, ESG in relation to cybersecurity. Do you want us to speak more? Denise? Denise wants to talk.

Denise Leal: Yeah, thank you, Thais and Milenio. Answering his question, and I think it’s an important question: ESG is not always an easy and common topic in many countries and many places. I remember when I used to work doing the reports, it wasn’t easy to see how it impacted people’s lives, but it does, when it makes enterprises think about how they are being sustainable, not only in the environmental way, but also in the social way and the governance way. Governance is about the internal structure of the enterprise: whether they have governance tools, whether they have security, not physical security, but security in the sense that their processes are secure and safe in many ways, that they don’t have corruption and things like that. The social side is where we see the impact on people’s lives, because we have a lot of standards related to the impact on society: whether we work with NGOs, whether we allocate money to society, whether we give back from what we are receiving in our work and what we are selling, and so on. The environmental side is the easier one, because we used to see the word sustainability as an environmental word. But it’s not only an environmental word. What is it to be sustainable? This is the discussion, this is the idea: to be sustainable is not only environmental, but also social, and about our governance model. In this session we are focusing more on cybersecurity and data centers because it’s internet-governance related, but we can see a lot of impact in other areas too. And when it comes to the internet, we could elaborate a lot. But what I wanted to answer to your question is: we don’t have a lot of regulations in many countries.
Many countries don’t talk about ESG, but enterprises do have these internal policies, which help them get funds because they have ESG standards, they are complying with them, they have sustainability. So what I think we can do as civil society and as government is this: if we explain, if we have more understanding of ESG, we can make sure the standards are met within the enterprises, because they want to comply; they will get money and funding from complying. So how can we develop more standards that will be useful for us, such that enterprises get benefits from complying with them, and we as civil society and government also get benefits from their compliance? I don’t know, I think it’s clear now. I hope it’s clear now.

Moderator: Is that clear? Yes, Denise. Okay, perfect. I’ll take one more from on-site, and then we’ll move to online. Yes?

Audience: Good evening, ladies and gentlemen. I hope I’m audible. My name is Chris Odu from Nigeria, EC Web Technology, and my question is for one of the panelists, Marko. I’m glad to be part of this conversation, and one thing you talked about which caught my interest is blockchain technology. You mentioned using blockchain technology for transparency of the data, which is a very good thing. However, I do have some concerns, because to use blockchain technology you need blockchain nodes on the network, and these nodes actually consume a whole lot of energy. So that’s a bit contradictory, and I would like you to help me with that, so that I don’t find myself in a lot of confusion. Thank you very much.

Marko Paloski: Yeah, thank you for the question. What I mentioned is still in the testing and development phase. The idea was to use it at the lower edge, on the Internet of Things, to use blockchain for secure and accurate transmission of the data, for that kind of thing. So I totally agree with you, and I sometimes get into these conflicts when discussing blockchain, because there is now electronic money and those kinds of things. But not all blockchain technology uses that much power: we are seeing Bitcoin and similar systems whose consumption is very high, but it’s not every implementation, it depends on how it’s built. It’s still in the testing phase, so I haven’t tried it myself; this is from what I’ve been reading and from research. But that’s a good point.

Moderator: Please let me move to online, and then we’ll get back to the last one on-site. So, Thais?

Thais Aguiar: Thank you, Marko, and indeed those were important considerations for us. We also see here, as Denise said, that we need to discuss sustainability in a broader sense, for us as stakeholders to understand and implement ESG in the way society needs and wants, for sustainable development in a broad sense. To bring some additional points on data protection and ESG: we see that data protection is a cornerstone of ESG principles, especially as organizations increasingly rely on digital systems to manage sustainability and governance efforts. Poor data governance or breaches not only undermine trust but also compromise the integrity of ESG reporting. This raises critical questions about accountability in data stewardship: are organizations treating data protection as a fundamental ESG pillar, or is it an afterthought in their sustainability strategies? So I wanted to leave this provocative question to Alina, so that Alina can share with us: how can we ensure that ESG commitments to data protection go beyond compliance to actively foster transparency, trust, and long-term stakeholder engagement? Thank you.

Alina Ustinova: I guess it’s a very important issue. I will speak from my personal experience. I come from the most attacked country in the world in terms of cyber attacks. My personal data has been stolen and sold five times, I guess. I receive lots of calls that I don’t take, because they come from numbers I don’t know, just because there are lots of breaches of the systems; and it’s not because my data is not protected well, but probably because it’s so heavily attacked. So we can understand that companies shouldn’t consider only ESG. I guess we should move from ESG to ESGC, where C stands for cybersecurity, and implement these standards, because it’s not only about cybersecurity companies that protect our data; it’s about the cybersecurity of each company that holds our data as well. We should consider that each company should be responsible for the data it stores, not only in data centers: we use computers, we use the internet, social media, everything, and everything we use has its own creator, a company that is responsible for everything we have. Because if, for example, your phone is stolen, it’s just a phone; you can buy a new one and restore its data. But if your data is stolen, it’s basically like your personality is stolen: someone can use it to pretend to be you, to use your bank account, to steal your money. So what we did in Russia is implement a law under which companies don’t just pay a penalty if data is stolen; there is criminal liability for those who store the data. So I guess one answer to this question is to implement a law that makes a company criminally liable for everything it does with your data, because otherwise they will not comply. Unfortunately, the big penalties.
They are not an issue for them; they can pay, they have lots of money, especially the big tech companies. But if there is something like “you will be sent to jail for some years,” or “your company will not be able to operate on this market if you do not comply with the law,” I guess that’s something they will listen to. And sometimes we need to do very, very risky and drastic things to make them, you know, listen and comply. Thank you.

Moderator: I wanted to confirm with you: is there any contribution from online attendees? Can you help me check the chat? All right, if we don’t have anyone online, then we can move on-site. Osei? Thank you.

Audience: Thank you very much. I hope I’m audible enough. This topic is quite interesting, in the sense of how we can hold big companies, or say big tech, accountable; it’s such a delicate matter. It seems that we are plundering our environment. And my question to my able panelists is: in a few sentences, two lines, what is the way forward? Let’s bring it finally to a conclusion; we are at the top of the hour, right? So I want to hear from my panelists: what do we need to do moving on? What are the things we need to do now? Thank you.

Moderator: Are any of the panelists ready to respond?

Alina Ustinova: As I said, I guess we need to change ESG to ESGC and try to bring it to every possible panel we can, because otherwise people will just talk about ESG and think it’s more of an ecological thing, as we usually do, and not a social thing. Because sometimes the social and governance parts are not brought up; they usually stand only on the E, on environmental, and the others are just forgotten. So if you say ESGC, that means they will also consider data protection. That’s my point.

Jasmine Cole: Okay, thank you. Perhaps from the very grassroots, individual level, it’s for the audience here to rethink what we’ve been talking about, and try to digest and reflect on how it makes sense or doesn’t make sense. You can always be critical of many things you’ve been listening to, because it’s about your own judgment and your personal experience. And now it’s your homework to think about how you convert the information you absorb and transform it into something you can do as an output. So in very general terms: keep paying attention, follow the developments. There is work being done by different organizations that I mentioned in the beginning, so you can always search it up online. For my part, it is to continue doing my Eco-Internet research. As I said, we are expanding our research scope and refining our research methodology, always finding a way to improve, and also to bring ESG into the IGF, and bring the IGF and cybersecurity into the ESG sector. So the major agenda in our mind is integrating and fostering collaboration and dialogue between the two rather segregated sectors of ESG and the IGF. That’s my way forward. Thank you.

Moderator: All right. Do you want to say something?

Marko Paloski: I will give it a few words. It’s a very good question, because, yeah, we are finally discussing, but what is the next step forward? We cannot change it from here, but here is how I see it. Data centers will keep growing in the coming years; not just the big tech companies, but we now see that countries are building them, and smaller companies too; everyone is, how can I say, going in that direction because of the services and the demand for data. I would say the first thing is going with renewable energy, trying to use that, and maybe building data centers where there is a lot of sun, or where the heat can later be used for electricity and those kinds of things. But what is also important, beyond renewables, is that governments and policymakers should take a bigger role here. Because, yeah, we agree that we’re going to do this, and it’s better, and companies are making a lot of announcements, saying that by 2030 they’re going to be 100% renewable, those kinds of things. But how many people know whether they are actually doing this, in detail? So I think there must be some
kind of role for governments and policymakers to make this regulation: okay, by 2030 all data centers must be renewable, or must meet these requirements; more of this kind of strict regulation. There are still five years until 2030, so there is actually time to implement it. Because without regulation it’s like what we sometimes do with plastic in the ecosystem: yes, we need to stop using plastic, but nobody is doing it. It comes back to the person, of course; we should also be, how can I say, mindful of how we are using technology and everything, because sometimes we are so used to it. You play a YouTube song on your computer all day and you’re not even listening; why is that? But my point is that we need some kind of regulation here, to be stricter and more serious, because as it is, every company just goes with the trend, and if I don’t want to, I won’t go with the trend and nobody will do anything to me. It might be costly; sometimes it can be cheaper not to follow the trend, and nobody holds you accountable for not following it, if it’s not in regulation or policy. Yeah, that would be my answer. Thank you.

Denise Leal: Yeah, thank you, Milenium. Just a few words on what I wanted to add to this discussion: I believe, and I see, that we need regulations that are made for, and in, the Global South. We use sustainability standards that usually come from Europe or the USA, but we also need to create specific regulations related to the reality of the Global South. And why am I saying that? Because we have these groups, these traditional communities, indigenous peoples, people who have very different realities, and they need to be considered in what it means to be sustainable, in ESG or other discussions. For that to happen, we need to work better on, and improve, regulations, laws and policies made by these people. We have to stop taking types and models of regulation that come from one part of the world and applying them everywhere, because sometimes they don’t really protect the specific interests of people who are so marginalized. So I would recommend that we start reading and understanding what these communities have to say about these discussions on sustainability, and that could be applied to other topics too, not only ESG.

Moderator: Thank you so much, Denise. I think that’s really an important point. There is this approach called the human-centric approach, and I think that’s something we can consider in this kind of discussion: have the people who are affected in these fields, and all the stakeholders involved in these issues, at the table or in the room together, discuss, understand their needs, and then all together come up with a solution that we think may work and help us. So, I want to close the discussion, but before I do that, I wonder if one or two of my panelists can help me suggest: what actionable steps can stakeholders, be it governments, civil society or the technical community, take to enhance transparency and accountability in sustainability reporting?

Thais Aguiar: If I may add to the moderator’s question, to complement it, I would like to ask you all, also in terms of actionable points: what role should global regulatory frameworks play in harmonizing ESG, data protection, and cybersecurity standards across different regions, to ensure consistent and equitable implementation?

Moderator: So again, are any of the panelists ready to take either of the two questions? Mine was about the steps stakeholders can take to enhance transparency and accountability. Well, you are not going easy on us.

Audience: The question of collaboration with stakeholders and government is quite a tricky one, I would say, because it’s all about interests. Yeah, it’s all about interests. The big technology industry will always have its interests; government will also have its interests. But that’s where we need to push the advocacy. That’s where all of us in this room, all of us interested in saving our environment and in pushing this cause, come in. We need to push this topic to every corner of the world, holding our leaders accountable, holding industry and, say, stakeholders accountable. That’s the only way we can make progress and move forward. But if we leave it that way, or, say, if we freestyle, your guess is as good as mine. Thank you.

Moderator: OK, Nicolas, you had something to say?

Nicolas Fiumarelli: Hello, everybody. Nicolas Fiumarelli. I may revisit the issue of blockchain, because you can use blockchain to accurately measure, in real time, the ESG parameters, right? So that could be a way to disclose whether an organization is doing false ESG, quick fixes to fit the ESG reporting while in the long term not complying. If you have a way to measure, with a real-time blockchain holding this information, you can actually disclose when an organization is not complying with ESG.
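
Nicolas’s idea, real-time measurements used to cross-check self-reported ESG figures, can be sketched as a simple reconciliation. The tolerance and the figures below are illustrative assumptions, and a real system would draw the logs from tamper-evident records rather than a plain list:

```python
def flag_greenwashing(reported_total: float, logged_readings, tolerance: float = 0.05) -> bool:
    """Compare a self-reported ESG figure against the total implied by
    real-time logs; flag if they diverge by more than `tolerance`
    (as a fraction of the measured total)."""
    measured = sum(logged_readings)
    if measured == 0:
        return reported_total != 0
    return abs(reported_total - measured) / measured > tolerance

# Hypothetical monthly CO2e logs (tonnes) vs. the figure in the annual report
logs = [40.2, 38.9, 41.5, 39.7, 42.1, 40.8, 41.0, 39.4, 40.6, 41.9, 38.7, 40.3]
assert not flag_greenwashing(sum(logs), logs)   # report matches the logs
assert flag_greenwashing(300.0, logs)           # understated emissions get flagged
```

The value of the continuous log is exactly the discrepancy check: a one-off number in a glossy report can be massaged, but a year of independently recorded readings is much harder to reconcile with it after the fact.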

Moderator: All right. Okay, now, does anyone else want to contribute before I move to our rapporteur to help us summarize what we have discussed?

Denise Leal: Yes, just to add something on the question Thais asked us, about harmonizing global regulatory frameworks and harmonizing ESG, data protection and cybersecurity standards across different regions. I think ESG is actually pretty global, and it’s used across the globe with very much the same standards. But what I think we need is more specific standards depending on the realities of different regions. So I would not say we always need global standards alone; we can, and it’s good to, have global standards, but we also need specific standards that apply to different realities: in terms of how these countries are exploited, how we protect them, and how we make enterprises sustainable in each different reality. Because it’s very different when you are talking about a place with traditional peoples who are impacted very differently and have a very different relationship with nature and society. So we need to consider these different realities and create different standards, to be more effective in protecting and being sustainable. That’s what I wanted to add. Thank you.

Moderator: Thank you so much to my panelists. And before we close, I would like to invite our rapporteur to summarize for us what we have discussed.

Osei Manu Kagyah: It's been such a forward-looking conversation; I hope we've all enjoyed ourselves. I wanted to make it quite interactive, so I'll come to the summary in a moment. First I'm going to step out and hear a word from our participants, their interventions or suggestions, and then I'll make a quick wrap-up. I think we still have a few minutes. So, just a few words from our participants: would you like to contribute to this topic?

Audience: Thank you. I like the idea discussed today that regulation and input should come at different levels. Alina spoke about the very top level, bringing cybersecurity into the ESG conversation from an international and intergovernmental perspective. Another colleague spoke about implementation in local regulation, synchronized between regions and between different countries. And another point I heard concerned implementation at the corporate level, the individual responsibility of each company: not to look for reasons and arguments not to act, but to find the resources and the energy to implement proper standards, even where they are not yet set out in regulation. Thank you.

Audience: Thank you very much. Okay, I think I'm audible. I don't want to say much, but I would like to focus on my primary constituency, which is Africa. I think we should have more collaboration to ensure that Africa also comes into these kinds of conversations, and, as other participants have said, no one should be left behind. How can we include everybody, so that these conversations do not remain in certain areas of the world but extend to other continents as well, to least developed countries and other categories? That is my contribution, and the key word is collaboration, more collaboration. Thank you.

Osei Manu Kagyah: Thank you very much. What an amazing end to a very fruitful discussion. I will now proceed to my reporting. This topic is quite exceptional in the sense that, globally, ESG concerns have been gaining prominence alongside the rapidly growing cybersecurity industry. Both fields are emerging, yet these discussions seldom bring their perspectives together. We got a fair idea of the energy and resource cycles behind data and data centers; of how blockchain could be explored; of data protection and transparency in ESG reporting; and, most importantly, of the awareness and effective communication we need around ESG topics. We also heard how the conversation should move from ESG to ESGC, covering not only the ecological dimension but also data protection. We dovetailed into calls for more research, more advocacy, and fostering collaborations, and renewable energy could also be explored. It has been such an insightful conversation, and we hope to continue it in future discussions and sessions. Tomorrow there is a session on effective e-waste solutions for a sustainable digital future, where Yasmin is a speaker; do pass by so we can take this conversation further. Over to you, my able moderator.

Moderator: Thank you so much, our dearest rapporteur. That was well noted. I would like to thank everyone for attending this session, with much appreciation to my panelists and my online moderator. This was very interesting. Have a nice evening. Thank you all. Denise, can we take a picture together, everyone? Yes, please. For the online participants. Bye. Thank you all. Thanks, everyone. Bye.

A

Alina Ustinova

Speech speed

169 words per minute

Speech length

876 words

Speech time

310 seconds

ESG impacts environmental sustainability of internet infrastructure

Explanation

Alina Ustinova argues that ESG is important for internet governance because it impacts the environmental sustainability of internet infrastructure. She highlights the need to consider the ecological impact of digital technologies and infrastructure.

Evidence

Example of digital north data centers in Russia potentially damaging local ecosystems.

Major Discussion Point

Importance of ESG in Internet Governance

Agreed with

Jasmine Ko

Denise Leal

Thais Aguiar

Agreed on

Importance of ESG in Internet Governance

Move from ESG to ESGC to include cybersecurity

Explanation

Alina Ustinova suggests expanding ESG to ESGC, where C stands for cybersecurity. This would ensure that cybersecurity is considered alongside environmental, social, and governance factors in sustainability frameworks.

Evidence

Personal experience of data breaches in Russia, described as the most attacked country in terms of cyber attacks.

Major Discussion Point

Improving ESG Standards and Implementation

Differed with

Denise Leal

Differed on

Approach to ESG implementation

Implement laws with criminal liability for data breaches

Explanation

Alina Ustinova proposes implementing laws that impose criminal liability on companies for data breaches. This would go beyond financial penalties to ensure companies take data protection seriously.

Evidence

Example of Russian law implementing criminal liability for data breaches.

Major Discussion Point

Improving ESG Standards and Implementation

J

Jasmine Ko

Speech speed

138 words per minute

Speech length

1154 words

Speech time

500 seconds

ESG frameworks address social equity and cybersecurity

Explanation

Jasmine Ko explains that ESG frameworks include social equity and cybersecurity within their scope. She notes that these factors are part of the many indices within ESG frameworks.

Evidence

Mentions various ESG frameworks such as GRI, SASB, and TCFD.

Major Discussion Point

Importance of ESG in Internet Governance

Agreed with

Alina Ustinova

Denise Leal

Thais Aguiar

Agreed on

Importance of ESG in Internet Governance

Data centers consume significant energy and have large carbon footprints

Explanation

Jasmine Ko highlights the environmental impact of data centers, noting their high energy consumption and resulting carbon footprint. She emphasizes the need to consider this impact in sustainability discussions.

Evidence

Mentions research on carbon footprint of data centers across 14 Asia-Pacific jurisdictions.

Major Discussion Point

Environmental Challenges of Data Centers

Agreed with

Marko Paloski

Denise Leal

Agreed on

Environmental Challenges of Data Centers

Align expectations across stakeholders on ESG reporting

Explanation

Jasmine Ko argues for the importance of aligning expectations among various stakeholders in ESG reporting. This includes employees, customers, and the broader community served by an organization.

Major Discussion Point

Multi-stakeholder Collaboration on ESG

M

Marko Paloski

Speech speed

184 words per minute

Speech length

1385 words

Speech time

450 seconds

Blockchain can provide transparent records of ESG data

Explanation

Marko Paloski suggests using blockchain technology to ensure transparency and accuracy in ESG-related data. This could provide reliable records for supply chain compliance and emission reporting.

Major Discussion Point

Technology for ESG Reporting and Data Protection

Agreed with

Nicolas Fiumarelli

Agreed on

Technology for ESG Reporting and Data Protection

Differed with

Nicolas Fiumarelli

Differed on

Technology for ESG reporting

IoT sensors can monitor precise sustainability data

Explanation

Marko Paloski proposes using IoT sensors to monitor and collect precise sustainability data. This could provide accurate measurements for energy consumption and other ESG metrics.

Major Discussion Point

Technology for ESG Reporting and Data Protection

Agreed with

Nicolas Fiumarelli

Agreed on

Technology for ESG Reporting and Data Protection

AI can be used for anomaly detection in ESG reporting

Explanation

Marko Paloski suggests using AI for anomaly detection in ESG reporting. This could help identify unusual patterns or discrepancies in sustainability data over time.

Major Discussion Point

Technology for ESG Reporting and Data Protection

Agreed with

Nicolas Fiumarelli

Agreed on

Technology for ESG Reporting and Data Protection

Data encryption and privacy-preserving analytics needed

Explanation

Marko Paloski emphasizes the need for data encryption and privacy-preserving analytics in ESG reporting. This would help protect sensitive information while still allowing for meaningful analysis.

Major Discussion Point

Technology for ESG Reporting and Data Protection

Agreed with

Nicolas Fiumarelli

Agreed on

Technology for ESG Reporting and Data Protection

Renewable energy and efficiency measures needed for data centers

Explanation

Marko Paloski argues for the use of renewable energy and efficiency measures in data centers. This would help reduce their environmental impact and improve sustainability.

Major Discussion Point

Environmental Challenges of Data Centers

Agreed with

Jasmine Ko

Denise Leal

Agreed on

Environmental Challenges of Data Centers

Government regulation needed to enforce ESG standards

Explanation

Marko Paloski calls for government regulation to enforce ESG standards. He argues that without strict regulations, companies may not follow through on their sustainability commitments.

Major Discussion Point

Multi-stakeholder Collaboration on ESG

D

Denise Leal

Speech speed

132 words per minute

Speech length

1604 words

Speech time

727 seconds

ESG standards needed for internet-specific sustainability issues

Explanation

Denise Leal argues that specific ESG standards are needed to address internet-related sustainability issues. She points out that current standards may not adequately cover the unique impacts of internet technologies.

Major Discussion Point

Importance of ESG in Internet Governance

Agreed with

Alina Ustinova

Jasmine Ko

Thais Aguiar

Agreed on

Importance of ESG in Internet Governance

Need to consider location and community impacts of data centers

Explanation

Denise Leal emphasizes the importance of considering the location and community impacts of data centers. She argues that the environmental and social effects of these facilities on local communities should be taken into account.

Major Discussion Point

Environmental Challenges of Data Centers

Agreed with

Jasmine Ko

Marko Paloski

Agreed on

Environmental Challenges of Data Centers

Create ESG standards tailored for Global South realities

Explanation

Denise Leal calls for the creation of ESG standards that are tailored to the realities of the Global South. She argues that current standards often come from Europe or the USA and may not reflect the needs of developing countries.

Evidence

Mentions the need to consider traditional communities, indigenous people, and other marginalized groups in ESG standards.

Major Discussion Point

Improving ESG Standards and Implementation

Differed with

Alina Ustinova

Differed on

Approach to ESG implementation

Include marginalized communities in creating ESG standards

Explanation

Denise Leal advocates for including marginalized communities in the creation of ESG standards. She argues that this would ensure the standards reflect diverse realities and protect specific interests of people who are often overlooked.

Major Discussion Point

Multi-stakeholder Collaboration on ESG

T

Thais Aguiar

Speech speed

131 words per minute

Speech length

295 words

Speech time

134 seconds

ESG reporting should go beyond compliance to foster trust

Explanation

Thais Aguiar argues that ESG reporting should go beyond mere compliance to actively foster trust and long-term stakeholder engagement. She emphasizes the importance of data protection as a cornerstone of ESG principles.

Major Discussion Point

Importance of ESG in Internet Governance

Agreed with

Alina Ustinova

Jasmine Ko

Denise Leal

Agreed on

Importance of ESG in Internet Governance

N

Nicolas Fiumarelli

Speech speed

148 words per minute

Speech length

104 words

Speech time

41 seconds

Use blockchain for real-time measurement of ESG compliance

Explanation

Nicolas Fiumarelli suggests using blockchain technology for real-time measurement of ESG compliance. This could help identify organizations that are not genuinely complying with ESG standards in the long term.

Major Discussion Point

Improving ESG Standards and Implementation

Agreed with

Marko Paloski

Agreed on

Technology for ESG Reporting and Data Protection

Differed with

Marko Paloski

Differed on

Technology for ESG reporting

A

Audience

Speech speed

126 words per minute

Speech length

754 words

Speech time

356 seconds

Push advocacy to hold leaders and industry accountable

Explanation

An audience member emphasizes the need for advocacy to hold leaders and industry accountable for ESG implementation. They argue that this is necessary to make progress in pushing ESG initiatives forward.

Major Discussion Point

Multi-stakeholder Collaboration on ESG

Agreements

Agreement Points

Importance of ESG in Internet Governance

Alina Ustinova

Jasmine Ko

Denise Leal

Thais Aguiar

ESG impacts environmental sustainability of internet infrastructure

ESG frameworks address social equity and cybersecurity

ESG standards needed for internet-specific sustainability issues

ESG reporting should go beyond compliance to foster trust

The speakers agree that ESG is crucial for internet governance, addressing environmental sustainability, social equity, cybersecurity, and fostering trust in the digital ecosystem.

Environmental Challenges of Data Centers

Jasmine Ko

Marko Paloski

Denise Leal

Data centers consume significant energy and have large carbon footprints

Renewable energy and efficiency measures needed for data centers

Need to consider location and community impacts of data centers

The speakers concur on the significant environmental impact of data centers and the need for sustainable solutions, including renewable energy and consideration of community impacts.

Technology for ESG Reporting and Data Protection

Marko Paloski

Nicolas Fiumarelli

Blockchain can provide transparent records of ESG data

IoT sensors can monitor precise sustainability data

AI can be used for anomaly detection in ESG reporting

Data encryption and privacy-preserving analytics needed

Use blockchain for real-time measurement of ESG compliance

The speakers agree on the potential of various technologies like blockchain, IoT, and AI to enhance ESG reporting accuracy, transparency, and data protection.

Similar Viewpoints

Both speakers emphasize the need for strong government regulations and enforcement to ensure compliance with ESG and data protection standards.

Alina Ustinova

Marko Paloski

Implement laws with criminal liability for data breaches

Government regulation needed to enforce ESG standards

Both speakers advocate for more inclusive and diverse approaches to ESG standards and reporting, considering different stakeholders and global realities.

Denise Leal

Jasmine Ko

Create ESG standards tailored for Global South realities

Align expectations across stakeholders on ESG reporting

Unexpected Consensus

Expansion of ESG to include Cybersecurity

Alina Ustinova

Jasmine Ko

Move from ESG to ESGC to include cybersecurity

ESG frameworks address social equity and cybersecurity

Despite coming from different backgrounds, both speakers unexpectedly agree on the importance of integrating cybersecurity into ESG frameworks, suggesting a growing recognition of digital security in sustainability discussions.

Overall Assessment

Summary

The speakers generally agree on the importance of ESG in internet governance, the environmental challenges posed by data centers, the potential of technology in ESG reporting and data protection, and the need for more inclusive and enforceable ESG standards.

Consensus level

There is a high level of consensus among the speakers on the main issues, with some variations in emphasis and approach. This strong agreement suggests a growing recognition of the interconnectedness of environmental, social, governance, and cybersecurity issues in the digital realm, which could lead to more holistic and effective approaches to internet governance and sustainability.

Differences

Different Viewpoints

Approach to ESG implementation

Alina Ustinova

Denise Leal

Move from ESG to ESGC to include cybersecurity

Create ESG standards tailored for Global South realities

Alina Ustinova advocates for expanding ESG to ESGC to include cybersecurity, while Denise Leal emphasizes the need for ESG standards tailored to the Global South’s realities.

Technology for ESG reporting

Marko Paloski

Nicolas Fiumarelli

Blockchain can provide transparent records of ESG data

Use blockchain for real-time measurement of ESG compliance

While both speakers advocate for blockchain use, Marko Paloski focuses on transparent record-keeping, while Nicolas Fiumarelli emphasizes real-time measurement of compliance.

Unexpected Differences

Focus on data centers vs. broader ESG implementation

Jasmine Ko

Denise Leal

Data centers consume significant energy and have large carbon footprints

Need to consider location and community impacts of data centers

While both discuss data centers, Jasmine Ko unexpectedly focuses on energy consumption and carbon footprint, while Denise Leal emphasizes community impacts, highlighting different priorities within the same issue.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to ESG implementation, the role of technology in ESG reporting, and the focus of ESG standards (global vs. regional).

Difference level

The level of disagreement is moderate. While speakers generally agree on the importance of ESG in internet governance, they differ on implementation strategies and priorities. These differences reflect the complexity of applying ESG principles globally and could lead to challenges in developing universally accepted standards and practices.

Partial Agreements

Both speakers agree on the need for more specific ESG standards, but Marko Paloski emphasizes government regulation, while Denise Leal focuses on tailoring standards to Global South realities.

Marko Paloski

Denise Leal

Government regulation needed to enforce ESG standards

Create ESG standards tailored for Global South realities

Takeaways

Key Takeaways

ESG (Environmental, Social, Governance) is increasingly important in internet governance and needs to include cybersecurity considerations

Data centers have significant environmental impacts that need to be addressed through renewable energy and efficiency measures

Technology like blockchain and IoT can enhance ESG reporting and data protection, but also raise new challenges

ESG standards and implementation need to be tailored for different regional realities, especially in the Global South

Multi-stakeholder collaboration and government regulation are needed to improve ESG practices and accountability

Resolutions and Action Items

Expand research on eco-internet impacts across more regions

Push to change ESG to ESGC to explicitly include cybersecurity

Develop ESG standards specific to internet governance issues

Create regulations with criminal liability for data breaches

Include marginalized communities in developing ESG standards

Unresolved Issues

How to balance blockchain’s potential for ESG reporting with its energy consumption

Specific ways to harmonize global ESG frameworks while addressing regional differences

How to incentivize companies to go beyond compliance in ESG reporting

Concrete steps for different stakeholders to enhance ESG transparency and accountability

Suggested Compromises

Use blockchain selectively for critical ESG data tracking rather than broadly

Develop global ESG standards but allow for regional-specific additions

Balance strict regulation with incentives for companies to improve ESG practices

Thought Provoking Comments

ESG is not always an easy and common topic across many countries and many places. I remember, when I used to work doing the reports, it wasn't easy to see how it impacted people's lives. But it does have an impact when it makes enterprises think about how they are being sustainable, not only in the environmental way but also in the social and governance ways.

speaker

Denise Leal

reason

This comment provides important context about the challenges and real-world impact of ESG, expanding the discussion beyond theoretical concepts.

impact

It shifted the conversation to focus more on practical implications and challenges of implementing ESG principles across different contexts.

I come from the most attacked country in the world in terms of cyber attacks… We can understand that companies should not consider only ESG. I guess we should move from ESG to ESGC, where C stands for cybersecurity, and implement these standards.

speaker

Alina Ustinova

reason

This comment introduces a new perspective on integrating cybersecurity more explicitly into ESG frameworks, based on real-world challenges.

impact

It sparked discussion about expanding ESG to ESGC and considering cybersecurity as a core component of sustainability frameworks.

We need regulations that are made for and in the global south, we use standards, sustainability standards that usually comes from Europe or USA but we need to create also specific regulations that are related to the reality of global south.

speaker

Denise Leal

reason

This comment challenges the one-size-fits-all approach to global standards and highlights the need for context-specific regulations.

impact

It broadened the discussion to consider regional differences and the importance of inclusive policy-making in ESG and sustainability efforts.

You can use blockchain to accurately measure the ESG parameters in real time. So that could be a way to disclose if an organization does this false ESG, or quick fixes to fit the ESG reporting.

speaker

Nicolas Fiumarelli

reason

This comment introduces a concrete technological solution to address transparency and accountability challenges in ESG reporting.

impact

It shifted the discussion towards practical technological solutions for improving ESG implementation and reporting.
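The tamper-evidence that both Paloski's and Fiumarelli's blockchain proposals rely on can be illustrated without any particular blockchain platform: an append-only log in which each record embeds a hash of the previous one makes retroactive edits detectable. The sketch below is a minimal Python illustration; the `ESGLedger` class and its field names are assumptions for demonstration, not anything specified in the session.

```python
import hashlib
import json


class ESGLedger:
    """Append-only, hash-chained log of ESG readings.

    Each entry stores the hash of the previous entry, so any
    retroactive edit to a recorded value breaks the chain and
    is detectable on verification.
    """

    GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    def record(self, org, metric, value, ts):
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"org": org, "metric": metric, "value": value,
                "ts": ts, "prev": prev}
        # Canonical JSON serialization so the hash is reproducible.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        """Return True if every entry's hash and back-link still match."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("org", "metric", "value", "ts", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True


ledger = ESGLedger()
ledger.record("AcmeCorp", "energy_kwh", 1200, ts=1)
ledger.record("AcmeCorp", "co2_tonnes", 3.4, ts=2)
print(ledger.verify())             # True: chain intact
ledger.entries[0]["value"] = 900   # retroactive "greenwashing" edit
print(ledger.verify())             # False: tampering detected
```

Because every record commits to the hash of the one before it, a later "quick fix" to an earlier reading, the greenwashing scenario Fiumarelli describes, invalidates the chain from that point on.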

Overall Assessment

These key comments shaped the discussion by expanding it beyond theoretical concepts of ESG to consider real-world challenges, regional differences, and practical solutions. They highlighted the need for a more nuanced, inclusive approach to ESG that considers cybersecurity, regional contexts, and leverages technology for transparency. The discussion evolved from defining ESG to critically examining its implementation and proposing innovative ways to enhance its effectiveness globally.

Follow-up Questions

How can we develop more specific ESG standards that apply to different regional realities, particularly in the Global South?

speaker

Denise Leal

explanation

This is important to ensure ESG standards consider the unique contexts and needs of different regions, especially marginalized communities and traditional peoples.

How can we expand research on the carbon footprint of data centers to more Pacific islands and West Asia?

speaker

Jasmine Ko

explanation

Expanding this research would provide a more comprehensive understanding of the environmental impact of internet infrastructure across different regions.

How can we integrate cybersecurity more explicitly into ESG frameworks?

speaker

Alina Ustinova

explanation

Adding cybersecurity as a fourth pillar (ESGC) would ensure data protection and security are given proper consideration in sustainability assessments.

How can blockchain technology be used to measure ESG parameters in real-time while addressing energy consumption concerns?

speaker

Nicolas Fiumarelli

explanation

This could provide a way to enhance transparency and accountability in ESG reporting, while also addressing the environmental impact of blockchain technology itself.

How can we ensure Africa and other least developed regions are included in ESG and cybersecurity conversations?

speaker

Audience member (unnamed)

explanation

This is crucial for ensuring global representation and addressing the unique challenges and perspectives of developing regions in ESG implementation.

What role should global regulatory frameworks play in harmonizing ESG, data protection, and cybersecurity standards across different regions?

speaker

Thais Aguiar

explanation

This is important for ensuring consistent and equitable implementation of ESG standards globally while respecting regional differences.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #35 Unlocking sandboxes for people and the planet

Session at a Glance

Summary

This discussion focused on the concept of regulatory sandboxes, a tool for experimenting with new technologies and regulations in a controlled environment. Bertrand de La Chapelle introduced the topic, explaining that sandboxes can be used to test innovative applications, explore regulatory challenges, and foster collaboration between public and private sectors.


Participants shared experiences and insights from different regions. Thiago Moraes discussed Brazil’s approach to AI-focused sandboxes, emphasizing the importance of preparation and stakeholder engagement. Adam Zable provided an international perspective, highlighting the diversity of sandbox implementations worldwide and the distinction between European and East Asian approaches.


Morine Amutorine shared insights on sandboxes in Africa, noting the prevalence of fintech sandboxes and the challenges faced by regulators. Katerina Yordanova discussed the European context, particularly the AI Act’s sandbox requirements and the need for clear incentives for participation.


The discussion explored various challenges in implementing sandboxes, including resource constraints, trust-building between regulators and innovators, and protecting intellectual property. Participants emphasized the importance of transparency, clear methodologies, and addressing potential disincentives for companies to participate.


The conversation also touched on the potential of sandboxes to bridge knowledge gaps between regulators and innovators, particularly in rapidly evolving tech sectors. Participants stressed the need for better communication about sandbox initiatives and their benefits to encourage participation and build trust.


In conclusion, the discussion highlighted sandboxes as a promising tool for agile regulation and innovation, while acknowledging the complexities and challenges in their implementation across different contexts and sectors.


Keypoints

Major discussion points:


– The concept and purpose of regulatory sandboxes for testing new technologies and regulations


– Different approaches to sandboxes in various regions (e.g. EU vs. East Asia)


– Challenges and incentives for both regulators and companies to participate in sandboxes


– The importance of preparation and methodology in setting up successful sandboxes


– Building trust and transparency between regulators and companies through the sandbox process


The overall purpose of the discussion was to explore the concept of regulatory sandboxes, share experiences from different regions, and discuss best practices and challenges in implementing them effectively.


The tone of the discussion was largely informative and collaborative, with speakers sharing insights from their experiences and research. There was an emphasis on the potential benefits of sandboxes, while also acknowledging the challenges in implementation. The tone became slightly more cautionary towards the end when discussing potential disincentives for participation, but remained constructive in proposing solutions.


Speakers

– Bertrand de La Chapelle – Chief Vision Officer of the Datasphere Initiative (Moderator)


– Thiago Moraes – Data Protection Authority of Brazil


– Adam Zable – Research fellow at the GovLab


– Morine Amutorine – Research associate at the Datasphere Initiative


– Katerina Yordanova – Senior legal expert at the University of Leuven


Additional speakers:


– Farouk Yusuf Yabo – Permanent Secretary at the Federal Ministry of Communications, Innovation and Digital Economy in Nigeria


– Luis Fernando Castro – Former member of CGI.br, the Brazilian Internet Steering Committee


– Sophie – Mentioned as putting information in the chat, likely part of the organizing team


Full session report

Regulatory Sandboxes: Exploring Innovation and Regulation in Controlled Environments


Introduction:


This discussion, moderated by Bertrand de La Chapelle of the Datasphere Initiative, focused on regulatory sandboxes as tools for experimenting with new technologies and regulations in controlled environments. Participants from various regions shared insights on sandbox implementation, challenges, and best practices.


Purpose and Types of Sandboxes:


De La Chapelle introduced the concept of sandboxes, noting that they exist along a spectrum. Examples include regulatory sandboxes focused on compliance and testing existing regulations, and operational sandboxes used to test new applications and technologies. Adam Zable from the GovLab elaborated on how different regions emphasize various aspects of sandboxes, with the European Union approach tending to focus more on regulatory compliance and risk mitigation, while East Asian countries like South Korea emphasize economic growth and regulatory flexibility.


Regional Approaches and Experiences:


Participants shared diverse experiences from their respective regions:


1. Brazil: Thiago Moraes discussed Brazil’s preparation to launch a sandbox focused on the applicability of data protection regulation articles to AI. He emphasized the importance of thorough preparation and stakeholder engagement before launching the sandbox.


2. Africa: Morine Amutorine shared insights on sandboxes in Africa, noting the prevalence of fintech sandboxes and the challenges faced by regulators. She highlighted examples including the EcoBank Pan-African sandbox and Kenya’s Communications Authority using a sandbox to identify innovations not covered by existing regulatory frameworks.


3. European Union: Katerina Yordanova discussed the European context, particularly the AI Act’s sandbox requirements. She noted significant differences in member states’ needs and approaches to regulatory sandboxes across various sectors.


Methodology and Best Practices:


Several key points emerged regarding the implementation of successful sandboxes:


1. Thorough preparation and stakeholder engagement


2. Transparency and public engagement to build trust


3. Clearly defined stages in the sandbox process


4. External evaluation of sandbox effectiveness


5. Addressing intellectual property protection concerns upfront


Challenges and Incentives for Participation:


The discussion explored various challenges in implementing sandboxes and potential incentives for participation:


Challenges:


1. Resource constraints and lack of expertise for regulators


2. Trust issues, particularly in Eastern Europe where companies may be hesitant to share information with regulators


3. Knowledge gaps between public authorities and tech developers


4. Potential risks for regulators in implementing sandboxes


Incentives:


1. Access to valuable data sets for companies


2. Regulatory clarity for innovators


3. Removing fees to lower barriers to entry, especially in low-income areas


Building Trust and Transparency:


Speakers emphasized:


1. Clear communication about sandbox initiatives and their benefits


2. Addressing potential disincentives for companies to participate


3. Using ‘closed room’ arrangements to protect sensitive information while allowing necessary sharing


4. Engaging the public through various means, as exemplified by the Norwegian Data Protection Authority’s podcast about their sandbox


Global Initiatives:


De La Chapelle mentioned the Global Sandboxes Forum and Africa Sandboxes Forum initiatives, demonstrating ongoing efforts to share knowledge and best practices in this evolving field.


Unresolved Issues and Future Directions:


The discussion highlighted several areas requiring further exploration:


1. Developing standardized sandbox practices while accommodating regional differences


2. Effectively bridging knowledge gaps between regulators and innovators


3. Balancing incentives for both large companies and smaller startups to participate


4. Creating a framework that addresses common challenges for companies participating in sandboxes


5. Exploring ways to mitigate risks for regulators implementing sandboxes


Conclusion:


The discussion underscored the potential of sandboxes as tools for agile regulation and innovation while acknowledging the complexities in their implementation across different contexts and sectors. The Datasphere Initiative announced plans to release a series of use cases and experiences from past sandboxes at a meeting in Paris in February, demonstrating ongoing efforts to share knowledge and improve sandbox practices globally.


Session Transcript

Bertrand de La Chapelle: Maureen. And then Maureen, if you can, as well. She hasn’t turned on her video, so I’ll just… Oh, she’s coming. My video is coming on shortly. I’m trying to seek something, but I’m here, I’m online. Okay, we don’t see them on the screen at the moment, but can you display… Can you display the Zoom feed? And for people in the room and online, hello, everyone. We’re going to start literally in one minute. I’m just trying to get the people on the screen. Two left. But can you display them so that when they come back, we have them? Because… Okay. So, as we wait for people to be displayed, because they are supposed to be online, it’s my pleasure to welcome all of you here physically and online. And actually, if you can grab the bottle of water that is there, I think it would be great if I have one. Yes, there they are. So, as I said, it’s my pleasure to welcome you both online and offline. My name is Bertrand de La Chapelle. I’m the Chief Vision Officer of the Datasphere Initiative. And today, we’re going to talk about sandboxes. So, sandboxes is a term that is emerging very strongly, and it’s a little bit mysterious for a lot of people. So, before we get into the discussion, I want to paint a very quick picture of what we’re talking about and why we think it is important to have a discussion about sandboxes. You know, the term refers to what we’re all familiar with when we have kids. It’s this place where you can play. And it’s also a place where you can experiment. You can build something, see if it works. If it doesn’t work, you restructure, you organize something different. It’s something that limits the consequences of the experiment to the sandbox itself. It’s been used in research. You can have a sandbox to experiment with some techniques. You can have a sandbox in various environments.
And what we’re going to talk about today is this tool for experimentation in an environment that is connected to technology and the public authorities and the rules that apply to technology. And when I say rules, it’s not only regulation. It can also be the guiding principles, the self-organizing principles or the self-regulatory principles that a particular sector is adopting. And so this notion of sandbox, which we’re increasingly using in the work of the Datasphere Initiative also as a verb, like to sandbox, meaning using a sandbox to experiment, is something that is increasingly considered around the world as a tool for developing agile frameworks and providing an experimentation space for things that are innovative. It’s either to foster innovation or to deal with an innovative application and see what the interfacing is, for instance, with regulation. Which means, in particular, that we can make a rough distinction. It’s a little bit, not a caricature, but a strong distinction: you can see sandboxes that are regulatory sandboxes or operational sandboxes. Without delving too much into the detail, you can have a sandbox that is mostly about what the rules are or should be. And another that is about literally experimenting, particularly with certain types of data; especially when it is sensitive data, you want to have a space that is enclosed. And actually, there’s an analogy that comes to mind as I speak, which is that we’re all familiar with the notion of an air-gapped computer. An air-gapped computer is something that is not connected to the internet. So if something is wrong on the system, if you’re testing malware or something like that, you don’t want this to go on the network. And so you create an environment that is under certain rules and protected. That’s what we’re talking about when we’re talking about sandboxes.
And the studies that we’ve conducted as part of the Datasphere Initiative have recorded, at this stage, more than 70 countries that have in one way, shape or form used sandboxes in their preparation of a law, in the implementation of a law, or in thinking about whether a particular law applies or whether a particular law should be changed. You can also have hybrid mechanisms, but what is important is that it can come at different stages of a process. And we will discuss that a little bit further in the meeting. It can come, as I said, very early on to anticipate the problems that may be caused by a particular technology. And in the context of AI, it’s particularly relevant, given the speed at which the technology changes. But it can also be about existing regulations, whether they hinder innovation or whether they are applicable to a new technique. It can also be used in the course of the development of legislation to organize the consultation and the participation of different actors. I was mentioning the regulatory sandboxes and the operational sandboxes. What we’re going to talk about today is mostly regulatory sandboxes, i.e. when there is a public authority that has a responsibility or takes the initiative to set up a particular process to engage a group of private and civil society actors in the experimentation or the exploration of the challenges around a particular technology. And what is important is that, whatever the shape or form, whether it is early in the process, whether it’s purely regulatory or operational, most sandboxes actually go through predetermined stages. So you have a very early phase that is basically dependent upon the readiness of the public authority to launch a sandbox, because it’s a new type of interaction. It’s different from the traditional, I would say command-and-control, adoption purely through the processes of the parliament and traditional consultations.
So there is a question of whether the public authorities have the preparation to organize and manage a sandbox. The second thing is that the early stage of setting up the sandbox exercise is extremely important. How do you identify the relevant stakeholders, the different actors that should be participating? What is the exact purpose of the sandbox? And if you miss the early stage, if you do not spend enough time, you are actually launching into an exercise that will not produce what you actually want it to produce. So the concept of sandbox has generated a lot of interest. It’s a practice that is growing on many different issues. And this is something that we’ve documented in the Datasphere Initiative. But it is something that is relatively new, that is still intimidating for a lot of actors because, let’s be honest, it is wonderful when it works well, but there are challenges in setting it up. If the methodology is not implemented correctly, it can incur risks for the public authorities, but also for the private actors who engage in the process. And so the methodology is important. And overall, to summarize, sandboxing is an approach, a spirit, a process, and it can be very complex. And it is very close to the multi-stakeholder approach. And I think it’s very topical to have this session here at the IGF, because it’s a way to insert a multi-stakeholder spirit or approach at the national level, not necessarily at the global level, but at the national level. And sometimes at a sub-national level, because it can also be used by municipalities, for instance, as a way to introduce multi-stakeholder consultation and participatory processes into the traditional rule-setting procedures. So, as the Datasphere Initiative, we have done a report that you can find online at the website, thedatasphere.org, a report that we produced for the UK government in the context of their presidency of the G7 in 2021.
And we also launched in Rio, Brazil, in July this year, on the occasion of the G20 presidency of Brazil, a Global Sandboxes Forum that I’d be happy to discuss with you afterwards, which has the purpose of bringing together the actors who are doing sandboxing or are intending to do sandboxes, to exchange experiences and connect. And Sophie here is putting in the chat more links to the work online. And so our goal is to socialize the concept, and that’s the reason why we have this session here. And I have a few people around the table, physically and virtually, with the purpose of exploring two things, with a big emphasis on the first one and a little bit on the second one. The first thing is to delve in more detail into the concept itself, with people who actually have the experience of doing sandboxes or studying how they work, and basically addressing the why to sandbox, when to sandbox and how to sandbox. And the second thing afterwards is that sandboxing is a trust-building exercise between actors who don’t necessarily have a lot of confidence in each other. And so one of the prerequisites for effective sandboxing is that we work on having buy-in and increasing the trust between the different actors so that they can engage. So without further ado, and sorry, I forgot to mention that apart from the Global Sandboxes Forum, we also have underway a dedicated program for Africa, which is an Africa Sandboxes Forum. And Morine can say more in particular about this. So without further ado, getting into the first thing, and don’t hesitate to use the chat to make comments, ask questions, and Sophie will be following this.
I now turn to the person on my right, who is Thiago Moraes, who is, among other things, with the Data Protection Authority of Brazil, because they are embarking on a sandbox effort on a very interesting topic, which is the applicability of the articles of the data protection regulation in Brazil to AI. And what is interesting, and I’d like you to elaborate a little bit on this, is that you also did very intense preparatory work beforehand, illustrating what I was saying earlier, that preparation before launching a sandbox is an important thing. And there’s also the question of the connection between different regulatory authorities on some of those topics. So Thiago, if you wanna shoot first.


Moraes Thiago: Yeah, and thanks Bertrand for the invitation from the Datasphere. I think it’s a very relevant opportunity to have this discussion in such an important forum for us because, as you said, sandboxes should have a collaborative approach embedded in them, and the IGF is all about this in its principles. And it’s also very curious, and maybe I’ll start from here, that we started looking into sandboxes about two years ago, actually, when we were delving more and more into the AI governance and AI regulation topic and how it connects with data protection. We started to hear this new buzzword, which was the sandbox, the regulatory sandbox. And I have to acknowledge the importance of the work of the Datasphere at that moment: you had one of the main reports on the topic. I mean, there was also, of course, nice work from the World Bank Group, and the German government also had some nice publications, but yours was the first report focused on data-oriented sandboxes, which has a lot to do with what we do, since there is a big chunk of data, which is personal data, that it is the role of DPAs to be concerned with. So, yeah, first of all, I’d like to say thanks for the nice report that you published some years ago. From that, we saw, okay, here is something that can be really, truly hands-on, which is important. I work in a unit in the DPA that deals with how to cope with innovation and how we actually make regulation for and with innovation. And, of course, not any kind of innovation, but the responsible kind, innovation that is adequate to regulations such as the data protection legislation. Because of that, we decided, okay, let’s do a thorough study. So we started with benchmark research. I think that was the first step, looking at what we should study, so we could understand more of the methodology of the sandbox. And, of course, your work was part of it, but also many of the others that I mentioned.
And also, we did some interviews, not only with other regulators in Brazil, from other economic sectors and agencies, but also with our peers. So we talked with data protection authorities from other countries, like Norway, which has a very interesting AI-focused privacy sandbox. We talked with the ICO in the UK, the CNIL, Singapore, and also Colombia. This was very interesting, because we saw that the way privacy regulators have been dealing with sandboxes is a bit different from what we see in, for example, the financial sector, where one of the main outputs of the sandbox is to really give this bigger leeway, giving more flexibility, to lower barriers for the innovative process developed there. Meanwhile, what privacy regulators are usually concerned about is how the innovators are coping with this complex legal framework, such as the data protection one. So there was a lot of guidance and support involved. And from there, we saw that partnerships were important. We did a first cooperation with the Development Bank of Latin America and the Caribbean, CAF, where one consultant, Mr. Armando Guia, worked with us. With that, we did the design of what we were aiming at in our sandbox, and we saw that we needed to understand better some provisions of our data protection law that were connected with the topic. Most specifically, we have this Article 20, which is about algorithmic decision-making, so very similar to what we have in the GDPR, Article 22. And we saw that, among several things, the topic of transparency was there. So there is another buzzword, algorithmic transparency: okay, maybe we could look at what that means in the context of our legislation. And with that, we shared it with society: we did an open consultation to collect input. This was done last year. And from then, we have been advancing to actually start launching it, because now we have a better idea of where we want to go.
The results of the consultation have not been shared yet, but we are going to share them; we just want to make sure that everything our technical team has analyzed is in consensus with our high board. So we are in this phase right now. In parallel, we are gathering some expert support, because we saw that that made a difference in several cases, like for example Norway; countries that didn’t do that had a lot of trouble: they thought the sandbox could be an interesting tool, but they couldn’t see the complexity of it. I mean, a more mature institution, let’s take the ICO in the UK, which has been running for 20 years as a DPA and has put in place some 300 staff, can have a dedicated unit for that. That’s not our reality. We are a three-year-old DPA, and our innovation unit has four people, who cannot deal only with the sandbox, and not only with AI: we have blockchain, we have PETs, we have several other technologies to follow up on, and not only sandbox work. So we decided, okay, we need experts with us. And we have now done this partnership with the UNDP, who is helping us to bring in a partner university to work with us, so we’ll have a multi-stakeholder group working on that with us. And we also see that a last part that is still missing before the launch is an awareness-raising campaign, because after all this trouble, if we cannot connect with potential participants, like going to incubators, talking with startups or just data-driven companies that would be interested in knowing more, we run the risk of making the call for projects and no participant being really interested. Because, as you said, trust is an important part of that. I’ll keep that for the next part. But this is definitely key if we want to make a successful sandbox. So thanks for the opportunity, and I pass the floor to you.


Bertrand de La Chapelle: Thank you so much, Thiago. One thing I want to emphasize is that, in many cases, people understand the notion of sandbox from one angle in particular, which is the temporary lifting of some regulatory constraints to enable the testing of new applications. This is particularly what has been done in the fintech environment, which has been one of the first testing grounds for sandboxes. What I think is important to understand is that this may sometimes be one feature of a sandbox, but it’s not a necessity. In the case of the Brazilian DPA, it is not about lifting a regulation. It’s about looking at how a particular article applies to a particular type of sector. The second thing is that you highlight very clearly the why, the when, and so on that I was asking about. The effort you’ve made to talk to other actors, and the comment you made regarding the maturity of an operator, are very important, because you used the expression of basically having a dedicated unit. I think it makes sense to consider that in the future, as sandboxing becomes a more widespread practice, every government, every regulator, and so on will need a center of competence to help people who need to implement a sandbox. And in that regard, given the different levels of maturity, the sharing of practices and experiences among all of those who have done sandboxes is a very important element, and that’s one of the reasons why we created the Global Sandboxes Forum.
Which leads me, actually, to another dimension, and I will pass the floor to Adam Zable, who is a research fellow at the GovLab and has been following those issues for quite some time, to ask you, Adam, to offer a perspective on sandboxes in certain regions of the world. We’re talking a lot about sandboxing, particularly in the European Union at the moment, because of the explicit mention of the use of sandboxes in the AI Act; you may know that the AI Act requires all governments in Europe to put in place a sandbox by 2026. But what I would like to hear from you is how you see, for instance, the differences in approaches between different regions, and particularly in Asia versus Europe or the US, given your experience in that regard. Adam, the floor is yours.


Adam Zable: Thank you. Can you hear me? Okay, fantastic. First, thanks so much for inviting me to speak. As Bertrand said, I can provide something of an international comparative perspective, because over the past year I have been working as a Datasphere Initiative Fellow on sandboxes, doing research on sandboxes for data and AI around the world. So that’s pretty much the question I can answer: the differences between regions in terms of sandboxes. I think my comments here are going to put a finer point on what both of the previous speakers, Bertrand and Thiago, have already mentioned, which is that right now there is really an incredible diversity of sandboxes around the world. They differ so much in terms of their objectives, their scope, intended impact, and the regulatory flexibility allowed for the participants. You have local sandboxes, you have international sandboxes, and you have national sandboxes, which are by far the majority in terms of data and AI sandboxes. My research has shown that there are around 19 data and AI sandboxes at the national level, 15 of which are regulatory sandboxes, which are, I think, the main focus of this session. Two of the others are operational or data sandboxes, and the remaining two are, you could say, hybrids of some kind. But I think the main difference that I have seen in the approaches to sandboxes in different regions is this question of regulatory flexibility, and really just the underlying objectives of the sandbox: what the implementing entity’s goal is in creating the sandbox. And I have seen two main camps here. The first can be considered kind of the European approach. It is built off of, I think, a number of EU member states’ data sandboxes, and some now for AI.
And again, as Bertrand has said, in the AI Act every member state is required to establish an AI sandbox within the next few years. These sandboxes, as well as others from countries that have taken up the model, really focus on regulatory certainty, risk mitigation, and compliance. The idea of many of these sandboxes is, as sandboxes do, to provide a controlled testing environment with collaboration between the innovators and the regulators. But here the focus is really mostly on identifying risks, ensuring compliance with existing regulations, and promoting the sharing of best practices. Fostering competitiveness is an element here, but they are primarily aimed at ensuring regulatory compliance: the regulatory environment is taken as a given, and they ask, how can we better enable companies to compete while complying with this regulation? In terms of the number of countries implementing sandboxes, this is, I would say, a very prominent approach. But there is another camp that is very different. There’s just an incredible variety of implementation, but the other main camp that I see can be considered, in a reductive sense, to be the East Asian approach. Singapore and South Korea specifically have done this a lot, also Japan. But taking just one, you can see it very prominently in the South Korean approach. South Korea has, for a number of years, implemented a regime of sandboxes in different fields and sectors. But the focus of the South Korean sandboxes is much more on economic growth, technological innovation, and regulatory flexibility that encourages experimentation rather than compliance. The South Korean example specifically shifts the paradigm from restrictive regulation and compliance to permitting activities in the sandbox unless they are explicitly prohibited.
So they can temporarily, in these kinds of sandboxes, restrict the application of certain regulations, whereas in the EU the regulatory environment is taken as a given. And the regime in South Korea that includes sandboxes includes other elements of agile governance as well, such as rapid regulatory confirmation, temporary permitting, and other measures that allow businesses to begin their operations after safety checks but before legislative updates. And in the same vein, in these sandboxes, when the regulator and companies come together, it’s not only to make sure that the company is complying with the law, but also to bring the company and the regulator together to help understand where the law might be changed to better accommodate the new technology that is being experimented with in the sandbox.


Bertrand de La Chapelle: If I may interject here, my understanding is that it can even go as far, in the case of Singapore, as doing something in a completely exploratory manner, like at a very early stage, just to get the different actors to have a better mutual understanding of what the challenges might be, even without the objective of developing or changing legislation. Is that indeed the case?


Adam Zable: Yes. Singapore has a few different sandboxes, but the one that I think is most relevant here is their generative AI sandbox, which is very different from other kinds of sandboxes you see elsewhere, and it can’t really be classified, I think, as a regulatory or operational sandbox necessarily. But that one brings together some of the biggest companies in the world. And I believe the goal there is that the IMDA developed some guidelines, and they bring all these companies together to work on these guidelines, to implement them and to develop them further. And they’re guidelines for trusted use of generative AI, if I’m not completely mistaken.


Bertrand de La Chapelle: Yeah. Thank you, Adam. All distinctions are always a little bit caricatural because, of course, there are applications in the EU that will be more flexible and some in Asia that will be different. But it’s interesting to look at the huge diversity, and I would make an analogy here. You know, if you look at countries as different as the US, the UK, France, and Germany, they’re all representative democracies with a parliament, but the institutional arrangements within those countries are extremely different. And I think we can consider the same thing for sandboxes. The spirit of sandboxing is an experimental, proactive, participatory, discussion-building and trust-building exercise. There are many different ways to implement it, in terms of purpose, in terms of how it’s structured, when, or the reason why it is being set up. There are common elements. But just like you can have a parliament, a Prime Minister, and a Supreme Court, yet very different organizations of the relationship between those three entities, in sandboxing it’s the same. You can have different stages, different roles for the public actors and the private actors. Sometimes you can even have a sandbox that is triggered or initiated by a private actor saying, I really would like the landscape to be explored together, because there’s a new technology and I don’t know how the regulatory framework is going to apply. So thank you for this distinction between the different regions. I would like to continue the exploration of the globe, in a certain way, by going first to Morine Amutorine, who is a research associate at the Datasphere Initiative and who is in charge in particular of what I mentioned earlier, the Forum on Sandboxes in Africa.
Morine, can you give us a little bit of a perspective on how the notion of sandboxes is being used or envisaged in Africa?


Morine Amutorine: Yes, sure. Thank you, Bertrand. I hope you all can hear me well. Perfect, thank you. So under the Africa Forum on Sandboxes for Data, one of the recent activities that we’ve been involved in, to feed into our report on an outlook on Africa when it comes to sandboxes, is mapping sandboxes across the continent. Where are they? What are they focusing on? Who’s running them? And, you know, really getting insights about what’s happening on the continent. And so we have come across a number of case studies of sandboxes, and surprisingly, most of them are in the FinTech sector: over 90% of the sandboxes on the continent are in FinTech. So the goal there is, of course, competitive advantage, really, for most of them. But maybe for my insights about Africa today, I’m going to share the case of one sandbox that is run by a government organization, the Kenya Communications Authority. Their sandbox is focusing on anything ICT. And one thing that we have identified, from engaging with the people in that sandbox, is that the need among innovators for experimentation with regulations and guidelines is high. And that’s one of the things that we have learned across all case studies: when a sandbox is set up, the applications are usually overwhelming, which means that the appetite to understand, to have regulatory clarity, is there, both in the private sector and in the public sector. And one of the things we identified for the Kenya Communications Authority, for example, is that they were interested in learning which innovations could not be covered by their existing regulatory frameworks. They realized there’s lots of innovation happening, but they were not sure their old frameworks would be able to regulate these emerging solutions. And so that was their motivation for starting the sandbox.
And when they did, one of the other things that they learned along the way was the need for multiple regulators in the same sandbox, because they realized one solution can cut across different sectors. And this is a sandbox that is actually new; they have not yet started having many participants, because one other lesson that we have been picking up is that many participants are not yet ready for the sandbox. Applications will come in and you realise that the applicants are not yet ready to participate, maybe based on the level of their innovation, where they are at, or their understanding of the sector for which they are trying to innovate. And so there are also cases where accepting people into a sandbox takes long. Why? Because the regulator has to take on the responsibility of making sure that people who are getting into the sandbox are ready to participate in it. But why this case is interesting is because, again, it’s a government entity, the Kenya Communications Authority, and it’s one of a kind, because the rest of the sandboxes in Africa are about fintech. I must mention that there’s not much documentation online that we have found about sandboxes, so we have had to look for people to interview one-on-one, which sometimes of course takes a bit of time. But for the few that we have engaged, the idea of regulatory experimentation is very welcome and is gaining ground on the continent, because even with the few stakeholders that we have engaged, there is so much interest in setting up sandboxes. But the lesson we are learning from people who are already running these sandboxes is this idea of preparation before starting a sandbox, for the goal of the sandbox to be very clear, because we’ve had people talk about this issue where a sandbox, based on a cohort, is supposed to test a
particular type of technology, but then you get people applying with all sorts of other things, probably because there was not good communication with the public about what happens and what is expected in a sandbox. So, in a nutshell, I can say that about 34 countries on the continent have sandboxes, of which most are in the financial sector, for fintech, but other sectors are taking an interest because of the dialogue that is happening, and also because of the community that we as the Datasphere Initiative have been trying to build through the Africa Forum on Sandboxes for Data project. Yes, I'll stop here for now.


Bertrand de La Chapelle: Thank you, Maureen. I think this is something that is becoming recurrent: the degree of interest that is being triggered and, in that regard, the uncertainty about how to do it and whether actors are actually ready. This is why it is so important to, one, have a preparatory process to bring people up to speed, and I want to give another example in that regard. In Lithuania, somebody participating in an online workshop we were doing mentioned that, for an upcoming sandbox they are planning, they will actually spend a few weeks bringing together the actors who are going to participate, from both private and public authorities, to do preparatory work before the sandboxing exercise itself. This is why the methodology is so important. The methodology may vary a little depending on the purpose, as was mentioned already, but in all cases the preparatory work is absolutely a key criterion for success, and this is why it is so important to share the lessons from the different experiments. If there is a domination of fintech sandboxes, not all lessons can be transposed identically, but you can still learn and draw information from other regions, or from people who have developed sandboxes on topics other than the ones a regulator is contemplating. Let me move to Katerina now, who is a senior legal expert at the University of Leuven. You have heard the different perspectives on how to do it in Brazil and the other regions, the different types, and the different reasons why people want to do sandboxing, when they should do it and how they should do it. What comes to mind when you listen to those elements, and can you share the lessons from what you have been working on?


Katerina Yordanova: Yes, first of all, thank you, Bertrand, for organizing this workshop and for inviting such inspirational speakers. I was listening very carefully to what they shared from their respective regions. What comes to mind is really something you started the workshop with: when we talk about sandboxes, we have a problem, or rather a challenge, in identifying what exactly it is we are talking about, because there are so many approaches to them and so many ways to do them. I personally don't think that is a bad thing, because I do not subscribe to a one-size-fits-all approach, not only worldwide but also inside the European Union. I work inside the EU legal framework, including the regulatory sandbox framework, and even inside the EU, where we have so many laws that are basically the same for all of us, I see many differences in the needs of member states that want to have regulatory sandboxes, not only AI sandboxes but in other sectors as well, and also in the way they need to approach them so that those sandboxes can actually be useful for them and their economies. In recent years, I have had the fortune to work mostly with member states that have zero experience with regulatory sandboxes inside the EU, which is an exciting setting, because you actually need to start from scratch and be creative, but also be wise.
Because a lot of those countries, including my country of origin, Bulgaria, do not have the kind of resources that the UK would have, and Thiago already mentioned some of those differences, which are quite obvious. At the same time, we also have the lessons we learned from GDPR, because GDPR was a monumental threshold that changed a lot of things in the regulatory landscape in the EU. One of those things was the creation of the network of competent authorities that had to monitor GDPR, which was a challenge, and this challenge became more apparent the more time passed. If you look at the report that the Fundamental Rights Agency came up with, I think last June, they actually had some not surprising but concerning remarks: member states like Bulgaria and Slovenia, which do not have such rich resources, do not really succeed in implementing GDPR in a meaningful way compared to countries like France, for example. It is going to be the same when we talk about the AI Act, because those resources are not just magically going to appear. That brings us to the sandboxes, for which the AI Act establishes an obligation: member states must have at least one working by 2026. There we have a problem, because regulatory sandboxes, the way they are described in the AI Act, are a very expensive exercise. When we look at this obligation from the perspective of a member state with zero experience with sandboxes in general, the price becomes even greater, because you need to learn how to do it: first realize why you have to do it (okay, you have the obligation), but then figure out how to do it in a way that actually attracts people to apply to the sandbox.
And then you have the methodology part, where you need to figure out the best methodology for you and your structure, for instance whether you are a federal-like state or not, because there are of course a lot of differences there. And when you have completed all this preparatory work, which can take more than a year or two depending on how many people work on it, only then can you start informing society and the other stakeholders and try to prepare them to get excited about the opportunity to work in the sandbox. So I would say that recognizing those differences, and working within the limitations inside the member states and inside countries in general, is vital. Yes, the idea of experimental regulation and legislation is super exciting, especially for legal scholars, but at the end of the day we need to figure out the maximum we want to achieve and the realistic results we can achieve. And of course, it is better to have something than nothing. So I am very pragmatic about the whole approach to sandboxes in general.


Bertrand de La Chapelle: But Katerina, do you feel that the AI Act sufficiently makes the case for the benefit of using a sandbox, rather than basically presenting it as an obligation? Because there is a sort of feeling that it basically says you have to do a sandbox, but the reason why you should be doing it is not necessarily fully elaborated. This resonates very strongly with the problems that we in this community at the IGF have when explaining why we should have a multi-stakeholder approach, because in many cases it is an injunction to use a multi-stakeholder approach with a lot of uncertainty about how you can do it. Do you think the case for why to use this is sufficiently made?


Katerina Yordanova: No, I personally don't think it is sufficiently made. I mean, if you ask the Brussels bubble: yeah, it's amazing, it's great, we want to do it, it's perfect. It is not, really. And it is not only that we cannot explain to the companies that are the potential participants in the sandboxes why they would want to participate; sometimes it is also hard to explain to the regulators why they would want to do it. We talked about those different types of sandboxes where the waiving of certain rules of the law serves as an incentive for participants. In the EU, that is not something we can do to that extent; maybe we can lift certain administrative rules here and there, and Hungary actually has a good use case in their fintech sandbox. But in my opinion that is not sufficient to inspire someone to dedicate, let's say, six months of their time to work in the sandbox, especially if we are talking about SMEs with limited resources. So one of the things I personally feel strongly about is that if in Europe we do not have the ability to offer this waiver of certain rules, because we get these common laws at the European level, maybe we can offer something else. And that is where data comes into play. Again, I will give the example from Bulgaria, where we are currently trying a rather unusual bottom-up approach to building a sandbox. One of the things we offer as an incentive is our data sets, which are privacy-preserving, so you can be sure the rules of GDPR are sufficiently implemented in these data sets. We also offer data sets that do not necessarily have anything to do with personal data, because we have a really vast amount of non-personal data that can be useful for innovators when they are developing their products.


Bertrand de La Chapelle: Thanks. Something came to mind as you were speaking about lifting rules. If you look at a structure like the European Union, where you have different instruments, directives and regulations, lifting rules is harder when the instrument is a directive, because it would require coordination between the different member states. When it is a regulation, it is probably possible to decide to lift something, but I am not sure the competence of the Commission is sufficient to take the unilateral decision to lift a particular provision of something adopted by the whole community of 27 nations. I don't want to belabor this, but thank you for the remark regarding the difference between putting something in a text and having to actually implement and develop it when there is no past experience. I want to pause here before we get to the second part, because what you mentioned, Katerina, on the incentives is a very good segue to the second topic I wanted to raise: the trust question, why a sandbox is beneficial, and also why different actors may or may not want to participate in one. But before that, let me open the floor, and the room, to anybody who would like to ask a question, including online; Sophie, let me know if you see any comments in the chat. If you want to ask a question, please introduce yourself beforehand. Anybody? Yeah, looks like it's working.


Farouk Yusuf Yabo: Okay, thank you very much for a very interesting presentation. My name is Farouk Yusuf Yabo. I am the permanent secretary at the Federal Ministry of Communications, Innovation and Digital Economy in Nigeria. I have two questions. One is to find out whether there is a standard methodology, if you like, or framework that is generally used for running a sandbox. Now, I have heard of two versions of the sandbox, the regulatory and the operational, and I also want to check which one the sandbox we are running falls into, because it appears to me to be somewhat in between. Our goal was to create an opportunity for individuals and small startups that do not have the resources to navigate the regulatory space, or even to pay for certain government-owned rights, for example access to frequency. So we wanted to create a free frequency band that would allow technologists to come in and run frequency-related technology projects. It is meant to be a national thing, because we noticed there are so many people who are into different things but who may not necessarily be able to follow the process. (Bertrand de La Chapelle: Is it mostly about access to free spectrum, for instance?) Yes, access to free spectrum, but not free in the usual case. We issue spectrum for non-commercial uses, but you pay, and many people may not be able to pay or follow the process. So we decided to create access and run a competition of sorts. (Bertrand de La Chapelle: Is it, for instance, applicable to rural access?) Yeah, it could be for anything. It could be rural access.


Bertrand de La Chapelle: The reason I ask is that there were references to some sandboxes being developed or explored, particularly in Brazil, regarding lifting some constraints for local communities and municipalities, providing community services where the operators are not.


Farouk Yusuf Yabo: Yeah, so that’s part of it, but we wanted to make it very wide open. Community, it could be materials, it could be for some podcasts, it could be for anything that somebody wants to use. It could be for metering, it could be for anything that young talents can come in and demonstrate. So I wanted to know where does this fall? Is it a regulatory, is it an operational one? The goal is to allow access for people who ordinarily cannot pay or cannot get access based on the constraints of payment and other procedures. Thank you very much.


Bertrand de La Chapelle: Thanks for the question. A very quick answer on the two questions. One: as always, nuance is the name of the game. Whenever you make a dichotomy between a regulatory and an operational sandbox, remember there is also the hybrid notion, and most often a sandbox sits somewhere on a spectrum. As Katerina's comment illustrated, even if you do something regulatory, you may have an incentive, such as access to a particular data set you would not have access to in normal conditions. The second thing: to the question of whether there is a standard methodology or framework, I can respond with two letters, three in French: no. There is no standard established methodology. However, nuance is the key word again: this is precisely the kind of work we are trying to do by gathering the experiences. What are the lessons you can draw? As I mentioned at the beginning, what is emerging clearly is the stages. You have the preparatory stage, which can be the responsibility of the initiator of the sandbox, the actor that wants to do a sandbox. Then you have the actual setting up of the procedure, with a certain number of questions: What is the exact purpose? What is the range of stakeholders that have to be engaged? What is the type of data that has to be accessed, if any? What is the problem that has to be solved? And who is going to be in charge? As Thiago was mentioning, sometimes multiple regulators may be involved. Who takes the lead? Is one regulator organizing it, or is there a sort of third-party facilitator, inside or outside government, who comes and organizes the discussions? Then you get to the actual operation, which can last for a certain period of time. But the preparatory phase is fundamental.
And one thing people forget, besides not paying enough attention to the early phase, is that the exit of a sandbox is important. How do you implement the solutions developed on the occasion of the sandbox on an ongoing basis afterwards, particularly if the sandbox has involved certain actors in the private sector and not others? How do you ensure there is no distortion of the competitive environment? So, formalizing the methodology is clearly one of the objectives of the Global Sandboxes Forum that we are running, the first phase being for people to listen to what is happening in the different countries. Any other comments? Yes. Go ahead.
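The staged process Bertrand describes (preparation, setup, operation, exit) can be sketched as a minimal lifecycle model. The class and field names below are illustrative assumptions for this summary, not an established sandbox framework:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    PREPARATION = auto()   # bring actors up to speed, clarify the goal
    SETUP = auto()         # purpose, stakeholder range, data access, lead regulator
    OPERATION = auto()     # run the experiment for a bounded period
    EXIT = auto()          # roll out lessons without distorting competition

@dataclass
class Sandbox:
    purpose: str
    lead_regulator: str
    stage: Stage = Stage.PREPARATION

    def advance(self) -> Stage:
        """Move to the next lifecycle stage; EXIT is terminal."""
        order = list(Stage)
        i = order.index(self.stage)
        if i < len(order) - 1:
            self.stage = order[i + 1]
        return self.stage

# Example: a hypothetical spectrum-access sandbox
sb = Sandbox(purpose="frequency-related pilot projects",
             lead_regulator="communications authority")
sb.advance()  # PREPARATION -> SETUP
```

The point of modelling EXIT as an explicit terminal stage is the one made above: a sandbox is not finished when the experiment ends, but when its lessons are carried into ongoing practice.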


Luis Fernando Castro: Quick question. Thanks, Bertrand. I'd like to ask any of you. (Bertrand de La Chapelle: Can you tell us who you are?) Sorry, I'm Luis Fernando Castro from Brazil, former member of CGI.br, the Brazilian Internet Steering Committee. I would like to ask all of you whether you can share any concrete experience that has proved successful in this matter of sandboxes.


Moraes Thiago: Yeah, I can go ahead; I'm from Brazil. As I said, in the case of the DPA, we are still finishing the design phase, so of course it is too early to talk about the success of implementation, although I could say that the design itself has been quite mature, so we have some expectations. I think we will start working from next year. But even now in Brazil we already have some very nice experience from the financial institutions. Something very interesting happened in Brazil: the Central Bank formed a partnership with the securities and stock markets authority and also the insurance authority. Basically, the three of them have three independent but also interdependent sandboxes, in the sense that any participant can join any of the three, and if whatever they are doing has synergy with the other markets, they actually go and work together in the initiative. So it is a model for this kind of joint cooperation. Of course, it is still all within the financial sector, but it is something to look at for the challenges that come when sectors overlap. You can also find nice experience from other data protection authorities: you can look at the ICO, and at Norway. I can share with you a benchmark study we did, in Portuguese, that has some interesting use cases. There are also the reports from the World Bank Group that I mentioned, focused on fintech sandboxes. So definitely there are a lot of interesting use cases, and of course there is always room to grow in maturity, but yes, there are.


Bertrand de La Chapelle: Before moving to the next topic, unless there is a question from the online participants, Sophie, one thing I want to highlight is that we will have a dedicated meeting in Paris in February, on the occasion of the summit on AI that the French government is hosting at the end of February. On that occasion, one of the things we will release, which we are finalizing at the moment, is precisely a series of use cases and comments about past experiences. Because, as Thiago was saying, there is a lot about fintech, but for the fields we are talking about there are some different elements, and it is good to be able to document them; the paper will contain a certain number of these elements. I want to shift now, and please, if you have specific examples you want to share in your comments afterwards, don't hesitate. What I want to finish with, and explore a little, is what I mentioned at the beginning: a sandbox is an exercise to bring people together and make them address policy issues in a different way. It takes the problems that different actors have with each other, such as governments considering that the private sector is not doing what it should, or the private sector considering that governments are not regulating the way they should, and turns them into a problem that people address in common, by saying: there is a new technology; how does the existing regulatory framework apply? Should we change it? Should we improve it? Should we develop a new one? This is important because the implementation of new agile regulations needs to be iterative; it needs to adapt to the evolution of the technology itself, and there is no better way than having a space for the different actors to talk to one another. All this is wonderful, and there is a real interest in sandboxes.
There’s an emerging methodology that’s being developed on the basis of experiments. There are benefits, but as Thiago and others were saying, it can be costly. It requires an awareness and a preparation to run it correctly. It can take long. The outcome is not certain. If you embark on a legislative process, you basically know what are the steps, and especially if there’s a majority in your parliament, you know how it’s going to go. There’s a bit of negotiation, but you know how the voting is going to go in the end. When you embark in any type of multi-stakeholder process as a government entity, it is less predictable. There’s an irony. A tool that is intended to produce legal predictability is a process that cannot guarantee that the thing will be successful. This is why the methodology is so important. But that is on the governmental side, and there’s also, because we have to be transparent, there’s also the personal challenge for the people who are the regulators, because there’s a risk. If this doesn’t function properly, are you going to be blamed for not having fulfilled the objective? There may be reasons for government authorities to hesitate to launch a sandbox. This is why I was asking Katharina, making the argument on the benefits needs to be strengthened, and the methodology as well. But now I want to go to the other side. Are there disincentives to companies to get into a sandbox? Is there a fear that what you’re going to explain to a public authority is going to be taken against you, because you have revealed how your system is going to operate? So I want to throw the question on the floor and maybe start in the reverse order, starting with Katharina. What are the disincentives?


Katerina Yordanova: And I liked your mentioning access to data as an incentive. Well, the disincentives really depend on where in the world we are looking, because, again, coming from Eastern Europe, I would say that companies in general are very, very unwilling to talk with the regulator and explain how their system works, precisely because they think that at some point in the future what they shared could be used against them. So there is a lot of distrust, which is, I guess, in some way historical, but it is still there. Another concern I have encountered with companies I talked to about sandboxes was their IP rights: patents, trade secrets, these kinds of things, especially when their product is at an earlier stage of development. Some sandboxes offer participants the ability to communicate with each other; if you look at the digital sandbox in the UK, that is the case. In that particular instance, they are very worried that somewhere in this process their rights may be infringed in some way, and they are looking for more guarantees from the regulators.


Bertrand de La Chapelle: When I listened to what you were saying, I thought of the fact that, for instance, in the procedures for mergers and acquisitions, you have the notion of data rooms where, if you are the acquirer, you can access the data about the financials of a company and so on. I see a sort of analogy when you talk about IP and so on, but, to come to your question, there needs to be a framework that establishes clearly what can and cannot be used, which is not easy, I suppose, because how do you take into account an idea that emerged from seeing what somebody has shown? This is why exploring how an existing piece of legislation applies to a new technology is sometimes a little different from really revealing everything about the new technology and having to test it. Can you elaborate, Katerina, just briefly, on this notion of incentivizing actors by providing access to a data set? I think it is an important element.


Katerina Yordanova: Well, this was actually something we were inspired to do by the digital sandbox, which is a very interesting type of sandbox that the FCA developed. Basically, in a very limited set of use cases, they asked the participants that had already been selected whether they needed specific data that could be provided for them. That was done either by connecting them with someone acting a bit like a data intermediary, or by curating a data set, using either real data or synthetic data. So that was the FCA's idea. We took that and complemented it, first of all, with what we have in terms of specific data sets at the institute we are working with to create the sandbox: mostly data related to the urban environment, because that is a bit of a specialty of the institute. We also assigned curators for data sets containing personal data, who can basically run a compliance exercise, if you wish, on the data set. So they helped the companies use data sets that were already in compliance with GDPR. The companies could then be a bit more certain they were compliant, not with the AI Act, because of course it is not yet enforceable and applicable, but with GDPR, which is currently, in my opinion, the biggest problem for SMEs, at least in Bulgaria; they still have not figured out how to apply it correctly. But yes, that helped quite a lot as an incentive.
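One very simple way to curate a privacy-preserving data set of the kind Katerina describes is to release synthetic rows sampled from each column's observed values independently, so that no real record is ever reproduced whole. This is only an illustrative toy, with made-up field names; real programmes such as the FCA's digital sandbox use far more sophisticated synthetic-data pipelines that preserve joint statistics under formal privacy guarantees:

```python
import random

def synthesize(records, n, seed=0):
    """Generate n synthetic rows by sampling each field independently
    from the values observed in the real records, breaking the link
    between fields that together could re-identify an individual."""
    rng = random.Random(seed)
    fields = list(records[0])
    columns = {f: [r[f] for r in records] for f in fields}
    return [{f: rng.choice(columns[f]) for f in fields} for _ in range(n)]

# Hypothetical records, purely for illustration
real = [
    {"age_band": "25-34", "district": "Sofia",   "monthly_spend": 120},
    {"age_band": "35-44", "district": "Plovdiv", "monthly_spend": 80},
    {"age_band": "25-34", "district": "Varna",   "monthly_spend": 200},
]
synthetic = synthesize(real, n=5)
```

Note the trade-off: independent per-column sampling destroys cross-field correlations, which is exactly why curators are needed to judge when a simple approach like this is good enough and when a richer generative model is required.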


Bertrand de La Chapelle: Thank you very much. Maureen, then Adam, and then Thiago: any thoughts on this question of how to overcome these disincentives?


Morine Amutorine: So I can go first, and share something I have learned from EcoBank's pan-African sandbox, which is essentially EcoBank providing an API to solution developers.


Bertrand de La Chapelle: Can you put your video on or is the bandwidth not good enough?


Morine Amutorine: Yeah, my bandwidth is not good enough actually.


Bertrand de La Chapelle: We prefer to be able to hear you. Go ahead.


Morine Amutorine: Sorry, yeah. Yes, I was saying that the EcoBank sandbox is literally EcoBank providing APIs to developers so they can build on top of what the bank already has. The incentive for the developers has been the ability to access a small percentage of data about the bank's clients, because Africa has had financial inclusion as one of the biggest goals for the financial sector. Having access to some of the bank's information, not in its entirety, is valuable; from what they explained about the process developers go through to access the APIs, they have clearly refined their systems well over time. So that alone is an incentive: developers are able to use the API, build on top of it, and access some of the data. Beyond FinTech, where the clients, the solution providers, and the bank all benefit, I have noticed that for regulatory sandboxes that are not necessarily about FinTech, regulatory clarity alone has been a good enough reason for people to want to participate, at least in the few cases I have looked at. But I have also come across some opinion papers from sandbox participants who thought the regulators were probably not well equipped to run the sandbox. We know that sometimes comes from the fact that regulators' backgrounds may not put them in the best position to understand everything about new and emerging technologies. So sometimes you will find cases where things take long, but that is probably because the regulators are trying to do enough due diligence on the technology to be sure they understand what they are regulating. Still, the idea of innovators simply getting regulatory clarity has been a good enough incentive, at least in the cases we have looked at so far.
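Developer-facing bank sandboxes of the kind Morine describes typically issue test API keys scoped to masked, partial client data. The host, path, and key format below are purely hypothetical, sketched only to show the shape of such an integration, not EcoBank's actual API:

```python
from urllib.request import Request

# Hypothetical sandbox host; real bank sandboxes publish their own base URLs
SANDBOX_BASE = "https://sandbox.example-bank.test/v1"

def build_request(path: str, api_key: str) -> Request:
    """Build an authenticated request against the hypothetical sandbox.

    A sandbox-scoped key typically unlocks only masked, partial client
    data, which is the incentive model described in the discussion."""
    return Request(
        SANDBOX_BASE + path,
        headers={
            "Authorization": "Bearer " + api_key,
            "Accept": "application/json",
        },
    )

req = build_request("/accounts?masked=true", api_key="sbx_test_123")
```

The design choice worth noting is the separation of the key's scope from the code: the same client code runs against production once the developer graduates from the sandbox, with only the base URL and key changing.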


Bertrand de La Chapelle: I think one of the challenges is the gap in understanding between public authorities and tech developers. Many public policy actors are confronted with a rapidly evolving technology and have difficulty keeping up with the changes, partly because some of what is being developed is not yet public. So you end up thinking in terms of regulating what is visible, but not regulating for what is going to come. And vice versa: when private actors are developing something, they do not necessarily know the full applicable legal landscape if it is a sector they were not in before. I have had discussions with people developing AI applications or foundation models who were talking about how these would be used for medical applications, and it was striking to realize that they did not necessarily know the entire regulatory framework that already applies to any medical device using expert systems and so on; they were thinking they were starting almost from a blank page. So bridging this gap between public actors and private-sector technologists is one of the objectives of putting an appropriate sandbox in place. Adam, any feedback? And since you raised the question, I would actually like you to chime in on the incentives and disincentives as well. Adam, your turn.


Adam Zable: Yeah, so I think there are a number of, call them disincentives or challenges, for any company participating in any kind of collaboration with a regulating entity, and there may be a somewhat standard set of issues that always get repeated. Part of the problem, and it is not just with the entities participating in the sandbox but also with broader society, is that there is a lot of fragmentation in the field: as I mentioned, a huge amount of diversity among different sandboxes, which makes practices, the ability of regulators to evaluate what is happening in a sandbox, and trust all difficult to build and standardize. The question that came up about some kind of standard guideline is made very difficult by this fragmentation we see around the world, even though the challenges and disincentives are likely very similar across sandboxes everywhere. Perhaps there could be some kind of framework that quite simply and straightforwardly addresses some of these challenges, introduced by the regulator at an early stage of interaction with the participating companies; but such a framework does not exist right now. What regulators can do to build trust and alleviate some of these disincentives is transparency, and building engagement not just with the entities participating in the sandbox but with the wider public: spreading the word about the sandbox's existence, what a sandbox is, and what it is trying to do. Because right now, knowledge about sandboxes is very low. Even though a lot of governments have sandboxes, and even more are working to build them, most people have no idea what a sandbox for data is; even for fintech sandboxes, most people do not know what they are.
So if I could take one example that has been brought up before, by Thiago I think, it is Norway. Norway's data protection authority runs a data and AI sandbox, and they do all sorts of things to engage the public and stakeholders and build transparency. My favorite is that they have a podcast about the sandbox. I don't know everything they talk about, but they not only post updates on the website and run a newsletter, they also have a sandbox podcast, which I think is quite novel and interesting. They also organize workshops and participate in international conferences. They really make more of an effort than any other government I have seen to get the word out about the existence of these sandboxes. In most cases around the world, the way a sandbox's existence is advertised is by posting an invitation to apply on the regulator's webpage; there is no real effort to let companies know the sandbox exists and is taking applications. So as a regulator, you have to try to build trust through transparency and by getting the word out in proactive ways. Another thing, just briefly, that the Norwegian DPA did was hire a consulting firm to conduct an external evaluation of the sandbox's effectiveness. That was really indicative of the approach the Norwegian DPA is taking: they produced something like a 50-page report on how the sandbox is going, with recommendations for how to improve it. That kind of thing is very important at this early stage of sandbox development around the world. And finally, I will mention this because Katerina brought up the lack of IP protections as a disincentive.
One thing that, again, the sandbox in Norway does is tell companies upfront that participants retain ownership of intellectual property brought into sandbox collaborations. I think more regulators could do that kind of thing: be upfront about the potential disincentives and about how the regulator is addressing them within the design of the sandbox.


Bertrand de La Chapelle: Thank you. Thank you, Adam. Without belaboring the point, there were other comments made in previous discussions regarding the history of interactions between a particular regulator and a particular type of company, and whether there is bad blood that existed beforehand; the fact that some of the smaller companies may sometimes be more inclined to engage but lack the bandwidth to do it, while the large companies probably tend to rely more on traditional lobbying mechanisms. So this is the landscape, and I just wanted to make sure that, as we advocate and support the notion that the sandbox approach is a really important tool, we don't belittle the challenges in making it work, just as we don't belittle the challenges of making it a multi-stakeholder exercise. So we have the last five minutes here. I will go in this direction: a very quick contribution on the discussions we had, and then you can close. I'm sorry, you need the… Yeah, it's good, it's good.


Audience: Thank you for the opportunity. You see, different jurisdictions will have different priorities; I think that's very important to note. Now, having said that, regulators are seen as tax collectors by most people. So in places like Africa, where people tend to have little income in terms of their ability to pay large charges, one key disadvantage would be exposing a small company to the fact that it has to pay once it has committed itself, with no exit route. I think that's one. So what we did was try to ensure no payments: you don't have to pay anything. You are allowed to come in and use the same services, which would otherwise have required a lot of money and processes. So one of the key disincentives for sandboxes, for getting stakeholders engaged, is that participants are made to take on responsibilities that in some cases are difficult for them to handle. So one of the key issues is for us to make sure we break down the barriers to entry, especially where we're dealing with low-income categories of entrants who have the ability to develop concept ideas but don't have the resources to see them through. So I think that's the point.


Bertrand de La Chapelle: Thank you. Thank you. It's very interesting, because we haven't discussed much the situation where the regulator is actually distributing a shared common resource and is therefore collecting revenues from its availability. We were talking about lifting a regulatory obligation; one such obligation may be paying for access to that resource. That's an interesting use case. Thiago, you have, basically, the final word. Go ahead.


Moraes Thiago: Okay, so, well, can you? Is it working? Yeah, it's working. It's not working very well. No, the battery is off. Do you have another one? Can you give me this one? Okay. So, just to conclude then, I think I should also highlight that beyond everything that has been said, there are also positive externalities that we should be aware of. For example, simply being part of a collaborative approach that involves the regulator: if a company uses that in a good way, it can actually share with potential consumers, the affected data subjects, how this is bringing more trust to whatever they are innovating. So I think this is connected to the idea of trust. And with that, since we don't have more time, I'll finish here, but I'd like to thank the Datasphere Initiative for this amazing discussion with so many other experts, and I hope to continue to be in this environment as sandboxes mature further.


Bertrand de La Chapelle: This will definitely be the case. I want to finish by thanking you all, panelists and attendees, for this discussion. I want to highlight that Sophia has shared in the chat a number of links to resources; please go to the website, thedatasphere.org, for more information. And the final element is that this is a new avenue: a way for the different actors to explore what the multi-stakeholder approach can be at the national and even sub-regional levels. I really encourage you to think about how this can be developed and integrated into your respective processes. The Datasphere Initiative team is there to give you information in the context of the Global Sandboxes Forum, and also to assist and support whatever effort you want to engage in on a topic of interest. Thank you very much. Enjoy the rest of the IGF. Thank you.


M

MODERATOR

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Sandboxes provide controlled environments for experimentation with new technologies and regulations

Explanation

Sandboxes are spaces where new technologies and regulations can be tested in a controlled setting. They allow for experimentation without widespread consequences.


Evidence

The term refers to what we’re all familiar with when we have kids. It’s this place where you can play. And it’s also a place where you can experiment.


Major Discussion Point

Purpose and Types of Sandboxes


Sandboxes can be used to anticipate problems with new technologies or test existing regulations

Explanation

Sandboxes serve multiple purposes in relation to technology and regulation. They can be used proactively to identify potential issues with new technologies, or to evaluate how existing regulations apply to innovations.


Evidence

It can come, as I said, very early on to anticipate the problems that may be caused by a particular technology. And in the context of AI, it’s particularly relevant, even the speed at which the technology changes.


Major Discussion Point

Purpose and Types of Sandboxes


Thorough preparation and stakeholder engagement is crucial before launching a sandbox

Explanation

The success of a sandbox depends heavily on the preparatory work done before its launch. This includes identifying relevant stakeholders and clearly defining the sandbox’s purpose.


Evidence

If you miss the early stage, if you do not spend enough time, you actually are launching into an exercise that will not produce what you’re actually wanting to produce.


Major Discussion Point

Methodology and Best Practices for Sandboxes


Agreed with

Moraes Thiago


Katerina Yordanova


Agreed on

Importance of thorough preparation before launching a sandbox


There is no standard methodology, but common elements are emerging like defined stages

Explanation

While there isn’t a universally accepted methodology for sandboxes, certain common elements are becoming apparent. These include defined stages in the sandbox process.


Evidence

There is no standard established methodology. However, nuance is the key word again. This is precisely the kind of work that we’re trying to do by gathering the experiences. What are the lessons you can draw?


Major Discussion Point

Methodology and Best Practices for Sandboxes


Agreed with

Adam Zable


Moraes Thiago


Morine Amutorine


Agreed on

Diversity in sandbox approaches across regions


A

Adam Zable

Speech speed

129 words per minute

Speech length

1689 words

Speech time

784 seconds

Regulatory sandboxes focus on compliance, while operational sandboxes test new applications

Explanation

Adam Zable distinguishes between two main types of sandboxes. Regulatory sandboxes are primarily concerned with ensuring compliance with existing regulations, while operational sandboxes are used to test new applications or technologies.


Major Discussion Point

Purpose and Types of Sandboxes


The EU approach focuses on regulatory compliance and risk mitigation

Explanation

Adam Zable explains that the European Union’s approach to sandboxes emphasizes ensuring compliance with existing regulations and identifying potential risks. This approach prioritizes regulatory certainty over flexibility.


Evidence

These sandboxes, as well as others from other countries that have taken the model, they really focus on regulatory certainty, risk mitigation, and compliance.


Major Discussion Point

Regional Approaches to Sandboxes


Agreed with

MODERATOR


Moraes Thiago


Morine Amutorine


Agreed on

Diversity in sandbox approaches across regions


Differed with

Moraes Thiago


Differed on

Approach to regulatory flexibility in sandboxes


East Asian countries like South Korea emphasize economic growth and regulatory flexibility

Explanation

In contrast to the EU approach, East Asian countries, particularly South Korea, focus on using sandboxes to promote economic growth and innovation. Their approach allows for more regulatory flexibility to encourage experimentation.


Evidence

South Korea has, for a number of years, implemented a regime of sandboxes in different fields and different sectors. But the focus of the South Korean sandboxes is much more on economic growth, technological innovation, and regulatory flexibility that encourages experimentation rather than compliance.


Major Discussion Point

Regional Approaches to Sandboxes


Agreed with

MODERATOR


Moraes Thiago


Morine Amutorine


Agreed on

Diversity in sandbox approaches across regions


Differed with

Moraes Thiago


Differed on

Approach to regulatory flexibility in sandboxes


Transparency and public engagement are important for building trust in sandboxes

Explanation

Adam Zable emphasizes the importance of transparency and public engagement in building trust for sandbox initiatives. He suggests that regulators should make efforts to inform the public and stakeholders about the existence and purpose of sandboxes.


Evidence

Norway’s data protection authority runs a data and AI sandbox, and they do all sorts of things to engage the public and stakeholders and build transparency. My favorite thing that they do is they have a podcast about the sandbox.


Major Discussion Point

Methodology and Best Practices for Sandboxes


External evaluation of sandbox effectiveness can provide valuable insights

Explanation

Adam Zable highlights the value of external evaluations in assessing the effectiveness of sandboxes. Such evaluations can provide objective insights and recommendations for improvement.


Evidence

Another thing, just briefly, that the Norwegian DPA did was they hired a consulting firm to conduct an external evaluation of the sandbox’s effectiveness.


Major Discussion Point

Methodology and Best Practices for Sandboxes


M

Moraes Thiago

Speech speed

144 words per minute

Speech length

1524 words

Speech time

634 seconds

Brazil’s DPA is using a sandbox to test how data protection regulations apply to AI

Explanation

Moraes Thiago explains that Brazil’s Data Protection Authority is implementing a sandbox to explore how AI interacts with data protection regulations. This sandbox aims to provide clarity on the application of specific articles of Brazil’s data protection law to AI technologies.


Evidence

We saw that among several things, the topic of transparency was there. So this is another buzzword, algorithmic transparency; okay, maybe we could look at what that means in the sense of our regulation.


Major Discussion Point

Purpose and Types of Sandboxes


Agreed with

MODERATOR


Katerina Yordanova


Agreed on

Importance of thorough preparation before launching a sandbox


Brazil is taking a collaborative approach involving multiple regulators

Explanation

Thiago describes Brazil’s approach to sandboxes as collaborative, involving multiple regulatory bodies. This approach allows for coordination between different sectors and regulatory domains.


Evidence

Like in Brazil, there was something very interesting: the central bank has done a partnership together with the stock markets authority and also the insurance supervisor. So, basically, the three of them have three independent but also interdependent sandboxes, in the sense that any participant can join any of these three.


Major Discussion Point

Regional Approaches to Sandboxes


Differed with

Adam Zable


Differed on

Approach to regulatory flexibility in sandboxes


F

Farouk Yusuf Yabo

Speech speed

132 words per minute

Speech length

374 words

Speech time

169 seconds

Nigeria is exploring sandboxes to provide access to spectrum for innovators and startups

Explanation

Farouk Yusuf Yabo explains that Nigeria is considering using sandboxes to provide access to spectrum for innovators and startups. This approach aims to lower barriers to entry for those who may not have the resources to navigate traditional regulatory processes.


Evidence

So we wanted to create a free frequency that will allow technologists to come in and run frequency-related technology projects.


Major Discussion Point

Purpose and Types of Sandboxes


K

Katerina Yordanova

Speech speed

151 words per minute

Speech length

1609 words

Speech time

638 seconds

Lack of resources and expertise can be a barrier for regulators implementing sandboxes

Explanation

Katerina Yordanova points out that many regulators, especially in smaller or less developed countries, may lack the resources and expertise to effectively implement sandboxes. This can be a significant barrier to their adoption and success.


Evidence

We are a three-year-old DPA; our innovation unit has four people who cannot deal only with the sandbox, and not only with AI. We have blockchain, we have PETs, we have several other technologies to follow up on, and not only sandbox work.


Major Discussion Point

Challenges and Incentives for Sandbox Participation


Agreed with

MODERATOR


Moraes Thiago


Agreed on

Importance of thorough preparation before launching a sandbox


Companies may be hesitant to share information with regulators due to distrust

Explanation

Katerina Yordanova highlights that companies, especially in certain regions, may be reluctant to participate in sandboxes due to distrust of regulators. There are concerns that information shared during the sandbox process could be used against them in the future.


Evidence

I would say that companies in general are very, very unwilling to talk with the regulator and explain how their system works, precisely because they think that at some point in the future, what they shared could be used against them.


Major Discussion Point

Challenges and Incentives for Sandbox Participation


Access to data sets can be an incentive for companies to participate

Explanation

Katerina Yordanova suggests that providing access to valuable data sets can be an effective incentive for companies to participate in sandboxes. This can be particularly attractive for companies working on data-driven technologies or AI.


Evidence

So we took that and complemented it with, first of all, with what we have in terms of specific data sets in this institute that we are working together to create the sandbox.


Major Discussion Point

Challenges and Incentives for Sandbox Participation


Differed with

Morine Amutorine


Audience


Differed on

Incentives for sandbox participation


Addressing IP protection concerns upfront can incentivize participation

Explanation

Katerina Yordanova points out that addressing intellectual property protection concerns at the outset can encourage more companies to participate in sandboxes. Clear guidelines on IP rights can alleviate fears of idea theft or infringement.


Evidence

One thing that, again, in Norway, the sandbox does is they tell the companies upfront that participants retain ownership of intellectual property brought into sandbox collaborations.


Major Discussion Point

Methodology and Best Practices for Sandboxes


Eastern European companies tend to be more distrustful of engaging with regulators

Explanation

Katerina Yordanova notes that companies in Eastern Europe often have a higher level of distrust towards regulators. This historical distrust can be a significant barrier to participation in sandbox initiatives.


Evidence

So there’s a lot of distrust, which is I guess in some way historical, but it’s still there.


Major Discussion Point

Regional Approaches to Sandboxes


M

Morine Amutorine

Speech speed

145 words per minute

Speech length

1167 words

Speech time

482 seconds

Regulatory clarity is a key incentive for innovators to join sandboxes

Explanation

Morine Amutorine observes that the prospect of gaining regulatory clarity is a significant motivator for innovators to participate in sandboxes. This is particularly true for emerging technologies where the regulatory landscape may be uncertain.


Evidence

But I also know that I’ve come across some opinion papers about participants in sandboxes who have thought that probably the regulators were not well-equipped to… run the sandbox.


Major Discussion Point

Challenges and Incentives for Sandbox Participation


Differed with

Katerina Yordanova


Audience


Differed on

Incentives for sandbox participation


Africa has seen mostly fintech sandboxes so far, with growing interest in other sectors

Explanation

Morine Amutorine notes that in Africa, the majority of existing sandboxes are in the fintech sector. However, there is growing interest in applying the sandbox model to other sectors as well.


Evidence

Over 90% of the sandboxes on the continent are in the FinTech sector.


Major Discussion Point

Regional Approaches to Sandboxes


Agreed with

MODERATOR


Adam Zable


Moraes Thiago


Agreed on

Diversity in sandbox approaches across regions


A

Audience

Speech speed

134 words per minute

Speech length

237 words

Speech time

105 seconds

Removing fees can lower barriers to entry, especially in low-income areas

Explanation

An audience member suggests that removing fees associated with sandbox participation can make them more accessible, particularly in low-income areas. This can encourage participation from smaller companies or individual innovators who may not have the resources to pay significant fees.


Evidence

So what we did was try to ensure no payments: you don’t have to pay anything. You are allowed to come in and use the same services, which would otherwise have required a lot of money and processes.


Major Discussion Point

Challenges and Incentives for Sandbox Participation


Differed with

Katerina Yordanova


Morine Amutorine


Differed on

Incentives for sandbox participation


Agreements

Agreement Points

Importance of thorough preparation before launching a sandbox

speakers

MODERATOR


Moraes Thiago


Katerina Yordanova


arguments

Thorough preparation and stakeholder engagement is crucial before launching a sandbox


Brazil’s DPA is using a sandbox to test how data protection regulations apply to AI


Lack of resources and expertise can be a barrier for regulators implementing sandboxes


summary

Speakers agreed that careful planning and preparation are essential for successful sandbox implementation, including stakeholder engagement and resource allocation.


Diversity in sandbox approaches across regions

speakers

MODERATOR


Adam Zable


Moraes Thiago


Morine Amutorine


arguments

There is no standard methodology, but common elements are emerging like defined stages


The EU approach focuses on regulatory compliance and risk mitigation


East Asian countries like South Korea emphasize economic growth and regulatory flexibility


Africa has seen mostly fintech sandboxes so far, with growing interest in other sectors


summary

Speakers highlighted the variety of sandbox approaches across different regions, each tailored to specific regulatory and economic contexts.


Similar Viewpoints

Both speakers emphasized the importance of trust and clarity in encouraging participation in sandboxes, noting that companies may be hesitant to engage without assurances of regulatory certainty and protection.

speakers

Katerina Yordanova


Morine Amutorine


arguments

Companies may be hesitant to share information with regulators due to distrust


Regulatory clarity is a key incentive for innovators to join sandboxes


Unexpected Consensus

Importance of incentives for sandbox participation

speakers

Katerina Yordanova


Audience


arguments

Access to data sets can be an incentive for companies to participate


Removing fees can lower barriers to entry, especially in low-income areas


explanation

Despite coming from different perspectives, both speakers highlighted the importance of providing tangible incentives to encourage participation in sandboxes, particularly for smaller or resource-constrained entities.


Overall Assessment

Summary

The main areas of agreement included the importance of thorough preparation, the diversity of sandbox approaches across regions, the need for trust-building and regulatory clarity, and the significance of providing incentives for participation.


Consensus level

There was a moderate level of consensus among speakers on the fundamental principles and challenges of sandboxes. This consensus suggests a growing understanding of sandbox best practices, while also highlighting the need for flexibility in implementation across different contexts and regions.


Differences

Different Viewpoints

Approach to regulatory flexibility in sandboxes

speakers

Adam Zable


Moraes Thiago


arguments

The EU approach focuses on regulatory compliance and risk mitigation


East Asian countries like South Korea emphasize economic growth and regulatory flexibility


Brazil is taking a collaborative approach involving multiple regulators


summary

Adam Zable highlighted the contrast between the EU’s focus on compliance and risk mitigation versus East Asian countries’ emphasis on economic growth and flexibility. Moraes Thiago presented Brazil’s approach as a middle ground, involving collaboration between multiple regulators.


Incentives for sandbox participation

speakers

Katerina Yordanova


Morine Amutorine


Audience


arguments

Access to data sets can be an incentive for companies to participate


Regulatory clarity is a key incentive for innovators to join sandboxes


Removing fees can lower barriers to entry, especially in low-income areas


summary

Speakers presented different views on what incentivizes participation in sandboxes. Katerina Yordanova emphasized access to data sets, Morine Amutorine highlighted regulatory clarity, while an audience member suggested removing fees as a key incentive.


Unexpected Differences

Regional differences in trust towards regulators

speakers

Katerina Yordanova


Morine Amutorine


arguments

Eastern European companies tend to be more distrustful of engaging with regulators


Regulatory clarity is a key incentive for innovators to join sandboxes


explanation

While Katerina Yordanova pointed out a high level of distrust towards regulators in Eastern Europe, Morine Amutorine suggested that regulatory clarity is a key incentive for participation in Africa. This unexpected difference highlights how regional contexts can significantly impact the effectiveness of sandbox initiatives.


Overall Assessment

summary

The main areas of disagreement revolved around regulatory approaches, incentives for participation, and regional differences in trust and implementation of sandboxes.


difference_level

The level of disagreement among speakers was moderate. While there were clear differences in approaches and perspectives, there was a general consensus on the value of sandboxes as a tool for innovation and regulation. These differences highlight the need for flexible, context-specific approaches to implementing sandboxes across different regions and sectors.


Partial Agreements

Partial Agreements

Both speakers agreed on the importance of building trust for sandbox participation, but emphasized different aspects. Adam Zable focused on public engagement and transparency, while Katerina Yordanova highlighted the need to address IP protection concerns.

speakers

Adam Zable


Katerina Yordanova


arguments

Transparency and public engagement are important for building trust in sandboxes


Addressing IP protection concerns upfront can incentivize participation


Similar Viewpoints

Both speakers emphasized the importance of trust and clarity in encouraging participation in sandboxes, noting that companies may be hesitant to engage without assurances of regulatory certainty and protection.

speakers

Katerina Yordanova


Morine Amutorine


arguments

Companies may be hesitant to share information with regulators due to distrust


Regulatory clarity is a key incentive for innovators to join sandboxes


Takeaways

Key Takeaways

Sandboxes provide controlled environments for experimenting with new technologies and regulations, with regulatory sandboxes focusing on compliance and operational sandboxes testing new applications.


There is no standard methodology for sandboxes, but common elements are emerging like defined stages and thorough preparation.


Transparency, public engagement, and addressing concerns like IP protection are important for building trust and incentivizing participation in sandboxes.


Regional approaches to sandboxes vary, with the EU focusing more on compliance while East Asian countries emphasize economic growth and flexibility.


Challenges for sandbox implementation include lack of resources/expertise for regulators and distrust from companies in sharing information.


Access to data sets and regulatory clarity are key incentives for companies to participate in sandboxes.


Resolutions and Action Items

The Datasphere Initiative will release a series of use cases and experiences from past sandboxes at a meeting in Paris in February.


The Datasphere Initiative team is available to provide information and support efforts to develop sandboxes through the Global Sandboxes Forum.


Unresolved Issues

How to effectively bridge the knowledge gap between public authorities and tech developers in sandbox environments


Best practices for incentivizing both large companies and smaller startups to participate in sandboxes


How to standardize sandbox practices globally while accommodating regional differences and priorities


Suggested Compromises

Using ‘closed room’ type arrangements to protect companies’ IP and sensitive information while still allowing necessary sharing in sandboxes


Removing fees for sandbox participation to lower barriers to entry, especially in low-income areas


Thought Provoking Comments

You can have a sandbox that is mostly about what are or should be the rules. And another that is about literally experimenting, particularly with certain types of data, especially when it is sensitive data, you want to have a space that is enclosed.

speaker

Bertrand de La Chapelle


reason

This comment introduces the key distinction between regulatory and operational sandboxes, providing a framework for understanding different sandbox approaches.


impact

It set the stage for the rest of the discussion by establishing a fundamental categorization of sandboxes. Subsequent speakers often referred back to this distinction when describing specific sandbox implementations.


The South Korean example specifically, they specifically shift the paradigm from restrictive regulation and compliance to permitting activities in the sandbox unless they are explicitly prohibited.

speaker

Adam Zable


reason

This insight highlights a fundamentally different approach to sandboxes, contrasting with the European model focused on compliance.


impact

It sparked a discussion about regional differences in sandbox approaches and objectives, leading to a more nuanced understanding of how cultural and regulatory contexts shape sandbox implementation.


Even inside EU, where we have so many laws that are basically the same for all of us, I see many differences in terms of the needs of member states that want to have regulatory sandboxes, not only when we’re talking about AI sandboxes, but in other sectors as well.

speaker

Katerina Yordanova


reason

This comment challenges the assumption of uniformity even within a seemingly homogeneous regulatory environment like the EU.


impact

It deepened the conversation by highlighting the complexity of implementing sandboxes across different contexts, even within a shared regulatory framework. This led to further discussion about the need for flexible approaches.


One of the things we identified for the Kenya Communications Authority, for example, they were interested in learning what innovations cannot be covered by their former frameworks of regulation.

speaker

Morine Amutorine


reason

This insight reveals how sandboxes can be used proactively to identify regulatory gaps, rather than just testing compliance.


impact

It shifted the discussion towards considering sandboxes as tools for regulatory learning and development, not just for testing or compliance purposes.


Companies in general are very, very unwilling to talk with the regulator and explain how their system works, precisely because they think that at some point in the future, what they shared could be used against them.

speaker

Katerina Yordanova


reason

This comment brings attention to a significant barrier to sandbox participation – distrust between companies and regulators.


impact

It led to a discussion about the importance of trust-building measures and transparency in sandbox design, highlighting a critical challenge in sandbox implementation.


Overall Assessment

These key comments shaped the discussion by moving it from a general overview of sandboxes to a nuanced exploration of regional differences, implementation challenges, and the diverse objectives of sandbox initiatives. The conversation evolved from defining sandboxes to examining their practical applications, cultural contexts, and potential barriers to success. This progression deepened the analysis, highlighting the complexity of sandbox implementation and the need for flexible, context-specific approaches.


Follow-up Questions

How to develop a standard methodology or framework for running sandboxes?

speaker

Farouk Yusuf Yabo


explanation

A standard methodology could help guide implementation of sandboxes across different contexts and countries.


What are concrete examples of successful sandbox implementations?

speaker

Luis Fernando Castro


explanation

Examining successful cases could provide valuable insights and best practices for others implementing sandboxes.


How to address intellectual property concerns in sandboxes?

speaker

Katerina Yordanova


explanation

IP protection is a key concern for companies participating in sandboxes and needs to be addressed to encourage participation.


How to improve public awareness and understanding of sandboxes?

speaker

Adam Zable


explanation

Increased public awareness could lead to greater participation and trust in sandbox initiatives.


How to evaluate the effectiveness of sandboxes?

speaker

Adam Zable


explanation

External evaluations, like the one conducted in Norway, could help improve sandbox design and implementation.


How to address the resource constraints of smaller companies in participating in sandboxes?

speaker

Bertrand de La Chapelle


explanation

Ensuring smaller companies can participate is important for inclusive innovation and regulation.


How to design sandboxes that are accessible to low-income participants?

speaker

Audience member


explanation

Addressing financial barriers to entry is crucial for encouraging participation from diverse innovators, especially in developing regions.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #6 Promoting tech companies to ensure children’s online safety


Session at a Glance

Summary

This open forum focused on promoting tech companies’ role in ensuring children’s online safety. The discussion brought together perspectives from various stakeholders, including UNICEF, government organizations, academia, and tech companies.

Speakers highlighted the importance of a proactive approach to child online protection, emphasizing the need for safety-by-design principles in product development. They stressed the significance of a comprehensive strategy involving multiple sectors, including government, civil society, and the private sector. The discussion underscored the global nature of online risks to children and the need for international cooperation to find effective solutions.

Key points included the need for tech companies to conduct child rights impact assessments, implement robust child protection policies, and raise awareness about online safety. Speakers emphasized the importance of balancing protection with children’s rights to access information and express themselves freely online. The role of positive parenting and digital literacy for both children and parents was also highlighted.

Examples of initiatives were shared, such as China’s efforts to regulate online content for minors and Tencent’s Minor Protection Center providing guidance to families. The potential benefits and risks of AI in children’s online experiences were also discussed, with a call for responsible innovation in this area.

The forum concluded by emphasizing three key concepts: proactive engagement from tech companies, comprehensive multi-stakeholder strategies, and the need for global solutions to ensure children’s online safety.

Keypoints

Major discussion points:

– The importance of protecting children’s safety and rights online, while balancing with other rights like access to information and freedom of expression

– The need for a multi-stakeholder approach involving government, tech companies, civil society, and others to address online child protection

– The role and responsibility of tech companies in designing safe products/services and implementing child protection measures

– Promoting digital literacy and awareness among children, parents, and society about online risks and safety

– Addressing emerging challenges from new technologies like AI while leveraging tech solutions for protection

Overall purpose:

The goal was to highlight the critical role of tech companies in safeguarding children online and fostering dialogue between different stakeholders on creating a safe digital environment for children.

Tone:

The tone was largely formal and professional, with speakers presenting information and perspectives from their respective fields. There was an underlying sense of urgency and importance placed on the topic. The tone became slightly more personal and relatable when speakers shared anecdotes or spoke from personal experience as parents.

Speakers

– Moderator: Shenrui Li, UNICEF China

– Zhao Hui: Secretary General of China Federation of Internet Society (CFIS)

– Dora Giusti: Chief of Child Protection, UNICEF China

– Afrooz Kaviani Johnson: Global Lead on Child Online Protection, UNICEF

– Dandan Zhong: Secretary of Party Committee, School of Information and Communication Engineering, Communication University of China

– Li Yi: Founder of APE Programming

Additional speakers:

– Sally Hsakli: Former cultural counselor at the Embassy of Saudi Arabia in China, professor at the history department of Imam University

– Liang Lingling: Family education specialist, Tencent Minor Protection Center

Full session report

Expanded Summary of the Open Forum on Promoting Tech Companies’ Role in Ensuring Children’s Online Safety

Introduction

This open forum, facilitated by UNICEF China, brought together diverse stakeholders to discuss the critical role of technology companies in safeguarding children’s online safety. The discussion featured perspectives from government organizations, academia, tech companies, and international organizations, highlighting the need for a comprehensive and collaborative approach to address this global challenge.

Key Themes and Discussion Points

1. UNICEF’s Global Approach to Child Online Protection

Afrooz Kaviani Johnson, Global Lead on Child Online Protection at UNICEF, presented UNICEF’s global approach, emphasizing four key priority areas:

a) Policy and governance

b) Safe and empowering digital environments

c) Children’s digital literacy and resilience

d) Data and evidence

Johnson stressed the importance of recognizing children’s interconnected rights in the digital environment, as outlined in the UN Convention on the Rights of the Child and General Comment No. 25. She highlighted the need for companies to conduct child rights impact assessments and implement “safety by design” principles in product development.

2. China’s Efforts in Promoting Child Online Safety

Zhao Hui, Secretary General of China Federation of Internet Society (CFIS), detailed their work in four main areas:

a) Research on children’s online safety needs

b) Development of industry standards and guidelines

c) Promotion of public awareness

d) International cooperation

Zhao emphasized the importance of engaging multiple sectors, including government, tech companies, and civil society. He also mentioned CFIS’s collaboration with UNICEF on research regarding children’s online safety needs in China.

3. Specific Online Risks and Challenges

Dora Giusti, Chief of Child Protection at UNICEF China, highlighted specific online risks faced by children, including sexual abuse, cyberbullying, economic exploitation, and exposure to harmful content. She emphasized the need for comprehensive protection strategies.

Dandan Zhong from the Communication University of China discussed challenges faced in China regarding children’s internet use, including the potential risks and benefits of AI-driven applications in enhancing child safety.

4. Tech Companies’ Initiatives

Liang Lingling presented Tencent’s Minor Protection Center, which focuses on:

a) Implementing minor protection features across products

b) Conducting research on online risks

c) Promoting digital literacy

d) Collaborating with stakeholders to create a safer online environment

Li Yi, founder of APE Programming, discussed their work in teaching coding to children using a four-in-one training model. He emphasized the importance of incorporating AI into education while simultaneously teaching critical thinking skills to children.

5. Multi-stakeholder Collaboration

Speakers unanimously agreed on the necessity of a multi-stakeholder approach to effectively address online child protection. This includes collaboration between government bodies, tech companies, civil society organizations, and academic institutions.

Johnson mentioned the global digital compact as an opportunity for stakeholders to come together and address child online safety on a global scale.

6. Balancing Protection with Children’s Rights and Agency

The discussion emphasized the need to balance child protection with respecting children’s rights and agency in the digital world. Speakers stressed the importance of age-appropriate protection measures, promoting digital literacy for both children and parents, and empowering children to understand and use technology responsibly.

Key Takeaways and Action Items

1. Technology companies must play a proactive role in protecting children online by implementing safety by design principles, conducting child rights impact assessments, and developing AI applications with child safety in mind.

2. A multi-stakeholder, collaborative approach involving government, tech companies, civil society, and academia is essential for addressing child online safety effectively.

3. There needs to be a balance between protecting children online and respecting their rights, agency, and developmental needs.

4. Promoting digital literacy for both children and parents is crucial for ensuring online safety.

5. Ongoing research and data collection on children’s online behaviors, risks, and needs are necessary to inform policy and product development.

6. International cooperation and knowledge sharing are vital in addressing the global nature of online risks to children.

The forum concluded with a commitment to continued collaboration between stakeholders to promote safe digital environments for children, integrate child rights principles into product design processes, and conduct further research to inform policy and practice in the field of child online safety.

Session Transcript

Moderator: So, first, please allow me to introduce our distinguished guests and speakers. Ms. Zhao Hui, from the China Federation of Internet Societies; and also Dora, the Chief of Child Protection of UNICEF China. And we are also delighted to have our global lead on child online protection, Ms. Afrooz, who is joining us online and will later share her insight on UNICEF's strategy on child online protection. And I am Li Xunrui, from UNICEF China. As a child protection officer, I'm glad to facilitate this session. So now, shall we start? We will start with the Secretary General of CFIS, Ms. Zhao Hui, who will give us a presentation on CFIS's work in China. The floor is yours, Ms. Zhao Hui, please.

Zhao Hui: Good afternoon. I'm delighted to join this forum on promoting tech companies to ensure children's online safety. On behalf of the China Federation of Internet Societies, I'd like to extend warm congratulations and a welcome to our distinguished guests. The China Federation of Internet Societies was founded in May 2018. We are honored to hold consultative status with the United Nations Economic and Social Council. Currently, we have 524 members, including major Internet corporations like Tencent, Baidu and Douyin. China is one of the largest Internet markets in the world, with almost 1.1 billion users, including 196 million minors. As new technologies like AI and big data take off, the Chinese government is focusing on protecting children online. Chinese President Xi Jinping has emphasized the importance of creating a clean and positive online environment, especially for young users. Since China joined the UN Convention on the Rights of the Child, the government has been working to protect children online through laws, enforcement, courts, and education.

Zhao Hui: Notably, the Cyberspace Administration of China has introduced regulations to protect minors in cyberspace. Special actions are carried out to improve the online environment for minors during the summer months. Companies are doing their part, and other groups like schools and media outlets are involved too. This has created a strong social atmosphere of concern for protecting children online. In keeping with the theme of this forum, I'd like to share four main areas of action. First, setting up a specialized institution. In 2022, we set up a special committee with the China Soong Ching Ling Foundation, Tencent, and 39 other organizations to protect children online. The committee has encouraged people to get involved in protecting children online. We host conferences, offer online courses, set industry standards, and recognize good practices. Second, public welfare actions. We launched the AdSprout program, a public welfare initiative to improve online safety for children, which we run in Xinjiang, Shandong, Guangdong, and Shaanxi. We also provided an online safety guide to schools in 12 cities. Third, research and reports. We worked with UNICEF to research children's online safety, visiting 12 counties in four provinces; the findings were put together in a report called Children's Online Safety Needs Research. We also released the Report on the Protection of Minors in Cyberspace 2024, which reviewed China's progress in areas such as legislation and platform practices. Fourth, international collaboration and exchanges. We have hosted events on children's online protection at international forums such as the IGF, the UN Human Rights Council, and the World Internet Conference. These events have helped raise global awareness of this issue. In 2024, CFIS and UNICEF launched the Responsible Innovation in Technology for Children case collection. Outstanding cases will be recommended to the UNICEF Global Case Database, contributing

Zhao Hui: China’s experiences to the technological innovation efforts of Internet companies worldwide. Ladies and gentlemen, children are our future, and it is a social responsibility to use technology for good. Let us join hands to improve children’s digital literacy online, and we can create a secure and healthy online environment for everyone.

Moderator: Thank you, Zhao Hui. Ms. Zhao summarized how CFIS, as a network and a civil organization, can unite efforts from society, from Internet companies, and from the government, and demonstrated China's approach to strengthening the safeguarding of children online. Ms. Zhao also mentioned that CFIS has a good and fruitful relationship with UNICEF China. So now we invite Dora Giusti, the Chief of Child Protection of the UNICEF China country office, to give us opening remarks. Dora, please. Good afternoon. I think this mic goes on. Good afternoon. I hope you can hear me. Distinguished panelists, thank you for joining us today.

Dora Giusti: Thank you to the audience. Today we are hosting this open forum to highlight the critical role that technology companies play in safeguarding children in the digital environment, and the purpose is really to foster dialogue and collaboration among the different stakeholders represented here, so tech companies, policy makers, researchers and practitioners, on creating a safe digital environment. I would like to start by thanking all of you for being here. A child goes online for the first time every half second, and in China, as Ms. Zhao mentioned, there are 196 million children online, with an internet penetration rate of 97%. The internet provides great opportunities for children to learn and to stay connected, but the internet was not created for children, so there are potential risks and harms that have been identified, that are on the rise, and that are happening across the world with diversified patterns. Some of these risks and harms include misuse of data and economic exploitation, cyberbullying and harassment, and, more severely, sexual abuse and exploitation online. The use of AI and extended reality also offers opportunities, but it has exacerbated these risks, as perpetrators can potentially use these technologies to take advantage of children. My colleague will speak in a moment about UNICEF's global approach, but I just want to highlight that UNICEF supports a multi-sectoral and multi-stakeholder approach to the issue of digital safety, with emphasis on policies and laws, on protection services when children require support, on the responsibility of tech companies, and also on preventive efforts. 
So UNICEF China has been working with the China Federation of Internet Societies since 2019 to promote a safe digital environment, and also with the Communication University of China, the other host of this event, and UNICEF China is committed to responsible business practices in the digital environment. At the moment, as Ms. Zhao mentioned, we're working on a sort of action-oriented research to foster dialogue and exchange among companies about their experience in safety-by-design practices: how companies are integrating child rights and safety principles into their products and services. And this is a process, so it's not pure documentation; it's really a process that involves dialogue and sharing, so that it can strengthen the processes and products of these companies but also guide other companies in building safe products. We've also worked on AI and child rights. Together we worked on an AI standard for children, based on the UNICEF policy guidance on AI for children, as well as on identifying positive and missing practices in Chinese companies. UNICEF China has also promoted unprecedented research on the behaviors, risks and needs of children, which will be published soon and hopefully will inform companies and policymakers. We also work with other partners on strengthening child protection systems and services, with the welfare sector and also with the justice sector. So I just want to highlight the role of the tech sector in responsible innovation. The role is key, critical, in shaping a safe digital environment. So, first of all, it is key that there is a balance between technological innovation and the responsibility to protect children. This means that companies can introduce child rights principles and produce products that are designed and aligned with safety by design. They also need to undertake impact assessments, identify risks and align their products using a safety-by-design framework. 
Also, companies should fight tech with tech. If there are dangers and risks in their products, services and platforms, they can use AI and tech to make sure these products are safe, and also that harmful platforms or content are removed. They can implement mechanisms to take down this harmful content, report it to the authorities, and also use AI to make sure that children are reached and get counselling or support as they require. Then they have a preventive role: to raise awareness among children, parents and educators of potential risks and of how to navigate safely. And finally, make child protection a key priority. What often happens in companies across the world is that it is delegated to different areas, but this should be a key priority of all tech companies, a high-level priority mainstreamed across the different areas. And obviously the role of the tech sector that we're discussing here today is fundamental, but for success we also need to remember, and I go back to my earlier point, that we need a comprehensive strategy engaging multiple stakeholders, as I mentioned, with laws that regulate the tech companies, but also with preventive efforts that are systemic, for example in schools, and with protection services that can respond to the needs of children. And also, we should remember that safety online is a global challenge, and for a global challenge we need global solutions. So international cooperation, finding solutions together, is a must. And it is with this spirit that we are here today, bringing together different stakeholders to share different perspectives, to share the challenges and the solutions. So I hope this forum will provide some of these insightful discussions that will lead to some concrete recommendations, and that we can join hands to ensure global solutions are found so that children are safe online. Thank you. Okay, thank you, Dora. 
Thank you, Dora, for sharing the diverse and promising ways in which UNICEF China is engaging with the Chinese government and civil society, exploring opportunities and highlighting tech companies' role in safeguarding children's online safety.

Moderator: Thank you, Dora. We will now shift to the global perspective, and I would like to invite our colleague, Afrooz, who is joining us online, to share UNICEF's view on how child online protection could be further strengthened and on the role of technology companies in this important topic. Please, Afrooz, the floor is yours.

Afrooz Kaviani Johnson: Thank you so much. Thank you so much for the invitation. It’s a pleasure to join you, albeit remotely. May I confirm that you can see my slides?

Moderator: Just a second, Afrooz, we are cutting.

Afrooz Kaviani Johnson: I’m sharing my screen. I am sharing my screen. Can you see it or is it better to manage it from there?

Moderator: Please bear with us for five seconds.

Afrooz Kaviani Johnson: No worries.

Moderator: Our colleague is still solving a technical problem and they try to start sharing from our end. Please kindly wait for us.

Afrooz Kaviani Johnson: No problem. In the interest of time, I can start speaking while they set it up. Does that work for you?

Moderator: Of course. Of course, please.

Afrooz Kaviani Johnson: Okay. Thank you so much. So, I think the scene has already been set by the opening remarks with respect to the incredible opportunities that the digital environment provides for children and the need to address the risks. So, this is really the critical question, how we can maximise the benefits of digital technology for children while mitigating the risks of harm. And one of UNICEF’s global strategic goals is to protect every child from all forms of violence and exploitation. And in today’s age, this includes forms of violence and exploitation that are enabled or facilitated by digital technologies. And in order to design effective prevention and response strategies, we have to be specific about the risks that we’re talking about. And at global level, we have identified four key priority areas. The first is to protect children from sexual abuse and exploitation facilitated by digital technologies. The second is to protect children from bullying, harassment and other forms of violence online. The third is to protect children from economic exploitation and misuse of their personal data and the fourth to protect them from harmful content online. So UNICEF’s work globally is guided and shaped by the principles in the United Nations Convention on the Rights of the Child and General Comment No. 25 by the Convention’s treaty body. And these principles really ensure a balanced and rights-based approach. Perhaps I can check in there to see if you’re able to see my screen or not yet. Yes, we are able to see your screen but it’s quite small. It’s a technical issue from our end but please continue. I think, yeah, you can change the slide, please. Okay, so you can see or some people can see a slide that says guiding principles. Anyway, I will not take up time with talking about the technical issues but let me just talk through some of these key principles that guide our work in this area and that should also guide the work of technology companies. 
So the first is really understanding that children’s rights are interconnected, they’re interrelated, they’re indivisible. So this means that efforts to protect children online necessarily intersect with their other rights in the Convention on the Rights of the Child such as their access to information, their freedom of expression, their freedom of association, privacy and education. So while measures to protect children and to, you know, realise their rights to protection are critical, they cannot arbitrarily limit other rights. The second key principle to highlight is that we need to recognise and support children’s agency and resilience. So this includes giving weight to what children think and seeking their views when we’re considering policy design as well as technology design and implementation. The third point is to recognise that children are not a homogenous group. When we talk about children, we’re talking about everyone under the age of 18, which is a very broad range of children, but there are also children who face particular risks in the digital environment, for example, children with disabilities. So steps are necessary to make the digital environment safe, but also counter any biases that may lead to overprotection or exclusion of certain groups of children. The fourth key principle is the need to consider risks and opportunities that shift with children’s age and developmental stage. So like I mentioned, you know, when we look at this age range up until the age of 18, the needs and considerations for protecting a two-year-old are very different, for example, than protecting a 10-year-old versus protecting a 17-year-old. And then the other point here is that risks and solutions need to go beyond, you know, this artificial distinction between online and offline. And finally, the point is that we need to underpin all our interventions by using the most up-to-date and robust data, research, monitoring and evaluation that are accessible. 
So in summary, when we're thinking about these guiding principles, we must recall that protecting children in digital spaces requires thoughtful, inclusive and evidence-based strategies. Now, the evolution of digital technologies has outpaced many countries' legislative and regulatory frameworks, as well as the educative and support services that are required to keep children safe. So as was mentioned by the previous speakers, catching up really requires a collaborative and cross-sectoral approach. It calls for an expanding community of people and sectors committed to protecting children. And we can only achieve this by leveraging skills and capabilities across different sectors, including digitalization, criminal justice, social services, education, health, civil society, and the private sector. And of course, the focus of this session is the private sector. We know that the private sector plays a pivotal role in shaping children's digital experiences. The digital environment is highly commercialized. When we're talking about businesses in this space, it ranges from social platforms and search engines to mobile operators, e-retail services and data brokers, all of them playing a really important and influential role in the design and deployment of digital tools and experiences that impact children's rights both directly and indirectly. And with this influence obviously comes both an opportunity and a responsibility to respect children's rights and ensure their safety online. And importantly, this responsibility is not limited to companies whose primary audience is children. It extends to all of those whose products or services may impact children. And the responsibility also extends beyond just the big tech giants, which we often think of when we're thinking about this topic. Rather, companies of all sizes and across all sectors are increasingly adopting digital technologies in ways that pose potential and actual risks to children. 
So, all companies, regardless of their size or sector, have a responsibility to respect children’s rights and to enable the remediation of any adverse child rights impacts that they cause or contribute to. And this responsibility is laid out in the UN Guiding Principles on Business and Human Rights and the Child Rights and Business Principles. And every company has a different level of influence and potential to affect children’s rights. Conducting child rights impact assessments can allow companies to identify specific risks and challenges and help shift from the reactive approaches that we’ve often seen to more proactive, preventative measures. The Committee on the Rights of the Child really emphasises corporate accountability. They state that states should require businesses to undertake child rights due diligence and in particular to carry out child rights impact assessments in order to prevent and address any risks to children. And UNICEF, we’ve heard about the experience in China, but globally we’ve collaborated with companies and stakeholders to develop practical tools for child rights impact assessments and due diligence, as well as other influences in the business ecosystem, spanning investors, standard setters and industry associations to drive action. We’ve also provided policy guidance. Some of this was mentioned in the first opening remarks, for example on AI, also on data governance. And these resources are really designed to help companies understand their impact and take action to respect children’s rights. We also engage with companies through multi-sectoral alliances, such as the We Protect Global Alliance, which brings together governments, companies, civil society and international organisations to tackle the specific issue of online child sexual abuse and exploitation. 
Across these efforts, let me emphasise that UNICEF does not endorse any company, brand, product or service, rather all our efforts are guided by the goal of improving outcomes for children at scale. So this includes by building an open knowledge base of practical guidance on responsible business conduct in relation to child rights in the digital age. To drive positive change, UNICEF has developed recommendations addressing those four priority areas that I mentioned at the beginning. These include actions relating to strengthening systems and services, engaging companies, policy advocacy, legal reform, community action and research. I’m not sure if you’re yet seeing my screen. If you are, you’ll see that there’s a QR code, which you can scan to read more from our policy brief. In closing, I really want to emphasise that it is a unique opportunity at this moment for us to anticipate and address potential risks to children when we’re thinking about technology design and governance. It was just a couple of months ago that member states agreed, you know, this new global digital compact and it really gives us an opportunity to reinforce the commitment to children’s rights in the digital age. And this environment that we want to create needs to ensure accountability, but at the same time, it needs to be encouraging of companies to actively identify problems and persist in finding solutions. And this means collaborating across different sectors, engaging with children, young people, experts and researchers, and maintaining open dialogue about successes and challenges. By sharing these insights and learning, I’m very optimistic that we can achieve meaningful change. Thank you. Thank you.

Moderator: Thank you very much, Afrooz. Thank you for introducing UNICEF's position on safeguarding children online and the guiding principles. And I would like to highlight again the last two ideas you introduced: "proactive", as a way of interpreting the responsibility of ICT companies, and "at scale", which I believe is embedded in the genes of ICT companies. So thank you again, Afrooz, for joining us and delivering this insightful speech. Next, I would like to invite Dr. Sally Hsakli, former cultural counselor at the Embassy of Saudi Arabia in China and professor at the history department of Imam University. The floor is yours, doctor, please.

Speaker 1: Thank you, Dr. Hsakli, and ladies and gentlemen. and following the difficulties for children online safety. I am honored to share with the CFIS, alongside with UNICEF China and Communication University of China. Thank you for all of you. Today we gather to address a recent concern ensuring tech companies prioritize the children online will be. Our discussion will revolve around three perfect aspects. They call it online, children online and I like in future parents even with the children even parents online they need companies to with that not just the children they need protect even the parent or the old people they need prefer. Ensuring safe technology and for education firstly tech companies must design innovation, protects and services that prioritize children’s safety and privacy. That involves consideration considering the children and needs and riot during the development process. We are we argue companies and adopt child country appropriated interdiction safety future and parental control into the products. By doing so we can make we can might risk. and to create a safe online environment. Effect policies and measures. Secondly, tech companies must establish and implement robust policies and child online protection. This includes developing clear guidelines the delegating terms and overseas protection effort and utilising technical and manual review process. We encourage collaboration between companies and government and organisations to share best practice and drive collective progress. Lastly, rising awareness about child online protection is critical. Tech companies must assume responsibility for promoting safety through education and publicity initiatives. We advocate for increased public engagement encouraging individuals to participate in shaping a safer online ecosystem. Together we can foster future culture and responsibility. In conclusion, our collective efforts can be significant impact to children online safety. 
We urge tech companies to adopt innovative safety solutions, implement effective policies, and promote awareness. Let us unite to build a digital future where children and adults can thrive free from harm. Thank you very much. Sorry, I can't see; I forgot my glasses.

Moderator: No problem. Thank you, doctor. The doctor highlighted that at the very beginning, at the design stage, ICT companies should consider safety by design and child rights when developing products and platforms. I agree it is very important to have industry regulation and guidance, and CFIS, UNICEF China, and many ICT partners are devoting themselves to this progress. Next, I would like to invite Ms. Zhong Dandan, the Secretary of the Party Committee, School of Information and Communication Engineering of the Communication University of China. Please, the floor is yours.

Dandan Zhong: Thank you, respected Secretary General of CFIS, Ms. Zhao Hui, and Chief of Child Protection of the UNICEF Office in China, Ms. Dora. Ladies and gentlemen, good afternoon. It's my great honor to participate in the 2024 IGF Open Forum on promoting tech companies to ensure children's online safety. My name is Dandan Zhong, the Director of the International Office at Communication University of China. In this era of digital intelligence, we are fortunate to gather here to discuss a highly relevant and urgent topic: children's online safety. Communication University of China was founded in 1954, and this year marks our 70th anniversary. We are regarded as a cradle of talent for China's media industry, broadcasting, and television, as well as a leading university for education in information and communication. With the rapid development of the Internet, the wave of information networking has swept across the globe. As of June 2024, the number of Internet users in China has reached nearly 1.1 billion, an increase of 7.42 million compared to December 2023, with an Internet penetration rate of 78%. The number of underage Internet users in China continues to grow, exceeding 193 million, and the Internet penetration rate among minors has risen to 97.2%. The widespread availability of the Internet has led to an increasing number of scenarios where children can access and use emerging technologies. The importance of Internet-related scientific and technological advancement in the lives of children is becoming increasingly evident, presenting various opportunities for their growth while also bringing numerous challenges. How Internet companies can fulfill their social responsibilities through technological innovation and better serve the vast underage user base has become a focal point of social concern. CUC has always placed high emphasis on the construction of disciplines related to emerging technologies.
and cybersecurity, actively promoting the integration of technological progress and social responsibility, and has a strong academic foundation in the field of intelligent media networks. In responsible technological innovation, CUC's scientific research team has participated in the Responsible Technological Innovation for Children project initiated by the China Internet Development Foundation, together with the foundation and UNICEF. They have conducted a collection of typical corporate case studies to gain a deeper understanding of the practices of internet companies in responsible technological innovation, against the backdrop of China's strong emphasis on the online protection of children. This initiative aims to unearth corporate examples that actively fulfill social responsibilities in the field of internet technology innovation, providing safer, healthier, and more beneficial products and services for children. By sharing successful experience, it further stimulates innovation awareness and a sense of responsibility across society. Unlike traditional internet applications, AI-driven internet applications incorporate intelligent technologies such as machine learning, deep learning, natural language processing, and knowledge graphs. The use of these technologies can provide greater benefits for children, such as quality content recommendations and companionship for special groups. However, these emerging intelligent technologies also pose many risks to children, including unfairness, data privacy concerns, and internet addiction. Therefore, internet companies should deepen communication, enhance consensus, and strengthen cooperation with stakeholders such as government departments, research institutions, and social organizations.
Such collaborative efforts aim to establish global guidelines and rules for protecting children's online safety, thereby promoting the healthy development of emerging technologies to better benefit people around the world. Here, I call upon all the esteemed guests to join us in our efforts to ensure that internet applications bring greater benefits to the most vulnerable and deserving children. We must take effective measures to minimize risks as much as possible. Through this open forum, I hope we can reach a consensus on children's online safety and actively encourage global technology companies to ensure the safety of children online. Thank you for your attention.

Moderator: Thank you. Thank you a lot, Ms. Zhong, for bringing us the voice of academia. In her sharing, Ms. Zhong also mentioned the importance of proactive action to prepare for emerging technologies, for example AI-driven internet applications. That is also an important reason why we bring the voices of ICT companies into this open forum. The next two speakers come from the ICT industry. First, let's welcome Liang Lingling, a family education specialist from the Tencent Minor Protection Center. Please, the floor is yours.

Speaker 2: Okay, thank you. Distinguished leaders, honorable guests, ladies and gentlemen, it's an honor to participate in this workshop and deliberate on building our multi-stakeholder digital future. Today, I would like to present on the topic of parenting in the digital age: cultivating responsible online behavior. In the context of globalisation, conflicts regarding mobile phone usage time between parents and children have become prevalent. In China, due to parents' prolonged working hours, children's academic stress, and a scarcity of peer playmates, children are increasingly inclined to use online entertainment as their main form of entertainment. Next slide, please. Confronted with this challenge, China has implemented a stringent weekly time limit of three hours for handheld games, accompanied by anti-addiction verification and purchase restrictions. Tencent, as an internet enterprise, not only complies with these regulations, but also takes the initiative to provide parental guidance and consulting services for the public good. The Tencent Customer Service Minor Protection Centre represents a crucial step in this regard. Since 2017, our centre has expanded from a team of 20 to a professional team of 500. A national hotline has been established to assist minors in their utilisation of digital products. To date, it has served over 36 million domestic users, thereby augmenting internet literacy, safety awareness, family education, and online mental health. Next, please. We have assembled a team of educational psychology counselors, offering complimentary one-on-one online homeschooling counseling services to benefit a larger number of parents in the digital age. A public-service homeschooling AI model has been launched, furnishing families with personalized educational counsel and solutions. Next, please.
We have mobilized 280,000 volunteers across the nation to engage in the work of safeguarding minors on the internet. On the 39th International Volunteer Day, we recognized the outstanding family education volunteers and outstanding volunteer service teams of the year 2024. Ms. Dora Giusti, who is here today, how lucky, Director of the Child Protection Division of the UNICEF office in China, was invited to address the event, during which she stated that positive parenting is an efficacious strategy for promoting family harmony, child wellbeing, and child protection. We advocate for this concept. Research indicates that parents' digital literacy can influence children's perspectives on online activities. When parents serve as exemplary models in terms of internet usage and possess the ability to discern online information, children are more likely to perceive the internet as a tool for learning and personal development. Through patient guidance and effective communication, parents and children can reach a consensus on the appropriate purpose and duration of Internet use, thereby guiding children to utilize the Internet purposefully and responsibly. I would like to share a case. There was a 16-year-old boy whose father initially failed to comprehend his gaming activities. Subsequently, upon realizing that his child was engaged in gaming due to a profound interest in tanks, the father purchased a model tank and accompanied the child on visits to several military museums. Currently, the child is studying tank design at university. This exemplifies that, with supportive systems for children's responsible online behavior in place, they can achieve remarkable feats. In China, we collaborate with local governments, academic institutions, and social organizations for the common good, with a focus on the well-being of minors. Looking ahead, we will continue to make contributions to the development of youth in the digital age. Thank you.

Moderator: Thank you. Thank you a lot, Ms. Liang, for introducing the practices and promising experience of the Minor Protection Center, which gave us an example of how to hear children's voices and also highlighted the importance of parenting skills, which is a focus area of UNICEF China's work. We emphasize positive parenting and digital literacy, not only for children but also for families and communities. Thank you again, Ms. Liang. Next, I would like to invite Mr. Li Yi, the founder of APE Programming, to introduce his work. Please, the floor is yours.

Li Yi: Thank you. Hello, everyone. I'm honored to be with you today at IGF 2024. I was fortunate to have grown up during the internet era, and at that time I was a programmer. For me, programming wasn't just a way to make a living. It taught me valuable skills like logical thinking, creativity, and problem solving. I realized how beneficial these skills could be for young minds in the long term. That's why my team and I founded YBC seven years ago. Today, we have trained over 5 million students in coding. We foster children's development through four key aspects, which I refer to as the four-in-one training model: one language, which is a programming language; one way of thinking, which is computational thinking; one ability, which is innovation ability; and one perspective, which is a view of the future. Our products are designed with children's safety in mind from the very start. Each product undergoes testing multiple times before launch to ensure there is no harmful content. We also consider the different stages of student development to ensure our products are friendly and safe for kids of all ages. Besides being a programmer and an educator, I'm also a father of three. Unlike me, today's children are growing up in the age of AI. Whether we like it or not, we are all witnessing the dawn of AI, and it will profoundly influence and shape our kids' lives. As a company that cares about child safety, we recognize both the potential benefits and the threats posed by AI. As an educator and a father, I constantly think about how AI will impact education and how children can grow up well in this new era. AI can absorb knowledge and information efficiently and widely, far more than a single person could ever learn. This makes AI a fantastic assistant in helping children gain knowledge. We have already begun to incorporate AI into education, helping kids understand what AI is, how it works, and how to use it more effectively.
However, AI is not all-powerful. Currently, AI still makes mistakes, and humans need to evaluate the results generated by AI and make the final decisions. We hope that, through our efforts, children can approach AI more wisely, rather than simply trusting or rejecting it. AI also presents certain threats. We want children to be aware of these dangers so that, when they encounter AI-based schemes such as deepfakes, they can recognize them and protect themselves. Our company is committed to public welfare and is dedicated to helping more children understand the future world of technology. We strive to share the wisdom of great scientists with the next generation. It's a challenging task, but we deeply care about the long-term benefit for children. It is a commitment to their future, and it's why we are dedicated to this mission. Thank you.

Moderator: Thank you a lot, Mr. Li, for sharing his insightful opinions, not only as the founder of a technology company but also from the perspective of a father. Thank you again, Mr. Li. Today we only had one hour, but I believe this open forum was successful, and I would like to conclude with three key words. The first is proactive, as highlighted by Ms. Afrooz, the global lead on child online protection at UNICEF headquarters. Being proactive is why we highlight the importance of encouraging technology companies to engage with this important topic. The second is comprehensive strategy, highlighted by Dora. That is the reason why today we invited multiple stakeholders from diverse backgrounds, with diverse practical experience, to share their insights. And the last is global solution, also highlighted by Dora, which is why we meet here and why this open forum is important as a platform to discuss and exchange our experience. I would like to conclude by thanking you all for participating and sharing your insights. Please stay tuned for next year; we may meet again at IGF 2025. Thank you all. Thank you again for attending this open forum. Bye-bye. Bye-bye.

D

Dora Giusti

Speech speed

125 words per minute

Speech length

948 words

Speech time

454 seconds

Implementing safety by design principles

Explanation

Dora Giusti emphasizes the importance of tech companies incorporating child safety principles into their product design process. This approach ensures that child protection measures are built into digital products and services from the outset.

Evidence

Giusti mentions that companies should introduce child rights principles and produce products aligned with safety by design.

Major Discussion Point

The role of technology companies in protecting children online

Agreed with

Afrooz Kaviani Johnson

Dandan Zhong

Li Yi

Agreed on

Proactive measures by tech companies to ensure child safety

A

Afrooz Kaviani Johnson

Speech speed

138 words per minute

Speech length

1564 words

Speech time

675 seconds

Conducting child rights impact assessments

Explanation

Afrooz Kaviani Johnson advocates for companies to conduct child rights impact assessments as part of their due diligence process. This allows companies to identify specific risks and challenges related to children’s rights in their digital products and services.

Evidence

Johnson mentions that UNICEF has collaborated with companies to develop practical tools for child rights impact assessments and due diligence.

Major Discussion Point

The role of technology companies in protecting children online

Agreed with

Dora Giusti

Dandan Zhong

Li Yi

Agreed on

Proactive measures by tech companies to ensure child safety

Differed with

Li Yi

Differed on

Approach to AI integration in children’s digital experiences

Recognizing children’s interconnected rights in the digital environment

Explanation

Johnson emphasizes that children’s rights in the digital environment are interconnected and indivisible. Efforts to protect children online must consider and balance various rights, such as access to information, freedom of expression, and privacy.

Evidence

Johnson refers to the principles in the UN Convention on the Rights of the Child and General Comment No. 25 as guiding UNICEF’s work in this area.

Major Discussion Point

Balancing protection with children’s rights and agency

Supporting children’s agency and resilience online

Explanation

Johnson argues for recognizing and supporting children’s agency and resilience in the digital environment. This includes giving weight to children’s views in policy and technology design and implementation.

Major Discussion Point

Balancing protection with children’s rights and agency

Collaborating across sectors to address emerging risks

Explanation

Johnson calls for collaboration across different sectors to address the risks posed by emerging technologies. This includes engaging with children, young people, experts, and researchers to find solutions and maintain open dialogue.

Evidence

Johnson mentions UNICEF’s engagement with companies through multi-sectoral alliances like the We Protect Global Alliance.

Major Discussion Point

Multi-stakeholder collaboration for child online safety

Agreed with

Dora Giusti

Dandan Zhong

Speaker 2

Zhao Hui

Agreed on

Multi-stakeholder collaboration for child online safety

D

Dandan Zhong

Speech speed

121 words per minute

Speech length

665 words

Speech time

329 seconds

Developing AI-driven applications with child safety considerations

Explanation

Dandan Zhong discusses the importance of considering child safety in the development of AI-driven internet applications. While these technologies offer benefits, they also pose risks that need to be addressed.

Evidence

Zhong mentions that AI-driven applications can provide benefits like quality content recommendations but also pose risks such as unfairness and data privacy concerns.

Major Discussion Point

The role of technology companies in protecting children online

Agreed with

Dora Giusti

Afrooz Kaviani Johnson

Li Yi

Agreed on

Proactive measures by tech companies to ensure child safety

Partnering with academic institutions for research

Explanation

Zhong highlights the role of academic institutions in researching and promoting responsible technological innovation for children. This collaboration aims to better understand and improve corporate practices in this area.

Evidence

Zhong mentions CUC's participation in the 'Responsible Technological Innovation for Children' project with the China Internet Development Foundation and UNICEF.

Major Discussion Point

Multi-stakeholder collaboration for child online safety

Agreed with

Dora Giusti

Afrooz Kaviani Johnson

Speaker 2

Zhao Hui

Agreed on

Multi-stakeholder collaboration for child online safety

S

Speaker 2

Speech speed

100 words per minute

Speech length

576 words

Speech time

345 seconds

Providing parental guidance and counseling services

Explanation

The speaker from Tencent discusses their initiative to provide parental guidance and counseling services. This approach aims to help parents navigate digital challenges and improve family dynamics around technology use.

Evidence

The speaker mentions the Tencent Customer Service Minor Protection Centre, which has served over 36 million domestic users and offers complimentary online homeschooling counseling services.

Major Discussion Point

The role of technology companies in protecting children online

Working with local governments and organizations

Explanation

The speaker emphasizes Tencent’s collaboration with various stakeholders to promote child well-being in the digital age. This multi-stakeholder approach aims to create a more comprehensive support system for children online.

Evidence

The speaker mentions collaboration with local governments, academic institutions, and social organizations for the common good, focusing on the well-being of minors.

Major Discussion Point

Multi-stakeholder collaboration for child online safety

Agreed with

Dora Giusti

Afrooz Kaviani Johnson

Dandan Zhong

Zhao Hui

Agreed on

Multi-stakeholder collaboration for child online safety

Promoting digital literacy for both children and parents

Explanation

The speaker highlights the importance of improving digital literacy for both children and parents. This approach aims to help families navigate the digital world more effectively and responsibly.

Evidence

The speaker shares a case study of a father who learned to understand and support his child’s gaming interests, leading to positive outcomes.

Major Discussion Point

Balancing protection with children’s rights and agency

Offering mental health support for online issues

Explanation

The speaker mentions that Tencent provides mental health support for online issues. This service aims to address the psychological impacts of digital experiences on children.

Evidence

The speaker mentions that their center has expanded to include a professional team of 500, offering services to augment internet literacy, safety awareness, family education, and online mental health.

Major Discussion Point

Addressing specific online risks to children

L

Li Yi

Speech speed

148 words per minute

Speech length

481 words

Speech time

194 seconds

Incorporating AI into education while teaching critical thinking

Explanation

Li Yi discusses the integration of AI into education while emphasizing the importance of critical thinking. This approach aims to help children understand and use AI effectively while being aware of its limitations.

Evidence

Li mentions that their company has begun incorporating AI into education, helping kids understand what AI is, how it works, and how to use it effectively.

Major Discussion Point

The role of technology companies in protecting children online

Agreed with

Dora Giusti

Afrooz Kaviani Johnson

Dandan Zhong

Agreed on

Proactive measures by tech companies to ensure child safety

Differed with

Afrooz Kaviani Johnson

Differed on

Approach to AI integration in children’s digital experiences

Empowering children to understand and use AI responsibly

Explanation

Li Yi emphasizes the importance of teaching children to approach AI wisely. This includes helping them understand both the benefits and potential threats of AI technology.

Evidence

Li states that they want children to approach AI more wisely, rather than simply trusting or rejecting it.

Major Discussion Point

Balancing protection with children’s rights and agency

Teaching children to recognize AI-based threats like deepfakes

Explanation

Li Yi discusses the importance of educating children about AI-based threats such as deepfakes. This education aims to help children protect themselves in the digital environment.

Evidence

Li mentions that they want children to be aware of AI-based dangers so they can recognize and protect themselves from schemes like deepfakes.

Major Discussion Point

Addressing specific online risks to children

Z

Zhao Hui

Speech speed

83 words per minute

Speech length

214 words

Speech time

153 seconds

Engaging multiple sectors including government, tech companies, and civil society

Explanation

Zhao Hui emphasizes the importance of multi-stakeholder collaboration in protecting children online. This approach involves coordinating efforts between government bodies, technology companies, and civil society organizations.

Evidence

Zhao mentions the establishment of a special committee with various organizations to protect children online, as well as collaboration with UNICEF for research.

Major Discussion Point

Multi-stakeholder collaboration for child online safety

Agreed with

Dora Giusti

Afrooz Kaviani Johnson

Dandan Zhong

Speaker 2

Agreed on

Multi-stakeholder collaboration for child online safety

Implementing regulations and special actions to improve online environment

Explanation

Zhao Hui discusses the implementation of regulations and special actions to enhance the online environment for children. These measures aim to create a safer digital space for minors.

Evidence

Zhao mentions the introduction of regulations by the Cyberspace Administration of China to protect minors in cyberspace and special actions carried out during summer months.

Major Discussion Point

Addressing specific online risks to children

S

Speaker 1

Speech speed

71 words per minute

Speech length

344 words

Speech time

290 seconds

Considering age-appropriate protection measures

Explanation

The speaker emphasizes the need for protection measures that are appropriate for different age groups. This approach recognizes that children of different ages have varying needs and vulnerabilities online.

Major Discussion Point

Balancing protection with children’s rights and agency

Developing clear guidelines and review processes

Explanation

The speaker advocates for the establishment of clear guidelines and review processes for child online protection. This includes developing policies and utilizing both technical and manual review processes.

Evidence

The speaker mentions the need for clear guidelines, dedicated oversight of protection efforts, and the use of technical and manual review processes.

Major Discussion Point

Addressing specific online risks to children

Agreements

Agreement Points

Multi-stakeholder collaboration for child online safety

Dora Giusti

Afrooz Kaviani Johnson

Dandan Zhong

Speaker 2

Zhao Hui

Collaborating across sectors to address emerging risks

Partnering with academic institutions for research

Working with local governments and organizations

Engaging multiple sectors including government, tech companies, and civil society

Multiple speakers emphasized the importance of collaboration between various stakeholders, including tech companies, governments, academic institutions, and civil society organizations, to effectively address child online safety issues.

Proactive measures by tech companies to ensure child safety

Dora Giusti

Afrooz Kaviani Johnson

Dandan Zhong

Li Yi

Implementing safety by design principles

Conducting child rights impact assessments

Developing AI-driven applications with child safety considerations

Incorporating AI into education while teaching critical thinking

Speakers agreed on the need for tech companies to take proactive measures in ensuring child safety, including implementing safety by design principles, conducting impact assessments, and considering child safety in AI development and education.

Similar Viewpoints

These speakers emphasized the importance of empowering children and parents with digital literacy and critical thinking skills to navigate the online world safely and responsibly.

Afrooz Kaviani Johnson

Speaker 2

Li Yi

Supporting children’s agency and resilience online

Promoting digital literacy for both children and parents

Empowering children to understand and use AI responsibly

Unexpected Consensus

Addressing mental health in relation to online safety

Speaker 2

Afrooz Kaviani Johnson

Offering mental health support for online issues

Protecting children from bullying, harassment and other forms of violence online

While mental health is not typically a primary focus in discussions about online safety, both speakers highlighted the importance of addressing psychological impacts of digital experiences on children.

Overall Assessment

Summary

The main areas of agreement included the need for multi-stakeholder collaboration, proactive measures by tech companies, empowering children and parents through digital literacy, and addressing both technical and psychological aspects of online safety.

Consensus level

There was a high level of consensus among the speakers on the fundamental approaches to child online safety. This consensus suggests a shared understanding of the complexities involved and the need for comprehensive, collaborative solutions. The implications of this consensus are positive for developing effective strategies to protect children online, as it indicates alignment among various stakeholders on key principles and approaches.

Differences

Different Viewpoints

Approach to AI integration in children’s digital experiences

Afrooz Kaviani Johnson

Li Yi

Conducting child rights impact assessments

Incorporating AI into education while teaching critical thinking

While both speakers acknowledge the importance of addressing AI’s impact on children, they differ in their approaches. Johnson emphasizes the need for child rights impact assessments, while Li focuses on integrating AI into education and teaching critical thinking skills.

Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches to protecting children online and integrating new technologies like AI into their digital experiences.

difference_level

The level of disagreement among the speakers is relatively low. Most speakers share common goals but propose different strategies or emphasize different aspects of child online safety. This suggests a multifaceted approach may be necessary to address the complex issue of children’s online safety effectively.

Partial Agreements

Partial Agreements

All speakers agree on the importance of protecting children online, but they propose different methods. Giusti emphasizes safety by design, Johnson focuses on recognizing interconnected rights, and Li advocates for empowering children to use AI responsibly.

Dora Giusti

Afrooz Kaviani Johnson

Li Yi

Implementing safety by design principles

Recognizing children’s interconnected rights in the digital environment

Empowering children to understand and use AI responsibly

Similar Viewpoints

These speakers emphasized the importance of empowering children and parents with digital literacy and critical thinking skills to navigate the online world safely and responsibly.

Afrooz Kaviani Johnson

Speaker 2

Li Yi

Supporting children’s agency and resilience online

Promoting digital literacy for both children and parents

Empowering children to understand and use AI responsibly

Takeaways

Key Takeaways

Technology companies play a critical role in protecting children online and need to implement safety by design principles, conduct child rights impact assessments, and develop AI applications with child safety in mind.

A multi-stakeholder, collaborative approach involving government, tech companies, civil society, and academia is essential for addressing child online safety effectively.

There needs to be a balance between protecting children online and respecting their rights, agency, and developmental needs.

Specific online risks to children that need to be addressed include sexual abuse, bullying, economic exploitation, and exposure to harmful content.

Promoting digital literacy for both children and parents is crucial for ensuring online safety.

Resolutions and Action Items

UNICEF and China Federation of Internet Societies to continue collaboration on promoting safe digital environments for children

Tech companies to integrate child rights principles and safety features into product design processes

Continued research and data collection on children’s online behaviors, risks, and needs to inform policy and product development

Unresolved Issues

Specific metrics or standards for evaluating the effectiveness of child online safety measures

How to address potential conflicts between child protection measures and other digital rights like privacy or freedom of expression

Strategies for protecting children from emerging AI-related risks while still allowing them to benefit from AI technologies

Suggested Compromises

Balancing technological innovation with responsibility to protect children by implementing age-appropriate safety measures

Using AI and technology solutions to enhance online safety while also teaching children to use these technologies responsibly

Thought Provoking Comments

A child goes online for the first time every half second and in China, as Ms. Zhao mentioned, there are 196 million children online with an internet penetration rate of 97%.

speaker

Dora Giusti

reason

This statistic powerfully illustrates the scale and urgency of the issue of child online safety, especially in China.

impact

It set the tone for the discussion by emphasizing the critical importance and timeliness of addressing online child protection.

So UNICEF’s work globally is guided and shaped by the principles in the United Nations Convention on the Rights of the Child and General Comment No. 25 by the Convention’s treaty body. And these principles really ensure a balanced and rights-based approach.

speaker

Afrooz Kaviani Johnson

reason

This comment introduces a rights-based framework for approaching child online protection, balancing safety with other rights like access to information and freedom of expression.

impact

It shifted the discussion from purely protective measures to a more holistic approach considering children’s various rights and developmental needs.

Conducting child rights impact assessments can allow companies to identify specific risks and challenges and help shift from the reactive approaches that we’ve often seen to more proactive, preventative measures.

speaker

Afrooz Kaviani Johnson

reason

This insight highlights a concrete step companies can take to improve child safety, moving from reactive to proactive approaches.

impact

It provided a specific actionable recommendation for tech companies, steering the conversation towards practical solutions.

Unlike traditional internet applications, AI-driven internet applications incorporate intelligent technologies such as machine learning, deep learning, natural language processing and knowledge graphs. The use of these technologies helps provide greater benefits for children, such as content moderation, quality content recommendations and companionship for special groups. However, these emerging intelligent technologies also pose many risks to children, including unfairness, data privacy concerns and internet addiction.

speaker

Dandan Zhong

reason

This comment thoughtfully explores both the potential benefits and risks of AI technologies for children, adding nuance to the discussion.

impact

It broadened the conversation to include emerging AI technologies, highlighting the need for ongoing adaptation of child protection strategies.

Whether we like it or not, we are all witnessing the dawn of AI. And it will profoundly influence and shape our kids’ lives. As a company that cares about child safety, we recognize the potential benefits and the threats posed by AI.

speaker

Li Yi

reason

This perspective from a tech company founder and parent acknowledges the inevitability of AI’s impact on children, calling for a balanced approach.

impact

It brought the discussion full circle, connecting the technical aspects of AI with real-world implications for children and families, and emphasizing the need for education about AI.

Overall Assessment

These key comments shaped the discussion by progressively broadening its scope from basic online safety concerns to a more comprehensive view of children’s rights in the digital age. They highlighted the scale of the challenge, introduced rights-based frameworks, suggested practical steps for companies, and explored the complexities of emerging AI technologies. The discussion evolved from identifying problems to proposing solutions, while consistently emphasizing the need for collaboration among various stakeholders to ensure children’s safety and rights in an increasingly digital world.

Follow-up Questions

How can AI be effectively incorporated into education to help children understand and use it wisely?

speaker

Li Yi

explanation

As AI becomes more prevalent, it’s crucial to teach children how to interact with and critically evaluate AI-generated content.

What are effective strategies for balancing technological innovation with child protection responsibilities?

speaker

Dora Giusti

explanation

Finding this balance is key for tech companies to create safe products while continuing to innovate.

How can tech companies implement ‘safety by design’ principles in their product development process?

speaker

Dora Giusti

explanation

Integrating safety considerations from the earliest stages of product design is crucial for protecting children online.

What are best practices for conducting child rights impact assessments in the tech industry?

speaker

Afrooz Kaviani Johnson

explanation

These assessments are critical for companies to identify and address potential risks to children’s rights.

How can international cooperation be strengthened to develop global solutions for online child protection?

speaker

Dora Giusti

explanation

Given that online safety is a global challenge, finding collaborative international solutions is essential.

What are effective ways to improve digital literacy among parents to better guide their children’s online activities?

speaker

Liang Lingling

explanation

Parents’ digital literacy significantly impacts children’s perspectives on online activities and safety.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #159 Domain names: digital inclusion and innovation

WS #159 Domain names: digital inclusion and innovation

Session at a Glance

Summary

This discussion focused on digital inclusion and innovation in the domain name system, particularly regarding new generic top-level domains (gTLDs). Participants explored challenges and opportunities in making the internet more accessible and diverse globally.

Key points included the importance of linguistic diversity in domain names, with speakers emphasizing the need for internationalized domain names to better serve non-English speaking communities. The discussion highlighted the challenges faced by underserved regions, particularly Africa, in participating in the domain name market due to cost barriers and limited awareness.

Speakers debated the definition of success for new gTLDs, with some arguing that traditional metrics like registration numbers may not fully capture their impact. The concept of “lowercase innovation” was introduced, suggesting that seemingly small changes can lead to significant improvements in accessibility and usability.

The Applicant Support Program was discussed as a mechanism to increase diversity in gTLD applications, though some participants noted that more comprehensive, long-term support may be necessary for true success. The discussion also touched on the need for realistic expectations, acknowledging that not all new gTLDs will succeed in the market.

Participants shared experiences from previous gTLD rounds, highlighting the time and investment required to establish successful domains. The importance of community involvement and tailored business models for different regions was emphasized.

Overall, the discussion underscored the complex balance between fostering innovation, ensuring inclusivity, and maintaining the stability and security of the domain name system. Participants agreed on the need for continued efforts to make the internet more representative of global linguistic and cultural diversity.

Keypoints

Major discussion points:

– The importance of linguistic diversity and inclusion in the domain name system

– Challenges and opportunities for new generic top-level domains (gTLDs), especially in underserved regions

– The need for realistic expectations about the success rate of new gTLDs

– ICANN’s Applicant Support Program and efforts to increase participation from underserved regions

– Different perspectives on how to measure success for new gTLDs

The overall purpose of the discussion was to explore ways to foster innovation and digital inclusion through the domain name system, particularly as ICANN prepares for the next round of new gTLD applications. Participants shared experiences from previous rounds and discussed strategies to increase participation from underserved regions.

The tone of the discussion was generally constructive and collaborative, with participants offering different perspectives on challenges and potential solutions. There was a sense of cautious optimism about the potential for new gTLDs to promote inclusion, balanced with realistic acknowledgments of the difficulties involved. The tone became more solution-oriented towards the end as participants discussed concrete ways to improve the application process and support for new gTLDs.

Speakers

– Adam Peake: Moderator

– Jennifer Chung: Works for .Asia corporation, manager of .Kids top level domain name, former member of IGF multi-stakeholder advisory group, leader of IGF Support Association

– Lucky Masilela: CEO of ZACR (manager of the .za domain), leader of the Registry Africa group (manages .Africa and city domain names)

– Christy Buckley: Leading ICANN’s Applicant Support Program

– Sajid Rahman: ICANN board member, background in international banking and venture capital

– Rebecca McGilley: ICANN board member

– Maarten Botterman: ICANN board member

– Ram Mohan: Chief Strategy Officer for Identity Digital, former ICANN board member, current chair of ICANN’s Security and Stability Advisory Committee

Additional speakers:

– Paulus Nirenda: From Malawi (audience member)

– Nick Wenban-Smith: Employed by Nominet (audience member)

Full session report

Digital Inclusion and Innovation in the Domain Name System: A Comprehensive Discussion

This summary provides an in-depth analysis of a discussion on digital inclusion and innovation in the domain name system, with a particular focus on new generic top-level domains (gTLDs). Adam Peake introduced the panel of experts from various sectors of the internet governance ecosystem, including domain registry operators, ICANN board members, and industry leaders.

Key Themes and Discussion Points

1. Linguistic Diversity and Inclusion

A central theme of the discussion was the critical importance of linguistic diversity in the domain name system. Ram Mohan, Chief Strategy Officer for Identity Digital, emphasised that linguistic diversity should be “a core outreach goal” and “a significant metric” for measuring success in the next round of gTLDs. This sentiment was echoed by Jennifer Chung, representing DotAsia and managing the .Kids domain, and Sajid Rahman, who highlighted the opportunity for new gTLDs to serve underrepresented languages and scripts.

The speakers agreed that promoting internationalized domain names (IDNs) is crucial for better serving non-English speaking communities and making the internet more representative of global linguistic diversity. This approach was seen as essential for driving adoption of new gTLDs and fostering true digital inclusion.

2. Challenges and Opportunities in Underserved Regions

The discussion highlighted significant challenges faced by underserved regions, particularly Africa, in participating in the domain name market. Lucky Masilela, CEO of ZACR and manager of the .za domain, pointed out the limited success of African gTLDs due to market conditions, noting that there are only 3.5 million domain name registrations for a continent of 1.4 billion people.

High costs and the need for long-term investment were identified as major barriers, even in developed markets. Nick Wenban-Smith from Nominet emphasised the complexity and resource-intensive nature of launching and sustaining a new gTLD, citing the Welsh community’s experience as an example.

Despite these challenges, speakers saw opportunities for innovation. Lucky Masilela shared the example of M-Pesa in Africa, demonstrating how mobile technology addressed financial inclusion needs in underserved markets. He also suggested a business model involving cities or municipalities as applicants for new gTLDs, which could potentially address some of the challenges faced in the African market.

3. Defining and Measuring Success for New gTLDs

A significant point of discussion was how to define and measure the success of new gTLDs. Ram Mohan argued against using the number of domain registrations as the sole metric, introducing the concept of “lowercase innovation” through memorable domain names. He suggested that the impact of new gTLDs might only be visible years after their introduction.

Jennifer Chung supported this view, stating that success can mean different things for different regions and communities. However, Lucky Masilela emphasised the importance of sustainable business models over time, suggesting that registration numbers remain important for achieving scale and enabling innovation, especially in price-sensitive markets.

4. Realistic Expectations and Market Forces

Several speakers, including Ram Mohan and Sajid Rahman, stressed the importance of having realistic expectations about the success rates of new gTLDs. Ram Mohan argued that allowing some gTLDs to fail is a natural part of market forces and should be expected. He suggested that ICANN should include an estimation of the number of TLDs that will not succeed as part of its planning for the next round.

5. Improving the Next Round of New gTLDs

The discussion yielded several suggestions for improving the next round of new gTLD applications:

a) Enhanced Applicant Support: Christy Buckley emphasised the need for an improved Applicant Support Program targeting underserved regions. She explained that the program offers a 75-85% discount on evaluation fees and extended support for the first three years of operation. Rebecca McGilley added that there should be a balance between providing support and encouraging eventual independence for new gTLDs.

b) Registry Service Provider (RSP) Pre-evaluation: Rebecca McGilley explained the RSP pre-evaluation process and its potential to reduce costs for applicants.

c) Focus on Linguistic Diversity: Sajid Rahman and others stressed the importance of promoting IDNs and linguistic diversity in the next round.

d) Leveraging Lessons Learned: Jennifer Chung highlighted the value of applying insights from previous rounds to improve the process.

e) Realistic Planning: Ram Mohan advocated for incorporating realistic expectations about success rates and market demand into the planning process.

Unresolved Issues and Future Considerations

Despite the productive discussion, several issues remained unresolved:

1. How to precisely define and measure success for new gTLDs, especially those serving niche communities.

2. Striking the right balance between providing support for new gTLDs and encouraging their eventual independence.

3. Addressing the high costs and long-term investment required for new gTLDs, even in developed markets.

The conversation also generated important follow-up questions, such as how to increase the amount of African content on the internet (currently less than 15%) and how to develop more price-sensitive domain name offerings for the African market.

Conclusion

The discussion underscored the complex balance between fostering innovation, ensuring inclusivity, and maintaining the stability and security of the domain name system. While there was general agreement on the importance of linguistic diversity and support for underserved regions, differences in perspective emerged regarding the definition of success and approaches to market challenges.

As ICANN prepares for the next round of new gTLD applications, the insights from this discussion highlight the need for a multifaceted approach that considers diverse stakeholder perspectives, regional differences, and the long-term sustainability of new gTLDs. The conversation demonstrated a cautious optimism about the potential for new gTLDs to promote inclusion, balanced with a realistic acknowledgment of the challenges involved in expanding the global domain name space.

The panel concluded by mentioning an upcoming workshop on multilingualism, IDNs, and universal acceptance, highlighting the ongoing efforts to address these critical issues in the domain name system.

Session Transcript

Adam Peake: … to bring new geographies online, to bring new brands from the Global South online, to bring more content, access to more content and available connectivity. I don’t think you really need to hear too much from me. I will run through our speakers and try to do this in a way that will also introduce what they’re going to talk about. I will begin with Sajid from across the table. Sajid is an ICANN board member and a member of the ICANN community, with a background as an international banker, a leader in the financial sector and venture capital firms, but also a business strategist who’s very active in the Global South, bringing development to what we were thinking of as underserved regions. And so he will give us a perspective, as an ICANN board member and also a member of the community, about the discussions we’re having around enhancing what’s available to you in top-level domain names and the new GTLD programs. Lucky Masilela is the CEO of ZACR, which is the manager of the largest ccTLD in Africa, South Africa’s .za domain name. He’s also the leader of the Registry Africa group, the managers of the .Africa top-level domain name and the city names .CapeTown, .Durban and .Joburg, which are also managed by ZACR. So he brings the perspective of someone who’s been bringing new communities into the domain name space. Jennifer Chung, on my right, works for one of the leaders of the .Asia corporation and is a manager of the .Kids top-level domain name. Many of you will know Jennifer also as a member or former member of the IGF’s multi-stakeholder advisory group and a leader of the IGF Support Association, so a great supporter of the IGF. Coming soon, we hope, will be Ram Mohan, who’s the Chief Strategy Officer for a company called Identity Digital, which runs the largest number of new TLDs, around 500.
He’s also a former ICANN board member and the current chair of ICANN’s Security and Stability Advisory Committee. So he will join us when he’s finished with another session in another room. We have quite a packed schedule. And last but not least is my colleague, Christy Buckley, who is leading our Applicant Support Program. Christy is online and will give a perspective on how the staff is organizing this particular activity, what we’re hoping for, and what we’re achieving at the moment with the work of the new GTLD program and the Applicant Support Program. So with that, I would like to begin by turning over to Jen, Jennifer, and Lucky. Lucky, I hope your microphone will be working, to give us a perspective as two people representing organizations that introduced top-level domain names in an earlier round when we introduced new TLDs to the internet. Lucky for .Africa, Jen .Kids and also the earlier experience with .Asia. So I think, Jennifer, if you would like to begin, and here’s a microphone.

Jennifer Chung: Thanks, Adam, for that really lovely introduction. So I’ll talk a little bit about the experience .Asia had. I always like to say .Asia is one of the middle children, not a legacy TLD, but definitely not part of the new round. It was quite interesting for DotAsia because it was really in response to DotEU, which is obviously with the CCs, but it was an initiative to support Asia-wide collaboration and uphold the ethos, really, of the Asia-Pacific community. There was quite a lot of will, geopolitical will and community will, to actually have this namespace. It actually pioneered quite a lot of different things. Our sunrise policies are now used in a lot of different ways when you look at the different registry policies that you see right now. It was also one of the very first top-level domains that offered IDN registrations after, of course, the ccTLD fast-track that was passed back in 2009. We started offering internationalized domain names in our namespace from 2011. Now, moving back a little bit and talking about the new GTLD part, we also manage .Kids, which is a kid-friendly, for-kids-by-kids namespace, and that came in the 2012 round. I’m sure Christy will be really happy to talk to you a little bit more about the applicant support program. But one of the very interesting things about .Kids was that, in the last round, it was the only recipient of the applicant support program. So we are happy to share lessons we’ve learned, happy to share the feedback, and of course, we’re looking forward to learning more about the applicant support program coming up. I think I’ll stop here because I could talk at length, but I’d love for Adam to moderate.

Adam Peake: Thank you, Jen. I think the point that .Asia is, as you said, not quite a legacy but an in-betweener is important in the lessons of .Kids. I wonder, Lucky, hoping that your microphone will work,

Jennifer Chung: Thank you, Adam. Like I said, I’m happy to talk more about, you know, what we were thinking behind some of the reasons. Well, not really for .Asia, because I think all of us already know the reasons why .Asia was applied for, but I will talk a little bit about .Kids. I think it was really, at the time, a response to the children’s rights and children’s welfare community concerns over the over-commercialisation of .Kids, wanting this namespace to become exemplary, you know, GTLD with children’s rights and interests at heart. So, something central to the .Kids namespace is that there was a best practices and guiding principles that were actually crafted and created by child-led organisations, children’s rights and welfare organisations, to make sure that this is something that they can definitely back behind. They actually form part of the advisory committee. that we all often consult with when we come across interesting cases of, you know, possible abuse. There’s very, very strong, strict policies on many of the categories that go beyond, above and beyond many of the different registry operator policies right now. So happy to talk about that. And another important thing I wanted to bring up is there’s always this notion when you start applying for a new GTLD, you think that it will be used for something, you envision different uses for it. But when we actually look at the real use cases, we’ve always been surprised because we’re like, oh, okay, it’s been used for not only educators and children who actually want to, you know, express themselves online as well. We also saw some interesting clothing brands that use interesting .kids names, such as copycat.kids to launch their products, to look at their markets. We looked at futureleaders.kids as well. And they actually provide high-quality educational materials for children to look at, like tutorial modules as well. 
And also artclubs.asia is an interesting example that actually spans both, I guess, the child or youth community and also the use of .Asia as, I guess, the name and the identity that they have online. One thing also that I’ve looked at when we were going through this whole idea of, you know, how can we foster innovation? How can we use this opportunity of having a GTLD, having a new GTLD? How can we use this to further innovation, both in terms of our business models, because there are a lot of different use cases, as well as pushing forward the innovation in policymaking? And I think the first of the three things I want to highlight is having an active suspension quite early on in the domain name life cycle, to send the signal to market, to kind of, you know, serve as a warning for people who might look at registration for nefarious purposes to, you know, back off and understand that the namespace is being organized and governed by policies that are quite, you know, open for innovation but definitely guarding against abuse. Having stable pricing policies is always a very good use case for innovation, because a lot of small and medium businesses and a lot of individual entrepreneurs really want to know that they’re able to use this namespace to grow their business, which might be startups or something like that, to be able to fit that into a startup budget. And then finally, and I guess this is back to this ICANN world as well, to foster innovation and inclusion, and I think it’s happening now as well, is to stop kind of over-requiring RSEP, the registry services evaluation process, for every single thing.
So I think for a lot of registry operators, when we’re looking at our business models and business use cases, we really want to have predictability, to be able to look at how we can grow innovation, how we can introduce this to new markets, and how we can get new markets interested in applying for this upcoming new round as well. So hopefully that gives a little bit more of an introduction to what we’re thinking.

Adam Peake: Thank you very much. Thank you Jen, that was brilliant. And I’m going to try Lucky again. How are you doing Lucky? I’m hoping your microphone is unmuted and please see if you’re able to speak to us.

Lucky Masilela: Good afternoon. Yes, I hope I’m clear. Yes. You can hear me well.

Adam Peake: I don’t hear you but I see your microphone seems to be working there.

Christy Buckley: I can hear Lucky. online. This is Christy.

Lucky Masilela: Oh, perfect.

Adam Peake: Christy, I see you were trying to say something and we can’t hear you either.

Lucky Masilela: Oh, okay. You can’t hear me there?

Adam Peake: Right. It seems the Zoom audio isn’t coming into the room. Sajid, I wonder if I could put you on the spot and jump around and give us an introduction and some comments on what you heard from Jen in particular, but also your thoughts on the program, particularly notions around innovation and inclusion. That would be great. And apologies for the jump.

Sajid Rahman: Thanks, Adam. When Adam mentioned to me a few hours back that I need to talk about domain names and inclusion and innovation, I was really scratching my head on how to connect domain names with innovation. I can understand the inclusion bit of it. So, you know, if you look at the whole digital divide and inclusion aspect of it, there are three fault lines that we can think of. The first fault line is along the line of access to the internet: how we can ensure that people across the world, irrespective of where they are based, can access the internet uninterruptedly. So that’s the first fault line that needs to be addressed. The second fault line is the fault line of bias. But just to give you an idea on the first fault line: even today, in 2024, around 2 billion people don’t have access to the internet. Between north and south: in the north, 93% of people have access to the internet; in the south, it may go up to 42%. Between urban and rural areas, the percentage varies from 92% in urban areas to anywhere between 20% and 30% in poorer areas, in some cases even more. So the percentages vary a lot based on where you are set up. So access to the internet is the first fault line that we need to address if we want to improve digital inclusion. The second fault line is around biases. As much as we believe in the internet, it is, in a way, a result of the human biases that we live with. If you look at it, there was data that facial recognition, as it is used, may be faulty in 44% of cases for dark-skinned women, compared to 1% for a white male. So it sort of reflects the people who are developing the internet and working behind it. So the second fault line is the line around biases. The third fault line, I think, is the fault line of innovation.
As we go around the world, there is a challenge of innovation and access to innovation, whether it is innovation around web infrastructure, innovation around artificial intelligence, or innovation in the latest that is coming out with quantum computing and everything. So the third fault line, the fault line of innovation, impacts how people are included in the internet of today and will be included in the internet of tomorrow. If we look at all these three fault lines, where ICANN really comes in handy is the first fault line, which is access to the internet. Now, if you look at the new GTLD programs that we are working on, the new GTLD program is an issue of digital identity. We believe that as more and more domain names are allowed to exist, like .asia or .africa or all the other domain names, there will be more people who will create a better identity on the internet. And that will obviously improve digital inclusion, or access to the internet, for a wider group of people, entrepreneurs and individuals who want to create an identity on the internet. So new GTLDs really help in that way. Then there’s this question of internationalized domain names, so non-Latin scripts. As we improve internationalized domain names, we’ll have people of different languages who will at least have an identity which is in a non-Latin script, an identity they can relate to. So that is an important part of this whole ICANN initiative. The third one is, of course, universal acceptance. We could talk about that a lot, but I was told by Adam not to touch upon it in detail, because we apparently have another program to do that. But the point is that that is also a critical part of how ICANN helps through the new GTLD programs.
The second thing is, of course, the whole grant program, which Maarten, who is here, has been leading for a while from the ICANN side. So the grant program is essentially designed for people in under-resourced areas, to help them get onto the internet, the people who are working in different parts of the world, ensuring that they have more support in terms of infrastructure and in terms of innovations. The people who are not well represented are financially supported so that they can access the grant program and can get onto the internet. So that’s another critical part of it that works out. Do you want me to continue? We have time? OK.

Adam Peake: Thank you very much. I like the three fault lines idea. So thank you. Let’s just see. Christy, would you like to have another go and see if we can hear you in the room? We could see that the microphone was working. And perhaps now the captioning will show that you’re also. coming through. So over to you, Christy, please.

Christy Buckley: Thanks, Adam. Can you hear me OK?

Adam Peake: Yes. Yes.

Christy Buckley: Hooray. OK. It's great when technology works. So greetings, everyone, and thanks very much, Adam. I wanted to just say hello from Vancouver, Canada, where it's four in the morning here, so apologies if I'm not entirely awake yet. But it's wonderful to see everyone online and also in the room. Thanks for joining this session today. I wanted to share a few observations about digital inclusivity and the domain name system and also highlight how ICANN's Applicant Support Program is intended to foster broader and more diverse participation in technical internet infrastructure. As some of my colleagues have highlighted, we're already seeing exciting examples of innovation and global participation in GTLDs, or generic top-level domains, and we hope that the next round of GTLDs will open even more doors for both. From the lens of digital inclusion, one thing that I've observed is that concepts and definitions and methodologies for assessing digital inclusion do not typically include any mention of technical internet infrastructure like GTLDs, nor the need for universal acceptance of GTLDs with different languages or scripts, and we have another session related to this tomorrow. And so while definitions of digital inclusion vary, the focus generally falls on access, connectivity, skills, and participation. However, when infrastructure is discussed, it usually refers to internet connectivity, devices, or online services. The underlying technical infrastructure of the domain name system, and who has the ability to participate in or shape that infrastructure, often gets overlooked when talking about digital inclusion. And as I think about this, it actually reminds me of some of the work that I did in a previous life in global food systems. So we know that everyone needs to eat food.
But when we look at a plate of food, we rarely think about the complex network of local and global systems that brought those ingredients, those foods, together on the plate. And the same holds true for the internet. Millions of people use it every day, and yet very few people think about the infrastructure underlying it, the policies governing it, and who has opportunities to participate in managing that infrastructure or in shaping those policies. I know that many in the internet community are eager to see greater participation and accessibility and inclusion in managing internet infrastructure. And one key opportunity that I see to advance this is ICANN's Applicant Support Program, which Jennifer had spoken about earlier. It's often described as a sort of scholarship for GTLD applicants, and it aims to make the process more accessible globally. It offers fee reductions and capacity development and access to professional volunteer resources. And in doing so, the intent of the program is to foster more innovation and ensure diverse participation in that technical infrastructure of the internet, which is, again, a critical but often overlooked aspect of digital inclusion. I'll speak a bit more in detail about the Applicant Support Program, but for now, I just wanted to emphasize that it's a tangible way to help ensure that the future of the internet is inclusive and innovative and globally representative of the next billion users. Thanks.

Adam Peake: And thank you that we finally got to hear you as well as see you. Thank you. Lucky, perhaps I think it's time to try you again; we're having some success getting people online and speaking. So again, if we can come back to that original question that Jen started to cover: when you applied for .Africa and also the city names we mentioned, Joburg and Durban, et cetera, what was the motivation, and what was your inspiration and hope for those TLDs? Hope for how they would be used. Have those hopes been met? And a little bit also, for you, what's next? So, hoping that the audio works, over to you, Lucky, and thank you.

Lucky Masilela: I hope you can hear me now. Am I audible?

Adam Peake: Yes, we can. Yes, you are.

Lucky Masilela: Thank you. Yeah. Look, thanks, Adam. I think one can answer the question in multiple facets, following the topic around digital inclusion and some of the inhibitors. So we broke ranks in 2012 and we applied for .Africa, .CapeTown, .Durban and .Joburg, those four names. And ideally, we were looking to bring the continent of Africa into the mainstream. We had to make sure that Africa is also participating in this digital world, in the digital space. More than anything else, we had this dream that the African community felt that they had missed out in the previous round. And they wanted to have this domain name, .Africa, being utilized not only as a digital identity, but as an instrument that would be used to unite the continent, as an instrument that would be used to express the cultural interests, the cultural diversity of the continent. And that is what was underlying some of the important pointers towards the application. And then for us, it was also one of those great honors that the African community supported us and identified us to be the ones leading this campaign of applying for the domain name, and now I'm referring to .Africa when I say the domain name, and to be the administrators of the name. And we also had to do the marketing ourselves. Really, the confidence that was bestowed on us by the continent gave us a lot of comfort. Now, the interesting thing was, we're talking about this inclusiveness where we succeeded. We must also try and look where we failed, because it is where we failed that we need to be focusing to understand how we go further. In that last round of domain names or GTLDs, there were at least 1,900 names. And there were 13 names that were applied for from Africa, the entire continent, a continent of 1.3, probably 1.4 billion people. Only 13 names were applied for. And of all the 13 names, only five are still active. And I'm referring to .Africa, .CapeTown, .Durban, .Joburg, and .MTN.
Now, the dream of inclusiveness begins to falter immediately. The fault lines show, because here are 13 names, of which eight did not see the light of day. And now we are looking at the next round. The first round on its own had its own challenges. The challenge of pricing, 185,000 per domain name, is also restricting. When you think about it on the continent, how do we begin to get the continent of Africa to be included in this space? Now we look at only 13 names, and most of those 13 names, if not all of them, had applicants that were all in the Southern Hemisphere, or in Southern Africa. And up until today, those names are still administered by entities based in Southern Africa, for the rest of the world, for the rest of the continent. So for us, this is a litmus test of the success of digital inclusiveness: looking at where we have succeeded and where we have failed as a continent, or as an ecosystem in the DNS space. Are the domain names beginning to achieve what is required or what is expected of us? When we look at the GTLD names and how they have performed in the last 10 years, we can still see that the geographics in particular have not had some stellar, outstanding performance. .Africa on its own today is ranked fourth among the geographic names, with 51,000 names. And this is nowhere near what would be ideal for a continental name. I mean, we should be looking at being closer to .Asia, as an example, who are the leaders in the pack, but all other names are becoming difficult to achieve. We can only look at the ccTLD and say our second-level domains, our co.za, or web.za and net.za, are the stellar performers. And when you compare the same numbers in South Africa across the continent, you realize that again, the biggest challenge when we talk of inclusiveness is that a continent of 1.4 billion people has only just shy of 3.5 million registrations.
Now, 3.5 million registrations on this continent, it's very small. It just shows the degree or the percentage of penetration or usage. And we need to find out where these bottlenecks are, where we are losing the plot. Why is it that we are not participating? One of the things is free names; another is the possibility of people not being literate, or of this not being a commercially viable space, or other things that we still must find. I think the Africa DNS market study must still find out why we are not growing. But one of the things that I was thinking about earlier today was, if we continue having free names, they are not going to be a solution for the continent. If you think of Gmail as an example, it is offered for free and it has really put a damper on all the other TLDs. And it also puts a damper onto any other GTLD that would enter the market. And we need to have a conversation around to what extent Gmail will continue being provided, especially on the continent, because it is really a weed among the critical ccTLDs on the continent. Now, another thing, once we move away from that, is managing: who manages those country ccTLDs? We still have ccTLDs that are administered outside the continent. Now, that on its own limits this inclusiveness that we want to achieve, that we want to talk to. It also makes it difficult for our own software developers to participate and build and develop their own solutions. Now, this immediately takes me to the next round. Just looking at the next round: if the previous round in 2014 gave the results that we have, the numbers that I presented and the decline from 13 to five names, what will the next round bring for Africa as a whole? It seems like we are heading for another similar failure for the continent, but the rest of the world might be fine. If today it is projected that a domain name will be 285,000 per name, discounted at 85%, and you think of the economies on the continent, what is important?
Is it a domain name, or bread on the table, or taking your kids to school? The priorities shift, such that there is not going to be a single entity or company that will want to pay, whether it is 35,000 US dollars or 285,000. I do not see that happening. And the next round means it will still be excluding a lot of players from the continent. There is a round that has just begun for registry service providers. Equally, the evaluation process or mechanism for that requires exorbitant amounts of money, in the region of $90,000, to be evaluated as a service provider. That immediately marginalizes all the service providers on the continent. You will find that the service providers participating in the next round are going to be from the Northern Hemisphere. We will have nothing coming from the Southern Hemisphere or the third world countries. And that also further entrenches this exclusivity. And I can go on and talk about solutions, how we think we should do this: not necessarily providing the names or this process for free for the continent, but creating enabling mechanisms. And we are discussing in this direction. And I hope, you know, as we look at the challenges and the beauty whilst we are celebrating .Africa, I thought, let me bring another dimension on some of the challenges that I think need to be addressed as we discuss digital inclusion. I will take a break and come back to deal with other issues. Thanks. Thanks, Adam.
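The penetration figure behind Lucky's point can be checked with quick arithmetic. The 3.5 million registrations and roughly 1.4 billion population are the numbers he cites in this session; the script below is only an illustrative sketch, not official data.

```python
# Back-of-the-envelope check of the registration figures cited above.
registrations = 3_500_000        # roughly 3.5 million registrations
population = 1_400_000_000       # roughly 1.4 billion people

per_thousand = registrations / population * 1_000
print(f"about {per_thousand:.1f} domain registrations per 1,000 people")
```

This works out to about 2.5 registrations per 1,000 people, which is the very low penetration Lucky is describing.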

Adam Peake: Thanks very much, Lucky. And on the solutions part, perhaps, Jen, you can also start thinking of ideas from what Lucky has been presenting to us here about what the solutions are. And I will mention that you're quite right about the challenges, Lucky, with the notion of 1.4 billion people and only 3.5 million names registered. We have looked at that. ICANN and the community have done DNS studies, marketplace studies for Africa, and have tried to respond to that with different ways to encourage growth. And of course, there's the Coalition for Digital Africa, bringing together not just ICANN, but the country code top-level domain community, and also yourself and other operators, the Smart Africa groups and others who are working more directly in the development processes. And perhaps some of the ICANN board members who are in the room might want to comment at some point on some of the work we're doing there. But Christy, I wonder if we can come back to you and ask you to say a few words on what we're doing in engagement activities for the next round, and also how the Applicant Support Program is going to address some of the issues that Lucky referenced there. If that's all right, thank you. Over to you, Christy.

Christy Buckley: Sure. Thanks so much, Adam. And Lucky, thank you so much for sharing your perspective and observations from the last round. And I think my understanding in working very closely with the ICANN community on developing the next round, and in particular the Applicant Support Program, is that there’s a lot of emphasis and attention and desire to make sure that the next round has global, diverse participation. And one of the opportunities for helping to support that is the Applicant Support Program. In fact, the community provided guidance recommendations on outreach and communications for the Applicant Support Program, which asked ICANN to emphasize underserved communities, nonprofits, social enterprises, and community groups. And so far on the Applicant Support Program, which just opened to receive applications last month, outreach and communications for that has really only targeted underserved regions so far, and not at all in more developed economies just yet. The Applicant Support Program is open for 12 months, and the idea behind that is to give applicants a really long runway to learn about the program, to build their understanding about it, and hopefully to apply. And when they do apply, get access to all of the… supports available, which includes not just the fee reductions in the GTLD evaluation fee, but also access to volunteer professional service providers, as well as a capacity development program that ICANN is constructing right now. And so what’s interesting just in the first months, and I can’t provide detailed numbers just yet because I don’t want to take the wind out of our sails on Wednesday when we announce the numbers, but just in the first month we are already seeing interest in the applicant support program from all corners of the world and all regions, including Africa, which is really fantastic because it’s only been open for just a few weeks. 
But we're also under sort of no illusion that ICANN and the ICANN community can do this alone, right? So while we're putting a lot of resources and effort into spreading awareness and understanding about the next round and the opportunities therein, especially to advance digital inclusion, we're also relying on the ICANN community and the broader internet governance community to spread the word, to raise awareness, to help people understand the relevance of the domain name system and GTLDs to the work that they're already doing. And that's where this broader community, including the IGF, comes in. I will note that, similar to the 2012 round, though back then only DotKids was able to take advantage of it, the fee reduction provided to supported GTLD applicants is really intended to be meaningful and significant. And so this will be a 75 to 85 percent discount on the GTLD evaluation fee. That's sort of the base fee of evaluating the application, but the discount also applies to some conditional evaluations. So for example, if you are a supported applicant that applies for a geographic name, that's a conditional evaluation that would also be receiving the same discount of 75 to 85 percent. If you're a community priority evaluation applicant, applying for a top-level domain that represents a community, that's another evaluation fee that again would be receiving that 75 to 85 percent discount. So the idea is to, you know, provide financial support not just as a sort of one-time discount, but also to think about the whole life cycle and journey of the applicant and how we support and sustain more diverse entrants to this space over the course of that life cycle. And so ICANN did research to understand, you know, what other programs similar to the Applicant Support Program tend to provide, and it's usually, beyond the sort of one-time upfront investment, providing that long-term capacity development training and support.
For supported applicants that become registry operators like DotKids, there would also be a discount in the annual base registry agreement fees. And so that's something where we're again trying to help the first few years of a supported applicant becoming that registry operator, to kind of help them get up to speed in the market and run their business. We're providing that discount in the longer term, for the first three years post-operation. I know that there's been some discussion about the fee for the registry service providers, and I've even heard some folks in the community talk about the fact that, you know, support for registry service providers was not considered. So it just wasn't a policy recommendation, but it's something that has gained a lot of interest and discussion since the registry service provider program has launched. And, you know, I think it's interesting to consider that in terms of the future continuous improvement of future rounds. How can we further improve the opportunities for diverse participation in all aspects of the next round, not just on the GTLD side, but also the RSP side? Adam, did I address your question? Is there anything else you want me to speak to?

Adam Peake: I think that's brilliant. Thank you very much, Christy. And I just wanted to say, I think what's clear from this is that many thousands of hours of work have gone into reviewing the 2012 process and making improvements for this latest round. And while we say the word ICANN, it's important to remember that this is a community of volunteers. The staff guide the process, but these ideas and these improvements, these mechanisms, come from our multi-stakeholder community, of course. I was wondering, as I mentioned, whether a board member might want to respond on an ASP-related issue or one of the comments from Lucky. But really, Sajid, if that's something you want to pick up on, or otherwise we'd like to continue on to Jen and the opportunities and challenges.

Sajid Rahman: You know, I was previously talking about the three fault lines, right? So the fault lines that are essentially causing a digital divide. And if you look at the first fault line that I talked about, which is how to give people access to the internet, I think it's very important to create the ability for different people to participate in the system. At the end of the day, if we believe in the hypothesis that competition creates innovation, then the more open we make it for people to join through different support programs, the better competition we'll have, the better innovation we'll have, and the whole ecosystem will flourish. So on the Applicant Support Program, we have done a lot of work. Martin, I know you wanted to say something, or Becky.

Rebecca McGilley: Thanks. Christy's talked a lot about the Applicant Support Program, but it is based on longstanding policies that the community implemented to support diversity and inclusiveness on the internet. And the support, as Christy noted, will be meaningful in terms of both the application fee reduction and also enabling supported applicants to participate in auctions in a meaningful way. And unlike the 2012 round, in this round we will be contemplating ongoing support to enable the domain to get up and running. So deferred, discounted ICANN fees and the like are all going to be very important. Ultimately, there needs to be a market that supports a domain. So it's not intended to be forever. But the goal would be to provide enough support to ensure that the domain is able to operate, able to educate people about its existence, and able to create a market and create interest in it. So it's a very important aspect of increasing access to the internet locally across the world.

Adam Peake: Thank you, Becky. Thanks, Sajid.

Lucky Masilela: Adam.

Adam Peake: Yes.

Lucky Masilela: If I may come in.

Adam Peake: Please, Lucky.

Lucky Masilela: And thanks for those comments that have been made regarding the solutions and how this Applicant Support Program is being implemented and rolled out to include people from the underdeveloped world or countries. But one of the things that I want us to think of and consider, and to think of very strongly, whilst we're talking of bringing in other mechanisms, training and other support mechanisms: what if we make this leap of faith, based on the evidence that we have? There is empirical evidence that currently the four GTLDs in Africa, .Africa, .CapeTown, .Durban and .Joburg, have not been very successful, but they have been successful in the sense that they are still active 10 years later, and they are growing very slowly. .Africa has just turned the corner; there is growth. What if we use those very lessons that we have in our hands, that we have seen, and allow those very entities to be the ones working with the local communities, to give training to the local communities, those potential applicants on the continent, to be trained or given support by the guys who are on the ground, the guys that have seen it, the guys that have walked the journey? And I'm referring here to Registry Africa. Registry Africa has been very active in growing these domain names, these geographics, and I believe we have been successful. I believe in the business model that we have shared at the last Africa Strategy Session in Istanbul, where we believe that we should create a model whereby there is a sponsor or an applicant, and that applicant would be a city or a municipality, a county, or even a community.
These will be the entities that apply for a domain name, and for that domain name they will then appoint an operator consisting of a registry and a registry back-end provider, and this could be a local provider, and this could be done on a build, operate, and transfer basis, building this on behalf of the city or municipality. The interest for the city and the municipality is to provide service to its residents and derive revenue from the utilities or the services that they are providing to their citizens. And out of that, they grant each citizen, each utility holder, a domain name and an email address, and that email address would enhance the delivery of bills and the settlement of bills based on what is utilized. We believe that such a model, when applied or implemented on the continent and in underserved or underdeveloped countries, would attract more players, more participants. And this needs some kind of understanding that we have seen it. We have walked this ground. We think we understand the best mechanism for bringing about inclusion. And this is my submission to this audience: consider this business model of an applicant being a city or municipality, getting access to their discount, and appointing a registry operator and a back-end provider to be the ones who are doing the work, building websites alongside that and providing services. And we are looking now at where the next wave of domain names is. For us, the next wave for domain names, when we talk of inclusivity and innovation, is getting more of our own participating in e-commerce. E-commerce is the next great space for domain names. If we can participate extensively and solidly in that space, we will have been able to achieve a lot. Thanks, Adam.

Rebecca McGilley: So, I just want to say something that Lucky mentioned that’s very important. Part of the program for applicant support will be sort of pro bono assistance in terms of, we’re looking for people to help applicants with writing applications, understanding the legal issues, but also the business models. And so, what Lucky was saying about learning the lessons from what .Africa, .CapeTown have done, and how they’ve grown, and the insights that you’re providing, Lucky, are critically important. And I hope we will have a lot of people who will volunteer to be part of the applicant support to get that kind of hands-on, on-the-ground experience with it. But I know that’s something that the applicant support program is putting together as the outreach for the pro bono, for the sort of non-monetary, but very critical support in terms of business models and dealing with the paperwork and the like. So, there’s also a question from Neil in the chat about the prospect of an applicant for the registry service provider pre-evaluation form. The cost for the evaluation of back-end service providers is, like the cost of the GTLD program in general, cost recovery. So, there is a $92,000 projection, but that is based on a certain number of applicants, and the fee itself will go down if we get more applicants for that. The other thing that’s really important to keep in mind is, in the last round, every evaluation, every application included an evaluation of the back-end service provider. And so, there was a cost to the applicant in the form of that evaluation. That won’t be here this time because the registry service providers are being evaluated on a sort of once-and-done process. So, there are savings and benefits that will accrue to supported applicants, and applicants in underserved regions from the program itself. 
And then, finally, the question of sort of whether there should be applicant support for back-end registries raises really important stability and security issues. Back-end service providers are businesses that are going to be operating multiple top-level domains all around the world, and the very last thing that we should allow to happen is to not thoroughly evaluate the ability of that service provider to provide high-quality service. And that costs money. That evaluation costs money. So, although I think we understand and agree with the desire to have globally located back-end service providers, we have to balance that with the absolutely critical stability and security requirements and take into account the fact that this once-and-done evaluation process will benefit applicants globally who will not have to bear the costs of those as part of the evaluation process.

Adam Peake: Thanks, Becky, and thanks for the – oh, go ahead, Martin, please.

Maarten Botterman: Hi, this is Maarten Botterman. Sorry, just to add to what Becky said: another big difference from the first round was that there were then just a handful of back-end providers. There is now actually a market with choice, including non-profit organizations and ccTLD operators, so you have much more on offer and much more reasonable pricing as well. Just wanted to add that part.

Adam Peake: Thank you, Maarten. Thank you, Becky, for your comments. It's very helpful, very kind of you. I just wanted to welcome Ram Mohan to the room. I know you've been in much demand in other sessions. Ram, as I mentioned, is the Chief Strategy Officer for Identity Digital, one of the largest operators of the new TLD batch from 2012. And I wanted to say we've been talking about, Lucky and Jen have been talking about, their experience from 2012, and Jen from before with .Asia and how that's worked, and Lucky's made some very important points about inclusion and how we can get people applying from the African continent, et cetera. But please make an introduction and give us some ideas. Thank you.

Ram Mohan: Thank you so much. Can you hear me? Okay, great. Thank you so much, and my apologies for joining this session late. I was speaking at another one. So I want to focus my comments on two areas. One is that we should look at innovation with a lowercase i rather than innovation with an uppercase I. Let me explain what that means. Often the success of programs is only seen years out. And in the meanwhile, you have many prognosticators who pre-decide and who say that a program has failed or has succeeded based on conventional metrics. Metrics, for example, in the domain name industry, such as how many domains have been registered. And you find an especially prevalent logic inside the domain name industry that correlates success almost directly with the number of domains that people have registered. But I'd like to say that that is actually a myth. If you look at my own company and the 300-plus domain names that we have, I can tell you that we have success in all of them, not just when looking at it purely commercially, asking whether it is a profit-making enterprise one TLD at a time. One way of looking at innovation for us has been in the existing domain name space prior to the various rounds that ICANN has introduced. The gold standard has been .com in the GTLD space: businesses and organizations applying for a .com, getting a .com domain name. We are now in a situation where it takes somewhere in the order of 17 or 18 characters strung together to get an open name that is easily available in .com without paying any kind of a premium. So just to give you an example, if I say I want to get roms studio, and I type in romsstudio.com, it's hard to get. It's probably gone. Even if I type in roms-studio, even that is hard to get. And what you'll find is engines that come back and say, how about romsstudioonline.com? How about romsdigitalstudio.com, et cetera, right?
The lowercase i innovation that has happened is, with the advent of new TLDs and the availability of them, is that I can go and get rom.studio, or I can get rom.photography, or whatever it is, right? And there is innovation that has come about just by that. Because you’re bringing communities that were otherwise forced to get very long strings that are often not easy to remember, often not easy to relay. Those strings are now no longer as important, because you can get memorable, descriptive strings available directly in the domain space. And I think that is true innovation that is being fostered. So that’s the one thing that I’d like to make a point on. The second is on, we’ve talked here about diversity and inclusion. The thing here is that, if you do not get to linguistic diversity combined with the other kinds of diversity that ICANN is looking for, you’re going to fail. There has to be linguistic diversity as a core outreach goal, as a core model for a definition of success. It’s not the only determinant of success, but it ought to be a significant factor and a significant metric that you measure, because the world that we know is not a world of English and Spanish and Chinese and Arabic. The world that we know is far more multilingual, but we do not have systems at the domain name space or the domain name level that can reflect the actual reality of the people of the world. For that, we need to really have a focus on linguistic diversity. I’m pleased that there is a session tomorrow on universal acceptance and internationalized domain names. It’s not enough to just say, let us get names in your language, let us get names accessible online. We have to also look at, are the various languages and the communities that have those languages, do they have the knowledge, understanding, awareness to be able to participate in what you’re bringing forward? Because when they do that, you will find lowercase innovation coming through.
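Ram's point about non-Latin scripts rests on internationalized domain names (IDNs), where a Unicode label is mapped to an ASCII-compatible "xn--" form via Punycode so the DNS can carry it. A minimal sketch using Python's built-in idna codec, which implements the older IDNA 2003 rules (production registries follow IDNA 2008, so treat this as illustrative only):

```python
# Unicode label -> ASCII-compatible encoding (ACE) and back, using
# Python's built-in "idna" codec (IDNA 2003 rules).
label = "bücher"                    # classic example from the Punycode RFC
ace = label.encode("idna")          # ASCII form with the "xn--" prefix
restored = ace.decode("idna")       # back to the Unicode label

print(ace.decode("ascii"))          # xn--bcher-kva
print(restored)                     # bücher
```

Universal acceptance, the topic of the session mentioned above, is about making sure software actually treats both forms, and TLDs in any script, as valid domain names.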

Adam Peake: A nice new way to look at this. Jen, .asia and language and the issues around that. How do you respond to this idea of lowercase innovation based on language and inclusion? Thank you.

Jennifer Chung: Thanks, Adam. I thought we weren’t supposed to talk about it, but I’m happy, as ever, to be able to say more. I think, especially coming from Asia-Pacific, language is ultimately so important. Almost none of us in Asia-Pacific have English as our first language. For some of us, it’s our second language, third language, or even fourth language. It’s really important, for the community that we’re trying to serve, to actually serve their needs: for them not only to know they can navigate the internet in their own language, but to have the know-how to do so. And I’m really happy to hear from both Becky and Kristy about the improvements that have been made to the Applicant Support Program, because obviously DotKids was a beneficiary, in fact the single beneficiary of the 2012 round Applicant Support Program. We’re happy in that way, but looking at it overall, that is a sign that there are a lot more things that need to be improved coming into the new round. Where can we really target and provide this benefit? Internationalized domain names are one of the priorities that ICANN has said time and time again they are looking for in the new round. Even this morning, we heard from Curtis that this is what ICANN really wants to happen: to be able to serve the underserved or underrepresented regions, to get these people online, using these new domain names in a way that not only benefits the community, but also allows for innovation, allows for market-driven innovation and entrepreneurship. The lowercase i that Ram was talking about (see, English is not my first language either) is so critical. It’s so critical. And I think especially for Asia-Pacific, because linguistic diversity is so broad, it’s more than just that: we’re talking now about digital inclusion and language justice. And I think that’s something Ram touched on that is really near and dear to DotAsia’s heart, and of course DotKids as well.
What we’re trying to look at, our wishes, if we had three wishes for what DotAsia wants to see for the next round: first, of course, more applicants coming in from the Asia-Pacific region. We are a huge region with most of the world’s population, and growing. And more from community applicants, because I heard from both Becky and Kristy that the improvements made to these different evaluation processes, including fee reductions, are very important. But in addition to that, to provide the knowledge and the upskilling that allows them to succeed and sustain their business, that’s the most important part. So more from community: the Community Priority Evaluation should lean towards supporting those who want to be community applicants, and not just look at it through the lens of, oh, we must weed out these people or these organizations that are trying to game the system. Of course, every single system will have people who are looking for loopholes, but from the outset, how we organize and design these programs really should be for the people who want to use this for the benefit of that community. So hopefully that answers a little bit more. And the final little bit, on internationalized domain names, my last wish, perhaps: that they are now on an equal footing with all the different ASCII domain names, the English-language domain names that you see. So that it’s not something weird or new, but becomes something that’s really common and nobody blinks an eye at it.

Adam Peake: Thanks. We switched it on. I put a link into the chat about the workshop tomorrow; the first sentence of the description there is that the internet must be multilingual and inclusive. So we have about 20 minutes left, something like that. Are there any questions from the audience? You’ve been very attentive and patient. So would anybody like to raise a hand, and we’ll pass a mic around? Or the same for any question online. If not, I would like to go back to… Could you pass the microphone backwards, please? I would like to go back to Lucky and his comment about solutions. I don’t think we should miss that.

Paulus Nyirenda: Thank you very much. Paulus Nyirenda from Malawi. I just wanted to raise one of the issues that Lucky mentioned, which is the success of the new gTLDs that were put up in the last round, especially those from Africa: .Africa, .Joburg, .CapeTown. They haven’t been as successful as expected, and I think this applies to quite a few other new gTLDs. I don’t know if in the next round there is an evaluation of this, and how ICANN wants to move forward with the success of new gTLDs.

Ram Mohan: Thank you. I hope you can hear me. So, I want to say we should expect that not all gTLDs will succeed. We should walk away from this idea that just because ICANN introduces a new gTLD program, and hundreds if not thousands of new gTLDs come through, they all must be successful. At least from ICANN’s point of view, the goal has to be to create a level playing field and to make sure that the ingredients for success are present and accessible to all. Those ingredients include the applicant support, the technical knowledge, the linguistic ability, universal acceptance, things like that. For the rest of it, where there are market forces that come to bear, I really think we should allow economics and market forces to do what they do well, rather than try to engineer some kind of social experiment to arrive at some definition of success.

Sajid Rahman: If I can add a few points: the whole multistakeholderism that ICANN is very proud of essentially means that all the different voices get heard and get built into the different activities and policies that we form. Multilingualism is one such issue, and that’s why you see activities like UA Days and internationalized domain names, and many initiatives, including the Applicant Support Program, that have been taken up. But I completely agree with Ram that at the end of the day, it needs to make sense for someone to continue a domain name indefinitely. I mean, the Applicant Support Program can only continue up to a certain extent; it cannot be infinite.

Lucky Masilela: Yeah, if I may come in, I think I’m quite excited by Ram’s approach to complex issues; he explains them with lower and upper cases, and of course it makes sense the more he pulls context into that. Starting with his second point, where he talks to diversity and inclusion, and in particular linguistic diversity: for us, one of the things that we picked up in the early days is that the issue of linguistic diversity is very critical. And this was supported by the fact that less than 15% of African content, and I’m talking about history books, our music, etc., is available on the internet. That makes it difficult for the African child to go onto the internet and find sufficient information about themselves, something created and generated by themselves, across the multiple languages on the continent. And that, again, for us, becomes an inhibitor to digital inclusion. We need to start bridging that gap by ensuring that more internet content is translated into African languages, so that it can be accessible to the larger community. And with that, we will see more people participating, and we’ll see more appreciation for what we are discussing today: the DNS, the domain name, and the inclusiveness. And then the lowercase innovation, it is all well and good, but again, there are certain things that are price sensitive. Whilst we say the success of a domain name is not the number of names that you would have sold, it proves slightly different, you know, when you don’t have the numbers to innovate around. We have been able to innovate as an entity around names like your co.za. We’ve been able to build other solutions because we have scale. If we don’t reach that scale, it takes away from our creativeness, or what you would call the uppercase innovation. It tells us that numbers do matter, especially for certain markets.
You need to be price sensitive on the African continent. You cannot charge any fee that is very far from the market conditions; that will make it even more difficult for people to participate. So we need to be grappling, and for us, we are grappling with all those things. How sensitive can we be with the pricing? Make sure that it’s correct. And once the numbers are there and the scale is there, then we begin to bring in innovation. We bring more solutions into this thing so that this domain name is not just what it is, blank and boring, but we put color, we put flair into these things. This reminds me of a different solution that was innovated on the continent; most of you may or may not have heard of it. Banks, for a very long time, have been excluding a lot of the citizens across the continent. And one mobile operator realized that we have a lot of people who are not banked, and we can use our platform, our mobile platform, to make sure that unbanked people have access to cash, access to money, or that they can use this tool, this mobile phone, to make payments and transact: buy tomatoes, pay for the piki-piki or that taxi in town. And that brought in M-Pesa. And that was the innovation: when people felt marginalized by banking systems, when they couldn’t qualify, couldn’t fulfill some of these KYC requirements, another instrument was developed which was more accommodating. And I’m seeing that for us again, if we want to have this inclusiveness properly addressed, we have to think with an uppercase innovation and come up with solutions that are going to bring in more participants in the third world, or those underdeveloped or underserved markets. We have to think outside of where we are starting. We are on the right track, but let’s think again: is this all that we can do? Is this sufficient? Does this provide a foolproof solution for what we want to achieve, inclusiveness? Otherwise, history will judge that we have not done enough.
We have failed to think far and fast to ensure that we include those that are marginalized. Thank you.

Adam Peake: We’ll go over to Jen again, and then I have Nick Wenban-Smith behind me asking questions. Oh, go to Nick, please.

Nick Wenban-Smith: Oh, thank you, Adam. I think that works, I hope. Very strange with headphones, not headphones. So yeah, my name is Nick Wenban-Smith. I am employed by a company called Nominet. And from our perspective, from the last round of the new gTLDs, I wanted to just share a little bit of a story, because although the United Kingdom is, you know, one of the G7 wealthy nations, we have areas of significant deprivation and social challenges, and I think that’s the case even in the very wealthiest of countries. So we were very interested to get a better digital presence for the Welsh community, which is a population of about five million and includes some of the very poorest parts of the United Kingdom. And I have to tell you, when we approached the Welsh elected officials about the cost of the new gTLD applications, plus of course the technical time and infrastructure and expertise required, it was daunting. And to make it twice as bad, they wanted two, because they wanted one in the Welsh version, because part of this is about linguistic diversity. So they needed two application fees plus two letters of credit for the continuing obligations, which is also extremely expensive. Now, I want to be positive and say that actually it’s been a very good initiative, and over the course of time they’ve now got a lot of very high-profile registrations. For example, the national sport is rugby, and so they use .Wales and .Cymru domain names for their national sport; the local government and the musical and cultural things are all very well represented online; and they now have, I think, between the two, 20,000 domains under registration, which is sustainable.
But I wanted to say that it has, first of all, required an investor which was prepared to take a lot of risk over the long term, and that investor was Nominet, in fact. So we paid the application fees, and we did all of the things that they needed done, and then it has taken on the order of 12 years of continuous investment before you see any sort of return. So I suppose what I’m saying is, while the Applicant Support Program is still being refined within ICANN, you just need to understand that even for sophisticated applicants with relatively deep pockets, it was quite a hard piece of work and it took a lot longer. Certainly the business models that we put together, which our finance team signed off on when we paid for the applications, were well off the mark, is what I would say. And I would just urge everybody: whatever you can do to help people with the applications, it’ll need ten times what you’re currently providing in order to get these things off the ground, because it’s a hugely complex, hugely expensive, and technically time-consuming process, even for people who are experts in the area. Those are my thoughts; I don’t know if it’s really a question. I’m just saying, you need to do more.

Adam Peake: Over to Jen, who’s been doing a lot already. Do more, Jen, thank you.

Jennifer Chung: Okay, actually, I was really happy to go after Nick, because I was going to bring back the example of .Kids, which was the sole recipient of the Applicant Support Program back in 2012. We didn’t launch until about two years ago; it’s taken a very, very long time, and .Asia is actually the organisation behind all of this. We are supposedly on a cost-recovery basis, but really, we have underwritten everything: the know-how on how to create the registry policies, leveraging our registry back-end providers, of course, to do all of that. And coming into the new round, talking again about success: if we’re looking at pure numbers, that’s just one measurement of success. I think success means different things to different people. I like how Lucky has mentioned time and time again, and it’s really important to stress, that the African continent really needs to look at what success will look like for the African region in this new gTLD round. And I could bring it back to Asia-Pacific: what might success look like for Asia-Pacific? Is it more applications with internationalized domain names? Is it more brands coming in? Is it more SMEs? Is it more innovative applications? I think the answer is all of the above. And I think right now, as a community, we’re trying to refine the ways to get to that success. Not only ICANN the organization, but ICANN as a community is trying to look at lessons learned from the previous round, to apply them and create solutions that allow for innovation, but not to the point of permanent dependence: as a new applicant, you want to be able to eventually run your domain independently, not always have these guardrails, or training wheels as I like to call them, around forever, because that is not really a true sense of success.
So being able to give that boost and that help to the underserved regions, to the markets that really need this, but don’t know how, I think that is the balance we really have to strike here.

Adam Peake: Ram, please.

Ram Mohan: Briefly, success for ICANN in this next round perhaps should include some estimation of the number of TLDs that will not succeed, because that is the market reality. And I think we do all of us a disservice by going into this with an idea of a 100% success rate or else. That’s unrealistic. That’s not how the marketplace works. We ought to have a recognition of that. We also ought to recognize that in some cases the need may be apparent but the demand may not be evident, right? Just because there is a need for something doesn’t mean that the people who profess to have the need will be willing to go and stand up and, you know, open up their wallets and buy that name, right? So I think we need some level of realism, some level of projecting, and to be quite clear: while the new TLD program will and should work on diversity and applicant support, especially linguistic diversity, which is close to my heart as well, even if you do all of those things, you should still expect some level of failure.

Adam Peake: Final comment and then we’ll probably have to wrap up, I think.

Sajid Rahman: Thank you. I mean, you know, wearing my investor hat, we always accept some failures. And I’ve seen companies launching products which do very well until they start charging for them. So, you know, there is always this reality, and there are realities that we need to accept, but we continue to support. I mean, like I said, you know, ICANN listens to the voices. The whole idea of multistakeholderism is to listen to the voices around and, you know, do whatever we can to support.

Adam Peake: Thank you very much, everybody, for your time this afternoon. We mentioned there is a workshop tomorrow afternoon around the issues of multilingualism, IDNs, and universal acceptance, so please look at the schedule for workshop number 150. I want to thank, particularly online, Lucky and Kristy. Lucky, for sharing the challenges you’re facing across the region and also solutions and issues that are very relevant to the whole of WSIS, not just discussions within an ICANN community, so very relevant. To Jen, for just being very kind and helpful throughout and providing a lot of useful information, particularly at the beginning while I was running around; thank you, incredibly kind. And Sajid for the default nine scenarios and ideas. Ram, I like lowercase innovation and the importance of language. So I think it’s been very helpful, and I’m very grateful to all of you for being here, and to our speakers. So thank you.

L

Lucky Masilela

Speech speed

129 words per minute

Speech length

2714 words

Speech time

1258 seconds

Limited success of African gTLDs due to market conditions

Explanation

Lucky Masilela points out that African gTLDs like .Africa, .CapeTown, .Durban, and .Joburg have not been as successful as expected. He attributes this to market conditions and price sensitivity in the African continent.

Evidence

He mentions that out of 13 names applied for from Africa in the last round, only 5 are still active. He also notes that the continent of 1.4 billion people has only about 3.5 million domain name registrations.

Major Discussion Point

Challenges and opportunities for new gTLDs in underserved regions

Importance of sustainable business models over time

Explanation

Lucky Masilela emphasizes the need for sustainable business models for new gTLDs, especially in price-sensitive markets like Africa. He argues that while the number of registrations isn’t the only measure of success, it does matter for certain markets to achieve scale and enable innovation.

Evidence

He gives an example of how they’ve been able to innovate around names like .co.za because they have scale, allowing them to build other solutions.

Major Discussion Point

Defining and measuring success for new gTLDs

Differed with

Ram Mohan

Differed on

Measuring success of new gTLDs

Potential for new gTLDs to unite communities and express cultural diversity

Explanation

Lucky Masilela discusses the potential for new gTLDs to unite communities and express cultural diversity. He argues that domain names can be used as instruments to unite continents and express cultural interests and diversity.

Evidence

He mentions the example of .Africa being used not only as a digital identity but as an instrument to unite the continent and express its cultural diversity.

Major Discussion Point

Fostering innovation and inclusion through new gTLDs

R

Ram Mohan

Speech speed

121 words per minute

Speech length

1072 words

Speech time

527 seconds

Need for linguistic diversity and local content to drive adoption

Explanation

Ram Mohan emphasizes the importance of linguistic diversity in the domain name space. He argues that without linguistic diversity combined with other forms of diversity, efforts to increase adoption will fail.

Evidence

He points out that the world is multilingual, but current domain name systems do not reflect this reality.

Major Discussion Point

Challenges and opportunities for new gTLDs in underserved regions

Agreed with

Jennifer Chung

Sajid Rahman

Agreed on

Importance of linguistic diversity in new gTLDs

Success should not be measured solely by number of registrations

Explanation

Ram Mohan argues against using the number of domain registrations as the sole metric for success. He suggests that innovation and serving community needs are also important measures of success.

Evidence

He mentions that his company has success in all of their 300+ domain names, not just in terms of profit but in serving various needs.

Major Discussion Point

Defining and measuring success for new gTLDs

Differed with

Lucky Masilela

Differed on

Measuring success of new gTLDs

“Lowercase innovation” through memorable domain names

Explanation

Ram Mohan introduces the concept of “lowercase innovation” in the domain name space. This refers to the ability to create more memorable and descriptive domain names with new gTLDs, as opposed to long strings in traditional TLDs like .com.

Evidence

He gives an example of being able to get ‘rom.studio’ instead of a long string like ‘romsdigitalstudio.com’.

Major Discussion Point

Fostering innovation and inclusion through new gTLDs

Realistic expectations about success rates and market demand

Explanation

Ram Mohan argues for setting realistic expectations about success rates and market demand for new gTLDs. He suggests that ICANN should include some estimation of the number of TLDs that will not succeed, as this reflects market reality.

Evidence

He points out that just because there is a perceived need for something doesn’t mean there will be market demand for it.

Major Discussion Point

Improving the next round of new gTLDs

Agreed with

Sajid Rahman

Agreed on

Need for realistic expectations about gTLD success

N

Nick Wenban-Smith

Speech speed

160 words per minute

Speech length

551 words

Speech time

205 seconds

High costs and long-term investment required even for developed markets

Explanation

Nick Wenban-Smith highlights the significant costs and long-term investment required for new gTLDs, even in developed markets. He emphasizes that the process is complex, expensive, and time-consuming, even for experts in the field.

Evidence

He shares the experience of Nominet in launching .Wales and .Cymru gTLDs, which required 12 years of continuous investment before seeing any return.

Major Discussion Point

Challenges and opportunities for new gTLDs in underserved regions

K

Kristy Buckley

Speech speed

148 words per minute

Speech length

1393 words

Speech time

562 seconds

Importance of applicant support program for underserved regions

Explanation

Kristy Buckley emphasizes the significance of ICANN’s Applicant Support Program in fostering broader and more diverse participation in technical internet infrastructure. The program aims to make the process more accessible globally by offering fee reductions, capacity development, and access to professional volunteer resources.

Evidence

She mentions that the program is open for 12 months to give applicants a long runway to learn about and apply for the program.

Major Discussion Point

Challenges and opportunities for new gTLDs in underserved regions

Agreed with

Jennifer Chung

Rebecca McGilley

Agreed on

Importance of applicant support for underserved regions

Differed with

Ram Mohan

Differed on

Expectations for gTLD success rates

Enhanced applicant support program targeting underserved regions

Explanation

Kristy Buckley discusses the improvements made to the Applicant Support Program for the next round of new gTLDs. She emphasizes that the program is designed to target underserved regions and foster global, diverse participation.

Evidence

She mentions that the program now includes fee reductions, access to volunteer professional services, and a capacity development program.

Major Discussion Point

Improving the next round of new gTLDs

Agreed with

Jennifer Chung

Rebecca McGilley

Agreed on

Importance of applicant support for underserved regions

S

Sajid Rahman

Speech speed

168 words per minute

Speech length

1137 words

Speech time

405 seconds

Need to allow for some gTLDs to fail as part of market forces

Explanation

Sajid Rahman argues that it’s important to accept that some gTLDs will fail as part of normal market forces. He suggests that this is a reality in any market and should be expected in the domain name space as well.

Evidence

He draws a parallel with companies launching products that do well until they start charging for them, indicating that market demand doesn’t always match perceived need.

Major Discussion Point

Defining and measuring success for new gTLDs

Agreed with

Ram Mohan

Agreed on

Need for realistic expectations about gTLD success

Differed with

Ram Mohan

Kristy Buckley

Differed on

Expectations for gTLD success rates

Focus on linguistic diversity and internationalized domain names

Explanation

Sajid Rahman emphasizes the importance of focusing on linguistic diversity and internationalized domain names in the next round of new gTLDs. He argues that this is crucial for improving digital inclusion and access to the internet for diverse communities.

Evidence

He mentions initiatives like UA Days and international domain names as examples of efforts to address this issue.

Major Discussion Point

Improving the next round of new gTLDs

Agreed with

Ram Mohan

Jennifer Chung

Agreed on

Importance of linguistic diversity in new gTLDs

J

Jennifer Chung

Speech speed

153 words per minute

Speech length

1972 words

Speech time

772 seconds

Success can mean different things for different regions/communities

Explanation

Jennifer Chung argues that success for new gTLDs should be defined differently for various regions and communities. She emphasizes that pure numbers are just one measurement of success, and other factors should be considered.

Evidence

She suggests that success for the African region or Asia Pacific might include more applications with internationalized domain names, more brands coming in, more SMEs, or more innovative applications.

Major Discussion Point

Defining and measuring success for new gTLDs

Opportunity to serve underrepresented languages and scripts

Explanation

Jennifer Chung highlights the importance of internationalized domain names in serving underrepresented languages and scripts. She argues that this is crucial for digital inclusion and language justice, especially in regions like Asia-Pacific.

Evidence

She mentions that almost none of the people in Asia-Pacific have English as their first language, emphasizing the need for domain names in local languages and scripts.

Major Discussion Point

Fostering innovation and inclusion through new gTLDs

Agreed with

Ram Mohan

Sajid Rahman

Agreed on

Importance of linguistic diversity in new gTLDs

Leveraging lessons learned from previous rounds

Explanation

Jennifer Chung emphasizes the importance of leveraging lessons learned from previous rounds of new gTLDs. She argues that these lessons should be applied to create solutions that allow for innovation while providing necessary support to new applicants.

Evidence

She mentions the experience of .Kids as the sole recipient of the applicant support program in the 2012 round, and how this experience can inform improvements for the next round.

Major Discussion Point

Improving the next round of new gTLDs

R

Rebecca McGilley

Speech speed

112 words per minute

Speech length

616 words

Speech time

327 seconds

Need to balance support with eventual independence for new gTLDs

Explanation

Rebecca McGilley emphasizes the need to balance support for new gTLDs with the goal of eventual independence. She argues that while initial support is crucial, the aim should be for gTLDs to eventually operate independently and create their own market.

Evidence

She mentions that the Applicant Support Program provides discounts and deferred fees, but notes that this support is not intended to be permanent.

Major Discussion Point

Fostering innovation and inclusion through new gTLDs

Agreed with

Kristy Buckley

Jennifer Chung

Agreed on

Importance of applicant support for underserved regions

Agreements

Agreement Points

Importance of linguistic diversity in new gTLDs

Ram Mohan

Jennifer Chung

Sajid Rahman

Need for linguistic diversity and local content to drive adoption

Opportunity to serve underrepresented languages and scripts

Focus on linguistic diversity and internationalized domain names

Speakers agreed on the critical importance of linguistic diversity in new gTLDs to foster digital inclusion and better serve diverse communities.

Need for realistic expectations about gTLD success

Ram Mohan

Sajid Rahman

Need to allow for some gTLDs to fail as part of market forces

Realistic expectations about success rates and market demand

Speakers emphasized the importance of accepting that some gTLDs will fail due to market forces and that success should not be expected for all new gTLDs.

Importance of applicant support for underserved regions

Kristy Buckley

Jennifer Chung

Rebecca McGilley

Importance of applicant support program for underserved regions

Enhanced applicant support program targeting underserved regions

Need to balance support with eventual independence for new gTLDs

Speakers agreed on the significance of the Applicant Support Program in fostering participation from underserved regions while emphasizing the need for eventual independence.

Similar Viewpoints

Both speakers highlighted the challenges of launching and sustaining new gTLDs, emphasizing the high costs and long-term investment required, even in developed markets.

Lucky Masilela

Nick Wenban-Smith

Limited success of African gTLDs due to market conditions

High costs and long-term investment required even for developed markets

Both speakers argued for a more nuanced understanding of success for new gTLDs, beyond just the number of registrations, considering factors like community needs and regional differences.

Ram Mohan

Jennifer Chung

Success should not be measured solely by number of registrations

Success can mean different things for different regions/communities

Unexpected Consensus

Acceptance of gTLD failures as part of the process

Ram Mohan

Sajid Rahman

Need to allow for some gTLDs to fail as part of market forces

Realistic expectations about success rates and market demand

Despite coming from different perspectives, both speakers unexpectedly agreed on the need to accept that some gTLDs will fail, viewing it as a natural part of market dynamics rather than a policy failure.

Overall Assessment

Summary

The main areas of agreement included the importance of linguistic diversity in new gTLDs, the need for realistic expectations about gTLD success, and the significance of applicant support for underserved regions. There was also consensus on the challenges of launching and sustaining new gTLDs, and the need for a nuanced understanding of success beyond registration numbers.

Consensus level

Moderate consensus was observed among speakers on key issues. While there were differences in perspectives, particularly regarding the definition of success and the approach to market challenges, there was general agreement on the importance of inclusivity, linguistic diversity, and the need for support in underserved regions. This level of consensus suggests a shared understanding of the complexities involved in expanding the gTLD space and the need for balanced approaches that consider both market realities and inclusivity goals.

Differences

Different Viewpoints

Measuring success of new gTLDs

Ram Mohan

Lucky Masilela

Success should not be measured solely by number of registrations

Importance of sustainable business models over time

Ram Mohan argues against using the number of domain registrations as the sole metric for success, emphasizing innovation and community needs. Lucky Masilela, while acknowledging other factors, stresses the importance of registration numbers for achieving scale and enabling innovation, especially in price-sensitive markets.

Expectations for gTLD success rates

Ram Mohan

Christy Buckley

Need to allow for some gTLDs to fail as part of market forces

Importance of applicant support program for underserved regions

Ram Mohan argues for realistic expectations about gTLD success rates, suggesting that some failure should be expected as part of normal market forces. Christy Buckley, while not directly contradicting this, emphasizes the importance of support programs to foster broader participation and success in underserved regions.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around how to measure the success of new gTLDs, the balance between market forces and support for underserved regions, and the expectations for gTLD success rates.

Difference level

The level of disagreement among the speakers is moderate. While there are differing perspectives on certain issues, there is also a significant amount of common ground, particularly in recognizing the importance of linguistic diversity, supporting underserved regions, and acknowledging the complexities of the gTLD market. These differences in perspective contribute to a richer discussion and highlight the multifaceted nature of the challenges and opportunities in expanding the gTLD space. The implications of these disagreements suggest that a balanced approach, taking into account various stakeholder perspectives, will be crucial in shaping the future of the gTLD program.

Partial Agreements

All speakers agree on the importance of considering regional and community-specific factors in defining success for new gTLDs. However, they differ in their emphasis: Lucky Masilela focuses on sustainable business models, Jennifer Chung highlights the need for diverse metrics beyond registration numbers, and Ram Mohan stresses the importance of realistic market expectations.

Lucky Masilela

Jennifer Chung

Ram Mohan

Success can mean different things for different regions/communities

Need for linguistic diversity and local content to drive adoption

Realistic expectations about success rates and market demand

Takeaways

Key Takeaways

New gTLDs face significant challenges in underserved regions like Africa due to market conditions and high costs

Linguistic diversity and local content are crucial for driving adoption of new gTLDs

Success of new gTLDs should not be measured solely by number of registrations

The next round of new gTLDs should focus on fostering innovation and inclusion, particularly for underserved regions and languages

Realistic expectations are needed about success rates and market demand for new gTLDs

Resolutions and Action Items

Enhance the applicant support program to better target and assist applicants from underserved regions

Focus on promoting internationalized domain names and linguistic diversity in the next round

Provide more comprehensive, long-term support to help new gTLDs become sustainable

Unresolved Issues

How to define and measure success for new gTLDs, especially those serving niche communities

How to balance providing support for new gTLDs with encouraging their eventual independence

How to address the high costs and long-term investment required for new gTLDs, even in developed markets

Suggested Compromises

Accept that some new gTLDs will fail as part of normal market forces, while still providing support to increase chances of success

Consider alternative business models for new gTLDs, such as the city/municipality sponsorship model suggested by Lucky Masilela

Thought Provoking Comments

Often the success of programs is only seen years out. And in the meanwhile, you have many prognosticators who pre-decide and who say that a program has failed or has succeeded based on conventional metrics. Metrics, for example, in the domain name industry, such as how many domains have been registered.

speaker

Ram Mohan

reason

This comment challenges the conventional way of measuring success in the domain name industry, introducing the concept of ‘lowercase i’ innovation.

impact

It shifted the discussion from focusing solely on registration numbers to considering other forms of innovation and success in the domain space.

There has to be linguistic diversity as a core outreach goal, as a core model for a definition of success. It’s not the only determinant of success, but it ought to be a significant factor and a significant metric that you measure, because the world that we know is not a world of English and Spanish and Chinese and Arabic.

speaker

Ram Mohan

reason

This comment highlights the critical importance of linguistic diversity in achieving true digital inclusion.

impact

It broadened the conversation to include the need for multilingual approaches in domain names and internet governance.

Banks, for a very long time, have been excluding a lot of the citizens across the continent. And one mobile operator realized that we have a lot of people who are not banked, and we can use our platform, our mobile platform, to make sure that people who are not banked have access to cash, access to money, or they can use this tool, this mobile phone, to make payments and transact, buy tomatoes, pay for the piki-piki or that taxi in town. And that brought in M-Pesa.

speaker

Lucky Masilela

reason

This comment provides a concrete example of innovation that addressed a specific need in underserved markets, relating it back to the domain name discussion.

impact

It encouraged participants to think more broadly about innovation and inclusion, considering solutions that may be outside traditional domain name approaches.

Success for ICANN in this next round perhaps should include some estimation of the number of TLDs that will not succeed, because that is the market reality. And we really, I think, would be doing all of us a disservice by going into this with the idea that it's a 100% success rate or else.

speaker

Ram Mohan

reason

This comment introduces a realistic perspective on success rates, challenging the notion that all new TLDs must succeed.

impact

It prompted a more nuanced discussion about expectations and metrics for success in the next round of TLDs.

Overall Assessment

These key comments shaped the discussion by broadening the perspective on what constitutes success in the domain name industry. They moved the conversation beyond simple metrics like registration numbers to consider linguistic diversity, innovative solutions for underserved markets, and realistic expectations for success rates. This led to a more nuanced and comprehensive dialogue about digital inclusion and the role of new TLDs in fostering innovation and addressing global needs.

Follow-up Questions

How can we address the low number of domain name registrations in Africa (only 3.5 million for a continent of 1.4 billion people)?

speaker

Lucky Masilela

explanation

This highlights a significant gap in digital inclusion and domain name adoption in Africa, which needs to be investigated to improve participation.

How can we create enabling mechanisms for the next round of gTLDs to increase participation from the Global South, particularly Africa?

speaker

Lucky Masilela

explanation

This is crucial for ensuring more diverse and inclusive participation in the next round of gTLD applications.

How can we address the issue of CCTLDs still being administered outside the African continent?

speaker

Lucky Masilela

explanation

This impacts digital sovereignty and local control over internet infrastructure in Africa.

How can we make the Registry Service Provider (RSP) evaluation process more accessible to providers from the Global South?

speaker

Lucky Masilela

explanation

The current high costs ($90,000) for RSP evaluation may marginalize service providers from developing countries.

How can we improve the success rate of new gTLDs, particularly those from Africa?

speaker

Paulos Nyirenda

explanation

Understanding the factors behind the limited success of some new gTLDs is important for improving future rounds.

How can we increase the amount of African content (currently less than 15%) available on the internet?

speaker

Lucky Masilela

explanation

This is critical for improving digital inclusion and making the internet more relevant for African users.

How can we develop more price-sensitive domain name offerings for the African market?

speaker

Lucky Masilela

explanation

Affordability is a key factor in increasing domain name adoption in developing markets.

How can we better support applicants throughout the entire lifecycle of launching and operating a new gTLD?

speaker

Nick Wenban-Smith

explanation

Even for well-resourced applicants, the process of launching and sustaining a new gTLD is complex and requires long-term support.

How can we realistically assess and prepare for the potential failure rate of new gTLDs in the next round?

speaker

Ram Mohan

explanation

Understanding that not all new gTLDs will succeed is important for setting realistic expectations and planning appropriate support mechanisms.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.