High-Level Session 4: From Summit of the Future to WSIS+20

Session at a Glance

Summary

This discussion focused on assessing the achievements and future priorities of the World Summit on the Information Society (WSIS) as it approaches its 20-year review. Participants highlighted significant progress in expanding internet connectivity, with users growing from 1 billion to 5.5 billion since 2005. The WSIS framework was praised for its adaptability and multi-stakeholder approach, which has helped address evolving digital challenges.

Key achievements included the creation of the Internet Governance Forum (IGF) and the WSIS Forum, which have facilitated global dialogue on digital issues. The alignment of WSIS action lines with the Sustainable Development Goals was also noted as an important development. However, speakers emphasized that significant challenges remain, including persistent digital divides, the need for meaningful connectivity, and emerging issues like AI governance and data protection.

Looking forward, priorities identified for the next phase of WSIS included bridging digital divides, promoting digital inclusion, addressing environmental sustainability in the tech sector, enhancing digital skills and capacity building, and developing ethical frameworks for emerging technologies. The importance of aligning the WSIS process with the recently adopted Global Digital Compact was stressed.

Participants agreed on the need to strengthen multi-stakeholder cooperation and ensure greater inclusion of voices from the Global South in shaping digital governance. The discussion concluded with a call for the WSIS framework to continue evolving to meet new challenges while building on its foundational principles of inclusivity and people-centered development.

Key Points

Major discussion points:

– Achievements and progress made since the original WSIS summit 20 years ago, including increased internet connectivity globally

– Remaining challenges and gaps, such as the digital divide, lack of meaningful connectivity for many, and new issues like AI governance

– The role of multi-stakeholder cooperation and the IGF in addressing digital governance challenges

– Priorities for the future, including environmental sustainability, digital inclusion, capacity building, and aligning WSIS with the Global Digital Compact

– The need to evolve governance frameworks to keep pace with rapid technological change

Overall purpose:

The purpose of this discussion was to reflect on the achievements and lessons learned from 20 years of the WSIS process, identify current challenges and priorities, and consider how to strengthen digital governance frameworks and multi-stakeholder cooperation for the future.

Tone:

The overall tone was constructive and forward-looking. Speakers acknowledged progress made while emphasizing the significant work still needed. There was a sense of urgency about addressing emerging challenges, but also optimism about the potential for continued cooperation. The tone remained consistent throughout, with participants building on each other’s points in a collaborative manner.

Speakers

– Thomas Schneider: Moderator

– Mohammed Saud Al-Tamimi: Governor of Communications, Space and Technology Commission of the Kingdom of Saudi Arabia

– Junhua Li: UN Under Secretary General for Economic and Social Affairs

– Nthati Moorosi: Minister of Information, Communication, Science, Technology and Innovation of Lesotho

– Takuo Imagawa: Vice Minister of Internal Affairs and Communications from Japan

– Sally Wentworth: Chief Executive Officer of the Internet Society (ISOC)

– Sherzod Shermatov: Minister of the Development of Information, Technologies and Communications of Uzbekistan

– Stefan Schnorr: State Secretary at the Federal Ministry for Digital and Transport from Germany

– Jennifer Bachus: Principal Deputy Assistant Secretary for the Bureau of Cyberspace and Digital Policy of the United States

– Torgeir Micaelsen: State Secretary of Digitalization and Public Governance of Norway

Additional speakers:

– Doreen Bogdan-Martin: Secretary-General of the International Telecommunication Union

– Tawfik Jelassi: Representative from UNESCO

– Gitanjali Sah: Representative from ITU

– Robert Opp: Representative from UNDP

– Mike Walton: Representative from UNHCR

– Angel González Sanz: Representative from UNCTAD

– Paul Gaskell: Deputy Director for Digital Trade, Internet Governance, and Digital Standards of the United Kingdom

Full session report

Expanded Summary of the World Summit on the Information Society (WSIS) 20-Year Review Discussion

Introduction

This discussion focused on assessing the achievements and future priorities of the World Summit on the Information Society (WSIS) as it approaches its 20-year review. Participants reflected on the progress made since the original WSIS summit, identified current challenges, and considered how to strengthen digital governance frameworks and multi-stakeholder cooperation for the future.

Key Achievements of WSIS

Participants highlighted significant progress in expanding internet connectivity globally, with users growing from 1 billion to 5.5 billion since 2005, as noted by Gitanjali Sah from the International Telecommunication Union (ITU). The WSIS framework was praised for its adaptability and multi-stakeholder approach in addressing evolving digital challenges.

Stefan Schnorr, State Secretary at the Federal Ministry for Digital and Transport from Germany, emphasized the establishment of the multi-stakeholder approach to internet governance as a key achievement. Angel González Sanz from UNCTAD highlighted the mapping of WSIS action lines to the Sustainable Development Goals, which has helped align digital development efforts with broader global development objectives.

Persistent Challenges and Gaps

Despite progress, significant challenges remain. Doreen Bogdan-Martin, ITU Secretary-General, noted that one-third of the global population still lacks internet access. Angel González Sanz highlighted persistent digital divides based on gender, geography, and education.

Tawfik Jelassi from UNESCO raised concerns about limited multilingual and culturally diverse online content, emphasizing the need for more diverse and culturally relevant content online, particularly for indigenous communities.

Robert Opp from UNDP emphasized the environmental impacts of digital technologies, while Mike Walton from UNHCR highlighted ethical concerns around AI and emerging technologies.

Global Digital Compact and WSIS Alignment

Several speakers, including Stefan Schnorr, stressed the importance of aligning the WSIS process with the recently adopted Global Digital Compact (GDC). The GDC was seen as a crucial framework for addressing emerging digital challenges and reinforcing the principles of WSIS. Speakers discussed how to integrate the GDC’s objectives into the existing WSIS framework and action lines.

Priorities for the Future of WSIS

Looking forward, speakers identified several priorities:

1. Bridging remaining digital divides, especially in rural areas (Gitanjali Sah)

2. Promoting media and information literacy and combating misinformation (Tawfik Jelassi)

3. Developing digital skills and capacity building (Junhua Li, UN Under Secretary General for Economic and Social Affairs)

4. Addressing environmental sustainability of digital technologies (Robert Opp)

5. Ensuring inclusive global governance of AI and data (Angel González Sanz)

6. Aligning WSIS with the SDGs and the 2030 Agenda (mentioned by several speakers)

Role of IGF and Multi-stakeholder Approach

The Internet Governance Forum (IGF) was a key point of discussion. Sally Wentworth, CEO of the Internet Society, emphasized the IGF’s role as a crucial platform for inclusive internet governance discussions. Jennifer Bachus from the US State Department stressed the need for meaningful participation from developing countries.

Takuo Imagawa, Vice Minister of Internal Affairs and Communications from Japan, noted the IGF’s ability to address emerging issues like AI governance. Discussions also touched on potentially extending the IGF’s mandate and ensuring sustainable funding for its operations.

Thomas Schneider mentioned the São Paulo guidelines for inclusive multi-stakeholder processes as a valuable framework for future cooperation.

WSIS+20 Review Process

Speakers discussed the ongoing preparations for the WSIS+20 review by various UN agencies. This process aims to evaluate the progress made since the original summit and set the agenda for the next phase of digital development.

Thought-Provoking Comments and Unresolved Issues

Nthati Moorosi, Minister of Information, Communication, Science, Technology and Innovation of Lesotho, provided a stark reminder of infrastructure disparities: “We still have students who have to sit under the tree to learn. So when we talk about connecting schools, for us, it’s quite a big, a long journey.”

Robert Opp’s comment broadened the discussion on environmental sustainability: “Environmental sustainability has to go into the next version of what we do. And it’s the two areas. It’s what digitalization can contribute to environmental sustainability, climate change, but it’s also the contribution to climate challenges or environmental challenges.”

Unresolved issues included:

1. Specific mechanisms for aligning the Global Digital Compact with the WSIS process

2. Effective strategies for addressing the environmental impacts of digital technologies

3. Ensuring meaningful participation from developing countries in AI and data governance discussions

4. Developing strategies for combating misinformation and promoting information integrity online

5. The potential permanence of the IGF mandate

Conclusion

The discussion concluded with a call for the WSIS framework to continue evolving while building on its foundational principles of inclusivity and people-centered development. Participants agreed on the need to strengthen multi-stakeholder cooperation and ensure greater inclusion of voices from the Global South in shaping digital governance.

As WSIS moves towards its 20-year review, it is clear that while significant progress has been made, substantial challenges remain. The future of WSIS will require balancing technological advancement with ethical considerations, environmental sustainability, and the imperative of leaving no one behind in the digital age.

Session Transcript

Thomas Schneider : in Geneva in Tunis 20 years ago so I’m having the honor to moderate this session, but of course this is not about me, but it’s about content and sharing some views. So this session, of course, is focusing on what is coming in the next few months until the end of next year with the 20-year review process of the World Summit on the Information Society and, of course, integrating in all of these reflections that will be made during the next months also newer elements and developments like the Global Digital Compact and how to implement this. So this session will focus on assessing progress and envisioning the future of digital governance and hopefully it will serve as a platform to reflect on past achievements, identify gaps and strategize the way forward for global digital cooperation. So I have a very distinguished number of speakers here that I would quickly like to present to you. So first, His Excellency Mr. Mohammed Saud Al-Tamimi, Governor of Communications, Space and Technology Commission of the Kingdom of Saudi Arabia, our very nice host, Mr. Li Junhua, UN Under Secretary General for Economic and Social Affairs. Her Excellency Ms. Nthati Moorosi, Minister of Information, Communication, Science, Technology and Innovation of Lesotho. His Excellency Mr. Takuo Imagawa, Vice Minister of Internal Affairs and Communications from Japan. Then Ms. Sally Wentworth, Chief Executive Officer of the Internet Society, also known as ISOC. And His Excellency Mr. Shermatov Sherzod, Minister of the Development of Information, Technologies and Communications of Uzbekistan. Then Mr. Stefan Schnorr, State Secretary at the Federal Ministry for Digital and Transport from Germany. Ms. Jennifer Bachus, Principal Deputy Assistant Secretary for the Bureau of Cyberspace and Digital Policy of the United States. And last but not least, and looking forward to coming to your country next summer, His Excellency Mr. Torgeir Micaelsen, State Secretary of Digitalization and Public Governance of Norway. So looking forward to hearing from all of you and sharing some thoughts about WSIS plus 20 and how to get there and what to want from the process in the coming few minutes. Of course, let us start with some introductory remarks by Mr. Junhua Li, UN Under-Secretary-General, from UNDESA. Thank you very much. The floor is yours.

Junhua Li: Well, thank you. Thank you very much, Thomas, or Ambassador for giving me the floor. It’s so amazing to have such a distinguished panelist to discuss the WSIS plus 20. But first of all, let me say just a few words. I would like to start by saying that on behalf of the United Nations and also that I guess on behalf of the whole panel to express our profound gratitude to the host country for this exceptional hospitality and excellent efforts in organizing this important event. As you said, Mr. Moderator, we are at a critical moment in global digital governance. The recent adoption of the Global Digital Compact in the United Nations and the upcoming WSIS plus 20 review next year present a very unique opportunity for the global community to shape our digital future in the coming decade. So UNDESA will serve as a secretariat supporting the President of the General Assembly to prepare the WSIS plus 20 process. We are fully committed to coordinating all the efforts with our stake partners. and across the sectors, particularly UN Group Data on the Information Society to be chaired by ITU and UNESCO. This collaboration brings together the key partners like, just now as I mentioned, ITU-UNESCO plus UNDP and UNCTAD for a unified approach. So we are working very closely to define the whole process and support the process. But let me say the IGF in this process plays a very crucial role in our forthcoming review. It can certainly help to amplify and synthesize contributions from the diverse stakeholders to inform and provide the guidance to the WSIS review and also the negotiation process in the final package. So in all, our commitment is here and it also will lead us for next year. Thank you.

Thomas Schneider : I need to talk into the mic, that helps normally. Thank you, dear Under-Secretary General. Now we have a few questions that I’d like to hear you reflect on and as we have quite a big panel which is followed by another panel and time is limited as we all know. The time limit for each intervention is not 30 minutes but 3 minutes. So I would like you to adhere or stick to the time because I’m sure we all have a lot to hear from all of you. So let me start with question 1, which is the question about the most significant achievements since the WSIS Summit 20 years ago and what lessons can be learned when looking back almost 20 years, from your point of view. So what are the biggest achievements and lessons learned? Let me first turn to Her Excellency Ms. Nthati Moorosi from Lesotho. Thank you.

Nthati Moorosi: Thank you very much, Programme Director. I am really honoured to be sharing a stage with these excellencies today. I want to start by acknowledging the work that IGF is doing as a platform that is rooted in the visionary principles of the World Summit. As a country located in Africa, I feel like we in Africa, we in Lesotho are at different stages of achievements since 20 years ago when the World Forum was held. We have some milestones that are recordable. The policy framework, we’ve done wonderful work on that. We have the laws that are applicable. However, we still have big gaps, especially on the cyber security legal framework. We find a lot of challenges, especially from the media fraternity. From time to time they feel like we are taking their freedom of expression away from them. However, we are happy to report that we have created recently the sectoral cyber incident response team, which is an effort to step forward in enhancing our national cyber security resilience and ensuring a secure digital environment. So every time I, the whole time since yesterday, I’ve been listening to different speakers talking about leaving no one behind. I stand here today thinking about my own country, that Lesotho as a country, compared with other countries, it’s already left behind. The people who live in Lesotho, there are still such big digital gaps. Infrastructure-wise, we have been able to achieve close to 95 per cent plus, but somebody talked about the fact that infrastructure doesn’t mean that everyone is connected, because of the challenges that have been highlighted about infrastructure, about skills, about – in Lesotho, the biggest challenge is electricity. We don’t even have electricity to charge whatever smart devices that people have. We have connected only 2 per cent of the schools, because at the moment, where we are as a country, we are still struggling with getting students into classrooms. We still have students who share classrooms. One classroom will host more than three grades in one room. We still have students who have to sit under the tree to learn. So when we talk about connecting schools, for us, it’s quite a big, a long journey. However, we are not discouraged. We are working hard to ensure that we take those baby steps and we ensure that we get our people connected. So in terms of what has been achieved, we have achieved some, but we still have a long way to go. I’m wary of the three minutes. Thank you.

Thomas Schneider : Thank you very much, Minister. Next, Germany, the State Secretary. What have we achieved? What are the lessons that Germany has learned?

Stefan Schnorr: Thomas, thank you very much, Excellencies, ladies and gentlemen. Before I start, I want to thank the Kingdom of Saudi Arabia for their hard work in organising this year’s IGF. We deeply appreciate their efforts in making these events possible in such important times. To come to your question, Thomas, the World Summit on the Information Society was, in my opinion, truly a milestone for Internet governance, and not only for Internet governance, but for digital cooperation worldwide. And it laid the foundation for the first comprehensive global framework for digital cooperation. The summit is, in my opinion, still highly relevant, even if the wording of information society sounds a little bit outdated, and the acronym WSIS is not familiar to everyone. But as I mentioned, the summit is still important, and it remains a cornerstone in fostering an inclusive, human-centered digital transformation. For two decades now, the WSIS has provided essential guardrails for global digital cooperation. For example, the WSIS action lines continue to guide and inspire digital efforts worldwide. I think it’s a clear testament to its long-lasting impact. And very important, WSIS was also groundbreaking, because it was the first time that non-governmental stakeholders, private sector, technical community, academia, and civil society worked side-by-side with the representatives of 175 nations to shape the future for digital cooperation. And this collaboration gave birth to the multi-stakeholder approach, a concept that we, that Germany, has been proud to support since its inception. But perhaps, ladies and gentlemen, the most important and successful outcome of the WSIS is the Internet Governance Forum, which has brought us all together here in Riyadh. What started as a forum to discuss only technical aspects of internet governance has evolved into a platform for addressing all aspects of the digital world. In the past we have discussed 5G, now we discuss artificial intelligence, so all the relevant topics in the digital world are discussed here at the IGF. And therefore, I think no other event succeeds like the IGF in bringing together such diverse voices from all stakeholder groups and fosters meaningful networks in the digital world. I think this is the right way to shape the future and the United Nations can be very proud to host such an influential platform for global digital cooperation. Thank you very much.

Thomas Schneider : Now let’s turn again to our Under-Secretary-General from UNDESA, Mr. Junhua Li, what are UNDESA’s lessons learned and how do you see the biggest achievements, or where?

Junhua Li: Thank you. Thank you, Thomas. I guess it is really challenging to reflect 20 years’ achievement within three minutes. But I would first reflect that the IGF itself is a crowning achievement for this WSIS process. IGF started with the mandate given by WSIS. This very unique, only a global premier forum brought all the multi-stakeholders to engage with each other on a number of issues. Just now, as the Secretary mentioned, the IGF started with a single event. Now we have multi-disciplined work tracks and also thousands of participants joined our discussion. So I think we have learned a lot from the IGF. I think we can learn here and benefit. It’s not only about the government, it’s also about private sectors. civil societies, technical communities, scientific academias, and also the vulnerable groups. But most importantly, we have increasing number of youth participants who helped us to define, to discuss the future of the digital process. Thank you.

Thomas Schneider : Thank you very much. We have another question because, of course, after the achievements, we need to look at the challenges, regarding also the newest, the latest instruments. So my question number two is, what are the main challenges in implementing the Global Digital Compact? And what role does the multi-stakeholder approach play, or what role should it play, in tackling them? Sally Wentworth from ISOC, please.

Sally Wentworth: Thank you, Ambassador, and I’d like to echo my colleagues in thanking the Kingdom of Saudi Arabia for hosting us this week. It’s a marvelous venue and a really nice environment for this kind of multi-stakeholder discussion. The Global Digital Compact aspires to achieve an inclusive, open, sustainable, fair, safe, and secure digital future for all. For many of us here, some of the WSIS veterans, so to speak, this is a familiar theme. This is a body of work that many of us have been engaged in over the last 20 years. And so there are some lessons learned, I think, from the last 20 years that can help us overcome some of the challenges in implementing that vision that’s set out in the Global Digital Compact. I draw on lessons related to connectivity. How is it that the world has made such impressive progress, growing from one billion people connected at the original WSIS Summit, to 5.4 billion people connected today? I think what is clear is that it took a tremendous amount of collaboration by all stakeholders to achieve that result. I’m mindful of the question that ITU Secretary-General Doreen Bogdan-Martin put to us this morning, which is, despite that progress, are we satisfied? And of course, the answer is no. We will not be satisfied until the remaining populations are part of this digital society that we’re building. To do that, and if we’re drawing lessons from the last 20 years, we must work together as stakeholders, and that has really been the hallmark of the WSIS, and that was actually the groundbreaking effort that took place during the WSIS Summit in 2003 and 2005, and I remember very well how hard we all worked from governments and civil society, private sector, the technical community, to figure out how we would work together to achieve these results. And so, as we look forward to how we implement the Global Digital Compact, and as we look towards the WSIS Plus 20, it is absolutely crucial that we remain committed to that model, that the model that brings the expertise from all parts of our society to the table and harnesses that is the model that is going to make us successful. It is the way we will move to connect the last 2.6 billion people and ensure that they come online to a world that is safe, secure, and protects them in the digital environment. So, for us at the Internet Society, it’s absolutely critical that the processes that we set up to review the WSIS 20 years later and to implement the Global Digital Compact really do remain firmly grounded in the multi-stakeholder model that has delivered us a tremendous amount of progress in the last 20 years, and has also given us a taste of what’s possible if we really lean into that.

Thomas Schneider : Thank you very much. Now let’s turn to His Excellency Mr. Mohammed Saud Al-Tamimi. What are the main challenges in implementing the Global Digital Compact, and what role does the multi-stakeholder approach have in your view?

Mohammed Saud Al-Tamimi: Thank you, Thomas. First of all, I would like to thank our host, His Excellency Ahmed El-Swayyan, and his great team for putting everything together for us to make sure that we are gathering here in a quality environment. Thank you. And let me confess that this is the first time I’ve been on a big panel like this, so I’m watching the time. I’m really glad to be in the company of this great panel. In the Kingdom of Saudi Arabia, we are glad that we were part of the consultation and preparation process for issuing the Global Digital Compact. And definitely there are challenges to implementing it, and they echo what has been said before. The number one challenge is connecting the unconnected. Right now we have 2.6 billion human beings that are not connected. Right now, today, that’s almost 33% of the globe. 14% of those are unconnected due to coverage, that is, lack of network. The rest is due to affordability. So there is a huge challenge facing us as a globe to connect the unconnected. There are multiple ways to solve it. Right now we are partnering with the ITU to find innovative and sustainable solutions to connect the unconnected, to solve this problem. The most challenging thing: over the last 20 years, the gap of unconnected people had been falling dramatically, but right now that decrease has slowed. In the past, per the ITU, we had 2.7 billion unconnected; today, at the end of 2024, we still have 2.6 billion, so over 24 months we have added only 100 million. The second challenge, I think, is that it’s not only about connecting the unconnected; that connection should be sustainable. The third challenge, I think, is to make sure that any solution offered to the table is sustainable, and that there is fair and safe access to all of these solutions to connect the unconnected. Coming back, Thomas, to your last question, or the last part of your question, which is about the multi-stakeholder discussion. Definitely, there are two key principles or guiding principles that we should have in the world to make sure that we deliver the Global Digital Compact, and that address the key challenges of implementing the Global Digital Compact. The first one, I think, multiple of my colleagues mentioned it, which is collaboration. Definitely, collaboration between government, private sector and academia. Everyone should be involved, developed and developing countries as well. The second one is accountability, or inclusion. Everyone should be included in the solution, and this means we don’t leave anyone behind. Thank you.

Thomas Schneider : Thank you very much, His Excellency. Now let’s go to His Excellency Mr. Takuo Imagawa from Japan.

Takuo Imagawa: Thank you, Thomas. It’s my great honor to be here at the session. I’d like to express my sincere gratitude to the Kingdom of Saudi Arabia and the UN IGF Secretariat and stakeholders for organizing this friendly meeting. Despite advancement in digital technologies, 2.6 billion people worldwide remain unconnected to the Internet, and many people are not fully benefiting from digital advancements. It is necessary to accelerate international cooperation in this digital field. Japan welcomes the GDC adopted at the UN Summit of the Future. I believe it is essential to follow up on this compact in an effective manner. I would like to emphasize two points: the importance of multistakeholder engagement and the utilization of existing forums. Firstly, it is difficult to realize the commitments of the GDC only through a top-down approach by the UN or its member states. Cooperation among multistakeholders, including industry, civil society, the tech community, academia, and international organizations is indispensable. The importance of multistakeholder engagement is clearly stated throughout the GDC, but the key is effective implementation. The GDC has agreed to establish new mechanisms, such as the AI scientific panel and the global dialogue. As discussions on the modalities progress, it is necessary to provide multistakeholders with transparent and ample opportunities for input, and to carefully consider those inputs. To make multistakeholder participation more effective, meaningful participation from developing as well as developed countries is necessary, including efforts on capacity building. Secondly, we need to advance our efforts by building on existing forums, including forums outside the U.N. system, while avoiding overlaps. The IGF in which we are participating is a forum where various stakeholders contributing to digital development gather, and active discussions are currently taking place. The IGF symbolizes the importance of multistakeholder efforts, and is very effective for discussing the follow-up of the GDC. Regarding the follow-up of the GDC and its relationship with existing initiatives such as WSIS and the IGF, I understand that specific discussions will take place in the near future. We in Japan will continue to work to develop the existing forums, and we will contribute to these discussions. Thank you.

Thomas Schneider : Thank you very much. We have another question that I’m going to ask also to three panelists. The question is about how we can effectively address rapidly evolving technologies, knowing that our governance mechanisms take their time. His Excellency, Mr. Torgeir Micaelsen, from Norway.

Torgeir Micaelsen: Thank you, Thomas. I mean, in general, emerging technologies, the disruptive technologies, they should be discussed in an environment like this, in a multistakeholder approach, as we need to see this from all sorts of angles before going home, collaborating, setting things into motion. So for instance, if we look at AI, it’s obvious that AI-based solutions can basically be something that we can save the world with. On the other hand, there are a lot of ethical and other topics that need to be considered. This is now addressed in the Global Digital Compact. WSIS has engaged in several important topics in that regard as well, with workshops, dialogues. I think this is the right way to move forward, also in the future, to discuss it. For me, and from the Norwegian point of view, the Scandinavian point of view, it’s extremely important that we maintain a human-centric focus in these very important international debates on AI. On emerging technologies, for instance, we have some nice feedback on how to test out technologies in a safe environment. We have these different examples from our sandboxes, where AI systems with high-risk potential can be tried out in a data-protected, data-privacy-safe environment with really nice results. We’d love to share that sometime. We think that we need to build experience with these sorts of trials. And as I mentioned, the use of privacy-enhancing technologies in digital development must be stimulated. I think, as I started with, we need to keep these AI-related topics in the multi-stakeholder dialogue. Lastly, I think we must renew our commitment to ethics and accountability, as called for under Objective 5 in the Global Digital Compact. As AI and other technologies reshape our societies, we must ensure that our multi-stakeholder collaboration upholds the highest ethical standards, safeguarding human rights, privacy and security. If we can manage all those things together, I think we could have a lasting future. Thank you.

Thomas Schneider : Thank you very much. Let’s move to Ms. Jennifer Bachus from the United States. How do we cope with the speed of technology?

Jennifer Bachus: I will echo my colleagues here in thanking both the Kingdom of Saudi Arabia as well as the MAG and others involved in putting together this really impressive conference. And I’ll also highlight, I have found the evolution of the questions to be great, because now we’re sort of looking to the future. And I think, I hope it’s not a surprise to everybody that the United States is very much committed to harnessing emerging technologies, including AI, for sustainable development, to help ensure all countries are able to access the benefits of technology and to use AI and other technologies to help address the world’s greatest challenges. We are committed to engaging in international AI conversations with a range of partners and across geographies to promote safe, secure and trustworthy AI. But to answer your question and to turn to the GDC, it’s been a little over two months since we were in New York, and I was in New York with many of you for the GDC’s adoption. From the beginning, the United States supported an inclusive and transparent process to develop an appropriately scoped and rights-respecting GDC to help outline a shared digital future for all, underlining, I think, what everyone here has been talking about today. Throughout this process, we were constructive and proactive. I hope there’s agreement on that. And we very much celebrate the GDC’s focus on multistakeholderism and an inclusive, rights-based and gender-responsive approach to digital issues at the United Nations. These core principles underpin our approach, regardless of the pace of technological evolution. We appreciate that the GDC strengthens the work of the United Nations on new issue areas like AI and data governance in an appropriate manner that is inclusive and transparent. We listened and will continue to listen to non-governmental stakeholders on their concerns that the consultation process did not meet the expectation of these stakeholders’ meaningful participation and their very strong eagerness to be part of the implementation. The United States really welcomes stakeholders to be actively involved in the GDC implementation process. Examples include discussions on the multidisciplinary, independent, international scientific panel on AI, which I will say is a very long title, the Global Dialogue on AI Governance, the CSTD Data Governance Working Group, the Proposal for an Office. I could go on and on about the many initiatives. We’re looking ahead to the WSIS plus 20 overall review. And on that point, I’ll flag a couple of key points from the point of view of the United States. We support an inclusive, transparent, and as multistakeholder a process as possible for the WSIS plus 20 overall review. We should ensure the WSIS plus 20 overall review focuses on a review of implementation over the last 20 years before we think about what’s next, much like you laid out the questions in this paper. And we should use the WSIS plus 20 overall review to integrate GDC implementation within the WSIS framework. And one last point, because I think I’m running out of my time, it’s important that any role for the U.N. system on evolving and emerging technologies complements existing work by outside entities and U.N. agencies. It does not and should not supersede them. Thank you.

Thomas Schneider : Thank you very much. Now let’s turn to His Excellency Mr. Shermatov Sherzod from Uzbekistan. How can we cope with rapid technology within the WSIS and GDC frameworks?

Sherzod Shermatov: Thank you very much. First of all, I would like to express my gratitude to all the organizers of this important forum. And this opens up perspectives for discussing important topics for the development of the internet. And from that perspective, I would like to focus on the importance of discussing the human-centric, the people-centric approach. Because in the beginning of the opening ceremonies, there was an excellent presentation by His Excellency Minister Abdullah about the importance of addressing the increasing digital divide between the global north and global south. And from that perspective, if you look into the other dynamics, like the demographics. So in the global north, most of the developed countries, they see the demographic challenges. There are not so many babies born. Whereas in the global south, you see the opposite. There are so many new babies born. So there are different approaches for governments. So governments, they need more jobs to be created. Whereas the internet can really help people to find remote jobs. So from that perspective, in Uzbekistan, we tried to create very favorable conditions for IT companies to open up their delivery centers so that they can have their outsourcing hubs in Uzbekistan, and this will help the companies in the developed countries to decrease their costs, as well as to create remote jobs for the people of Uzbekistan, which is a double-landlocked country. And this, from that perspective, can open up additional perspectives, opportunities for all the countries in the Global South, which can utilize this kind of potential of opening new opportunities for additional sources of income, so that the people in the Global South are not looking towards moving physically as potential migrants to the Global West, rather than trying to enjoy living with their families in their own houses, and are able to find good opportunities for income. For that, we have to heavily invest in education, upskilling of the people. For the case of Uzbekistan, we are leading the world in terms of the number of people learning on the Coursera platform as a share of the total workforce, and we try to invest heavily in upskilling of our people in terms of foreign languages, and in terms of the jobs which can be required in this global digital economy. And from the IGF perspective, I think for the future, we have to think about the ways of avoiding any potential artificial kind of limitations for any type of global work. Because we know that for physical movement of labor, there are limitations with visas, with anti-immigration policies, etc., but we should avoid any potential kind of limitations for global work. So, all the remote work opportunities should be available globally, because there should be no limit in terms of working over the Internet. And we have to also promote global cooperation, because the planet as a whole faces lots of challenges which we have identified in the sustainable development goals, and global green policies are being implemented. So the Internet itself is not just an enabler of the green agenda, now it’s becoming one of the biggest polluters as well, because the global footprint of all the data centers is now more than the global footprint of the airplane industry. So we have to think about creating green data centers. 
And from that perspective, I have to showcase a very important cooperation with the Saudi Arabian companies ACWA Power and DataVolt, which are creating green energy resources in Uzbekistan, and creating a green data center, which can be utilized by the AI companies which are very much in need of global computing power, which should also be based on green energy. So only through removing the artificial borders on the Internet, through massive education and promoting global cooperation, we can work together for the benefit of all people living in our single planet. Thank you.

Thomas Schneider : Thank you very much. Now we have a fourth question that I would like to hear all of you on. Given that we’re slightly running behind schedule, I dare to reduce your specific time of three minutes to two and a half each. Unfortunately, we don’t have a clock here to see it. It was announced that there should be one; somehow it didn’t make it. So please try to be very concise, but thank you for your interesting points. So the question is the following. It’s about the mandate of the IGF, which will be renewed during the WSIS Plus 20 process. And the question is, what would be your vision for IGF beyond 2025, and how could the IGF contribute to the implementation of the GDC? So we start with His Excellency Mohammed Saud Al-Tamimi, thank you.

Mohammed Saud Al-Tamimi: So here in Saudi Arabia, we firmly believe that the IGF should continue as the principal platform for multi-stakeholder discussions informing Internet policy. We see the IGF as well as a platform to produce well-crafted policy that will help us as a globe to implement the Global Digital Compact. And with that, it will add more inclusivity, with more discussions with multiple stakeholders. And the second one, mindful of the time, is adding more innovation to this discussion. As His Excellency Minister Abdullah Alswaha, the Minister of Communications and Information Technology in Saudi Arabia, mentioned this morning, there are multiple forms of digital divide and global divide and AI divide. So we need more discussions, more platforms like this one, to discuss AI ethics, data privacy, digital sovereignty, and so on, the kind of topics that need more discussion, more platforms. And more importantly, for us as a nation committed to delivering the Global Digital Compact, definitely we have to continue with the IGF, and with WSIS as well, to make sure that we have enough platforms for collaboration and innovation to deliver our commitment.

Thomas Schneider : Thank you very much.

Junhua Li: Thank you. Certainly, the UN believes digital transformation is one of the strategic vehicles for almost all member states to catch up in their national efforts in attaining the 2030 Agenda, and even beyond. So I guess over the past two decades, we have achieved enormously. The beauty of the IGF is, certainly, its commitment to, number one, inclusiveness, number two, openness, number three, neutrality. So all in all, as other panelists highlighted very much, we are committed to this multi-stakeholder approach. But beyond this review, what we would like to see is that the IGF continues to serve as a premier global forum on the digital discussion with the participation of all stakeholders, and we would like to see this IGF serve as a premier tool to execute the Global Digital Compact. Also, we would like to see that, with a stronger mandate after this review, the IGF can invest more efforts in capacity building for those countries in vulnerable situations to help to bridge the gap between the north and south. I’ll stop here.

Thomas Schneider : Thank you very much. Next is Ms. Sally Wentworth.

Sally Wentworth: Thank you, Thomas. The Internet Society has been a long-time supporter of the Internet Governance Forum since its earliest days. And part of the reason for that, again, as I said earlier, is our belief that we will be more effective at implementing the aspirations of the World Summit on the Information Society, the Sustainable Development Goals, the Global Digital Compact, if we are working together. And platforms like the IGF allow all stakeholders on an equal footing in an open and inclusive way to come together to tackle those challenges. What is impressive about the IGF as well is its ability to evolve over time, to meet the needs of the community. And we see that through the national and regional IGFs that have emerged around the world. And the Internet Society has supported many of them over the past years, where we take a global consensus, and the communities themselves start implementing that at the local level. So translating this model of multi-stakeholder internet governance from a global dialogue into local implementation, I think, is a really important feature of the IGF, and one that we would certainly want to see continued and strengthened, and perhaps even a vehicle for the kind of capacity building that the Under-Secretary-General spoke about. So we would strongly call for the IGF’s mandate to be renewed as part of the WSIS Plus 20. We would like to see stronger and more sustainable support for the IGF going forward, and ensuring that as it evolves, it retains those key characteristics of inclusion, stakeholders on an equal footing, and this ability to translate global issues into local action.

Thomas Schneider : Thank you very much, His Excellency Mr. Takou Imagawa.

Takuo Imagawa: Thank you, Thomas. Japan has been strongly supporting the multi-stakeholder approach in Internet governance, and we are very proud to be a member of the IGF community. Last year we hosted the IGF 2023 in Kyoto with more than 11,000 participants registered from 178 countries and regions, including more than 6,000 attending in person. This is a record number in the IGF’s history and we are very honoured and also grateful to this community. We are also very proud to be a member of the WSIS Plus 20 community. This year we have a number of multi-stakeholders, and active discussions are taking place, demonstrating strong support for multi-stakeholder Internet governance and the IGF, which is functioning effectively. Based on this, we hope that discussions to sustain and promote the IGF will take place in the WSIS Plus 20 review. In line with the theme of this year, building our digital future, the future of the IGF should be considered, including the possibility of making it permanent in the future. Furthermore, the IGF itself needs to constantly evolve to meet the demands of the times. Last year, Japan held a special session on AI at the IGF. This year, a wide range of digital issues, including AI, are being addressed. We are also working on the future of the digital world, and we hope that this will be achievable by establishing new tracks, such as the youth track, and we believe this journey will continue. The global dialogue needs to be inclusive for multi-stakeholders, including developing countries. In this sense, it is important to actively leverage existing forums, such as the IGF. Also, to repeat, we believe that the IGF is very effective for discussing the follow-up of the GDC. Finally, the IGF is led by the activities of the MAG. Among all, the leadership panel has greatly contributed to the IGF, especially in terms of external advocacy and fundraising. Although the MAG and LP are not stipulated in the Tunis Agenda, they play a significant role. And we believe that the mechanism leading the IGF should be discussed in the WSIS Plus 20 review. Thank you.

Thomas Schneider : Thank you, Her Excellency Ms. Nthati Moorosi.

Nthati Moorosi: Thank you very much. We believe that the IGF’s mandate is still pretty much relevant. However, beyond 2025, it can re-imagine itself as having a role in becoming a more dynamic, innovation-oriented platform aligned with the objectives of the Global Digital Compact. Its vision could center around being a catalyst for inclusive, rights-based, and sustainable digital transformation. We want to believe that the IGF can play a role in accelerating the achievement of the SDGs. The IGF could position itself as a global convener for multi-stakeholder partnerships aimed at accelerating the SDGs, focusing on digital inclusion initiatives. The GDC emphasizes the need to address connectivity gaps, promote universal, meaningful, and affordable internet access, and ensure that underserved and unserved communities, including marginalized groups, are not left behind. So we believe that the IGF could convene all stakeholders, bring these problems that we’ve been talking about since morning, and come up with solutions together. We talked about the need for cheaper devices, smart devices. We believe that the IGF has a role to play to bring solutions to the world. We believe that the IGF has a role to play to convene the government, the private sector, the civil society, and sit around the table and come up with solutions for that. Solutions for data, for price, all the solutions. We believe that the IGF has a role to play to convene everyone to bring those solutions. Thank you.

Thomas Schneider : And with that, I’m going to turn it over to my colleague, Dr. Jennifer Bachus.

Jennifer Bachus: Thanks. And I will probably reflect some of the answers you’ve already heard, and I apologize for that, but I think it’s because there’s significant agreement among those of us in this room. We, the United States, we support the IGF as the preeminent global venue for international engagement. We support the IGF as a platform to bring solutions to Internet public policy issues that are rights-respecting, innovative, and empowering. We recognize this is a really pivotal time for the IGF as it comes after the GDC’s adoption, and at the outset of the WSIS plus 20 review. While there’s always space for strengthening the IGF, the IGF has continued to be a model forum. We support continued efforts to strengthen the IGF, including through increased participation by stakeholders from developing countries. On GDC implementation, we’ve been clear about the need to build on existing processes, including the IGF. We’ve seen over the years the IGF as a great venue for discussions on the latest topics and flexible enough to accommodate the evolution of key issues, as was already noted by some of the other panelists. As we move into the WSIS plus 20 review, we expect the IGF to play a key role, and we strongly expect the UN General Assembly to extend the mandate of the IGF before it expires in 2025. We have also heard a lot of discussion by stakeholders on topics to come up in the WSIS Plus 20 process. Ultimately, the United States is committed to ensuring the IGF has clear and stable funding. Thank you.

Thomas Schneider : His Excellency, Mr. Shermatov Sherzod.

Sherzod Shermatov: Thank you. And we hope that IGF would help us to bring more countries together, because a long time ago, we used to talk about the world becoming like a global village. But unfortunately, the latest event shows that the world is becoming more polarized, and the internet becoming a different kind of silos of different types of internet. And second thing, we used to talk about the importance of connectivity, since like 10% growth on high-speed connectivity will bring like 2% growth of GDP per capita, or it’s used for learning the new things, or increasing the productivity, et cetera. But if you look back to what it is used for, especially with the kids, unfortunately, not all the time internet is helping the kids, but sometimes it’s hurting them as well. So there are countries, even in developed world, who are kind of banning access to social media, banning access to some sort of content. So from that perspective, I would like to see that IGF would focus more on the benefits of internet for the whole human society, and especially for the young, growing kids, so that internet would become a very safe and promoting, developing area, not the area where parents would be kind of cautious about getting access to internet to their kids. So these are the areas I think we should focus more.

Thomas Schneider : Thank you, His Excellency, Mr. Torgeir Micaelsen.

Torgeir Micaelsen: Thank you. I will keep it brief. I see that there are some people anxious to wrap up, is that true? No? Well, my main point is that I believe we should make sure that the multistakeholder platform, or the approach, is strengthened after the process starting next year. We have been doing this for almost 20 years. The IGF should continue to be the primary arena. That’s our position. Lastly, I think it’s okay to mention that we also need to be careful that we don’t create many new arenas when looking forward. That is a crucial point for inclusion, too: it will ultimately give us smaller influence, less influence, if we spread our wings on too many initiatives. We have to build on the strengths, the initiatives, the bodies we already have. So, thank you.

Stefan Schnorr: I think the IGF is the first instrument to implement, to successfully implement, the GDC, because the goal of the Global Digital Compact is an inclusive, open, sustainable, fair, safe, and secure digital future for all. There are two key priorities to achieve this goal. The first concerns fundamental rights, including the issues of internet shutdowns and censorship. This is central to the UN mandate and the GDC. We cannot compromise on the fundamental rights that shape our digital future. And second, to achieve this goal, it is crucial that all stakeholders collaborate and work together. And in the digital domain, that means that stakeholder expertise is essential for reaching the objectives of the GDC. And the solution is very easy, because the solution is the Internet Governance Forum. It is in a very strong position to facilitate both of these priorities in the end. As we have discussed today, the IGF is one of the most inclusive, open, and transparent forums hosted by the United Nations, and a Global Digital Compact that truly seeks to achieve its goals must recognize the vital role of the IGF in this process. The IGF is not only a platform, it is a cornerstone for shaping our digital future. And I think the IGF mandate is well-suited to this task due to its broad scope. And the IGF has already proven its value, and now it’s time to realize its full potential. Also, the negotiations for the GDC were challenging. There was broad agreement at the end among all UN member states on the point that we don’t want to have any overlaps. We don’t need to reinvent the wheel. We need to build on what works, and this is the IGF. And therefore, as I mentioned, I think the IGF is the only instrument for the successful implementation of the GDC.

Thomas Schneider : Thank you very much. Now I would like to invite Mr. Paul Gaskell, Deputy Director for Digital Trade, Internet Governance, and Digital Standards of the United Kingdom. Paul.

Paul Gaskell: Thank you. The UK remains a strong supporter of the multi-stakeholder model of Internet governance, which is effective and successful. Since 2005, we have seen how multi-stakeholder governance of the Internet, with roles for governments, the private sector, civil society and the technical community, has driven increased connectivity, fostered technological innovation, and supported a stable and resilient Internet. The ability of the Internet’s global infrastructure to withstand the COVID pandemic and help the world get through that global crisis is evidence of this success. We now face new challenges, however, driven by rapid technological change and a more complex digital landscape. Back in 2005, no one was thinking about AI, social media or the metaverse, and the WSIS Plus 20 Review needs to be ambitious and future-focused, taking full account of new and emerging technologies and addressing the challenges faced by developing countries in particular. As others have said, it’s also a reality, at a much more basic level, that one-third of the world’s population has no access to the Internet and there is still urgent work to do to connect the unconnected. So we must ensure that the WSIS Review fully contributes to the UN Sustainable Development Agenda. WSIS should have a real focus on how the potential of digital can contribute to all aspects of sustainable development. And finally, of course, the WSIS review should extend the mandate of the IGF, and we think it should consider a permanent mandate as well. And just regarding the success of the IGF, a recent report by independent researchers in Oxford in the UK highlights the IGF’s success in becoming a global ecosystem for knowledge sharing, particularly for developing world partners. We believe we should build on that record of achievement and strengthen the IGF for the future. The UK looks forward to actively participating in the WSIS review and working with global partners on these issues. Thank you.

Thomas Schneider: Thank you very much, Paul. So we have a final round of wrap-ups. There has been some convergence of views, I think, so let's try to focus on things we maybe have not heard yet or on what you think is particularly important. I'll ask each panelist to give a final message of a maximum of two minutes, or less, says the script here, but we are actually not doing badly on time. To each of you, and this time I will start from the other end: what are your key learnings, what new things have you heard in this round, or what has not been said that you would like to say?

Torgeir Micaelsen: I think a lot of important things have been said. I was just sitting here thinking back to 1994 or 1995, the first time I accessed the internet myself. My father had bought me an Amstrad 64 and a modem; I put the cable into the socket and heard that wonderful noise. It almost makes me nostalgic looking back, thirty years later. I would like everyone to have the kind of feeling I got in the 90s: everyone should get to feel connected, to make new friends, to learn new things online in a safe and secure manner, while at the same time having their rights as human beings respected. If that is the kind of conversation that moves forward inside the IGF, or at some point later this week, it would be highly appreciated. Thank you.

Thomas Schneider: Thank you very much, Jennifer Bachus.

Jennifer Bachus: I would conclude by saying what has sort of already been said here repeatedly, which is that we will only be successful with future technologies, and with current technologies, if all stakeholders are around the table. We can't do it as governments alone; we need all the voices: the private sector, the academic community, the technical community, civil society. Otherwise we're going to miss something that is incredibly important. So we will continue to be committed to these sorts of engagements, and we look forward to Oslo in six months. Thanks to Norway for stepping up; we look forward to seeing you in not that many months, so congrats and thanks.

Torgeir Micaelsen: She saved me, so I'm sorry. Obviously, to all of you: the warmest welcome to Oslo, Norway, next year for IGF 2025.

Stefan Schnorr: That's noted. Thank you very much. I think this panel has truly highlighted the value of the IGF, and I'm looking forward not only to the IGF in 2025 in Norway but also to the IGFs in 2026, 2027 and so on, so let's continue this success story. What we can do better is strengthen the inclusion of the Global South and make the IGF more visible; I think this is very important. In the end, we will face many new challenges in the future, and the best way to address these challenges and find common solutions is to work together with all stakeholders. Therefore, I'm looking forward to the future of the IGF.

Thomas Schneider: Thank you very much.

Sherzod Shermatov: As a wrap-up, I would like to thank all the organizers; I think this IGF was a very successful event. Meetings like this are very important for developing a better idea about the future of the Internet, the future of cooperation among countries, and the areas of governance that need improvement. I'm looking forward to more productive and successful events of this kind, which will help us cooperate and collaborate with each other for the benefit of humanity as a whole. Thank you.

Thomas Schneider: Thank you very much. Sally Wentworth.

Sally Wentworth: At the Internet Society, our vision is that the Internet is for everyone, and we believe that we are all stakeholders in the future of the Internet. That is really what a platform like the IGF represents. As I said earlier, we look forward to many future IGFs and to seeing the IGF continue to evolve to meet the challenges of the future. As we look to the WSIS Plus 20, we are a stakeholder in that process. We are part of the Internet technical community, and we really hope that the voice of the Internet technical community is included and welcomed in that process, both in evaluating the WSIS and thinking about its future, and in the implementation of the Global Digital Compact. Those of us in the Internet technical community are working very hard to ensure that the Internet continues to evolve in a way that is open and secure and puts people at the center, and to ensure that the technology we all depend upon for the exciting and emerging technologies we've spoken about is available, scalable, and meeting the needs of the future. So we look forward to engaging in that process; we hope that our voice is included and welcomed. And we are excited about the IGF in Oslo and about seeing how the modalities and the process for the WSIS Plus 20 emerge over the weeks and months ahead, and contributing our voice to that. Thank you.

Thomas Schneider: Takuo Imagawa.

Takuo Imagawa: Thank you. My final comment is just to add one point. Within the limited time and resources, we need to advance the WSIS Plus 20 review efficiently. We reached agreement on the GDC through difficult negotiations, and the agreed items in the GDC should not be reopened, but rather used as a basis for the WSIS Plus 20 review, I would suggest. For example, regarding the WSIS action lines, I believe we should have effective and efficient discussions by basing them on the existing 11 items. So we look forward to the discussions to come, including at IGF 2025 in Norway. Thank you.

Thomas Schneider: Thank you. Nthati Moorosi.

Nthati Moorosi: Thank you. My last words are just to thank the organizers and the government for their hospitality, but also to challenge the IGF: I want to reiterate that it has to play in the space of the SDGs. It's about time we discuss difficult problems and come up with great solutions that can save the small budgets we have in our countries and that can grow the economy. It's about time we talk about even bigger problems, such as voting online in national elections or writing examinations online, things like that. We have to start talking about big problems and coming up with big solutions. Thank you.

Thomas Schneider: Thank you. Junhua Li.

Junhua Li: Thank you. Thank you, Thomas. I will leave Riyadh with a strong conviction that the IGF's potential needs to be further tapped. With a stronger mandate following the review, we could equip the IGF platform for a new phase in which it provides more recommendations and solutions to member states and to the multi-stakeholder community on how AI and the digital process can truly benefit the whole of humanity. So, I look forward to seeing everyone in Oslo for the next IGF.

Thomas Schneider: Mohammed Saud Al-Tamimi.

Mohammed Saud Al-Tamimi: Since I'm the last one, let me take the liberty of speaking on behalf of this panel. As we approach the 20th anniversary of WSIS, we all agree that we have to have more of WSIS, and more of the IGF; as my friend from Norway says, we will have 2025, 2026 and more to come, as a mandated, collaborative and inclusive platform to implement our commitments under the Global Digital Compact. And I wish you a successful… And I think that's what we are going to see in the coming days of this IGF Riyadh 2024.

Thomas Schneider: So thank you very much. We are approaching the end of the first part. Before I let you go, allow me to try to add one or two things. I think there seems to be broad support for not just a multi-stakeholder approach but an inclusive multi-stakeholder approach, where not just the big players but everyone has a seat at the table and a voice at the table. Something I would like to add to the discussion is that it is important that everybody is sitting at the table, but we should not forget that we have the reference on the respective roles of the different stakeholders; it was hard fought, in paragraph 35 or 36 or whatever it was of the agenda. We may have different roles, but for anyone who is missing, there is always room for discussion. So if we talk about multi-stakeholder processes, I think it is important to bring everyone together, discuss our roles, agree on the roles and, hopefully, agree on solutions, and also agree on making sure that all voices are heard. In this respect, I would also like to refer to the São Paulo guidelines adopted earlier this year, which offer indicators, tools and solutions for making sure that a multi-stakeholder process is actually really inclusive. They also help counter power imbalances, which exist not just among governments but also, of course, among the other stakeholders. I think everybody agrees on the relevance and the potential of the IGF, especially, of course, if more funding is available. One of the reasons for the potential and the success of the IGF, and I would like to conclude with this, is its agility and dynamism, its ability to deal with emerging issues that often appear first on the IGF agenda and are then picked up by the ITU, by UNESCO, by the OECD, and by other institutions that rely on the IGF to identify emerging issues, which is also one of its functions according to paragraph 72, point G, or whatever the exact reference is, of the Tunis Agenda. So we are not inventing these things; they have been foreseen. The potential of the IGF was foreseen in the WSIS documents in Geneva and in Tunis, and we are looking forward very much to all of us together driving the IGF forward. And the IGF is not an end in itself. It is a means, as we have heard, to achieve the SDGs and to make sure that all of us are able to benefit from digital technologies for good and not for bad. Thank you very much to all of you, and I am looking forward to hearing and seeing you again on the next occasion. Thank you very much. With this, we will move to the second part of the… I am supposed to moderate the second session as well, and I just need to switch the PDF. Thank you very much for this. Now we'll move to the second session, which, if I see this correctly, is somewhat less multi-stakeholder in the sense… We are missing Tawfik Jelassi from UNESCO. And Gregor Salis is connected online. Okay, let's give him a chance to come in; in the meantime, he's walking in, excellent. This is such a big venue that sometimes it takes time to get from A to B, but he's coming. While he takes his seat, let me happily introduce Ms. Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union, to give some introductory remarks to all of us. Welcome, Doreen, thank you.

Doreen Bogdan-Martin: If the technicians could turn on the microphones. The light is here, so now we have the visuals; now we need the audio as well, in order to have a… Okay, that's better. Now we're all there. Thank you so much, Ambassador Schneider, excellencies, ladies and gentlemen, good afternoon, good evening. It's great to be here with all of you to share some thoughts as you kick off this session. I think, ladies and gentlemen, we are standing together today at the cusp of the next chapter of an inclusive digital future, and we're doing so at a time when technology often feels like it's always one step or more ahead of us. Two decades ago, the world convened to declare a common desire and a commitment to build a people-centered, inclusive, and development-oriented information society, where everyone can create, access, utilize, and share information and knowledge. And while the WSIS framework has helped us make great strides towards this goal, I think now is the time to pause, although I hate to say the word pause because we don't want to stop, but it's really the moment for us to reflect and take stock of our progress, to review the current state of the world and the technology around us, and to double down on our commitment towards an equitable and sustainable digital future. The WSIS has presented us with a powerful example of digital cooperation in action, withstanding the test of time by building adaptable and, Thomas, as you said when I was walking in, agile, if I can say that, governance processes that can keep pace with the opportunities and the challenges of emerging technologies. We need to continue to build on this momentum. Today we have the adoption of the Global Digital Compact, and that is an important milestone on the journey to next year's WSIS Plus 20 review. And I want to leave you quickly with perhaps three thoughts as we drive forward our shared ambition. The first is to think about connectivity in terms of universal, meaningful connectivity, because how can we achieve the vision of WSIS if a third of humanity remains unconnected, and countless others who are under-connected are not part of today's digital experience? Second, we have to invest in trust and security. The next phase of WSIS must play a critical role in ensuring that AI and other emerging technologies are developed responsibly and inclusively. At stake is the sustainable development agenda, our 17 SDGs, and our progress towards achieving those goals. And third, last but not least, make building our multi-stakeholder digital future a top priority. We have to do that, ladies and gentlemen, with a shared commitment to a safe, inclusive, and sustainable digital ecosystem. Rest assured that you can count on the ITU to accompany you every step of the way in this process. I think it's fair to say that we are in a race against time. The future of digital has not yet been written, but let's remember who we are, where we came from, and what we can achieve when we work together. So let's write that next chapter of our shared digital future together. With that, ladies and gentlemen, I thank you for the opportunity. And Thomas, Ambassador, Chair, back to you. Thank you.

Thomas Schneider: Thank you, Secretary-General. So, over to the panel. I think we can actually take question one and question two and put them together; I think that makes sense. And not surprisingly, the questions are not that different from what we've heard. Basically, the clients, the customers of the UN bodies, have spoken, so now it's good to see how you see these things. Question one is: what have been the achievements of the WSIS process from your point of view over the last 20 years? How has WSIS impacted the work of the UN and of your institutions? And I also encourage you to talk about looking forward. So, let me start with Tawfik Jelassi; I would like to hear from you.

Tawfik Jelassi: Thank you very much, Thomas. Good afternoon to all of you. Let me share with you the perspective from UNESCO. Twenty years ago, WSIS came up with a visionary framework to bridge the global digital divide, to increase access to the Internet, and to harness the potential of information and communication technologies for socio-economic development. It did not foresee the rise of digital platforms, it did not foresee the rise of AI and generative AI, but again, we had a vision and we had a framework. This was approved at the Tunis phase of WSIS, building on the Geneva Summit that happened two years earlier, in 2003, Tunis being in 2005, and it set out a number of objectives and action lines to be implemented at both the national and the international levels. So, your question: what have we accomplished since then? I think WSIS 2005 created momentum on the global stage. It was a call for collective action, and it created political will among the participants, and again the participants were very much multi-stakeholder, to make the concept of an information society become a reality. Some key principles and guidance came out of the Tunis Summit, with a long-term goal: how can we all benefit from the digital age? Today we can say that some of these objectives were achieved, but many others were not, or were not fully achieved. Is the information society today a reality worldwide? No. One-third of the world's population is still offline, not even connected to the Internet. This morning we heard more about the rise of the knowledge-based society; here we talk only, quote-unquote, about the information-based society. So the world around us has changed. The question is, have we changed? Have we changed enough in the face of these global changes? So I want to give a balanced response to your question, Thomas. Major achievements were made, for sure, including through collective efforts. But again, as the ITU Secretary-General just mentioned, there is still a lot of work ahead of us, and there are new challenges that have come to the fore, including when the information ecosystem moved to digital: good news, because it democratized access to information, but with that came mis- and disinformation, hate speech, discrimination, racism, and a whole set of harmful online content. What are we doing, or what have we been doing, to combat that? We see the rise of digital influencers, in addition to digital content creators; some of these youngsters, called digital influencers, each have 50, 60, 100 million followers online, more than all the UN organizations' followers combined. Are these professional journalists? They are not. Do they check the content before they post it online? They don't: in 62% of cases, they don't check the content before they post it online. So these are new challenges that we face. It's a whole new world. And obviously, we don't want the Internet to become the online Wild West. We want some global governance of the Internet. We want to ensure, as Doreen said, a safe, secure, open Internet, accessible to all. Multilingualism online: we are here in Saudi Arabia, and the Arab world represents almost half a billion people. How much content in Arabic is there online? 3%. 3% only. And there are so many communities, including indigenous communities, who have no content online whatsoever. So is that an information society? It cannot be an information society.
So just to say that a lot has been achieved, and we are delighted, but again, there is the explosion of user-generated content, the diversity of cultural expressions online, digital influencers. There are many other issues that we need not only to take note of; we have to actively find solutions. We have to create open access to information while respecting human rights, openness, accessibility and the multi-stakeholder approach.

Gitanjali Sah: Thank you very much. As our Secretary-General mentioned in her opening speech, we have to create open access to information while respecting human rights and the multi-stakeholder approach. WSIS was about stakeholders worldwide who really wanted to make a difference on the ground, bringing the benefits of digital to the people. It was about the people, about bringing the benefits of technology to the people on the ground; about creating a trusted, connected world; and about looking at the gender digital divide, the intergenerational divide, and the economic and social benefits that information and communication technologies could bring at that time. So looking back, three main achievements stand out for us at the ITU. First, we focused on people, not just the technology, ensuring that everyone everywhere gets the benefit of digital progress. Second, we made the framework extremely collaborative and inclusive through our multi-stakeholder efforts, making sure that the private sector, civil society, the technical community, the UN and, of course, governments reflect the digital world's diversity and complexity. Third, we built adaptable governance processes so that they can keep pace with the opportunities and challenges that keep emerging as technologies develop. For instance, look at the WSIS action lines. They've given stakeholders a clear framework to tackle evolving digital challenges, from infrastructure to ethics to capacity building to cybersecurity, including the really important areas that UNESCO is looking at, for example indigenous languages, culture and media. It's really a diverse range, a whole gamut of ICTs, that the action lines cover, and they've been evolving with technology; the framework adapts to ever-changing technology and innovation. We also want to give special recognition to two very key outcomes of WSIS, the two complementary processes, the Internet Governance Forum and the WSIS Forum, which have really driven the action and the grassroots digital development movement that WSIS is all about. As for the impact of WSIS on the UN's work, it's important to mention the role that digital now plays in advancing the Sustainable Development Goals. In 2015, the UN agencies involved in the WSIS process, UNESCO, ITU, WHO, ILO, FAO, UNDP and others, mapped the WSIS action lines to the Sustainable Development Goals, clearly showing the rationale for how the action lines can help implement and accelerate the achievement of the Goals, and thereby showing clearly how important an impact technology can have on sustainable development. So we've seen the world go from 1 billion Internet users in 2005 to 5.5 billion today, as Dr Jelassi mentioned, from dial-up to 5G networks, and from fragmented social networks to today's digital ecosystem, showing how WSIS has stood the test of time as a powerful framework for inclusive digital cooperation. Thank you.

Angel González Sanz: Thank you very much. Thank you very much for the introduction, and thanks to the organizers for giving us the possibility to participate in this discussion. It is going to be very difficult for me to highlight achievements of the WSIS process different from the ones that have already been identified by my distinguished predecessors on this panel. As has been said, many of the aspirations of WSIS 20 years ago have not just been fulfilled but have probably exceeded the expectations that stakeholders had at the time. Think, for example, of the pervasive presence of information and communication technologies in everyday life for many billions of people around the planet; of how access to the Internet and to different forms of ICTs has improved productivity in the economy, but also improved access to public services, to education and to health; and of how, during the COVID pandemic, the Internet enabled some form of social activity to continue in many contexts. At the same time, it has to be said that many aspects of the WSIS vision remain only partially fulfilled, or not fulfilled at all. For example, as has been raised, the question of the digital divide is not only a matter of having more and more people connected; it is also a matter of giving these people meaningful connectivity, connectivity that enables them to actually participate as full members of society and to exercise the right to political participation, to access reliable information, and to engage meaningfully as citizens. There are divides that affect people along different lines: gender is an important one, and we still have a very serious gender digital divide; rural versus urban is another; and even educational levels translate into very different kinds of experiences when connecting to the Internet. We also see many developments in the area of ICTs that were completely off the radar at the time when WSIS was conceived. It would have been difficult, for example, to imagine 20 years ago that some of the biggest public sector investment would be in this area, or that the biggest multinational companies in the private sector would be built around digital service provision, particularly around data. In terms of how all these changes have affected our work, I think it has already been said that the WSIS process has been fundamental in highlighting that there cannot be development without bringing a very development-oriented perspective into the world of Internet connectivity and ICTs. The work that was done by the WSIS actors to map the SDGs to the different action lines was very important in that sense, and I think it would be impossible for any of us UN agencies to really carry out our development work today without keeping this intimate connection between digitalization and development present in all of our activities. The WSIS Plus 20 Review, in which we are now engaged as the Secretariat of the CSTD, in partnership with our colleagues from the ITU, UNESCO, UNDP and others, highlights this intimate connection between development and the WSIS discussion.
The crucial role of multi-stakeholder participation in all these processes has already been mentioned, and I would again, as Gitanjali said, like to highlight the role of the IGF, but also of the WSIS Forum, in helping identify fundamental trends in technology, in development, and in the interface between the two, which are essential for any successful development policy. I will stop here and close with a reminder of the importance of the WSIS Plus 20 Review in identifying, through a multi-stakeholder approach, the strategic lines along which the WSIS process can advance the convergence between development policy and ICT policy. Thank you.

Thomas Schneider: Thank you very much. Robert Opp.

Robert Opp: Thanks very much. I don't want to repeat what colleagues have said, so if it's okay I'd like to make another observation, and some of it might be a bit provocative. If I reflect on where we've come: it is now 20 years since WSIS, but the last several years have seen an acceleration of digitalization, largely, I would say, as a result of the pandemic. There have been three big shifts from the development perspective at the country level that we've noticed. One is a shift from thinking of digital and ICTs as solutions to thinking of them more as ecosystems: really thinking about the interconnectedness and the interoperability across entire societies. The second big shift is going from a very fragmented state to a much more holistic understanding of digital transformation, so not digital transformation only in separate sectors or ministries, but rather across the whole of society. And the third big shift is from what I would call techno-optimism to an understanding of the fundamental issues around rights and inclusiveness, and of the risks that technology brings with it. I'm speaking here from the perspective of development practitioners when I say techno-optimism, because in fact when you look back at WSIS, at the vision and the fundamental principles that were established, they were actually very visionary in that sense. And the basic framework is still valid despite these kinds of seismic shifts in digitalization and ICTs. So I think the WSIS framework, which includes the IGF and so on, gives us the platform to continue those discussions and has adapted accordingly. What we're starting to see in addition is a broader interest in these mechanisms, because the topic of digitalization has become so much more prominent. And although, Gitanjali, I would agree that there was work done as part of the SDGs and Agenda 2030, the fact is that a lot of the issues we're discussing today were still absent in 2015; it was almost as though digital was a kind of niche on the side. But when it comes to the successor discussion around what will come after the SDGs, the current Agenda 2030, I don't think there's any question that digitalization will be at the center. So I think the world is shifting. WSIS and the IGF have been accommodating that, because the fundamental framework is solid, and I think we need to look at what is coming next and at how we continue to evolve, strengthen and broaden what we've established and what has been working for the last 20 years.

Thomas Schneider: Thank you very much, Robert. Mike Walton.

Mike Walton: Thanks, and I would agree with you, Rob, it was visionary. I just look at the basis of what was in the original principles and apply that to UNHCR's digital strategy, and a lot of it still rings true. Digital inclusion is primary for refugees and the forcibly displaced, and this wouldn't have happened 20 years ago; it wouldn't even have happened 10 years ago. We can see that people are accessing information, life-saving information, information that helps them rebuild their lives. Ten years ago we didn't have a help website for refugees; now 14 million refugees and forcibly displaced people visit it every year, and that just wouldn't have been possible, but it's thanks to the work of pushing this agenda that we actually managed to get this inclusion happening. In terms of another opportunity: when I went to Kakuma there was a group of refugees who were coding Android apps for revision and learning for the local community. They were enabled to do that: they had a small generator, they had coding skills, they had equipment, and the result was designed by the community and it worked for the community. That wouldn't have happened 20 years ago, so thanks to all of this pushing, it has become a reality. A couple of unsung heroes in the original document are knowledge and digital preservation, which I know UNESCO and others do a lot of work on. We have a huge amount of knowledge and stories, and a fantastic archive team in UNHCR is trying to digitally preserve the conversations that happen, the documents that exist, the strategies that are written; that digitization and sharing of knowledge is so critical to actually becoming useful in the future. So let's really push forward with that. And with accessibility: we wouldn't have had auto-captioning 20 years ago. Digital accessibility for those with disabilities has moved on, and people can access lots of assistive technologies and lots of different tools. We're not completely there yet, but it has come on leaps and bounds. So just to flag those two parts of the original WSIS principles that perhaps aren't talked about enough. But I've made it sound really rosy, and there are still huge gaps. We've talked about the difference between people who have access and people who don't. And yes, more refugees do have access, but the information landscape has changed. It's far more risky: there's far more scope for fraud, for the risk of trafficking, for toxic narratives to exist online. How do we make sure, and this is the next question, looking at priorities, that we focus on those increased risks and can tackle them going forward? In terms of the importance of the multi-stakeholder approach, I'll just give a couple of examples, and this is why it's so important to keep convening. We've had two examples with the Global Compact on Refugees which have been invaluable in moving the multi-stakeholder agenda forward. One is on misinformation and information integrity, which is really about having trusted information out there that people can go to, knowing what trusted content is. And we've had fantastic support, with the Norwegians, the Swiss, Google and Meta supporting this pledge, so thanks to them for that multi-stakeholder approach. And on connectivity, the work that the ITU has done with us on a joint approach to connectivity has been fantastic.
So without all of the joint working, without the dialogue happening, we wouldn't have been able to get where we are. But let's focus on these new things that have come up since the initial principles.

Thomas Schneider: Thank you. So, looking forward, what are the most critical priorities? Very briefly, Tawfik.

Tawfik Jelassi: That's a good question, Thomas. What are the most critical priorities for the next phase of WSIS? Today we are in a digital information ecosystem, and I think we should look more deeply into the supply and demand of information in that ecosystem. When I mentioned in my first intervention the exponential dissemination of harmful online content, that was obviously the supply side of information. We have, and I mentioned one initiative of UNESCO, the UNESCO guidelines for the governance of digital platforms, which were published a year ago, and we are moving into their pilot implementation. But we also have to look at the demand side of information, at the usage side. Our studies show that on average a youngster spends six hours a day connected to these digital platforms, more than doing homework or anything else. What information do they come across? Is it fact-checked information, or is it misleading and even harmful information? Looking at the usage side, what can we do about that? One initiative by UNESCO is to make users media and information literate, and we have developed curricula and content on media and information literacy. Just as countries taught foreign languages in the 60s and 70s, the language of youngsters today is digital. Have we prepared them for that? Have we developed a critical mindset among the users of these digital platforms, so they can hopefully distinguish between fact-checked information and fake or deepfake information, or at least check the source of information before they like and share and become themselves amplifiers of misinformation? I think that is something we need to tackle; this is a priority, as I see it. Another priority is inclusivity, and of course inclusivity includes gender equality online. We know that women are significantly less connected to the Internet than men, but also that the presence of women in digital technologies, including AI, is limited: they represent 10 to 15% of the workforce, depending on which digital discipline we look at. A third priority is the environment: to what extent can WSIS contribute to tackling climate change and the environmental crisis? That is also very important nowadays. There is another priority concerning the next phase of WSIS now that we have a Global Digital Compact. WSIS was a UN process adopted by heads of state around the world, but so is the Global Digital Compact. How can the two UN processes work together? I think it is very important that they complement each other, working in symbiosis, without duplication and without dilution of effort. And when we talk about AI and other emerging technologies like generative AI, there is the issue of global data governance, which we perhaps did not face to the same extent before. With that, we should pay attention to the digital skills divide. This morning we saw the Saudi minister showing us the huge number of digital jobs that are unfilled today, because we lack the digital skills and competencies for a number of these digital tasks and activities. So I think we should pay more attention to capacity building and capacity development in digital.
And finally, if we look at the future of AI, I think we are in an era where we cannot just reform education or incrementally improve on it. Generative AI is transforming education, transforming teaching, transforming learning, transforming the assessment of students' skills and competencies. It is a disruptive technology, and that applies not only to students but also, as we talk today about lifelong learning, to professionals throughout their active careers. So again, this is what I see as a first set of priorities to consider.

Gitanjali Sah: Thank you very much. As you mentioned, I'll try to bring in the collective efforts we are making for WSIS Plus 20 and where all stakeholders can contribute. So, of course, one of our main priorities, which again I'd like to highlight, is to bridge the digital divides. We can no longer accept that only 38% of the population in Africa uses the Internet, that gender disparities, at least in developing countries, are widening, and that there are biases in AI, including algorithmic biases, that need to be addressed. 1.8 billion of the 2.6 billion people offline live in rural areas, so these are the divides we really need to focus on. What we are doing as a collective, as the UN, for the WSIS Plus 20 is that we've started our preparatory process: UNESCO, ITU, UNCTAD through the CSTD, UNDP and the regional commissions. We've been looking at how the regional WSIS Plus 20 processes and our main events, like the upcoming UNESCO conference in July, the IGF in June, the WSIS Plus 20 High-Level Event in Geneva from 7 to 11 July, and the CSTD in April, can play an important part in multi-stakeholder contributions towards the WSIS Plus 20 review. We have the GDC, which of course was another milestone on the way to the WSIS Plus 20 review. How can we make the upcoming events more impactful and have them contribute towards the review? Now, all of us are aware that two co-facilitators will be appointed for the WSIS Plus 20 review, and we are hoping that this is done as soon as possible, so that the modalities resolution can be worked on and we can move ahead with the review process and ensure that the vision of WSIS beyond 2025 is as strong and as multi-stakeholder as it was in 2005, and again in 2015, when we looked at the emerging trends, opportunities and challenges of each action line as the UN and as multi-stakeholders. So of course the GDC has set an ambitious vision for us; we must all get together and see how we can incorporate that milestone into the WSIS Plus 20 review process, and look at how we could strengthen the IGF. We all support the IGF, we love it, and a lot of concrete multi-stakeholder outcomes come out of it. The Swiss chair's summary of the WSIS Plus 20 Forum High-Level Event this year, in 2024, is something those of you who haven't read it really must go and read, because it has all these components in it: the important role that the IGF and the WSIS Forum play, and how we can align the WSIS and the GDC. So there is already a lot of work going on around all of this, and we should see how we can put it all together. Once the GDC was adopted, the UN also went into action: we've developed a UNGIS action plan where we have mapped the WSIS action lines with the 2030 Agenda and the GDC principles, clearly highlighting the frameworks, institutions and activities that exist to implement the GDC. So this exists. At the ITU, with the UN agencies in Geneva, we also have this Geneva UN digital kitchen, where we are meeting quite often to come up with a Geneva action plan on what we could do together as the UN to contribute to the GDC process.
As the ITU, our member states have also asked us to come up with an ITU action plan to implement the GDC and present it to our next Council Working Group on WSIS and SDGs. So there is a lot of work going on and good momentum. We should feed all of this into the WSIS Plus 20 review process to ensure that we have a bright vision and future for multi-stakeholderism and the WSIS process beyond 2025.

Thomas Schneider: Yes, thank you. Robert, please.

Robert Opp: All right. A lot has been said already, and Tawfik was super comprehensive, so let me just emphasize a couple of points that were mentioned and mention a couple that weren't. Environmental sustainability has to go into the next version of what we do, and it's two areas: what digitalization can contribute to environmental sustainability and climate action, but also digitalization's own contribution to climate and environmental challenges. E-waste is a huge problem. Carbon emissions are a huge problem. And with the advent of AI and generative AI, that is actually going to get worse. I know the technology is also becoming more efficient, but we have to be cognizant of the balance. I think it was a recent report from our colleagues at UNCTAD, actually, where one of the figures that stuck in my mind was that data centers a couple of years ago were emitting the same carbon emissions as France. And that is going to grow. So we have to be responsible, as a UN system, when we look at these issues and proliferate technology. Capacity building, no question for us: we see that as probably the number one gap when we work at the country level. Requests come to us for more capacity, more skills, stronger ecosystems, et cetera. Then I would mention digital public infrastructure, which has emerged in the last several years and is now in the Global Digital Compact. I think we need to consider what it looks like in the context of WSIS as well, because it is something that we think has a lot of promise for accelerating digital transformation at the country level. The last one I can't remember, so I'll stop there.

Thomas Schneider: Thank you very much. Thanks. Mike.

Mike Walton: Yeah, and another good segue from Rob there, talking about climate change and its impact. I agree: digital inclusion, information integrity, I think we're all agreed on those priorities. We haven't talked more broadly about ethics and the ethical use of technology, and actually doing no harm in the products that we use and produce is really important. We are about to work on a refugee gateway, which will be a one-stop access point for refugees. How do we make sure that we deliver that in an ethical way? How do other suppliers make sure that they're creating technology solutions that take into account all of the different ethical principles that are out there? So I would say, let's really broaden it out. Climate change, absolutely; that has a huge impact on refugees when it's combined with conflict situations. But what are the other ethical principles that we should focus on as part of our prioritization? And how could we build a joint framework for agreeing on how they can be applied to new solutions?

Thomas Schneider: Thank you. Angel.

Angel González Sanz: Thank you. It's very, very difficult to say anything different from what has already been said. I would just join my voice to those who mentioned the environment, and not just because we have just published a report on this issue, but also because it's really a big concern, it's probably going to become an even bigger concern, and we need to take action quickly, both on the environmental burden of digital technologies and on making better use of the potential of digital technologies to address environmental issues. The second point I would like to emphasize is that we need to work together to address the risks that digitalization can bring about in terms of increasing inequality: increasing inequality between men and women, increasing inequality within countries between different social groups, but also increasing inequality across countries. One particular area of concern in that regard is the intimately linked questions of data governance and artificial intelligence governance. We were just finishing a survey of the major artificial intelligence governance initiatives that exist in the world, seven of them, and we found that there are 118 countries that are not involved in any of them, whereas only the G7 member countries are involved in all of them. The risk that the interests, the voice and the concerns of developing countries will be completely ignored in these discussions is very serious, and we really need to make a much greater effort to be inclusive in developing responses to the challenges created by artificial intelligence. The same can be said about data governance, and data is, after all, the fuel on which artificial intelligence models run. So we also need to come up with much more inclusive ways to develop principles for data governance. The CSTD is now tasked by the GDC to set up a multi-stakeholder working group on principles of data governance that should report to the UN at the end of 2026. And this is, in my opinion and Anto's opinion, a major challenge that we need to address through, again, deep multi-stakeholder engagement including, in particular, the voices of the Global South. Thank you very much.

Thomas Schneider: Thank you, Angel. We have one minute left, so I will take that minute to try to wrap up what we've heard. Obviously, we all agree that a lot has been achieved, but there is much more work to do. And time doesn't stand still, so with every task we accomplish, we get two or three more tasks for the future, which is a challenge for all of us. I think we will have ample time to discuss, as Tawfik already said, how to align and bring together the GDC and the WSIS process into one, based on the 20 years of work that has been done so far. And I'm looking forward very much to continuing the discussion in various combinations and in various sessions this week, but also in Geneva, in Paris, wherever this is going to be. Thank you very much, and enjoy the nice evening here. This is the end of today, as far as I know, at least of this official part. So thank you very much, and see you tomorrow, I guess. Thank you very much. Thank you. I get the poles back in a quick second. Here we go!

Gitanjali Sah

Speech speed: 134 words per minute

Speech length: 1208 words

Speech time: 538 seconds

Increased global internet connectivity from 1 billion to 5.5 billion users

Explanation

Gitanjali Sah highlights the significant growth in global internet connectivity over the past 20 years. This increase represents a major achievement of the WSIS process in expanding digital access worldwide.

Evidence

The number of internet users grew from 1 billion in 2005 to 5.5 billion today.

Major Discussion Point

Achievements of WSIS over the past 20 years

Bridging remaining digital divides, especially in rural areas

Explanation

Gitanjali Sah emphasizes the need to focus on bridging remaining digital divides, particularly in rural areas. This priority aims to ensure more equitable access to digital technologies and opportunities across different geographical regions.

Evidence

1.8 billion of the 2.6 billion offline population live in rural areas.

Major Discussion Point

Priorities for the future of WSIS

Agreed with

Mohammed Saud Al-Tamimi

Angel González Sanz

Agreed on

Persistent digital divides

Stefan Schnorr

Speech speed: 141 words per minute

Speech length: 832 words

Speech time: 353 seconds

Established multi-stakeholder approach to internet governance

Explanation

Stefan Schnorr emphasizes the importance of the multi-stakeholder model in internet governance as a key achievement of WSIS. This approach involves collaboration between governments, private sector, civil society, and other stakeholders in shaping internet policies.

Evidence

The IGF was cited as an example of a successful multi-stakeholder platform for internet governance discussions.

Major Discussion Point

Achievements of WSIS over the past 20 years

Agreed with

Sally Wentworth

Jennifer Bachus

Mohammed Saud Al-Tamimi

Agreed on

Importance of multi-stakeholder approach

Multi-stakeholder model as crucial for implementing Global Digital Compact

Explanation

Schnorr argues that the multi-stakeholder approach is essential for successfully implementing the Global Digital Compact. He emphasizes the need for collaboration between all stakeholders to achieve the goals of an inclusive and secure digital future.

Major Discussion Point

Role of IGF and multi-stakeholder approach

Agreed with

Sally Wentworth

Jennifer Bachus

Mohammed Saud Al-Tamimi

Agreed on

Importance of multi-stakeholder approach

Tawfik Jelassi

Speech speed: 137 words per minute

Speech length: 1351 words

Speech time: 590 seconds

Created momentum for collective action on digital development

Explanation

Tawfik Jelassi highlights how WSIS generated global momentum for collective action on digital development. This momentum led to increased political will and collaboration among stakeholders to work towards the vision of an inclusive information society.

Major Discussion Point

Achievements of WSIS over the past 20 years

Limited multilingual and culturally diverse online content

Explanation

Jelassi points out the lack of linguistic and cultural diversity in online content as a persistent challenge. This gap limits the inclusivity of the digital space and hinders full participation of diverse communities in the information society.

Evidence

Only 3% of online content is in Arabic, despite the Arab world representing nearly half a billion people.

Major Discussion Point

Challenges and gaps in digital development

Promoting media/information literacy and combating misinformation

Explanation

Jelassi emphasizes the need to focus on media and information literacy to combat the spread of misinformation online. He argues for developing critical thinking skills among users to distinguish between fact-checked and misleading information.

Evidence

UNESCO has developed curricula and content on media and information literacy.

Major Discussion Point

Priorities for the future of WSIS

Differed with

Robert Opp

Angel González Sanz

Differed on

Prioritization of challenges

Mike Walton

Speech speed: 173 words per minute

Speech length: 814 words

Speech time: 281 seconds

Enabled access to life-saving information for refugees

Explanation

Mike Walton highlights how digital technologies have improved access to crucial information for refugees and displaced persons. This access has helped in rebuilding lives and providing essential services to vulnerable populations.

Evidence

14 million refugees and forcibly displaced people visit UNHCR’s help website annually.

Major Discussion Point

Achievements of WSIS over the past 20 years

Ethical concerns around AI and emerging technologies

Explanation

Walton raises concerns about the ethical implications of AI and other emerging technologies. He emphasizes the need for ethical frameworks to guide the development and deployment of these technologies, especially in humanitarian contexts.

Major Discussion Point

Challenges and gaps in digital development

Angel González Sanz

Speech speed: 126 words per minute

Speech length: 1019 words

Speech time: 481 seconds

Mapped WSIS action lines to Sustainable Development Goals

Explanation

Angel González Sanz highlights the effort to align WSIS action lines with the UN Sustainable Development Goals. This mapping demonstrates the integral role of digital technologies in achieving broader development objectives.

Major Discussion Point

Achievements of WSIS over the past 20 years

Persistent digital divides based on gender, geography, and education

Explanation

González Sanz points out the ongoing challenge of digital divides across various dimensions. These disparities limit equal participation in the digital economy and society, particularly affecting developing countries and marginalized groups.

Major Discussion Point

Challenges and gaps in digital development

Agreed with

Mohammed Saud Al-Tamimi

Gitanjali Sah

Agreed on

Persistent digital divides

Ensuring inclusive global governance of AI and data

Explanation

González Sanz emphasizes the need for more inclusive governance frameworks for AI and data. He argues that developing countries’ voices and interests must be better represented in global discussions on these critical issues.

Evidence

A survey found that 118 countries are not involved in any major AI governance initiatives, while G7 countries are involved in all of them.

Major Discussion Point

Priorities for the future of WSIS

Differed with

Tawfik Jelassi

Robert Opp

Differed on

Prioritization of challenges

Mohammed Saud Al-Tamimi

Speech speed: 0 words per minute

Speech length: 0 words

Speech time: 1 second

One-third of global population still lacks internet access

Explanation

Mohammed Saud Al-Tamimi highlights the persistent challenge of connecting the unconnected. Despite progress, a significant portion of the world’s population remains without internet access, hindering their participation in the digital economy and society.

Evidence

2.6 billion people, or about 33% of the global population, are still not connected to the internet.

Major Discussion Point

Challenges and gaps in digital development

Agreed with

Angel González Sanz

Gitanjali Sah

Agreed on

Persistent digital divides

Importance of collaboration between governments, private sector, and civil society

Explanation

Al-Tamimi emphasizes the need for collaboration among various stakeholders to address digital challenges. He argues that this multi-stakeholder approach is crucial for implementing the Global Digital Compact and achieving digital inclusion.

Major Discussion Point

Role of IGF and multi-stakeholder approach

Agreed with

Stefan Schnorr

Sally Wentworth

Jennifer Bachus

Agreed on

Importance of multi-stakeholder approach

Sally Wentworth

Speech speed: 137 words per minute

Speech length: 1016 words

Speech time: 444 seconds

IGF as key platform for inclusive internet governance discussions

Explanation

Sally Wentworth highlights the importance of the Internet Governance Forum as a crucial platform for inclusive discussions on internet governance. She emphasizes the IGF’s role in bringing together diverse stakeholders to address digital challenges.

Major Discussion Point

Role of IGF and multi-stakeholder approach

Agreed with

Stefan Schnorr

Jennifer Bachus

Mohammed Saud Al-Tamimi

Agreed on

Importance of multi-stakeholder approach

Jennifer Bachus

Speech speed: 167 words per minute

Speech length: 973 words

Speech time: 347 seconds

Need for meaningful participation from developing countries

Explanation

Jennifer Bachus emphasizes the importance of ensuring meaningful participation from developing countries in internet governance processes. She argues that this inclusion is crucial for addressing global digital challenges effectively.

Major Discussion Point

Role of IGF and multi-stakeholder approach

Agreed with

Stefan Schnorr

Sally Wentworth

Mohammed Saud Al-Tamimi

Agreed on

Importance of multi-stakeholder approach

Takuo Imagawa

Speech speed: 156 words per minute

Speech length: 857 words

Speech time: 328 seconds

IGF’s ability to address emerging issues like AI governance

Explanation

Takuo Imagawa highlights the IGF’s capacity to tackle emerging technological issues such as AI governance. He emphasizes the forum’s flexibility in adapting to new challenges in the digital landscape.

Evidence

Japan held a special session on AI at a previous IGF.

Major Discussion Point

Role of IGF and multi-stakeholder approach

Robert Opp

Speech speed: 141 words per minute

Speech length: 761 words

Speech time: 323 seconds

Environmental impacts of digital technologies

Explanation

Robert Opp raises concerns about the environmental consequences of digital technologies. He emphasizes the need to address both the positive potential of digitalization for environmental sustainability and its negative impacts.

Evidence

Data centers were reported to emit as much carbon as France a couple of years ago.

Major Discussion Point

Challenges and gaps in digital development

Addressing environmental sustainability of digital technologies

Explanation

Opp argues for prioritizing environmental sustainability in the future of WSIS. He emphasizes the need to balance the benefits of digital technologies with their environmental costs, particularly in light of growing AI usage.

Major Discussion Point

Priorities for the future of WSIS

Differed with

Tawfik Jelassi

Angel González Sanz

Differed on

Prioritization of challenges

Junhua Li

Speech speed

116 words per minute

Speech length

718 words

Speech time

368 seconds

Developing digital skills and capacity building

Explanation

Junhua Li emphasizes the importance of focusing on digital skills development and capacity building. This priority aims to address the digital skills gap and ensure that people can effectively participate in and benefit from the digital economy.

Major Discussion Point

Priorities for the future of WSIS

Agreements

Agreement Points

Importance of multi-stakeholder approach

Stefan Schnorr

Sally Wentworth

Jennifer Bachus

Mohammed Saud Al-Tamimi

Established multi-stakeholder approach to internet governance

Multi-stakeholder model as crucial for implementing Global Digital Compact

IGF as key platform for inclusive internet governance discussions

Need for meaningful participation from developing countries

Importance of collaboration between governments, private sector, and civil society

Multiple speakers emphasized the critical role of the multi-stakeholder approach in internet governance and implementing the Global Digital Compact, highlighting the IGF as a key platform for inclusive discussions.

Persistent digital divides

Mohammed Saud Al-Tamimi

Angel González Sanz

Gitanjali Sah

One-third of global population still lacks internet access

Persistent digital divides based on gender, geography, and education

Bridging remaining digital divides, especially in rural areas

Several speakers highlighted the ongoing challenge of digital divides, emphasizing the need to connect the unconnected and address disparities based on various factors such as geography, gender, and education.

Similar Viewpoints

Both speakers emphasized the importance of addressing ethical concerns and promoting digital literacy in the face of emerging technologies and misinformation.

Tawfik Jelassi

Mike Walton

Promoting media/information literacy and combating misinformation

Ethical concerns around AI and emerging technologies

Both speakers highlighted the need to address the environmental impact of digital technologies and ensure inclusive governance of emerging technologies like AI.

Robert Opp

Angel González Sanz

Environmental impacts of digital technologies

Addressing environmental sustainability of digital technologies

Ensuring inclusive global governance of AI and data

Unexpected Consensus

Environmental sustainability in digital development

Robert Opp

Tawfik Jelassi

Environmental impacts of digital technologies

Addressing environmental sustainability of digital technologies

The focus on environmental sustainability in the context of digital development was an unexpected area of consensus, as it was not a primary focus of the original WSIS but has emerged as a critical issue.

Overall Assessment

Summary

The main areas of agreement included the importance of the multi-stakeholder approach, the need to address persistent digital divides, the role of the IGF in internet governance, and emerging concerns about environmental sustainability and ethical use of digital technologies.

Consensus level

There was a high level of consensus among speakers on the core principles of WSIS and the need for continued multi-stakeholder cooperation. This consensus suggests a strong foundation for future collaboration in addressing digital challenges and implementing the Global Digital Compact. However, there were also diverse perspectives on specific priorities and approaches, indicating the complexity of the issues at hand and the need for continued dialogue and negotiation.

Differences

Different Viewpoints

Prioritization of challenges

Tawfik Jelassi

Robert Opp

Angel González Sanz

Promoting media/information literacy and combating misinformation

Addressing environmental sustainability of digital technologies

Ensuring inclusive global governance of AI and data

While all speakers agreed on the need to address various challenges, they differed in their emphasis on which issues should be prioritized. Jelassi focused on media literacy and misinformation, Opp highlighted environmental sustainability, and González Sanz emphasized inclusive AI and data governance.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were related to prioritization of challenges and specific focus areas within the broader agreement on multi-stakeholder approaches.

Difference level

The level of disagreement among speakers was relatively low. Most speakers agreed on the fundamental principles and achievements of WSIS, as well as the importance of multi-stakeholder approaches. The differences were mainly in emphasis and prioritization of specific issues, rather than fundamental disagreements. This suggests a generally unified vision for the future of WSIS and internet governance, with variations in how to best address specific challenges.

Partial Agreements

All speakers agreed on the importance of the multi-stakeholder approach and the role of the IGF, but they had slightly different emphases. Schnorr focused on its importance for implementing the Global Digital Compact, Wentworth highlighted its role in inclusive discussions, and Bachus specifically emphasized the need for developing countries’ participation.

Stefan Schnorr

Sally Wentworth

Jennifer Bachus

Multi-stakeholder model as crucial for implementing Global Digital Compact

IGF as key platform for inclusive internet governance discussions

Need for meaningful participation from developing countries

Takeaways

Key Takeaways

The WSIS process has made significant achievements over the past 20 years, including increasing global internet connectivity and establishing a multi-stakeholder approach to internet governance.

Despite progress, major challenges remain, including persistent digital divides, environmental impacts of technology, and ethical concerns around AI and emerging technologies.

Key priorities for the future of WSIS include bridging remaining digital divides, promoting digital literacy, developing digital skills, addressing environmental sustainability, and ensuring inclusive global governance of AI and data.

The Internet Governance Forum (IGF) continues to play a crucial role as a platform for inclusive internet governance discussions and addressing emerging issues.

There is broad agreement on the need to align the Global Digital Compact (GDC) with the WSIS process going forward.

Resolutions and Action Items

UN agencies to develop action plans for implementing the Global Digital Compact

Prepare for the WSIS+20 review process, including regional consultations and key events in 2024

CSTD to set up a multi-stakeholder working group on principles of data governance, reporting to the UN by end of 2026

Strengthen the IGF’s mandate and ensure its continuation beyond 2025

Unresolved Issues

Specific mechanisms for aligning the Global Digital Compact with the WSIS process

How to effectively address the environmental impacts of digital technologies

Ways to ensure meaningful participation from developing countries in AI and data governance discussions

Strategies for combating misinformation and promoting information integrity online

Suggested Compromises

Leverage existing forums and processes like the IGF to implement the Global Digital Compact, rather than creating entirely new mechanisms

Balance the promotion of digital technologies with efforts to mitigate their environmental impacts

Develop a joint ethical framework for technology solutions that can be applied across different contexts and stakeholders

Thought Provoking Comments

We still have students who have to sit under the tree to learn. So when we talk about connecting schools, for us, it’s quite a big, a long journey.

speaker

Nthati Moorosi

reason

This comment provides a stark reminder of the vast disparities in basic infrastructure and education between countries, highlighting the immense challenges in achieving digital inclusion.

impact

It shifted the discussion to focus more on the realities faced by developing countries and the need for addressing fundamental infrastructure issues alongside digital connectivity.

The Arab world represents almost half a billion people. How much content in Arabic is there online? 3%. 3% only. And there are so many communities, including indigenous communities, who have no content online whatsoever.

speaker

Tawfik Jelassi

reason

This comment highlights a critical aspect of the digital divide beyond just access – the lack of relevant, localized content for many language communities.

impact

It broadened the conversation about digital inclusion to encompass not just connectivity, but also the creation and availability of diverse, culturally relevant content online.

We need to work together to address the risks that digitalization can bring about in terms of increasing equality. Increasing equality among men and women, increasing equality within countries between different social groups, but also increasing equality across countries.

speaker

Angel González Sanz

reason

This comment draws attention to the potential negative impacts of digitalization on equality, challenging the often optimistic narrative around digital technologies.

impact

It prompted a more nuanced discussion about the complex effects of digitalization, encouraging participants to consider both positive and negative consequences across various dimensions of equality.

Environmental sustainability has to go into the next version of what we do. And it’s the two areas. It’s what digitalization can contribute to environmental sustainability, climate change, but it’s also the contribution to climate challenges or environmental challenges.

speaker

Robert Opp

reason

This comment introduces the critical link between digitalization and environmental sustainability, an aspect that had not been prominently discussed earlier.

impact

It expanded the scope of the discussion to include environmental considerations, prompting others to reflect on the dual role of digital technologies in both contributing to and potentially mitigating environmental challenges.

Overall Assessment

These key comments significantly broadened and deepened the discussion by introducing critical perspectives on the challenges and complexities of digital inclusion. They shifted the conversation from a focus on technological progress and connectivity to a more holistic consideration of socioeconomic, cultural, and environmental factors. This resulted in a more nuanced and comprehensive dialogue about the future of digital governance and the implementation of the WSIS goals, emphasizing the need for multifaceted approaches that address both opportunities and risks associated with digitalization.

Follow-up Questions

How can we effectively address rapidly evolving technologies, knowing that our governance mechanisms take their time?

speaker

Thomas Schneider

explanation

This question addresses the challenge of keeping governance and policy frameworks up-to-date with fast-paced technological advancements.

How can we ensure that the WSIS+20 review focuses on implementation over the last 20 years before considering what’s next?

speaker

Jennifer Bachus

explanation

This highlights the importance of thoroughly evaluating past progress before setting future goals.

How can we integrate GDC implementation within the WSIS framework?

speaker

Jennifer Bachus

explanation

This question addresses the need to align and combine two major UN digital initiatives.

How can we avoid potential artificial limitations for global digital work?

speaker

Sherzod Shermatov

explanation

This relates to ensuring open access to global digital job opportunities without unnecessary restrictions.

How can we address the increasing risks in the digital landscape, such as fraud, trafficking, and toxic narratives?

speaker

Mike Walton

explanation

This question highlights the need to tackle emerging threats in the evolving digital environment.

How can WSIS contribute to tackling climate change and environmental crises?

speaker

Tawfik Jelassi

explanation

This explores the potential role of digital technologies in addressing urgent global environmental challenges.

How can we develop a joint framework for ethical principles in technology solutions?

speaker

Mike Walton

explanation

This addresses the need for a common approach to ensure ethical development and use of technology.

How can we ensure more inclusive participation in AI governance initiatives, particularly from developing countries?

speaker

Angel González Sanz

explanation

This highlights the importance of involving diverse voices in shaping AI governance globally.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Tackling disinformation in electoral context

Tackling disinformation in electoral context

Session at a Glance

Summary

This session focused on tackling disinformation in electoral contexts, exploring the roles of various stakeholders and potential solutions. Participants discussed the challenges posed by disinformation during elections, emphasizing its threat to human rights and democracy. The European Union’s approach was highlighted, including the Code of Practice on Disinformation, which involves multiple stakeholders in a co-regulatory framework.

The importance of fact-checking, media literacy, and public awareness campaigns was stressed by several speakers. There was debate about the responsibility of digital platforms in moderating content, with some arguing for greater accountability and others cautioning against overregulation that could stifle free speech. The need for tailored approaches considering cultural contexts was emphasized, particularly for smaller countries.

Multi-stakeholder partnerships and collaborations were seen as crucial in combating disinformation. Speakers highlighted the role of traditional and social media in spreading information during elections, and the need for empowering citizens to identify misinformation. The discussion touched on the challenges of regulating content without infringing on freedom of expression, with some advocating for a focus on systemic risks rather than specific content.

Participants also debated the effectiveness of algorithmic content moderation and the importance of transparency in platform policies. The session concluded with calls for greater collaboration, awareness-building, and a focus on information integrity, while recognizing the regional and national specificities of disinformation challenges.

Keypoints

Major discussion points:

– The role of regulations, platforms, and multi-stakeholder partnerships in combating election disinformation

– Balancing efforts to counter disinformation with protecting freedom of expression

– The importance of fact-checking, media literacy, and public awareness campaigns

– Regional and cultural differences in how disinformation manifests and should be addressed

– Debate over platform accountability and content moderation vs. user empowerment

The overall purpose of the discussion was to explore strategies for tackling disinformation in electoral contexts, with a focus on the roles and responsibilities of different stakeholders including tech platforms, governments, civil society, and citizens.

The tone of the discussion was largely collaborative and solution-oriented, with panelists sharing insights from different regional perspectives. However, there were moments of debate and disagreement, particularly around issues of platform regulation and accountability. The tone became more urgent towards the end as some participants expressed frustration with the lack of concrete progress on these issues.

Speakers

– Peace Oliver Amuge: Moderator

– Giovanni Zagni: Expert on EU regulations and disinformation

– Poncelet Ileleji: Expert on sub-Saharan Africa and community radio

– Aiesha Adnan: Representative from Women Tech Maldives

– Juliano Cappi: Representative from CGI (Brazilian Internet Steering Committee)

– Nazar Nicholas Kirama: Expert from Tanzania

Additional speakers:

– Tim: Audience member

– Kosi: Student from Benin

– Nana: Audience member with experience in elections and disinformation

– Peterking Quaye: Representative from Liberia IGF

Full session report

Expanded Summary of Discussion on Tackling Disinformation in Electoral Contexts

Introduction:

This session focused on addressing the critical issue of disinformation during elections, exploring the roles of various stakeholders and potential solutions. The discussion brought together experts from different regions and backgrounds to examine the challenges posed by disinformation and its threat to human rights and democracy.

Key Themes and Discussion Points:

1. Regulatory Approaches and Frameworks:

The discussion highlighted various approaches to regulating disinformation, with a particular focus on the European Union’s strategy. Giovanni Zagni introduced the EU Code of Practice on Disinformation as a voluntary co-regulatory instrument, emphasising the “European way” of bringing all relevant stakeholders together. He noted that the Code has 34 signatories, including major tech platforms, fact-checking organizations, and civil society groups. This approach contrasts with more stringent regulatory measures, sparking debate about the appropriate level of government involvement.

Juliano Cappi shared insights from Brazil, mentioning the Brazilian Internet Steering Committee’s guidelines and the Internet and Democracy Working Group’s publications on combating disinformation. He also introduced the concepts of “systemic risk” and “duty of care” in platform regulation, emphasizing the need for digital public infrastructure and digital sovereignty.

2. Multi-stakeholder Collaboration and Partnerships:

A recurring theme throughout the discussion was the crucial role of multi-stakeholder partnerships in combating disinformation. Speakers agreed that collaboration between fact-checkers, platforms, civil society organisations, and government bodies is essential for developing effective strategies. Juliano Cappi emphasised the need for improved processes to better integrate work from different forums addressing disinformation, suggesting that current efforts may not be sufficiently effective.

3. Role of Tech Platforms and Content Moderation:

The responsibility of digital platforms in moderating content emerged as a contentious issue. Nazar Nicholas Kirama advocated for proactive content moderation and transparency in algorithmic policies by tech platforms. He suggested implementing advanced algorithms for flagging misleading information and collaborating with fact-checkers. Kirama provocatively framed tech platforms as de facto electoral commissions, highlighting the need for accountability.

Juliano Cappi raised concerns about potential bias in platform business models and the advancement of certain political views. He also suggested a “Follow the money” approach to investigate the financing of disinformation campaigns.

An audience member cautioned against over-regulation that could stifle innovation, highlighting the tension between combating false information and protecting free speech. Another participant warned about the potential misuse of regulation by governments to suppress social media, citing examples from African countries.

4. Fact-checking and Media Literacy:

The importance of fact-checking and media literacy was stressed by several speakers. Poncelet Ileleji called for fact-checking websites supported by organisations like UNESCO, while Aiesha Adnan emphasised the need for civic education and information literacy programmes. These initiatives were seen as crucial tools for empowering citizens to identify misinformation and disinformation.

Poncelet Ileleji highlighted the shift in information dissemination and consumption patterns, noting that young people, political parties, and lobbyists increasingly spread information through social media platforms such as TikTok and X (formerly Twitter) rather than through traditional mainstream media. He also stressed the importance of empowering people at the grassroots level, particularly through community radios, to combat disinformation.

5. Cultural Context and Localised Approaches:

Speakers emphasised the need for tailored programmes that fit cultural norms and consider the specific needs of smaller countries and populations. Aiesha Adnan, drawing from her experience with the recent presidential election in the Maldives, stressed the importance of designing tools and interventions that account for the unique challenges faced by smaller nations, as their needs may be overlooked in global approaches.

An audience member raised concerns about the cultural context in algorithmic content moderation, highlighting that algorithms may misinterpret content due to cultural differences. This point emphasised the need for explainable AI in content moderation and the importance of considering diverse cultural perspectives in developing anti-disinformation strategies.

6. Empowering Communities and Grassroots Initiatives:

Poncelet Ileleji stressed the importance of empowering people at the grassroots level, particularly through community radios, to combat disinformation. This approach aligns with Aiesha Adnan’s call for promoting “information integrity” across society, shifting the focus from just regulating media to empowering citizens.

7. Balancing Regulation and Free Speech:

A significant point of debate was the tension between regulating disinformation and protecting freedom of expression. Giovanni Zagni highlighted this challenge, while Nazar Nicholas Kirama suggested the need for “facilitative regulations” that balance platform accountability with the protection of free speech. The discussion revealed the complexity of addressing disinformation without infringing on fundamental rights.

An audience member raised concerns about the potential misuse of conspiracy theory labels, referencing experiences during the COVID-19 pandemic. This highlighted the need for careful consideration when categorizing and addressing potentially misleading information.

Conclusion:

The session concluded with calls for greater collaboration, awareness-building, and a focus on information integrity. Participants recognised the regional and national specificities of disinformation challenges, emphasising the need for both localised and global approaches. Giovanni Zagni particularly stressed the importance of considering regional and national contexts when addressing disinformation problems.

Unresolved issues and areas for further exploration include:

1. Effectively regulating tech platforms without stifling innovation or free speech

2. Addressing political biases and power dynamics in the spread of disinformation

3. Creating global standards while respecting regional and national differences

4. Determining the extent of platform accountability for user-generated content

5. Developing sustainable national civic education programs

6. Implementing transparent and culturally sensitive algorithmic policies

The discussion provided valuable insights into the multifaceted nature of election-related disinformation and underscored the need for continued dialogue, research, and collaborative efforts to safeguard democratic processes in the digital age. Participants emphasized the importance of more listening and discussion at the global level to find common ground on addressing disinformation while respecting diverse perspectives and stakeholder interests.

Session Transcript

Peace Oliver Amuge: briefly introduce the session to you and so this session is an NRIs collaborative session. It’s on tackling disinformation in electoral contexts. Channel 4, 4 please. We are on channel 4. Is everyone there? Channel 4, yes. Channel 4, if you have just gotten in, please get yourself the gadgets. And so yes, I just say that this session is tackling disinformation in the electoral context and as you are all aware, this year has been called the year of elections. We’ve had several countries going through elections and we know that during elections human rights are at stake and we’ve had growing issues of disinformation during elections and these are issues that put human rights at stake, democracy at stake, so this is a very important and crucial discussion to have and so we are very happy to have you join here. We have distinguished panelists that you’re seeing here. We should have one panelist joining online and so this session will discuss a couple of issues. We’ll discuss the role of different stakeholders, the role of social media, we will discuss the norms, the principles, the frameworks, standards, you know, some of the ways that we can counter disinformation and we will be sharing different contexts when it comes to disinformation and I see the room is quite diverse so I expect that we will have a wealth of discussions and I want to just mention that we have an online moderator who is here in the room, we have rapporteurs who will be supporting us, we have Michelle from Zambia, we have Umut who will also support us with rapporteuring and also big thanks to the organizers who are in the room and some might not be here, who are the Asia Pacific Youth IGF, the Bangladesh, the Benin IGF, the Caribbean IGF, the Colombia, Eurodig, Gambia, you know, and several others, South Sudan. So, I will not mention them all because we lost a little bit of time at the beginning and I will come already to say that the panelists who are here, one who might join, we have Aisha who will introduce herself later very well, we have Juliano, you’re most welcome, we have Nazar who is here, and Poncelet, who I just mentioned, will join, and Giovanni. Thank you very much for making time and so I think that we will already start our conversation. I am keeping fingers crossed that we don’t have any technical glitches and please just give me a note if you can’t hear me or you are having trouble and if anyone walks in and sits near you, let them know we are on channel 4 under tech, please let people know that we are on channel 4. So, since Poncelet is not on yet, I will come to you Giovanni to open our discussions, and the question to you is how have existing regulations addressed disinformation during elections? What are the practices, you know, that balance combating false information with protecting freedom of expression?

Giovanni Zagni: Now it’s on, now it works. Okay, thank you for this question, good afternoon, and I will answer by making reference to what I know best, which is the European Union case, which is peculiar in many ways. First of all, the European Union is not the place where the majority of very large online platforms are based, which is clearly the US, but the EU has at the same time always taken a very proactive approach when it comes to regulation. A common saying in this area is that the US innovates while the EU regulates. Secondly, 2024 was the year when about a dozen European states went to the polls for a variety of national elections, from Spain to Finland and from Greece to France, but also when a European-wide common election, so to say, took place for renewing the European Parliament, the only directly elected body of the European Union. Thirdly, in 2024 new important EU regulations like the Digital Services Act, DSA, and Digital Markets Act, DMA, were not yet fully enforced because even if they have been approved by the relevant institutions, the process of implementing them is still ongoing, so they were not able to impact the electoral processes that took place this year. So how did the EU address the issue of disinformation in the crucial electoral year 2024? The main tool was the strengthened Code of Practice on Disinformation, which was promoted by the European Union. The Code was presented in June 2022 and it is a voluntary and co-regulatory instrument developed and signed by 34 signatories at the time of adoption. Who are the signatories? Players from the advertising ecosystem, advertisers, ad tech companies, fact-checkers, many but not all very large online platforms, civil society, and third-party organizations. A few names: Meta, Microsoft, Adobe, Avast, the European Fact-Checking Standards Network, the European Association of Communications Agencies, Reporters Without Borders, the World Federation of Advertisers, TikTok, Twitch, and Vimeo. All these signatories agreed to establish a permanent task force with the idea of ensuring that the Code adapts and evolves in view of technological, legislative, societal, and market developments. The task force is a rare place where representatives from the different stakeholders can exchange information, request specific action, and discuss the best way ahead. The Code therefore is the key instrument of the European Union’s policy against disinformation, and its two key characteristics are being a voluntary and co-regulatory instrument. So coming back to your question, the second half of it is how do you balance that with protecting freedom of expression? And the European way, so to say, is to have all the relevant stakeholders around the same table and not to impose any kind of direct intervention from the authorities on specific content, but rather to have a forum where potentially damaging cases or potential threats or things that need to be looked at are discussed, and then the platforms decide to take action or not. I’ll give you a very practical example to conclude. The recent elections in Romania that took place a few days ago made the headlines in Europe and beyond because, under the strong suspicion of foreign interference, they were annulled by the Romanian Constitutional Court and the first round of the elections has to be redone.
So during this process basically all the stakeholders that were involved in the Code decided to set up a rapid response system. What that meant was that there was a mechanism through which any fact-checker or civil society organization could say, look, in our day-to-day work we noticed that these particular suspicious activities happened on this particular social network platform. So now it’s up to you, my dear social network platform, to check whether this particular phenomenon violates the terms of use. So as you can see there is no direct intervention, no regulation or law by which you have to do something yet, but there is this co-regulatory and collective effort to work together as the stakeholders involved. Thank you.

Peace Oliver Amuge: Thank you very much for those very informative regulations that you mentioned and the collective actions that are being taken; I think it’s very key to have these frameworks in place when we talk about disinformation. We will park it there. I’ve been told that we have Poncelet in the room and I would like us to hear from Poncelet. As we all know, sometimes tech can be difficult, so it would be nice to hear from Poncelet. But also I wonder why we don’t have Poncelet on the screen, if you could let us have Poncelet. And Poncelet, if you can hear us, would you please just say something? Poncelet, are you able to hear us? No. Okay. I think our online moderator is trying to sort that out. And then I will come to you, Aisha. We’ve had Giovanni. Poncelet, can you just open your mic and say something?

Poncelet Ileleji: Yes, thank you very much. Peace, I’m here, sorry. I was waiting to be granted access. Can you hear me?

Peace Oliver Amuge: Yes, we can hear you. And would you, are you able to speak? We’d either hear from you or listen to Aisha.

Poncelet Ileleji: Yes, yes, you can definitely hear from me. And thank you all.

Peace Oliver Amuge: And one of the most… Poncelet, so this is the question that I would like you to take: share with us the role that traditional media and social media play during elections, how effective this is, and how regulations have been used to address these issues of disinformation.

Poncelet Ileleji: I think, speaking from a sub-Saharan point of view, you will notice that the role of social media in terms of disinformation and even misinformation is very important. We have to observe, and we also have to ask, why has this become so important? Most young people, most political parties, most lobbyists, what they use all over the world today to disseminate information has been social media. Whether it’s Twitter, whether it’s TikTok, whether it’s X, they have used all of this to disseminate information. And most people don’t naturally use mainstream media; they use social media as the way in which they get information. And one way, and a good example, we can use to combat this is making sure that within countries we have what are called fact-checking websites, like what we did in the Gambia. Coming into our last presidential elections, we worked with the Gambian Press Union to set up what we call a fact-checking website that was supported by UNESCO. So UNESCO has always been a good agency in supporting a number of countries in setting up fact-checking websites. And it’s important to have a bottom-up approach in training journalists at grassroots level, especially journalists working at community level using community radio, and how they can work with various fact-checking websites to be able to do this. Unfortunately, I don’t know why my camera is not coming on. It’s showing here, but this is the little intervention I’ll make for now and I’ll take any other question. Thank you.

Peace Oliver Amuge: Thank you. Thank you very much, Poncelet, for your intervention and for sharing. Like Giovanni, you also talked about fact-checking and how important that is amid election times and widespread disinformation. And you also mentioned how social media is an important tool that people use to access information, and it’s the same tool that is used to spread disinformation. We will park that, and I would like to hear... Yes, we can now see you in the room, Poncelet. So Aisha, I will come to you. How can tailored programs, initiatives and overall good values help people identify misinformation, engage diverse communities and ensure there is electoral integrity?

Aiesha Adnan: Yes. Hello, everyone. Great to be here, coming from very far, from the Maldives. And this is an interesting topic, because I come from an organization called Women Tech Maldives, where we initially started to support women and girls. And then we realized we needed to talk about safety, and when you talk about safety, you have to talk about everyone. That is where our work on this space, disinformation, actually began. Then we had the opportunity to communicate with a lot of organizations, and one of the areas we work on is identifying disinformation and doing media analysis. Okay, coming to the election. One of the interesting things highlighted was the last presidential election in the Maldives. Traditionally, we have had several election observer groups, and there was nothing like disinformation in most of the reports, but this time it was quite a bit different, because we saw that the major share of the disinformation came from online media and very little from the traditional media. So we know that this shift is happening. In a few years’ time, we won’t see much in the traditional media; it will go through social media. Okay, then you mentioned the initiatives. When you talk about initiatives, I know that there are a lot of tools available. But can they really fit all countries? No, they have to be designed in a way that fits the cultural norms.

Peace Oliver Amuge: Are we having a cut in the?

Aiesha Adnan: Okay, sorry about that. I hope you’ve heard some of my words, okay? All right, so when we talk about misinformation, what comes to my mind is: what are the ways that we can really tackle it? Right now, we are coming up with a lot of tools to debunk disinformation and everything that you see. But I would like to see us build a culture where we promote information integrity across everyone. And especially when we talk about elections, everyone says it’s the media, it’s the media spreading the information. Yes, of course, some part is the media, but it is the citizens who believe it. If they are not equipped with the knowledge and tools to really identify it, then that’s where we fail. Because it’s not only the election, it’s everywhere. So information integrity is an important factor to consider. And we as an organization have conducted several sessions with the political parties and with the parliamentarians as well: how can they support these kinds of processes within the communities? Because in the Maldives we have remote islands as well, and the councillors and the parliamentarians do really travel across, and that’s a way that we can connect with the communities and run more programs. In light of this, I also want to highlight two interesting guides. NDI has a very interesting guide to promote information integrity, and this guide has a lot of tools, like the tools they have supported to develop fact-checking and other frameworks as well. Another one I would like to highlight is the UNDP Strategic Guidance on Information Integrity; they have one as well. And going back to the kinds of initiatives that both UNDP and NDI have supported, I would like to highlight one of the initiatives that NDI took, in Georgia. They partnered with a local organization to address the spread of misinformation during elections, and the impact: the effort empowered citizens to make informed decisions and reduced the effectiveness of misinformation aimed at influencing public opinion and behavior. So we know that these kinds of interventions actually help. So one is definitely going for fact-checking tools and empowering citizens to make the right choices, and also involving the media and the political parties themselves. And on another note, I would also like to highlight that in the Maldives we are currently working with the community to develop a fact-checking system, so that hopefully this will be a way for smaller countries like us, where we have a very small population and we speak one language. Most of the time what happens is, when this kind of information is posted online in our native language, you know, the algorithms cannot pick it up, so that is challenging for us. I hope that all these platforms do consider us, because we also need to exist, and we need that support from everyone. And at the national level, we are doing our work, but it is labor-intensive, so that support is required. Thank you.

Peace Oliver Amuge: Thank you very much, Aisha, for pointing out those gaps and the need for capacity-building, and also empowering, you know, the citizen when we talk about issues of countering disinformation. And we are in a multi-stakeholder space, and I think it would be nice to talk about that a little bit. I would come to you, Giuliano, and my question to you is that, how can multi-stakeholder partnerships and public-private collaborations improve efforts to combat election disinformation and expand media literacy programs to reach all parts of the society?

Juliano Cappi: Thank you so much. Well, I decided to bring a reflection here based on fiction. In Orwell’s novel 1984, he states: who controls the past controls the future; who controls the present controls the past. Orwell builds a dystopian reality where a state institution, the Records Department, reviews every single piece of stored information and rewrites it, if necessary, to conform to the party’s vision. He reminds us of the sets of institutions, disciplines, and propositions built throughout human history to organize discourse. More importantly, he sheds light on the social disputes to control and impose discourse as a strategy for maintaining or gaining power. Well, at this point, we must recognize that the internet has brought the challenge of organizing speech to an entirely new level. This is a challenge for society as a whole. In this sense, I understand that multi-stakeholder spaces are especially important to foster social arrangements capable of dismantling a highly developed industry, the disinformation industry. At CGI, we have been working on the production of principles and guidelines to combat disinformation. I guess what happened in 2018 in England was like a trigger to promote debate at the international level on disinformation. Even before that, in 2017, the United Nations signed a declaration on freedom of expression, fake news and disinformation. In the same year, the European Union sponsored a first major study on disinformation, Information Disorder: Towards an Interdisciplinary Framework for Research and Policymaking. One year after that, in 2018, the European Union created the High-Level Group on Fake News and Online Disinformation, which produced an important first report called A Multidimensional Approach to Disinformation. This is kind of important. I ask you please to go to the next slide, because it is in 2018 that the Brazilian Internet Steering Committee created the Internet and Democracy Working Group, which produced a first publication, the Fake News and Elections Guidebook for Internet Users. The guide outlined the general problem, presented concepts, discussed risks and offered guidelines to prevent the distribution of disinformation. In 2018, we had the election of Jair Bolsonaro in Brazil. The working group carried on working on the challenges imposed by disinformation and produced the report on disinformation and democracy, discussing international initiatives to combat disinformation and the industry behind it, and proposing 15 directives to address the phenomenon while promoting what we now call information integrity. In 2021, the working group presented another work, contributions to combating disinformation on the Internet during electoral periods, looking specifically at electoral periods. Additionally, CGI.br participates in the Superior Electoral Court’s task force to combat disinformation. All this work that you can see here in the presentation is available on the Internet. It is in Portuguese, but nowadays we can easily translate a PDF into any language through Internet applications. And also, if any country or working group is interested, we could translate this work. But we should consider that it tries to address a specific reality, which is what happens in Brazil. Well, still, my feeling is that we can do more. We should ask ourselves about the impact of the work carried out in multi-stakeholder spaces, and this obviously includes the IGF, to combat disinformation.
I believe we may find opportunities to improve processes and foster intersectoral collaboration to better integrate these different forums. We carry out a lot of work every year. Lots of people come to the IGF, and this is the time for us to recognize that we have to think about how to organize this, considering that in some measure we have failed as a society to combat disinformation. Thank you.

Peace Oliver Amuge: Thank you very much, Juliano, for that. I think you shared the need for research, harmonizing strategies and efforts, and embracing collaboration. We are about to open up to hear from you all, but let’s just hear from Nazar first. And Nazar, we would just like to hear from you: what role should digital platforms and tech companies play in reducing disinformation during elections, and how can regulations ensure that they are held accountable?

Nazar Nicholas Kirama: Thank you for posing this question. Can you hear me? Okay, thank you so much for organizing this. My name is Dr. Nazar Nicholas Kirama from Tanzania, and before I answer that question, I would like to give a little background on why we are discussing misinformation in the electoral context. It is because elections have enormous power either to put people in power or to disempower candidates. They are very fertile ground that attracts misinformation, disinformation and fake news. It is a sort of space where a lot of activity happens in a very short period of time during the campaign. If you look, for example, at the elections that we had in the United States this year, there was a lot of information, misinformation, disinformation, fake news. And I wouldn’t want to comment much on that, but I think this led to one of the candidates’ campaigns being sunk, you know, because of misinformation. If I were to look at what the tech companies and platforms need to actually do to mitigate the situation, to ensure that the electoral processes are free of all these bugs in terms of disinformation, misinformation and fake news, there are several areas where the platforms and the tech companies need to either invest or do more. Number one is actually being proactive in terms of content moderation. They need to implement advanced algorithms that will actually detect and flag any misleading information, so that people who are going to elect a candidate can understand whether the information being put out there about a campaign is the right kind of information. And before I proceed, I would like to say that the tech companies and platforms have, in terms of electoral processes, become sort of electoral commissions without regulation and without anybody to answer to. Because they are out there, the campaigns or the countries that are affected have little or no ability to make the platforms and the tech companies answerable for the content that is being posted by either the proxies of the campaigns or bad actors working against a certain campaign. So I think regulation of what they do is very important. Number two is transparency in algorithm policies. These tech companies and platforms need to be transparent about the algorithms they use, so that information is clear and out there, and so that misinformation and other such content about candidates, for example, is put away. Number three is collaboration with fact-checkers. The platforms should collaborate with independent fact-checking organizations to verify the accuracy of content and label or demote false information. This partnership ensures objectivity and credibility. Because these tech platforms have the ability to make things go viral, the ability to reach thousands and millions of people around the world and within a country, it is important that, through this transparency and this partnership and collaboration with fact-checkers, we are able to undo all these deepfakes, fake news and disinformation and make sure that the right kind of information about a candidate is being put out there. So that is very critical.
Number four is about having public awareness campaigns, within the platforms themselves, within the country, and among the actors, the politicians and the regular consumers on the tech and media platforms. Awareness is very key, and ordinary citizens need to know when they should believe that a piece of information about a certain candidate is true, for example. So that is very important. There have to be some kind of real-time safeguards during electoral periods. The tech platforms have to collaborate even with the electoral commissions to ensure that these kinds of safeguards are put up and that the information that is out there is accurate. Then, regulatory measures to ensure accountability. The platforms and tech companies have got to be accountable for the content that is posted on their platforms online. This is because sometimes the tech companies and platforms tend to hide behind the veil of freedom of expression and all of that. But freedom of expression does not exempt them from accountability. They have to be accountable in some way for content that is posted and which is false. So I think that is very key in terms of making sure deepfakes, misinformation and disinformation are rooted out of electoral processes, so that in the end citizens enjoy the right to elect candidates not because of misinformation or information from bad actors against a candidate, but because of the actual policies that a candidate puts out there for ordinary citizens to consume. Because this misinformation, disinformation, fake news and deepfakes have the ability to actually disenfranchise the citizens of a particular country. If the tech platforms are not accountable for the content, that means they are able to elect a president of a country, a member of parliament, a judge, for example. In the US, I know judges get voted into office. So that’s why I said at the beginning that these tech platforms, tech companies and media platforms have become electoral commissions without regulation, without guardrails to ensure that the content delivered on their platforms resonates with what is actually happening on the ground. With that, I thank you.

Peace Oliver Amuge: Thank you very much, Nazar, for your input and for highlighting some of the issues that are happening, and also the steps that need to be taken by platform owners and other stakeholders, for instance valuing and focusing on accountability, transparency, fact-checking and awareness creation. I must say that, as someone who works in the African region, I followed the elections that were happening across Africa. We had over 18 countries going through elections, and disinformation was such a big threat to human rights: freedom of expression and civic engagement were undermined, and, as Nazar just mentioned, people were deciding on the basis of rumours and fake news. Access to information matters too; we use digital platforms a lot to access information, and these were things that were very much undermined during elections. So we will open up a bit. Do we have anyone online? If you have a question here, you can raise your hand. One, two, three. But let’s just check if anybody is online. Is there anyone? Okay. So you go first.

Audience: Thank you so much. I have a question for Giovanni. You spoke about the platform, an inclusive platform that more or less assesses whether misinformation and disinformation impacted the result of an election. Are its conclusions binding for the decision-makers? Because the decision-makers may have an interest in the election; they may be candidates, and the sitting government may also be a candidate for a new term. So are the conclusions of this platform binding for the decision-makers?

Peace Oliver Amuge: Thank you. Let’s take the three questions and then the fourth one here. OK.

Audience: Thank you. My name is Nana. So I listened, and there’s a lot of conversation around the right candidate. And that sounds like bias towards a particular set of values, because I have observed elections for at least the past 15 years, and I’ve also worked on disinformation and misinformation for a long time. And the fact remains that in every election, all parties contribute to misinformation. Our personal biases might tend to show us more from one side. I say our personal biases because even online, the cliques, the people we follow, help tell our algorithms what to bring to our feed. So I’m wondering if we’re looking at it objectively. Then secondly, around platform accountability, I agree with you. Platforms should be held accountable. But I’m concerned: accountable for what? We have to be very clear what we are asking platforms to be held accountable for. If we decide to start holding platforms accountable for all forms of fake news posted on the platform, it’s a roundabout way to stifle free speech. I say this because a platform might have the ability to pull down news that has been verified by fact-checkers as fake news. But if there’s no means of verifying it, it won’t pull it down just because of my opinion. You see my point? Recently, X, formerly Twitter, introduced Community Notes, and anyone who has Community Notes access will know that people even misuse it. And this is supposed to be the court of public opinion. We have to be very careful. I say this because, as a Nigerian, different African governments have found ways of trying to regulate and hyper-regulate social media. When we open a door, we have to be very clear about where that door is pointed, so that we don’t open a Pandora’s box that will be very difficult to shut. And I agree with you around advanced algorithms to detect and flag misinformation. But it’s also very, very important that, for all algorithms, there is explainability. Because of cultural context, there are words that, when I say them, mean something else than when maybe someone who’s sitting in Italy says them. I can tell someone, oh, you’re so silly, you’re so foolish, and it’s banter, like we call it. But those words amount to abuse and insult in a different language. So algorithms, while advanced, may not be the best people, or the best tools, rather, not people, to flag misinformation. I do 100% agree with working and collaborating with fact-checkers, because it’s very important that we have the humans, the persons, who work on this issue. So yeah, this is my contribution, just saying we should be a little bit more circumspect in some of the things that we’re proposing. Thank you very much.

Peace Oliver Amuge: Thank you. There was another hand last, and then we come to you.

Peterking Quaye: Yeah, OK. So thank you so much. My name is Peterking, for the record, from the Liberia IGF. And there are two interventions I would like to push across. First, on what the other colleague just said about platform regulation: I can attest to that. Platforms like Meta, I work with them on something similar to that, based on election content that is more of misinformation or disinformation. So in terms of electoral content, they are doing something to regulate content: when content that has not been fact-checked is reported, they pull it down, based on the fact that, yes, someone locally is flagging this particular content as not being true. And then the other issue, in terms of the electoral context with respect to misinformation, is that I think there is a need for consistent and sustainable national civic education. Because basically, you tell your local story better than anybody, and with constant and sustainable education nationally, we can push back against misinformation and disinformation in the electoral context. These are my two interventions, please.

Peace Oliver Amuge: Thank you. So there is a hand behind. Since the mic is close by, let him take it there, and then you have it last. Thank you for understanding. Thank you for this. Can I move? Can I go on? Yes, please. Yes.

Audience: OK. I’m Kosi. I’m a student from Benin. From my understanding, it’s not normal to say the platform will be responsible for the information I put online. If I put something online, I am supposed to be responsible for that; I am supposed to answer for that. Platforms can, at any time, if a government requests information, share my name and the information I share on each platform. It’s very important for everybody to know that, because information is freedom. I know the information I’m sharing is meant to do something; it’s for information. If what I share is meant for destruction, I am supposed to answer for that also. That is very important. But all the platforms we have now are doing their own regulation process. Let’s let them do it better. Thank you. Hello, I’m Tim. Thank you for giving me the possibility to talk. You know, I’ll tell you my favorite joke. What’s the difference between a conspiracy theory and the truth? Six months. Remember COVID times, how much we heard about COVID-19, how much of that turned out not to be factually correct, or even to be incorrect, and how some of what we called conspiracy theories before turned out to be the truth. And here I want to highlight the fact that we should be especially precise about what we mark as disinformation or misinformation or even fakes, especially when we are talking about elections. Because, as was mentioned before very correctly, elections are not about fact-checking. They’re about a political battle and political bias, where all the parties, directly or indirectly, are basically fueling the misinformation narratives in the media landscape just to win. And sometimes they are supported by the establishment and the authorities in power, because they just want to sit in this chair for another four years. It’s obvious. So I think we should be very precise here. And to tell you more about this, we have established a fact-checking association here in Russia, not here, there in Russia, with the intent to share all the possible experience we have in fighting fakes and disinformation, and moreover to share our tools and platform on an absolutely free and absolutely open basis. There is even a tool to detect deepfakes. So basically you upload a video and it highlights a particular scene in the video, saying that the probability of some face being deepfaked is like 70% or 80% or 97%, whatever. So I advise everybody to be especially precise about what we label fake or not, understanding that political fakes and electoral fakes are most of the time a battle in which somebody is trying to get more power, not to get to some point of truth. And remember the lessons of COVID-19, where lots of conspiracy theories, and even claims for which people were persecuted and sometimes fined or even jailed, turned out to be the absolute truth. Thank you.

Peace Oliver Amuge: Thank you. So let's give the panel some time now, and Giovanni, we will come to you first.

Giovanni Zagni: Thank you. I'll answer first of all the question from the gentleman in the second row, talking again about how the thing is framed in Europe. I have to be very clear that the occasion when a candidate says something false is not something that is addressed by the task force. Absolutely not. Neither the task force, nor the Code of Practice on Disinformation, nor the European Digital Media Observatory, of which I'm a member, none of these entities, and in no way the general framework in Europe, has any interest in framing the political debate or labelling political expression in any sense. All the candidates in Europe can say basically whatever they want, and there will not be any direct intervention established by this framework. The things that the Code and all the other stakeholders are involved in are things like transparency in funding for political advertising, for example, or flagging cases of possible coordinated inauthentic behaviour. So, for example, a civil society organization or a fact-checking organization can bring to the table something they have noticed, say, a suspected bot campaign, or a specific account that is particularly active in spreading demonstrably false news. Then there is no coercive way to oblige the platforms to do anything; it is up to the platforms to decide if that specific campaign, that specific instance, that specific behaviour violates their terms of use. So this is how things currently stand at the European level. The second thing I wanted to mention is a thought about how countries should be regulating social media. My personal opinion, and it is probably not that of all the members of the panel or all the people in the room, is that countries should stay as far away as possible from directly regulating, through law, anything like the spreading of false content on the internet per se, because saying something that is false should not in itself be punishable by law, with some exceptions like libel or slander. Generally speaking, freedom of expression has to be the most important value. At the same time, though, I wanted to point out that basically no platform or means of communication is completely unfiltered, or if it is, it very soon turns into something that nobody, I mean nobody sane of mind, wants to be in. In all the countries that I know of there are strong rules on some kinds of speech, so unabated free speech does not exist. In terms of what we should do when it comes to disinformation, my personal idea is that something like labelling is probably the best thing to do. Fact-checking, in my opinion, is not, thank you, you're very kind, helping me out with this, fact-checking, in my opinion, is not about handing out cards saying who is telling the truth and who is saying something false; it is more about providing contextual information to the user and saying, okay, this is what's out there. You can say that we never went to the moon, that's fine, but keep in mind that according to all this list of reputable sources, that doesn't appear to be what really happened. Then it's up to you to make up your mind, to evaluate whether those sources are fine. But still, I think that providing more information is always better than providing no information.
But this is just my personal opinion. And with that, I'll shut up.

Peace Oliver Amuge: Thank you very much, Giovanni. Juliano, do you want to take some questions?

Juliano Cappi: Yes, thank you. Well, I was trying to make a point about what we are dealing with here, and what our colleague has said has everything to do with that. We have a huge dispute over power. Disinformation has everything to do with power: those who have power want to maintain it, and those who want power want to gain it. Then I would like to address a few points that were mentioned here and in the panel. First, I cannot say that there is no bias in platform business models, considering who has been gaining power around the world over the last ten years. In Brazil, in Europe, in the United States, and in many other countries, we can see extreme-right groups gaining power in Congress and in the media. It is not just political power, it is communication power. So I think there is a relationship between the kind of business model established in digital platforms and the advancement of certain political views. There is bias; it is biased, and we cannot just imagine that there is no bias. This is important because we can try to investigate where the money that finances the industry of disinformation is coming from, and this is very important. Following the money is another thing we have to do. We cannot turn a blind eye to who is financing disinformation. The second point is that I would not be concerned about an excess of regulation at this time, because any regulation is so difficult to get. Despite the fact that Europe has done a great job recently with the DMA and the DSA, even in European countries the challenge of producing any regulation is still great. And in Brazil we have no platform regulation at all; this is a fight we are trying to take on, and it is very hard. Of course, we should consider that regulation for the digital era should be based on principles, and I would like to bring in another principle. I very much like the idea of trusted flaggers that my colleague here brought up, but there is a principle behind the European regulation which is systemic risk. This is very, very powerful, because it is difficult to establish specific kinds of content, to believe that we can, through algorithms, find what is wrong and what is right, or what is true and what is conspiracy theory. We will not do that, but we can hold companies accountable for the systemic risks they put in place in society. I believe we can find a fair equilibrium for regulating content moderation through this principle of systemic risk. There is a name for this in Portuguese; I am trying to remember the name under which this principle has been used in some regulation, but I forget it now. It is something like a duty of care, and it is quite important, I would say. And finally, to finish, I would like to promote the consultation that we have carried out in Brazil on digital platform regulation. We established three pillars for regulating digital platforms. The first is disputability, as a concept from economic theory: we cannot sort out these disinformation problems while we still face the impact of companies that concentrate so much market share, like Instagram and WhatsApp, so we have to face the challenge of building disputability. The second is digital sovereignty: we have to look at infrastructure. There is a concept of digital public infrastructure which is gaining a lot of attention right now.
But it is important to understand whether infrastructures, whether private or public, serve the public interest or business interests. Where some infrastructure serves business interests over the public interest, then we should regulate that infrastructure, even though it is private. And, to finish, we have to regulate content moderation, and I think this idea of systemic risk is a good starting point for discussing what kind of regulation we want in different countries. Thank you.

Peace Oliver Amuge: Thank you. Let me just check whether Poncelet wants to intervene. Is Poncelet still online? Poncelet, if you can hear me, do you have any comments or responses to the questions?

Poncelet Ileleji: Yes, thank you very much. I think, overall, we have to realize that any disinformation in any electoral context impacts the common man, those at the grassroots level. So whether it's platforms, whether we use fact-checking sites, the most important thing is advocacy, so that communities know how misinformation and disinformation can affect their lives. And the only way to do it is by empowering people, especially those whom communities relate to at the grassroots level. Those people usually have community radios, so the power of community radio is still very important. People will always listen to what they hear from their own community, and we have to have avenues to empower those people and get all stakeholders involved. Social media has been a game changer. I remember way back in 2006 when Time Magazine named "You" as Person of the Year. If you look at it today, it is still very relevant: the Person of the Year is still you, because the amount of information and disinformation online has contributed to a lot of very unfortunate things in the world, especially in electoral processes. So let us see how we do. I don't have a one-size-fits-all answer, but within our own context, I know the main focus should be addressing people at the grassroots level. Let no one be left out. Thank you.

Peace Oliver Amuge: Thank you, Poncelet. Yes, you can go on.

Audience: Thank you so much. I wanted to respond a little bit on the "right kind of candidate" point. She was talking about the right kind of candidate. I was contributing from the perspective of the level of information available, for example, about a candidate, not the right candidate in the sense of the ideal candidate for that post. I meant that when there is misinformation or disinformation against one candidate, the chances are that one of the two candidates will be disenfranchised in terms of the right kind of information at that time. So I did not mean having the right candidate, the ideal candidate for the post; I wanted to clarify that. And one of the things we have to look at is that regulation has been there since the world came into being, and I cannot imagine a space with no regulation at all. There has to be some form of regulation. What we should be against is over-regulation in whatever we are doing. For example, if you over-regulate people who are being innovative, or certain innovations, you will stifle competition and, second, you will stifle the growth of that particular space. So I think regulation, transparency and accountability will make the space a level playing field where everybody can interact and have the right, for example, to have your content read, and also the right not to have anybody step on your toes. That is what we are looking at. We are not looking at, for example, banning all these tech companies or platforms because of content posted by end users. So there has to be some kind of regulation, because just imagine if I walked into this room naked. There is no rule written on the door saying you cannot walk in here naked, but if I walked in here naked, people would react. So we have to have some form of regulation, and these regulations have to be facilitative regulations: they have to let the tech companies and the platforms do their work, and let people read the content. But then there is the red line, when they allow their platform to be used especially for disinformation, because disinformation is intentional, unlike misinformation. Disinformation is intentional: I create content and disseminate it in order to disparage, for example, your personality. If you are a candidate, I say this guy is a rapist, for example, but the guy is not a rapist. If that content continues to sit on the platform, the impact on the end users will decimate that particular candidate. So I think, in my opinion, there has to be some kind of regulation and accountability, and collaboration is very key, engaging the fact-checkers to ensure that the information being put out there, maybe by third parties, is the correct kind of information about a particular candidate. So awareness and collaboration are very key in terms of where we are going in the future. That would be my 50 cents contribution.

Aiesha Adnan: This has become a very, very interesting discussion now. For me, I don't really believe that we should try to make the platforms accountable for content actually uploaded by someone else. When we talk about regulation, we might say this is a simple thing, but we know how humans, and a lot of people with power, try to influence it. That is the reason I don't believe we should try to force the platforms to remove content. All these platforms are run by community guidelines, which are available and visible to everyone. And I believe, with that, it is more about society's role in helping to debunk disinformation through the kinds of awareness campaigns we are talking about, because we cannot just let the platforms decide whether something is true or not, especially when they do not have enough information. So that is my take on that. Another point is that we have talked a lot about regulation and maybe holding the platforms accountable, but we have a much bigger job to do. That is, as some members of the audience mentioned, civic education, especially on information integrity, through information literacy programs. Because this does not only impact elections; it is a general need to be able to identify what misinformation and disinformation are and to really understand deepfakes and so on. So that is what I believe we should be focusing on, with less reliance on influencing the platforms. Thank you.

Peace Oliver Amuge: Thank you very much. I think we have had a very good conversation. Aiesha is starting something, but we can't open it up now. I should say that she does not believe we should put the emphasis on content moderation, because why should platforms moderate something they did not put there? I don't know what you think. We have only 10 minutes, so we can't go into that conversation; it's a debate for another day. Otherwise, I think I have had fun moderating this session. Before I sum up some of the things that have come up, I want to give you just one minute each, if you want, for your parting shots. We can start with Aiesha and go down the line. Very quickly, thank you.

Aiesha Adnan: Thank you. This has been an interesting discussion, and it means a lot, coming from a very small population, to be able to be here and talk about some of the challenges that we have. I hope that some of you here will consider us when you design your tools and other interventions.

Nazar Nicholas Kirama: Thank you, Madam Chair. My parting shot would be to direct more effort towards collaboration, ensuring that everybody out there, the end user, the ordinary citizen who is impacted by disinformation, is reached, whether we deploy facilitative regulation or engage in dialogue with the platforms and tech companies on how they can root out the scourge of disinformation. I think that is important for all of us. Awareness, and making sure we mitigate from the end-user perspective, is very key as we move forward, and I wish you luck in where you are going. I hope you embrace awareness, fact-checking, platform accountability and facilitative regulation. Thank you.

Juliano Cappi: Thank you. Information integrity and disinformation are at the base of the issues that we have to sort out, and it is not only sustainability and democracy that are in question. Just to finish: what we see is a kind of cynical agreement over general values like privacy and freedom of expression that prevents us from debating the problems some actors are causing to society. We have to stop this. I have been at the IGF for ten years and I can't stand it any more: whenever we try to advance, someone says we would be attacking privacy. Seriously, my friends, this has become sort of ridiculous. So let us start to face the problems and tell each other what we have to sort out. Seriously, we cannot give up on society. And I would like to mention that the Brazilian Internet Steering Committee has a booth here, and we have hard copies of the work we have done on the consultation on digital platform regulation. It gives a picture of the main disputes in Brazil, and I will give you a copy if you are interested. This is what we need to do: bring up what the disputes in place are and try to sort out those problems. I'm sorry for this final speech, but I am really concerned. Thank you.

Giovanni Zagni: So my final thought is that I do think there is a strong regional and national specificity to these problems. The issue of disinformation is not the same even inside Europe. The problems that I can observe in my country, as an Italian, are probably completely different from what a Norwegian sees, well, they are outside the European Union, but let's say a Scandinavian country, or from Eastern Europe. What happens in each of these regions is very specific, and I'm not even thinking about what is happening in the Maldives or in Tanzania or in Brazil. So one thing that I take away from this session is how the issue of disinformation can seem like a rather academic and theoretical matter from some perspectives, and one of the most pressing and urgent issues from another perspective, in another area of the world. There have been cases in the past few years when disinformation has had such a concrete impact as to harm people and to be a real problem for the whole of society. And I do think that one of the most difficult things is to agree on some common ground at the global level. It will probably, I'm sorry about that, need much more listening and much more discussion. And I think it's great that a forum such as the IGF exists to have this kind of discussion.

Peace Oliver Amuge: Thank you. And we will go right over to your points. Would you like to give us your parting shot in just one minute? Okay, we will get back to you. I am not going to be on time, so let me wrap up. Thank you very much to the panelists, and also to you, the audience, for your time, your attention, and your comments and input to this discussion. Among the things that have come up are public awareness; collaboration, building synergies and using reports; regulation with a multi-stakeholder approach; and encouraging and promoting fact-checking. Thank you all. Have a good evening, ladies and gentlemen. Bye, all.

Giovanni Zagni

Speech speed

118 words per minute

Speech length

1573 words

Speech time

797 seconds

EU Code of Practice on Disinformation as voluntary co-regulatory instrument

Explanation

The EU has implemented a Code of Practice on Disinformation as a voluntary and co-regulatory instrument. This code involves various stakeholders including platforms, advertisers, and fact-checkers to collectively address disinformation issues.

Evidence

34 signatories including Meta, Microsoft, TikTok, and fact-checking organizations

Major Discussion Point

Regulations and frameworks to address disinformation

Agreed with

Juliano Cappi

Nazar Nicholas Kirama

Agreed on

Need for multi-stakeholder collaboration

Tension between combating false information and protecting free speech

Explanation

There is a tension between efforts to combat false information and the need to protect freedom of expression. Regulation of online content should prioritize freedom of expression while providing contextual information to users.

Evidence

Suggestion of labeling and fact-checking as preferable to content removal

Major Discussion Point

Challenges in addressing disinformation

Regional and national specificity of disinformation problems

Explanation

The issue of disinformation varies significantly across different regions and countries. What is a pressing issue in one area may be a more theoretical concern in another, making it challenging to agree on common ground at a global level.

Evidence

Differences in disinformation issues within Europe and between different parts of the world

Major Discussion Point

Challenges in addressing disinformation

Poncelet Ileleji

Speech speed

122 words per minute

Speech length

523 words

Speech time

255 seconds

Need for fact-checking websites supported by organizations like UNESCO

Explanation

Fact-checking websites are crucial in combating disinformation during elections. Organizations like UNESCO have been supporting the establishment of such websites in various countries.

Evidence

Example of UNESCO supporting the setup of a fact-checking website in Gambia for their last presidential elections

Major Discussion Point

Regulations and frameworks to address disinformation

Agreed with

Giovanni Zagni

Nazar Nicholas Kirama

Agreed on

Importance of fact-checking in combating disinformation

Empowering citizens and communities at grassroots level

Explanation

Empowering people at the grassroots level is crucial in combating disinformation. Community radios play a vital role in disseminating accurate information to local communities.

Evidence

Importance of community radios in reaching people at the grassroot level

Major Discussion Point

Role of different stakeholders in combating disinformation

Aiesha Adnan

Speech speed

136 words per minute

Speech length

1250 words

Speech time

548 seconds

Importance of tailored programs fitting cultural norms

Explanation

Programs and initiatives to combat disinformation should be designed to fit the cultural norms of specific countries. One-size-fits-all solutions may not be effective in addressing disinformation across different cultural contexts.

Evidence

Example of the Maldives presidential election where disinformation mainly came from online media rather than traditional media

Major Discussion Point

Regulations and frameworks to address disinformation

Importance of civic education and information literacy programs

Explanation

Civic education and information literacy programs are crucial in combating disinformation. These programs help citizens identify misinformation and make informed decisions during elections.

Major Discussion Point

Role of different stakeholders in combating disinformation

Differed with

Nazar Nicholas Kirama

Differed on

Role of tech platforms in content moderation

Juliano Cappi

Speech speed

99 words per minute

Speech length

1531 words

Speech time

923 seconds

Brazilian Internet Steering Committee’s guidelines and reports on combating disinformation

Explanation

The Brazilian Internet Steering Committee has produced several guidelines and reports on combating disinformation, especially during electoral periods. These documents provide directives and contributions to address the phenomenon of disinformation.

Evidence

Fake News and Elections Guidebook for Internet Users, report on disinformation and democracy, contributions to combating disinformation on the Internet during electoral periods

Major Discussion Point

Regulations and frameworks to address disinformation

Multi-stakeholder partnerships and collaboration between fact-checkers and platforms

Explanation

Multi-stakeholder partnerships and collaboration between fact-checkers and platforms are essential in combating election disinformation. These collaborations can improve efforts to verify information accuracy and label or demote false information.

Major Discussion Point

Role of different stakeholders in combating disinformation

Agreed with

Giovanni Zagni

Nazar Nicholas Kirama

Agreed on

Need for multi-stakeholder collaboration

Differed with

Nazar Nicholas Kirama

Unknown speaker

Differed on

Regulation of tech platforms

Power dynamics and political biases in spread of disinformation

Explanation

The spread of disinformation is closely tied to power dynamics and political biases. Those in power often use disinformation to maintain their position, while those seeking power use it to gain influence.

Evidence

Observation of extreme right groups gaining power in various countries over the last 10 years

Major Discussion Point

Challenges in addressing disinformation

Nazar Nicholas Kirama

Speech speed

101 words per minute

Speech length

1091 words

Speech time

646 seconds

Proactive content moderation and transparency in algorithm policies by tech platforms

Explanation

Tech platforms should implement advanced algorithms for proactive content moderation to flag and detect misleading information. They should also be transparent about their algorithm policies to ensure clarity in how information is presented.

Major Discussion Point

Regulations and frameworks to address disinformation

Agreed with

Giovanni Zagni

Poncelet Ileleji

Agreed on

Importance of fact-checking in combating disinformation

Differed with

Aiesha Adnan

Differed on

Role of tech platforms in content moderation

Accountability of tech platforms and social media companies

Explanation

Tech platforms and social media companies should be held accountable for the content posted on their platforms, especially during electoral periods. This accountability is crucial to ensure the integrity of electoral processes.

Major Discussion Point

Role of different stakeholders in combating disinformation

Agreed with

Giovanni Zagni

Juliano Cappi

Agreed on

Need for multi-stakeholder collaboration

Differed with

Juliano Cappi

Unknown speaker

Differed on

Regulation of tech platforms

Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Caution against over-regulation that could stifle innovation

Explanation

While some regulation is necessary, over-regulation should be avoided as it could stifle innovation and growth in the digital space. A balance needs to be struck between regulation and facilitating innovation.

Major Discussion Point

Role of different stakeholders in combating disinformation

Differed with

Nazar Nicholas Kirama

Juliano Cappi

Differed on

Regulation of tech platforms

Difficulty in objectively identifying misinformation in political contexts

Explanation

It is challenging to objectively identify misinformation in political contexts as all parties contribute to misinformation during elections. Personal biases can influence what is perceived as misinformation.

Major Discussion Point

Challenges in addressing disinformation

Need to consider cultural context in algorithmic content moderation

Explanation

Algorithmic content moderation needs to consider cultural context as words and phrases can have different meanings in different cultures. This is crucial to avoid misidentifying content as misinformation or abuse.

Evidence

Example of words that may be considered banter in one culture but insults in another

Major Discussion Point

Challenges in addressing disinformation

Agreements

Agreement Points

Importance of fact-checking in combating disinformation

Giovanni Zagni

Poncelet Ileleji

Nazar Nicholas Kirama

EU Code of Practice on Disinformation as voluntary co-regulatory instrument

Need for fact-checking websites supported by organizations like UNESCO

Proactive content moderation and transparency in algorithm policies by tech platforms

Multiple speakers emphasized the crucial role of fact-checking in addressing disinformation, whether through voluntary codes, dedicated websites, or platform policies.

Need for multi-stakeholder collaboration

Giovanni Zagni

Juliano Cappi

Nazar Nicholas Kirama

EU Code of Practice on Disinformation as voluntary co-regulatory instrument

Multi-stakeholder partnerships and collaboration between fact-checkers and platforms

Accountability of tech platforms and social media companies

Speakers agreed on the importance of collaboration between various stakeholders, including platforms, fact-checkers, and regulatory bodies, to effectively combat disinformation.

Similar Viewpoints

Both speakers emphasized the importance of localized, culturally-sensitive approaches to combating disinformation, focusing on empowering communities at the grassroots level.

Aiesha Adnan

Poncelet Ileleji

Importance of tailored programs fitting cultural norms

Empowering citizens and communities at grassroots level

Unexpected Consensus

Caution against over-regulation

Giovanni Zagni

Unknown speaker

Tension between combating false information and protecting free speech

Caution against over-regulation that could stifle innovation

Despite coming from different perspectives, both speakers cautioned against excessive regulation that could potentially infringe on free speech or stifle innovation, highlighting a shared concern for balancing regulation with other important values.

Overall Assessment

Summary

The main areas of agreement included the importance of fact-checking, multi-stakeholder collaboration, and culturally-sensitive approaches to combating disinformation. There was also a shared concern about balancing regulation with free speech and innovation.

Consensus level

Moderate consensus was observed on the need for collaborative efforts and localized strategies. However, there were differing views on the extent of platform accountability and the appropriate level of regulation. This suggests that while there is agreement on the importance of addressing disinformation, the specific methods and extent of intervention remain contentious issues requiring further dialogue and research.

Differences

Different Viewpoints

Role of tech platforms in content moderation

Nazar Nicholas Kirama

Aiesha Adnan

Proactive content moderation and transparency in algorithm policies by tech platforms

Importance of civic education and information literacy programs

Nazar argues for proactive content moderation by tech platforms, while Aiesha emphasizes the importance of civic education and information literacy programs rather than platform-led moderation.

Regulation of tech platforms

Nazar Nicholas Kirama

Juliano Cappi

Unknown speaker

Accountability of tech platforms and social media companies

Multi-stakeholder partnerships and collaboration between fact-checkers and platforms

Caution against over-regulation that could stifle innovation

Nazar and Juliano advocate for stronger accountability and collaboration for tech platforms, while the unknown speaker cautions against over-regulation that could stifle innovation.

Unexpected Differences

Objectivity in identifying misinformation

Unknown speaker

Nazar Nicholas Kirama

Difficulty in objectively identifying misinformation in political contexts

Proactive content moderation and transparency in algorithm policies by tech platforms

The unknown speaker unexpectedly challenges the idea that misinformation can be objectively identified, especially in political contexts, which contrasts with Nazar’s advocacy for proactive content moderation by tech platforms.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of tech platforms in content moderation, the extent of regulation needed, and the most effective approaches to combat disinformation (platform-led vs. education-focused).

Difference level

The level of disagreement is moderate. While speakers generally agree on the need to address disinformation, they differ significantly on the methods and responsibilities of various stakeholders. These differences highlight the complexity of addressing disinformation globally and the need for nuanced, context-specific approaches.

Partial Agreements

Both speakers agree on the need for addressing disinformation, but Giovani emphasizes a co-regulatory approach at the EU level, while Aiesha stresses the importance of tailoring programs to specific cultural contexts.

Giovanni Zagni

Aiesha Adnan

EU Code of Practice on Disinformation as voluntary co-regulatory instrument

Importance of tailored programs fitting cultural norms

Both speakers agree on the importance of educating the public, but Poncelet focuses on fact-checking websites, while Aiesha emphasizes broader civic education and information literacy programs.

Poncelet Ileleji

Aiesha Adnan

Need for fact-checking websites supported by organizations like UNESCO

Importance of civic education and information literacy programs

Takeaways

Key Takeaways

Disinformation during elections is a significant threat to human rights and democracy

Multi-stakeholder collaboration and public-private partnerships are crucial for combating disinformation

There is a need for tailored, culturally-appropriate approaches to address disinformation in different contexts

Fact-checking, media literacy, and civic education programs are important tools for countering disinformation

Regulation of tech platforms and social media companies is a complex issue that requires balancing free speech concerns

The role of traditional and social media in spreading disinformation during elections is significant

Resolutions and Action Items

Promote and support the development of fact-checking websites and tools

Implement civic education and information literacy programs to empower citizens

Encourage collaboration between platforms, fact-checkers, and other stakeholders

Consider adopting co-regulatory approaches like the EU Code of Practice on Disinformation

Unresolved Issues

How to effectively regulate tech platforms without stifling innovation or free speech

How to address the political biases and power dynamics inherent in the spread of disinformation

How to create global standards for addressing disinformation while respecting regional and national differences

The extent to which platforms should be held accountable for user-generated content

Suggested Compromises

Adopting a principle-based approach to regulation focused on systemic risks rather than specific content

Using labeling and providing additional context rather than removing content outright

Balancing platform accountability with user responsibility through community guidelines and transparent policies

Thought Provoking Comments

The European way, so to say, is to have all the relevant stakeholders around the same table and do not to impose any kind of kind of direct intervention from the authorities on the specific content, but more to have a forum where, I don’t know, like potentially damaging cases or potential threats or things that need to be looked after are discussed and then the platforms decide to take action or not.

speaker

Giovanni Zagni

reason

This comment provides insight into the European approach to addressing disinformation, emphasizing collaboration and voluntary action rather than top-down regulation.

impact

It set the tone for discussing different regulatory approaches and sparked further conversation about the role of platforms in content moderation.

Most young people, most political parties, most lobbyists, what they use all over the world today to disseminate information has been through social media. Whether it’s Twitter, whether it’s TikTok, whether it’s X, they have used all this to disseminate information. And most people don’t naturally use mainstream media. They use social media as a way in which they get information.

speaker

Poncelet Ileleji

reason

This comment highlights the shift in information dissemination and consumption patterns, emphasizing the growing importance of social media in shaping public opinion.

impact

It led to a deeper discussion about the role of social media platforms in elections and the need for digital literacy.

I would like to see a place where that we actually build a culture where we promote information integrity across everyone. And when we especially talk about election, then everyone says, it’s the media. It’s the media spreading the information. But yes, of course, some part is the media, but it is the citizen who believe in it.

speaker

Aiesha Adnan

reason

This comment shifts the focus from just regulating media to empowering citizens, introducing the concept of ‘information integrity’.

impact

It broadened the discussion to include the role of citizens and the importance of digital literacy in combating disinformation.

These tech platforms, tech companies and media platforms, electoral commissions without regulation, without guardrails for them to be able to ensure that the content that is delivered on their platforms resonates with what is actually happening on the ground.

speaker

Nazar Nicholas Kirama

reason

This comment provocatively frames tech platforms as de facto electoral commissions, highlighting the need for accountability.

impact

It sparked a debate about the extent of platform responsibility and the need for regulation.

We have to be very careful. I say this because, as a Nigerian, different African governments have found ways of trying to regulate and hyperregulate social media. When we open a door, we have to be very direct to where that door is pointed to, so that we don’t open a Pandora’s box, and it will be very difficult to shut it down.

speaker

Audience member (Nana)

reason

This comment introduces a cautionary perspective on regulation, highlighting potential unintended consequences.

impact

It added complexity to the discussion about regulation, prompting participants to consider the potential downsides of overzealous content moderation.

Overall Assessment

These key comments shaped the discussion by introducing diverse perspectives on the roles of different stakeholders in combating disinformation. They highlighted the complexity of the issue, touching on themes of platform responsibility, citizen empowerment, regulatory approaches, and potential pitfalls of overregulation. The discussion evolved from focusing solely on platform regulation to considering a more holistic approach involving digital literacy, multi-stakeholder collaboration, and careful consideration of cultural and regional contexts.

Follow-up Questions

How can we improve processes and foster intersectional collaboration to better integrate different forums addressing disinformation?

speaker

Juliano Cappi

explanation

Juliano suggested we need to improve how we organize and integrate work from different forums, as current efforts may not be sufficiently effective in combating disinformation.

How can we ensure algorithms used by tech platforms for content moderation are explainable and account for cultural context?

speaker

Audience member (Nana)

explanation

The speaker highlighted that algorithms may misinterpret content due to cultural differences, emphasizing the need for explainable AI in content moderation.

How can we design tools and interventions that consider the needs of smaller countries and populations?

speaker

Aiesha Adnan

explanation

Aiesha emphasized the importance of considering smaller populations when designing tools to combat disinformation, as their specific needs may be overlooked.

How can we effectively empower grassroots communities to combat disinformation, particularly through community radios?

speaker

Poncelet Ileleji

explanation

Poncelet stressed the importance of empowering people at the grassroots level, particularly through community radios, to combat disinformation.

How can we implement ‘facilitative regulations’ that balance the need for platform accountability with the protection of free speech?

speaker

Nazar Nicholas Kirama

explanation

Nazar suggested the need for regulations that facilitate platform accountability without stifling innovation or free speech.

How can we improve civic education and information literacy programs to better equip citizens to identify misinformation and disinformation?

speaker

Aiesha Adnan

explanation

Aiesha emphasized the need for broader civic education and information literacy programs to help people identify various forms of false information.

How can we address the regional and national specificities of disinformation while still finding common ground at a global level?

speaker

Giovanni Zagni

explanation

Giovani highlighted the significant differences in how disinformation manifests across different regions and countries, suggesting the need for both localized and global approaches.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #102 Harmonising approaches for data free flow with trust

WS #102 Harmonising approaches for data free flow with trust

Session at a Glance

Summary

This discussion focused on harmonizing approaches for data free flow with trust (DFFT) in the global digital economy. Experts from various sectors explored the challenges and potential solutions for enabling cross-border data flows while addressing concerns about privacy, security, and economic development.

The panel highlighted the importance of data in driving economic growth and innovation, but noted increasing mistrust and restrictive policies leading to internet fragmentation. Key issues discussed included the need to balance data protection with data utilization, the challenges of government access to data across borders, and the impact of data localization requirements on cybersecurity and economic development.

Participants emphasized the need for multi-stakeholder collaboration and evidence-based policymaking to address these challenges. They stressed the importance of bringing the right experts together to find common ground and develop interoperable solutions. The OECD’s work on trusted government access principles and cross-border privacy rules was highlighted as a positive example of international cooperation.

The discussion also touched on the distinction between personal and non-personal data, the potential of privacy-enhancing technologies, and the need for flexible approaches that can adapt to different regional contexts and emerging technologies like AI. Speakers called for a holistic view of data governance that considers various policy goals and technical realities of the internet infrastructure.

In conclusion, the panel agreed on the need for continued high-level political commitment to DFFT, coupled with expert-driven, collaborative efforts to develop practical solutions that balance data protection with innovation and economic growth in the digital age.

Keypoints

Major discussion points:

– The importance of data flows for the global economy and innovation, balanced with concerns about privacy, security, and sovereignty

– Challenges of data localization policies and internet fragmentation

– The need for harmonized, interoperable approaches to data governance across countries

– Distinguishing between personal and non-personal data flows

– The role of multi-stakeholder cooperation and expert discussions in developing solutions

The overall purpose of the discussion was to examine approaches for enabling trusted cross-border data flows while addressing concerns about privacy, security and sovereignty. The panelists aimed to identify ways to harmonize data governance policies internationally to avoid fragmentation.

The tone of the discussion was largely constructive and solution-oriented. Speakers acknowledged the challenges but focused on potential ways forward, emphasizing cooperation and shared principles. There was a sense of cautious optimism that progress could be made through continued dialogue and evidence-based policymaking. The tone became more action-oriented towards the end as speakers offered final recommendations.

Speakers

– Timea Suto: Moderator

– Bertrand de La Chapelle: Chief Vision Officer at the Data Sphere Initiative

– Maiko Meguro: Director for International Data Strategy at the Digital Agency of Japan

– David Pendle: Assistant General Counsel for Law Enforcement and National Security at Microsoft

– Robyn Greene: Director of Privacy and Public Policy at Meta

– Clarisse Girot: Head of Division for Data Flows and Governance and Privacy at the OECD

Additional speakers:

– Jacques Beglinger: Member of the ICC delegation

– Rapidsun: Audience member from Cambodia

– Evgenia: Online audience member (full name not provided)

Full session report

Expanded Summary: Harmonising Approaches for Data Free Flow with Trust in the Global Digital Economy

This discussion, moderated by Timea Suto, brought together experts from various sectors to explore the challenges and potential solutions for enabling cross-border data flows while addressing concerns about privacy, security, and economic development. The panel included representatives from international initiatives, government agencies, technology companies, and intergovernmental organisations.

1. Importance and Challenges of Cross-Border Data Flows

All speakers unanimously agreed on the critical importance of cross-border data flows for the global economy, innovation, and development. Timea Suto framed the discussion by highlighting that while data underpins the global economy, it faces increasing mistrust and restrictions. Bertrand de La Chapelle, Chief Vision Officer at the Data Sphere Initiative, noted that despite a fragmented legal landscape, the internet infrastructure inherently enables free flow of data.

Maiko Meguro, representing Japan’s Digital Agency, emphasised the need for data governance to balance utilisation and protection, highlighting the multi-faceted nature of data. This sentiment was echoed by other speakers, who highlighted various challenges:

– David Pendle from Microsoft pointed out that government access requests fuel mistrust in data flows, particularly across borders.

– Robyn Greene from Meta warned that data flow restrictions lead to internet fragmentation, which can hinder innovation and economic growth. She also emphasized the technical limitations of data localization and the importance of considering the global internet infrastructure when creating regulations.

– Clarisse Girot from the OECD reinforced the crucial role of data flows in driving innovation and economic development.

The speakers collectively painted a picture of a complex landscape where the benefits of free data flows are clear, but concerns about privacy, security, and sovereignty create significant obstacles.

2. Approaches to Enable Trusted Data Flows

The panel explored various approaches to address these challenges and enable trusted data flows:

– De La Chapelle proposed focusing on access rights to data rather than data sharing, highlighting the role of APIs and privacy-enhancing techniques in modern data flows. He challenged the binary perspective on data sharing, advocating for a more nuanced approach.

– Meguro advocated for working on concrete interoperability solutions and institutionalising processes to facilitate trusted data flows.

– Pendle stressed the importance of pursuing evidence-based policies that reflect the nuances of the digital economy and solve for real problems rather than misperceptions.

– Greene suggested attaching rights and obligations to data rather than data subjects, potentially simplifying cross-border data governance.

– Girot emphasised the need to bring the right stakeholders together to find common ground on these complex issues. She also highlighted the OECD’s work on privacy-enhancing technologies (PETs) and government access to data, including a declaration on trusted government access and ongoing efforts to promote it.

These diverse approaches highlight the multifaceted nature of the challenge and the need for flexible, adaptable solutions.

3. Differentiating Personal and Non-Personal Data Flows

An important theme that emerged was the distinction between personal and non-personal data flows:

– Pendle highlighted that non-personal data is crucial for the economy, research, and cybersecurity.

– Meguro noted that personal data requires carefully balancing privacy rights with economic value.

– De La Chapelle cautioned that the frontier between personal and non-personal data is not always clear, adding complexity to governance efforts. He also suggested exploring an opt-out system for medical imagery data to support AI development.

– Girot pointed out that health data governance needs special consideration due to its sensitive nature, mentioning the OECD’s recommendation on health data governance.

This discussion underscored the need for nuanced approaches that can address the varying requirements of different types of data.

4. Role of International Cooperation and Harmonisation

The speakers strongly agreed on the need for international cooperation and harmonisation of approaches, though they proposed different methods:

– Meguro called for formalising multi-stakeholder processes at the international level.

– Pendle advocated for harmonising approaches through multilateral and interoperable frameworks.

– Greene promoted the adoption of global cross-border privacy rules.

– Girot suggested building on existing frameworks like OECD guidelines.

– Meguro emphasised the importance of keeping data free flow with trust on high-level political agendas like the G7 and G20.

Despite some differences in specific approaches, there was a high level of consensus on core principles, providing a strong foundation for further international cooperation on data governance.

5. Assisting Developing Countries

The discussion addressed the need to assist developing countries in implementing effective data governance frameworks, as raised by an audience question from Cambodia. Girot mentioned the ASEAN model contractual clauses as an example of regional efforts to address this issue.

6. Unresolved Issues and Future Directions

The discussion also highlighted several unresolved issues that require further attention:

– How to effectively distinguish between personal and non-personal data in practice.

– Achieving legal interoperability across different jurisdictions.

– Balancing data localisation requirements with the need for cross-border data flows.

– Addressing national security concerns while enabling necessary cross-border data access.

– Assisting developing countries in implementing effective data governance frameworks.

Potential compromises and solutions suggested included the use of privacy-enhancing technologies, adopting a gradual approach to harmonisation, focusing on regulatory interoperability rather than strict harmonisation, and using model contractual clauses adapted to regional needs.

In conclusion, the panel agreed on the need for continued high-level political commitment to data free flow with trust, coupled with expert-driven, collaborative efforts to develop practical solutions. The discussion highlighted the complexity of balancing data protection with innovation and economic growth in the digital age, while emphasising the critical importance of finding workable solutions to enable trusted cross-border data flows. Meguro’s closing remarks highlighted Japan’s efforts to assess regulations in light of digitalization, underscoring the ongoing nature of this work across different jurisdictions.

Session Transcript

Timea Suto: won’t be able to hear us and you won’t be able to hear our speakers online. Okay. Check, check. Everybody can hear me? Channel 1. Yeah. There we go. Good. All right. So welcome, everyone. Thank you for starting your morning with us. If you are wondering whether you are in the right room, this is workshop number 102 on harmonising approaches for data free flow with trust. And may I ask if we can maybe close the door? Thank you. So why are we here discussing this topic today? We have talked so much about data, and we know that data underpins every aspect of today’s global economy, supports everything from day-to-day business operations to the delivery of essential government services, and enables international and multilateral cooperation. But we also know that, despite this core role that data plays in facilitating economic activity and innovation, there is continued mistrust in data and data-powered technologies. Some of this mistrust stems from the difficulty of understanding data, its nature, its consequences, and consequently the level of risk involved in handling it. Trust is also eroded by concerns that national public policy objectives such as security, privacy, or economic safety could be compromised if data transcends borders. This increasingly fuels restrictive data governance policies and regulatory measures such as digital protectionism and data localisation. Such approaches deepen internet fragmentation and disaggregate the information that would actually underpin a broad range of socioeconomic activities and cybersecurity protection. With growth and development driven by data flows and digital technologies, disruptions in cross-border data flows have broad reverberations that can lead to issues like reduced GDP gains and adverse impacts on local digital ecosystems. So it is important that we talk about trust in data, and that we talk about how we build policy frameworks that actually facilitate the handling, sharing, and accessing of data in a way that realises its potential for developmental gains, while avoiding some of the fragmentation effects of inept policies. What we will try to do in this session is take stock a little of the various regional and international initiatives that deal with data governance, and see whether we can move towards some commonality between them, whether we can find ways of talking about data governance that lead to more harmonised, or at least interoperable, approaches to handling data, so that we do not fragment the policy space around it and, with that, fragment access to the benefits of data. To help me have this conversation, I am actually in a very interesting position, because I don’t have to answer these very difficult questions; I just have to ask them, and we have the experts here who will help me answer them together. We have experts both here in person and online, and I am very happy to see that everybody managed to connect. We have, in the order in which I will be calling on them to speak, Mr. Bertrand de La Chapelle, who is chief vision officer at the Data Sphere Initiative; Ms. Maiko Meguro, director for international data strategy at the Digital Agency of Japan, online. We also have Mr. Dave Pendle, who is assistant general counsel for law enforcement and national security at Microsoft, also online, in the middle of the night. Thank you, Dave. We have Ms.
Robyn Greene here in front of me at the table, who is director of privacy and public policy at Meta. And last but not least, Ms. Clarisse Girot, who is head of division for data flows and governance and privacy at the OECD. Thank you for joining us, Clarisse; it is also quite early in the morning in Europe. So, to jump right in, we will talk a little first about why it is important that we talk about data, discuss the added value of data, and give some nuance to the perceptions around what we actually mean by data and the conditions that enable cross-border data flows for the global economy. So Bertrand, if I can turn to you first to share some initial insights.

Bertrand de La Chapelle: Thank you, Timea. Good morning, everyone. I like the fact that you mentioned it is important to talk about data because, as you know, this was the title of a report that we produced with Lorrayne Porciuncula, the executive director of the Data Sphere Initiative. The title was We Need to Talk About Data: Framing the Debate Between Free Flow of Data and Data Sovereignty. How we handle the question of creating the maximum value for everyone out of data is a constant challenge. I want to highlight, first, that when we talk about fragmentation, it is not a risk, it is a reality. The legal landscape is fragmented, because we have 190 countries and they all have different laws. On the other hand, the technical infrastructure of the internet is, by default, free flow of data. And it is the tension between the two that we in most cases have to address. The second thing is that, when we talk about data, we have a tendency to think in terms of sharing data, and a lot of people have an image that dates back to older times, where you have a database and you share it by basically transferring it. This is not the way it works anymore. The way it works today is through APIs, through rights of access to data. Many times the data does not really travel; you just query it from another, distant place. Even more, there are new techniques called privacy-enhancing techniques, and some people consider that they should be called partnership-enhancing techniques, such as homomorphic encryption or federated learning, which allow you to leverage existing data without necessarily having to share it, because you can do computation on encrypted data, or you can do distributed learning for an AI system. So the landscape is changing. And I was impressed by the fact that, in a panel yesterday, Yoichi Iida from Japan, answering a question that I raised, highlighted that the notion of data free flow with trust is a high-level concept that is useful to drive the discussion at an international level, to establish the fact that, as a principle, we should aim for the maximum capacity to share access to data. And the final point, because we can come back to a few other things: we have a tendency regarding data to think in a very binary way. Either data is not accessible, sometimes for very legitimate reasons, it can be intellectual property, security, privacy, or confidentiality in general, and so it is okay that this data is not shared; or, on the other hand, there is a very positive trend towards commons and open data, making data widely available so that people can actually build things out of it. But what I want to highlight is that too often we look at this as just a binary alternative, and we lose sight of the fact that in between there are situations where you cannot go to full open data, but you can still enable some access to, and some leverage of, existing data that has protections for legitimate reasons. And in that regard, in this dichotomy of closed data and open data, what the Data Sphere Initiative is pushing is that we should collectively have a common objective, which is to responsibly unlock the value of data for all.
This doesn't mean that data is available for absolutely everybody; it can be for limited groups of actors. But it is important that we share the objective of creating social value as much as economic value from data, because there is a tendency to think only in terms of monetizing data, and there are a lot of possibilities to create social value. Most importantly, there is the question of having a more equitable digital society, because today in the data economy, because of network effects and because this resource is non-rival, there are mechanisms that increase disparities and that make the distribution of the value not sufficiently spread and equitable. So these are a few of the ideas that I wanted to share to frame, or reframe, this.
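
As a concrete illustration of the "leverage without sharing" idea described above, the sketch below shows a toy federated averaging round with additive masking, so that only masked updates, never raw records, leave each participant. This is a minimal sketch under invented assumptions: the party names, the data, and the toy update rule are all hypothetical and are not drawn from the discussion.

```python
import random

def local_update(local_data, global_weight):
    # Toy "training" step: nudge the shared weight toward the local mean.
    local_mean = sum(local_data) / len(local_data)
    return global_weight + 0.1 * (local_mean - global_weight)

def secure_average(updates):
    # Pairwise additive masks cancel out in the sum, so the aggregator
    # only ever sees the total, never any single party's update.
    n = len(updates)
    masks = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = random.uniform(-1.0, 1.0)
            masks[i][j], masks[j][i] = r, -r
    masked = [u + sum(masks[i]) for i, u in enumerate(updates)]
    return sum(masked) / n  # equals the true average of the updates

# Hypothetical parties; raw records never leave each party's premises.
parties = {
    "clinic_a": [2.0, 2.4, 1.8],
    "clinic_b": [3.1, 2.9],
    "clinic_c": [2.2, 2.6, 2.8, 2.4],
}

global_weight = 0.0
for _ in range(20):
    updates = [local_update(data, global_weight) for data in parties.values()]
    global_weight = secure_average(updates)

print(round(global_weight, 2))
```

The point of the masking step is that the pairwise masks cancel in the sum, so an aggregator learns the average update but no individual contribution; this is only one simplified flavor of the privacy-enhancing techniques mentioned in the panel.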

Timea Suto: Thank you so much, Bertrand. You mentioned that we have already had discussions with colleagues from Japan, so we are turning now to Japan, to Ms. Maiko Meguro. You also mentioned the role of data for various purposes, for economic and social development and others. So I think the question is right to ask of Ms. Meguro: how do you see this from a national perspective, from a government perspective, especially sitting where you are in Japan's Digital Agency?

Maiko Meguro: Okay, thank you. First of all, good morning and good evening, colleagues, and thank you for the opportunity to speak about this important topic. As Bertrand just said, the issue of data flows involves both real needs and issues of perception, and I basically agree with that view. Obviously, as Bertrand mentioned, DFFT started as a high-level concept, and this concept's role is exactly that point: to shape people's perception through the concept of trust, namely to pull everybody out of their sectoral silos and integrate them as a matter of data governance. When I first started working on this topic of DFFT, which was in 2021, the rhetoric that data is basically the oil of the 21st century was very popular. But the image of data that this metaphor presents is a very one-sided, trade-centric view of data governance. Discussion of data governance, from our perspective, must take into account the multi-faceted nature of data. For example, personal data is obviously a matter of human rights, but at the same time it also has economic value, as recognized by the antitrust authorities of many countries. As digitalization and data linkage progress, it will become possible to check the implementation status of various labor or environmental regulations across the supply chain, or even across borders, but the data in companies' production lines, which relates to that implementation, is also highly confidential corporate information. There have been concrete cases of conflict in the past in this type of discussion, such as between investment agreements and environmental regulation in different countries, but the problem with cross-border data transfer is that such conflict and friction will occur on a permanent basis. Thus, leaving the matter to ex-post responses such as individual dispute resolutions or court cases could lead to a market environment that favors and entitles only those who can take the risk of a dispute, such as large companies with a lot of financial and human capital. So from our perspective, we must think about effective means of both enhancing flows and ensuring the necessary protection according to the rights and interests attached to the data. But the necessity of cross-border data transfer is of course very clear, with the shift from a hardware-based to a software- and information-system-based social economy through digitalization. Obviously, as the moderator set out, data has become relevant to all parts of society. From our perspective as a government, this means that the way money is earned has changed, but also that the places where social problems occur have begun to change. And these matters also concern the distribution of rights and benefits related to data ownership and access. I must also note that, because of this impact of digitalization and what data can do, many governments have started to see threats posed by the use of data and digital technology. And this can lead to a series of regulations, for example banning the transfer of data outside the country by foreign companies. But it is also important to remember that many countries are unable to procure enough data to sustain their own economy and innovation within their own borders. So this is the real need: we need cross-border data flows as a matter of reality.
So therefore, if a country only focuses on restricting the cross-border transfer of data when thinking about data governance, its companies will not be able to use data collected from other countries in turn. So data governance must always be considered from the perspective of both maximizing utilization and ensuring data protection and security. But lastly, I must say that the question of how to combine the need for protection with how we want to use the data depends on the social priorities, cultures, even religions and economic structure of each society and government in principle. So it should not be discussed on the basis of a single international set of rules or values. Our perspective is that governments could perhaps start by working on concrete problems such as arbitrary treatment or lack of transparency, or work on concrete solutions for interoperability, such as technologies or lowering compliance costs. What is more, institutionalizing these processes so that relevant actors, both government and non-government, engage with the issue is very important, which I can also touch upon later. But for now, that's all from me, thank you.

Timea Suto: Thank you very much, Ms. Meguro. I'm going to bring in comments from Dave online, because we've mentioned this dichotomy between how certain governments or certain regions approach data flows. And I would like to explore a little bit more what the concerns are that fuel some of those restrictive policies that you've mentioned. So Dave, if you could enlighten us about that a little bit.

Dave Pendle: Yeah, thanks, Timea. Thanks for having me, and good morning to everyone. My name is Dave Pendle. I'm an assistant general counsel at Microsoft. I work on the law enforcement and national security team, which is the team at Microsoft that responds to government requests for user data from around the world. And certainly government access requests are one source of the mistrust in data flows, but I'm hoping to uplevel it a little bit and talk more broadly about some of the other concerns and what they may be rooted in. For many in this room and who are listening, this is probably not news or particularly insightful, but I think it's interesting to look at these restrictions as being driven by different sides of the same sovereignty coin. And sovereignty does seem to be a major driver of the loss of trust. The way I see it is that sovereignty can serve as both a sword and a shield, pushing contradictory trends. First, clearly governments have a sovereign obligation to protect their citizens, to protect their national security. And that sovereign interest has led to an expansion in surveillance authorities. And certainly governments around the world have exercised increasingly assertive authority to address public safety and national security needs. You also see this come up in governments expressing fears about going dark and the perceived need to constrain the encryption protections that are in place. You certainly also see this where governments are seeking access to, or authority to access, I should say, cross-border data. And certainly if a government is investigating cybercrime or child exploitation, because data is generally global, data outside of their borders may be relevant to really important public safety matters within their borders. So that's somewhat understandable. And indeed the Cloud Act, which is often pointed to, allows the US government to seek data outside of its borders. But that is not unique by any means. In fact, that same principle is reflected in the Budapest Convention. It's reflected in the OECD Trusted Government Access principles. It's reflected in the evidence sharing regulations that are going into effect in a couple of years in the EU, where the obligations exist regardless of where the data is stored. The other side of that sovereignty coin, the shield, if you allow me to use this metaphor: the sovereign interests and the fears of third-party access to data generally have led to walls being erected, walls trying to contain data within a nation's borders through privacy laws and through trade restrictions. We've certainly seen lots of mandatory data localization efforts, limitations on the use of global technology, and the fragmentation of the internet generally, all in this vein. We often hear concerns about potential U.S. government access to data, but even here in the U.S. there are concerns about third countries accessing sensitive data of Americans and U.S. persons and of the U.S. government. China is often discussed in that vein. So around the world, we see these kinds of concerns materialize through requirements for sovereign controls, through requirements or interests in end-to-end encryption, and in a variety of transfer restrictions. And these restrictions serve as that shield against the fear of government access.
I can't speak to the legitimacy or the actuality of all of these concerns, but in my world I can speak to concerns about government access. And I would say that there is always some myth-busting that needs to take place when discussing government access, and specifically cross-border government access. We report on cross-border data disclosures every six months. We get about 60,000 legal demands from governments all over the world, for about 110,000 to 120,000 different users each year. In a six-month period, so we're talking about 30,000 legal demands, we typically get about 50 to 55 content disclosures that are cross-border. In the last reporting period, there was only one that pertained to an enterprise customer. So the majority of those are consumer users. And for the EU folks in the room or elsewhere, that one enterprise customer was not an EU enterprise. So the concerns that we hear about, the perception that, because U.S. technology companies are subject to U.S. law, they are handing over the world's data to the U.S. government, it doesn't really bear out if you look at the actual numbers. One other distinct factor here is competition. There is also a sovereign interest, of course, in creating space for domestic technology and innovation, and that has also been a driver of some of these restrictions. So that's not an exhaustive list, but there are some themes there about some of the restrictions that we see and the causes of fragmentation. Thank you.
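
Taking the figures above at face value (they are approximate and quoted from memory in the session), the cross-border content disclosures amount to a very small share of the overall demand volume. A quick back-of-the-envelope calculation might look like this:

```python
# Back-of-the-envelope share implied by the approximate figures quoted above.
demands_per_six_months = 30_000   # legal demands in a six-month reporting period
cross_border_content = 55         # cross-border content disclosures in that period

share = cross_border_content / demands_per_six_months
print(f"{share:.2%}")             # roughly 0.18% of demands in the period
```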

Timea Suto: Thank you. And listening to you, I'm reminded of what Bertrand said in the beginning, that this fragmentation of the space because of the lack of trust in data is no longer a risk; it is the reality. And there are a lot of causes for that, as we hear from you. But I'd like to turn to Robyn: we heard from Dave about the causes, so what are the consequences of such an approach to data?

Robyn Greene: Thank you. And thank you for having me here to speak. And thank you to everyone for coming. My name is Robyn Greene. I'm a privacy and public policy director for Meta. And like Dave, while I don't work on the legal team, I work on the policy team, I do work on law enforcement and government access issues and anything really to do with internet fragmentation. And I don't think there is a conversation on internet fragmentation more important than one on data flows and the implications of restricting data flows. With that in mind, I'm going to start with a very brief overview of the kinds of impediments to data flows that we see, because we sort of skipped over that a little bit and not everybody really understands the different ways that these kinds of restrictions manifest. Very briefly, we see them most often in express data localization requirements that require that data be stored in a specific jurisdiction. Those requirements can be very prescriptive and not allow data to transfer outside of the jurisdiction under any circumstance, or they may be somewhat more flexible and allow copies of the data to transfer outside of the jurisdiction. In addition to that, we see de facto localization, which is oftentimes where you have regulatory benchmarks, if you will, that you have to meet in order to be able to transfer data out of the jurisdiction. And oftentimes those benchmarks are out of the private sector's hands, because they are, of course, the purview of the governments that the private sector entities are subject to, or they are simply unattainable for other reasons. And oftentimes those reasons are that we are blurring the distinction between the types of data transfers that actually occur. One of the things I noticed, listening in particular to Bertrand's wonderful comments, was how we really talk only about the idea of data transfers being from one legal entity to another third party in a different jurisdiction. And it's very natural to think of it that way, especially when you consider how we have really started to consider this debate in the context of GDPR and things like that. But one of the trends that we're seeing increasingly is actually the regulation and restriction of the physical movement of data: the idea that even if you are not transferring data between legal entities, you still cannot move data outside of the jurisdiction where it was created. For providers such as Meta and the kinds of services that we offer, but also for the sorts of providers that do business-to-business services, the implications of these kinds of restrictions are really dramatic. But they have the same kinds of implications as restrictions on transfers to third-party entities abroad. The difference is just that in that case, you're talking about only some data transfers. In the case of the physical movement of data around the internet, because of how the internet was built, in an interoperable and international way, it is literally not built in a way that can technically restrict the flow of data across borders when data is not changing hands between legal entities. Domestic communications can often go through international switch points, for example, in order to get back to the recipient.
And so we deal with these kinds of international data flows in a lot of different contexts, not just in the context of transferring data from one legal entity to another legal entity based in another jurisdiction. And when we think of the risks of this, I think the first and foremost risk is generally internet fragmentation. What that means is essentially building walls around our internets: instead of having one global, interoperable, secure internet, the result is regional or national silos. And the implications of that are really significant and really hard to estimate in terms of how severe they can be. This is in part because, when you think about how people interact with the internet, there is just so much access to information, so much learning, so much economic development, so much connection between different people and different communities. Putting up those silos would have a really dramatic impact on cultural, social, and economic norms and the threads that bind us across nations. When we think about what we're all doing here, trying to find multilateral approaches to governance, internet fragmentation, I think, is one of the gravest threats to the goals that we all have here at the IGF. And when you think about what the primary driver of internet fragmentation is, it's the restriction of data flows. In addition to that, as I mentioned, when you restrict data flows, it has really significant chilling consequences for economic development and innovation. As we are learning from Maiko, at the end of the day, we are a data-driven economy. Innovation is data-driven. And so we need to make sure that we're able to access data from all over the world in order to be able to build new technologies, improve existing technologies, and grow our economies as a result. Additionally, restrictions on data transfers have really significant deleterious effects on human rights. This results not only in restrictions on freedom of expression, access to information, and economic rights, because economic rights are fundamental human rights as well and are now reliant on access to the internet and access to information, but also on rights around safety and things like that. So we see a really significant range of human rights harms, and privacy harms as well, result from internet fragmentation. Because ultimately, when you are looking at restricting data flows, the result of that is data localization. And one of the major results of data localization is undermining cybersecurity. When you can't access data and you can't have global visibility of what the data environment is, it's much, much harder to identify and quickly respond to cybersecurity threats. And that's true whether you are a provider of services to consumers or a business-to-business provider. This is across the board. If you're a financial services provider, if you are a Facebook or Instagram, or if you are a cloud services provider, or really anything else, the number one thing you need to be able to secure your network, or that whoever you've hired to secure your network needs, is visibility of what that global threat landscape looks like. Restricting data flows undermines that.
And that has really significant consequences, not only for cybersecurity in the sense of what does that mean for our businesses and the integrity of our data, but for national security as well. Because ultimately, cybersecurity is a national security issue. It’s tied to the security of critical infrastructure. And when we interfere with the cross-border flow of data, we interfere with our ability to protect those kinds of critical infrastructure as well. Thank you.
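
One way to make the taxonomy above concrete is to treat each impediment as a rule evaluated against an outbound transfer: express localization blocks it outright, a conditional regime allows it if a benchmark is met, and de facto localization arises when the benchmark is out of the transferring party's hands. The jurisdictions, rule names, and benchmarks in the sketch below are entirely hypothetical.

```python
# Hypothetical rules illustrating the three kinds of impediments described
# above: express localization, conditional transfer, and de facto localization.
RULES = {
    "country_x": {"type": "express_localization"},
    "country_y": {"type": "conditional", "benchmark": "adequacy_finding"},
    "country_z": {"type": "conditional", "benchmark": "local_certification"},
}

# Benchmarks the transferring party can realistically satisfy (an assumption).
ATTAINABLE = {"adequacy_finding"}

def assess_transfer(origin: str) -> str:
    rule = RULES[origin]
    if rule["type"] == "express_localization":
        return "blocked outright: data must stay in the jurisdiction"
    if rule["benchmark"] in ATTAINABLE:
        return f"allowed if benchmark '{rule['benchmark']}' is met"
    return "de facto localization: the benchmark is out of the provider's hands"

for country in RULES:
    print(country, "->", assess_transfer(country))
```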

Timea Suto: Thank you so much, Robyn. That was quite a comprehensive list, and I'm sure that there is more, but thank you for highlighting the main important ones. I would like to turn to Clarisse and ask you, from where you sit at the OECD, which works with countries in very different jurisdictions, what is the progress that you have seen on this? Because we've been talking about the risks of this for quite some time now. Is the idea, this high-level concept of DFFT, of trying to move us away from silos and trying to get a framework that we can all agree on, gaining traction? Are we moving towards more harmonized policies on data? And what is the OECD's perspective on this? How do you see this work from your perspective?

Clarisse Girot: Thank you, Timea. And hi, everybody. Good morning, good afternoon, wherever you are. Thanks very much for having me. It's very hard to come last, because of course a lot has already been said. So let me build a bit on what the other speakers have said and on your specific questions. At the OECD, we've been working on cross-border data flows for a very, very long time. If you recall the OECD privacy guidelines back in 1980, things were very different back then, but still, the principle that you have to balance cross-border data flows with the privacy and fundamental human rights of individuals while enabling growth, innovation, et cetera, was already there. So frankly, in terms of philosophy, nothing fundamentally new, and in a way DFFT is nothing new either in many ways. The power of the concept of DFFT is, first of all, that it was developed at a time when the internet economy was completely different, and we could see that there was a political, geopolitical need to conceptualize a narrative around the significance of cross-border data flows for global leaders, and not only for technicians, if you will, of data protection laws, a discipline that was considered fairly niche. And that has changed with the concept of DFFT. But that said, it is very important to emphasize that DFFT does not start from nothing. There's a lot we can build on, and this is where the positives come from. We have the OECD privacy guidelines, and many data protection laws, if we think of personal data, have been built on the principles in the privacy guidelines: the GDPR and others, and Directive 95/46 before the GDPR. We see many more data protection laws, personal data protection laws, privacy laws being developed around the world, which is in itself a very positive thing, obviously. Now, the risks of fragmentation that come with that should be addressed, but let's still see the positive more than the negative in this development. We need an acknowledgement that there are cross-border data flows and that trust needs to be built. Even if you do not per se need data transfer provisions in data protection laws, if you have them, then there needs to be a balance between business interests, innovation, commerce, digital trade, as Maiko was saying, and the protection of individuals and their fundamental rights. Now, we also have at the OECD a recommendation on enhanced access to and sharing of data, which is even broader, if you will, because it covers both personal and non-personal data. It is based, first of all, on the fundamentals of data-driven innovation and the fact that, to enable data-driven innovation, you have to take into account that there is a whole gradation of different kinds of interests that need to be protected, and therefore different kinds of data openness out there. But there is this idea that data openness and data-driven innovation are at the heart of the data policies of the OECD economies. So that's at least 38 member countries, and counting, because we actually work with many more than 38 countries; the breadth of our work is much larger.
So the positives here are, I would say, greater awareness of the significance of data, and DFFT has played a role there; more data protection laws; and therefore a greater community of privacy professionals that, in policy circles, we are sometimes not aware of. There are thousands of privacy professionals out there that are used to looking not only at privacy, but at data governance issues more generally, because data is an asset in itself, and personal data is only, if you will, a subset of that. Now, it is also important to highlight that we are increasingly trying to sort out the different sources of impediments to cross-border data flows, and there is a large array of those. It is important to go back to the roots of these divergences, because then you can try building solutions on them. So there will be, for instance, variations between the texts of data protection laws, let's say privacy laws, and in particular their data transfer provisions. Now, sometimes this clash in provisions is not intentional at all, or there is just a need for clarification, because some data transfer mechanisms are not there but should be. It is always difficult to change a law, but when there is no intention to go against the standards and practices out there, we just need to highlight the fact that these modifications might be required. It's even easier to do this at the level of regulation, and even easier at the level of regulatory interpretations. And I'm very familiar with this exercise, because when I was based in Asia, that was a project I was working on across the entire APAC region. Raising awareness as to the impact of these variations and the compliance costs that come with them is really a key part of any advocacy exercise, if you will, to facilitate cross-border data flows in a region and globally. Now, when you come to variations in the very fundamentals of these data transfer provisions or restrictions, and in particular data localization, I can only go back to what Robyn was saying: this is much, much harder, obviously, to tackle. But even then, since my role is to close on a positive note, I would say that data localization provisions often come from the challenges of national security and law enforcement access to data in overseas jurisdictions. And as Dave was saying, let's do a bit of myth-busting here. It is very important to look at what's really happening out there. And this is where the work that the OECD has done kicks in: very difficult work, believe me, this was no easy work, building the declaration on cross-border, sorry, the so-called trusted government access to personal data held by private sector entities for national security and law enforcement purposes. And we believe that we can actually build a lot on this declaration, and I'd be happy to talk about that later. One last thing, just to build very quickly, I hadn't planned on doing this, if you allow me to take one more minute, on what Bertrand was saying about privacy-enhancing, or partnership-enhancing, technologies. The OECD is working a lot on this. On my team, we have a whole work stream on privacy-enhancing technologies. Technology cannot solve everything, but there are extremely promising developments here. What is absolutely key is to look at the sustainability of the business models of PET providers.
And it's an ecosystem which is far from stable yet; these technologies are very often expensive and can actually disrupt some fundamental business models. So it's not easy. At the end of the day, it's all about looking at things in a holistic way. What are the technologies out there? What is the business model behind each technology? How can they be used? Are they sustainable at all? Exactly like data transfer provisions need to be unpacked and data localization requirements need to be unpacked, so that in each category of challenges we can actually try and find solutions. And that's very much, to close, the way we are looking at the implementation of the DFFT policy agenda: by looking at the big policy objectives, the different challenges under them, and how we can build a multi-stakeholder ecosystem to address each of them. And that's why I think we can still be very positive and look ahead. That's my role, to be positive. So with that, I will hand it over to you, Timea.

Timea Suto: Thank you. Thank you very much, Clarisse. And in the spirit of continuing on a perhaps more positive note than all the risks that we've discussed in the first part of the conversation, I'd like to turn back to all of the speakers and ask for your ideas and opinions on what we can actually pull out as tangible solutions to the problems and the fragmentation risks that we've outlined in the beginning. I will ask you to keep it to three minutes each if you can, because we've been going a bit longer in the first segment. But also keeping in mind, I heard Clarisse saying the OECD works a lot on this. It is 38 member countries. That's a great start. We are at 190-something in the world. So how can we also move towards that objective of elevating some of the existing solutions to the broader spectrum? Bertrand, I'll go to you first.

Bertrand de La Chapelle: Yeah, I think it's important also for some of the people who are listening, either here or online; there are some terms that may be mentioned without a full understanding of what is behind them, such as electronic evidence and government access to user data. Let me explain very briefly. If you have a criminal investigation for a crime that is committed in one country, the victim is in this country and the perpetrator, or alleged perpetrator, is from that country. In order to conduct the investigation, you sometimes need to have access to the email exchanges or the trace of the communications, etc. The problem is that in most cases this is being stored by a company that is outside the territory of the place where the crime was committed. This question of electronic evidence is becoming absolutely essential in almost any criminal investigation today. And we do not realize that in some countries, because of the lack of trust, it can take up to one year or two years to get access to this information. It has been mentioned, but I wanted to emphasize this, and the OECD has done work on this. The Cloud Act in the US and the E-Evidence Regulation in Europe have been mentioned. It is absolutely fundamental for everybody to understand that a lot of the debate about data localization is driven also by this factor, the inability to conduct criminal investigations because there is no access to the data that is needed. It is a very complex problem. But until we find a solution for this, which is basically that every single country should develop legislation similar to the E-Evidence Regulation that establishes clear due process mechanisms for the requests that are being sent to the private companies in another country, particularly in the US, we will not remove one of the main incentives for data localization. I wanted to explain this, because it went by quickly and people are not necessarily familiar with it. The second thing, and again, it is not a justification, it is just to understand what the drivers are: the fundamental difference in perception regarding privacy protection between Europe and the United States. I mean, you're all familiar with the fact that the European Court of Human Rights has, for the third time, or maybe we're going to have a third time, the Schrems thing.

Robyn Greene: The CJEU, the Court of Justice of the European Union, not the European Court of Human Rights; it's the European Court of Justice. And yeah, so Schrems II happened. The court has not looked at the new arrangement.

Bertrand de La Chapelle: There's a new one. So anyway, on two occasions, the arrangement that was made between Europe and the United States in order to take into account the discrepancy between the privacy rules in Europe and the U.S. has been rejected by the courts. And there is a fundamental difference, and if I can mention an anecdote, there was a G20 digital economy working group taking place two years ago, and there was an intense discussion on the wording around whether the different countries should foster convergence to achieve interoperability, or foster interoperability to achieve convergence. It looks completely esoteric until you understand that behind those words is the difference between two approaches. The European approach is about adequacy: setting a standard and asking other countries to basically reach exactly the same level. Versus an American approach that is more about model contract clauses, which basically says that even if the country as a whole doesn't match the privacy protection, if a particular company is abiding by a certain number of rules, then the transfer of data can happen. So that's a second tension that justifies, or not, and let's be honest, in many cases the protection of privacy is also an alibi for certain countries that want to have better surveillance of their citizens. So let's be clear. But that's the second criterion. And the third one, very quickly, is that because of the expression, as Maiko has mentioned, that data is the new oil, which is probably the worst analogy you can ever use for this, there is a Malthusian approach to data that completely overlooks the specific nature of digital data, which is non-rivalrous but excludable. You are probably all familiar with the work of Elinor Ostrom, who worked on the reverse, on things that are rivalrous but non-excludable, the famous commons and governing the commons. We're confronted here with something that is amazing: we can share it without depleting it, we can use it without preventing anybody else from using it. And there is a feeling, because of this wrong metaphor that data is the new oil, that we should hoard the data, that we shouldn't give it away, because if I keep it I will make the most value out of it. And that is fundamentally wrong, because in most cases you need to join forces to leverage groups of data, and that doesn't prevent you from using it or somebody else from using it. So these are three drivers that do not justify, but explain, some of the trends towards, in particular, data localization and restrictions.
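
To make the adequacy-versus-contract-clauses tension above easier to see, the sketch below contrasts a country-level test (the whole destination country must meet the standard) with a recipient-level test (an individual recipient's contractual commitments can suffice). Everything in it, the country names, scores, threshold, and the required set of clauses, is invented for illustration; it is not a statement of how either legal regime actually works.

```python
# Invented scores and thresholds contrasting two transfer philosophies:
# a country-level adequacy test versus a recipient-level contractual test.
COUNTRY_PROTECTION_SCORE = {"country_a": 0.9, "country_b": 0.5}  # hypothetical
ADEQUACY_STANDARD = 0.8                                          # hypothetical

REQUIRED_CLAUSES = {  # hypothetical set of contractual commitments
    "purpose_limitation",
    "security_measures",
    "onward_transfer_limits",
}

def adequacy_allows(destination_country: str) -> bool:
    # The destination country as a whole must reach the standard.
    return COUNTRY_PROTECTION_SCORE[destination_country] >= ADEQUACY_STANDARD

def clauses_allow(recipient_commitments: set) -> bool:
    # A specific recipient's commitments substitute for a country-level finding.
    return REQUIRED_CLAUSES <= recipient_commitments

print(adequacy_allows("country_b"))     # False: the country misses the bar
print(clauses_allow(REQUIRED_CLAUSES))  # True: this recipient commits to the clauses
```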

Timea Suto: Thank you, Bertrand. And I'm turning back online to Maiko. We've talked a little bit about your approach. Can you hear me all right? Actually, your voice is breaking up on our side. Yes? So I just wanted to ask you: you've talked about Japan's approach in the national context. How do you elevate that, either in the context of the OECD or even more broadly, to move some of those perspectives into the international sphere and drive towards a bit more harmonized approach?

Maiko Meguro: Thank you very much for your question. In terms of internationalizing our approach, it has actually stayed the same. We put a lot of emphasis on clarifying the murky reality of data transfer, data collection, and access to data throughout the data lifecycle internationally, to see clearly where the bottlenecks and the challenges lie. Just as Bertrand said, many discussions regarding data flows, or restricting data flows, are based on a very unclear understanding of how the technology actually works or how data is actually hosted. So it's really helpful to work with, for example, the OECD to understand what is happening on the ground and where the risk lies. We also have to break down the discussion, because everybody says, whoa, there's internet fragmentation, data restriction is too much. Yes, this statement reflects one side of the truth. But we also have to break those statements down; we have to cut the problem into pieces of a size that we can actually work on. Because if we break down the issue, then we can see where multilateral cooperation is actually effective. If you start talking about, for example, government access as a whole at a very abstract level, it is very difficult to actually tackle. We really have to understand where the actual bottleneck lies. And governments are quite open to hearing your voices, but we would like to hear those voices in more specific, more, I'm trying to find the right word, consumable pieces of discussion. Because, as everybody said, data is a multi-faceted issue. If we talk about data in a very vague sense, then here come the privacy regulators, here come the security regulators, here come the trade negotiators, and it's very difficult to solve the problem. So it's always helpful to work with people like the OECD to see how the data actually flows, how the data is actually hosted, where the bottleneck is, and where people are actually having issues. And to work on this discussion, we definitely need to have people like the panelists from the private sector, because often governments do not understand how data is actually hosted in the cloud, for example. So it's always helpful to work with multi-stakeholders, different types of people from different parts of the world. But in order to do so, we definitely need to formalize the multi-stakeholder processes at the international level. We need to have certain permanent places where people can gather and discuss and work on those issues. Because often those digital discussions are ad hoc. We always have these very futuristic, fancy, multi-stakeholder ad hoc discussions in ad hoc places. But ad hoc does not always help, because often we are talking about security regulation, we are talking about privacy regulation, and these are not things you can solve in two years. We need to discuss things part by part, and we need to keep piling up results to reach the solution in the end. Which means we need to formalize the processes. This is where Japan put a lot of political resources into establishing formalized processes. We call it the Institutional Arrangement for Partnership, which the G7 leaders actually endorsed. And we really work together with the OECD on formalizing this process of multi-stakeholder participation and working on actual solutions, very much in a solution-oriented process.
So we are really looking at actual problem solving rather than abstract discussion. I think these are really the important approaches when we are talking about topics like data, which concern a lot of different policy spheres. And we also need to talk about how the technology actually works in the reality of the digital economy. I'll stop here, but happy to continue.

Timea Suto: Thank you so much, Maiko. That was very clear on what we need moving forward. And a huge thank you also to Japan and to the OECD for all the work that you are doing and the evidence-based and expert discussions that are being driven here. Because, as you've said, there needs to be political will, but there also needs to be expert conversation. Sometimes the two go hand in hand, and sometimes there is an issue of what the political conversation is versus what the actual evidence-based expert discussion is. To respond to your call to break things down a little bit, we'll try to do that with the next speakers, at least differentiating between the issues when we talk about non-personal data and the issues when we talk about personal data. So I'll turn to Dave first online and focus a little bit on what policy frameworks enable the cross-border flow of non-personal data, and then I'll ask Robyn to talk about the personal side.

Dave Pendle: Sounds good, thank you. And I would also just want to reiterate my appreciation, respect and thanks both to Japan, for leading the way on DFFT for so long in such an impactful way, and also to the OECD for convening these discussions and bringing together the right stakeholders to really move the needle in ways that have accelerated and that a lot of folks didn't think were quite possible not too long ago. So I think things are getting better in a pretty quick fashion, due in large part to some of the contributors here. Non-personal data obviously comprises a massive amount of global data, and it's a big driver in the global economy, but of course its usefulness, its ability to solve problems related to global health, medicine, global warming and scientific research, detecting cyber attacks and protecting critical infrastructure, depends on its ability to flow across borders. And maybe I'll pause there for just a second, mindful of the three-minute request, but Robyn's point about the impact on cybersecurity is a really good one. The ability to detect malicious cyber actors today and to thwart them depends on observing certain telemetry and signals that are traveling across the internet. Microsoft has publicly said that we scan about 78 trillion signals every single day in trying to detect malicious cyber activity, and we're tracking something like 1,500 threat actors. That has resulted in a really good ability to protect the internet as a whole, and it is made so much harder if you have limitations on the ability to see that data across borders. That's one really concrete example in the cybersecurity space that probably doesn't get enough attention, but it's really a critical one. So any restrictions that are placed on non-personal data really do need to remain balanced. They need to reflect the very specific purposes of that non-personal data and, of course, ensure that access, usability and the ability to transfer remain. Most importantly, and this is probably the theme throughout the entire panel, the approaches need to be harmonized, and it's critical that they be multilateral and interoperable. Nothing will stifle innovation more than a patchwork of onerous and sometimes conflicting regulatory requirements across jurisdictions. So that part is just so critical. Policymakers, of course, have to work together, learn from each other, and pursue evidence-based policies that reflect the nuances of this discussion and debate and are mindful of how the digital economy operates and the many ways that society benefits. There is some really important work going on with non-personal data; DFFT and the OECD are at the top of the list. There are also some really valuable success stories and precedents, many of which are on the personal data side, and I suspect Robyn will touch on those. But if I could just quickly note the OECD Trusted Government Access principles: one of the reasons why those were so successful is because they did exactly what we just discussed about bringing policymakers together, bringing all of the stakeholders into a room. So often different audiences in these discussions almost talk past each other. They speak from their own vantage points; I'm certainly guilty of that as well.
But the way the OECD convened those discussions and brought in privacy people and national security stakeholders, which are often not part of these discussions, as well as the law enforcement stakeholders and others, was, I think, why it was so successful and why they were able to find as much consensus as they did. So convening the right people is such an important part of that formula. And we've seen this success with the Data Privacy Framework; the current evidence-sharing negotiations between the U.S. and the EU are really critical to show that these kinds of bilateral and multilateral discussions are blossoming. And it's just a start. We need a lot more. There are a lot more countries out there that need to be represented. But when governments sit down and work on these hard problems together, they find they have more commonalities than differences. Thank you.

Timea Suto: Thank you, Dave. Robyn?

Robyn Greene: Oh, is this working? Okay. Well, first, I want to echo everything that Dave just said in terms of thanks to the government of Japan, to the OECD, and also, Bertrand, to you and the Datasphere Initiative for the years of work that you've done and the incredible progress that we've made, having these kinds of permanent places to have these conversations, which I hope we have more of, and discussions around exploratory or experimental approaches to regulation, like sandboxes and things like that. Given the three-minute limitation, I'm going to just burn through what I think are the seven key things that we should keep in mind when we're thinking about how to make sure that we are protecting personal data and promoting data free flows with trust. Many of them were covered by Dave, because ultimately, even when you're talking about non-personal data, it's the same kind of technical issue, and so some of this will sound a little similar. I think the first thing, and this is very specific to personal data, is to attach rights and enforceability of obligations to the data rather than to the data subject. By doing this, you can ensure that the rights and obligations travel with the data, irrespective of where in the world the data is stored or transferred, and you don't have to worry about whether the data subjects seeking to enforce their rights are in that same jurisdiction. In addition to that, international collaboration is key, beyond things like the Trusted Government Access principles and forums for discussion like the IAP. There are opportunities for collaboration and multi-stakeholder engagement and for adopting shared norms around the basic things that are interfering with, or that might facilitate, better data free flows with trust. This would be things like promoting adoption of global cross-border privacy rules, global CBPRs. It would also be things like promoting countries becoming party to the Budapest Convention, which would also give those jurisdictions the benefit of access to the kinds of data sharing that will happen under the Second Additional Protocol. And that, I think, is one of the most important things that we can do, because it will also help with one of the other really important things, which is increased interoperability and harmonization across laws and legal standards. By joining the Budapest Convention, I think that is something that can help to achieve that kind of interoperability. But when you're talking about non-cybercrime and evidence-sharing regulations, I do think it's still extremely important to focus on whether and how you can improve the interoperability of domestic regulation with other jurisdictions' regulations. The next thing is a holistic assessment of the policy goals. I think one of the problems that we have is not only that we have these conversations in silos and in siloed sectors, but we also think about digital policy in silos. We think about privacy as living in its own silo, and safety as living in its own silo, and cybersecurity. But increasingly, the reality is that all of those things are in a melting pot together. And we need to be able to look at what the various policy goals are when we're assessing our data governance frameworks, and figure out what is the best way to get to the end goal, rather than how to regulate each individual silo as perfectly as possible.
Because then you’re not going to build that kind of intersectionality that you need, and you might wind up having data governance approaches that undermine economic policy goals, or cybersecurity goals, or the like. In addition to that, I think having an understanding of the legal and policy environments that invite foreign investment in data centers is critical. One of the things that we see as many of the jurisdictions that are considering data localization requirements are doing it as a means of forcing domestic investment. And that is really not an effective way to encourage foreign investment in jurisdictions. The most successful jurisdictions that are inviting for foreign investment in building data centers have certain qualities, like rule of law, have an open regulatory environment that is very predictable, and basically have economic environments that make it possible for companies to build data centers. And then there’s infrastructure issues and things like that, making sure that you have the kinds of stability in electricity access, and clean water, and things like that. In addition to that, though, I think we need to do a lot more work at the outset of drafting regulations, particularly where those regulations may restrict the flows of data, making sure they’re technically compatible with the internet infrastructure and consistent with the values of an open, interoperable, and secure internet. If that can be the North Star for all of the regulations that we put forward, then I think we’ll do a much better job at promoting data flows while accomplishing many of the other policy goals. And finally, look to the future. We cannot just regulate for what the technology of today is. We need to be regulating for what the technology of tomorrow will be. AI may be one of the best current examples of that. Restricting data transfers internationally is very deleterious to the development of effective and accurate AI models. They, of course, require diverse data sets, accurate data sets, and significant amounts of data in some cases. And so when we’re thinking about this, not only in the context of AI development, but also in the context of what’s going to be the next technology, I think we should be thinking about today’s regulations in the context of how it will impact the future.
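
A small sketch of the first point above, attaching obligations to the data record itself so that they travel with it regardless of where it is stored or who holds it, is shown below. The class, field names, and obligation labels are illustrative assumptions, not an actual Meta or regulatory schema.

```python
from dataclasses import dataclass, field

@dataclass
class GovernedRecord:
    # Obligations are attached to the record itself, so they travel with the
    # data wherever it is stored or transferred (a "sticky policy" sketch).
    payload: dict
    origin_jurisdiction: str
    obligations: set = field(default_factory=lambda: {
        "purpose_limitation", "deletion_on_request", "breach_notification"})

    def permits(self, action: str) -> bool:
        # Enforcement keys off the record's own policy, not the subject's location.
        if action == "secondary_use":
            return "purpose_limitation" not in self.obligations
        return True

record = GovernedRecord(payload={"user_id": "abc123"}, origin_jurisdiction="EU")
print(record.permits("secondary_use"))  # False: the obligation travels with the data
```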

Timea Suto: Thank you, Robyn. Quite a lot of mentions of the expert conversations needed, overviews of policy systems, trying to figure out commonalities, holistic approaches, and a lot of mentions of the OECD. So Clarisse, if you'd like to respond to any of this, but also if you'd like to highlight anything in particular that the OECD does to try and drive forward these frameworks.

Clarisse Girot: Yeah, thanks so much. And thanks to everyone for praising the work of the OECD in this space, and of Japan, of course. I just jumped on board of the train as it had already left the station; all credit to Audrey Plonk, who really initiated this work at the OECD on government access. To build on what Bertrand was saying, indeed, we have a globalization of criminal evidence: criminal evidence is now located in another jurisdiction 80% of the time, if I listen to what experts around us have been telling us, and we also see that national security agencies are now part of the global ecosystem on data flows. So it is a fact of life. And it was very difficult to touch these issues beforehand, also because there is no such thing as a national security community, if you will, the way there is a privacy community or a Global Privacy Assembly for privacy regulators. And, to build on what Maiko was saying, what is key is to bring the right people into the room. We shouldn't underestimate how difficult that can be, particularly in the area of government access; but if we've done it in this particular field, which is probably one of the hardest, it is definitely possible in other areas. Just FYI, we've been talking a lot about the declaration since its adoption, and there is not so much out there about what we're doing with it, but there is a lot of work happening behind the curtain. We haven't stopped with the adoption of the declaration: we're promoting it very hard, we're inviting non-OECD countries to adhere to the instrument, and we've been doing a lot of work which is extremely promising. We hope that in 2025 we will see more interesting developments to share with you. Just one other example, and I will close with that, of the possibility of doing the right thing once you have the right people in the room, building on the data free flow with trust community that we have built at the OECD as part of the so-called IAP. There is a working group that feeds into a very complex area of work on the intersection of cross-border payments and data frameworks, work which we do with the Financial Stability Board, the Financial Action Task Force, the IMF, the BIS, et cetera. It is the first time that everybody comes into the room to discuss the challenges met by cross-border payments and their intersection with cross-border data flow regulations. This is happening at a fairly fast pace. It is extremely technical, it is extremely complex, and you cannot make any progress without having everybody in the room talking to each other and making efforts to understand each other. It takes, here again, a lot of effort and a lot of resources, to be honest. The data free flow with trust community, this particular working group, is exceptionally useful because we bring in all the payments operators and financial institutions that feed into the expertise that we need to do the right policy work on this side. So it's just an example: if you bring the right people, there is hope. I could also mention the work on privacy-enhancing technologies. I am very happy to keep discussing with Dave and Robyn on non-personal data and cybersecurity issues in particular. As long as there is a space to meet and a team that can animate a network of experts, there is hope.
And I think, really, I don't want to sound naive or anything, because this takes a lot of hard work; believe us, we know what it takes at the OECD. But to go back to what I was saying earlier, I think there is greater awareness of the actual risks for society as a whole of impeding cross-border data flows, not only in terms of compliance challenges for businesses. I think this has come to the top of the agenda for global leaders. There is a greater awareness of the solutions that are already out there, that we're not starting from scratch, as I was saying earlier; there are communities of experts out there, and there are a number of legal frameworks out there that we can build on. And some conversations remain exceptionally hard, and maybe we need to keep working on those. Data localization, here again with a gradation of data localization requirements, is exceptionally hard. But again, if we've made progress in these very complex areas, there is no reason why we cannot have sound, stable, long-term discussions here. And of course, fora like the IGF are absolutely fundamental in that respect. And with that, I will stop.

Timea Suto: Thank you so much, Clarisse. I think we have maybe 10 minutes for one or two questions from the audience, if there's anything that those who are listening to us online or here in the room might want to raise. I'm sorry, I can't see everybody from here, but yeah, maybe I'll pass the mic to you.

Audience: Good morning, I'm Rapidsun from Cambodia. Based on the discussion, I would like to ask: how do you, like Meta or Microsoft or the OECD, assist developing countries on data governance? Because, for example, in Cambodia, not only do we not have national data governance, but policymakers are also not well aware of, or do not see the comprehensive picture of, data governance, especially cross-border data flows. So my question is, how can you assist developing countries? Thank you.

Timea Suto: Thank you for your question. Are there any others that we could maybe group together or should we take them one by one? No, I don’t see anything online either. No, I see, sorry, apologies for jumping in. I see someone online. Let’s go to Evgenia. Can the speaker who raised their hand online try and speak? And then we have another question here in the room and maybe we’ll go back online to the speaker. Jacques, please. Thank you.

Audience: My name is Jacques Beglinger. I'm also a member of the ICC delegation. What I see in practice is a certain difficulty, when it comes to regulation and when it comes to handling data, in distinguishing between personal data and non-personal data. And I think it is absolutely crucial for industry in particular, and for consumers, to know exactly which policies to look into. So maybe the panel can enlighten me somehow on how to distinguish them.

Timea Suto: Okay. I think I’ve heard something online, so we might be able to hear the question from online. If you would like to please try again. Yeah.

Audience: Thank you. Glad to see you. It's a very interesting discussion. So my question may be more general; you touched on many aspects, and I guess it's very interesting. I would like to pick up on an intervention of my colleague who mentioned the G20 discussions and the arrangements between the United States and Europe. I guess there is a real problem in terms of how we reconcile an approach like the GDPR in Europe with more flexible regulations. In this case, how can we find common ground? When trying to collaborate on personal data or market data or whatever, should we use a less restrictive or a more restrictive approach? In both cases, some country will not be happy, because in the case of Europe, the GDPR provides quite restrictive limitations and regulations, and in Russia we have something very similar. But when we go to the Asian market, for example, we have to have bilateral cooperation and regulate case by case. If we try to find common ground at the less restrictive level, I'm not sure my country, or Europe, could agree to that degree of regulation. How could we proceed, how do we find a middle ground in this case? Because each country, like the United States, Europe, or Japan, has its own reasons for having the regulations it has. What is the possible approach? Because, frankly speaking, I do not believe in having some globally equal or unified regulation; I guess that is impossible to reach in the next years. Thank you.

Timea Suto: Thank you. Thank you everyone for your questions. I'll turn it back to the speakers and see if anybody would like to pick one question in particular or address all of them together. I think there's a common thread there: how do we drive towards actual, tangible solutions to this? How do we assist developing regions, or those who have questions or different approaches to this? And how do we drive for commonalities? We know that it's impossible to have one single global regulation, and I don't think anybody is driving for that. But I'm just wondering if the speakers see solutions for how we drive towards more harmonized or more interoperable approaches. We've lost the online room, but I hope that we can, yeah, we see you now. Okay, perfect. Now I see all the speakers. Who wants to go first? Yeah, Bertrand first and then Clarisse. Go ahead.

Bertrand de La Chapelle: Quickly, a few elements. The first thing is that we're using the term interoperability, and legal interoperability is actually an expression that I personally have pushed a lot in the last few years. But at the same time, while this is a very good concept, its implementation is not really something that we are able to describe very, very clearly. So it's an aspirational element, but I think we need to have a serious discussion of what we mean by interoperability, because we know what technical interoperability is. Legal interoperability is a little bit more difficult. It's about envelopes of regulation: what is required, what is acceptable, and what is forbidden, which is what in logic is called the deontic operators. How do you combine the overlap of legislations when you have a situation where they actually both apply? The debate that I was mentioning, regarding whether it is an adequacy or a CBPR type of approach, is a typical case. I think we need to explore this topic a little bit more. The second thing, to go to what Jacques was saying: we don't pay enough attention to non-personal data. Personal data is an extremely important element, but there is so much value that can be created by non-personal data that we need to be very careful not to be obsessed by just one dimension, and we need to go to other things. What is really interesting in his question is that, as he said, the frontier between the two is not as clear-cut. In particular, in a field that I'm very interested in, which is medical data, I think there is an enormous potential in the training of AI for diagnostics. This requires an enormous amount of data to train the AI. And Clarisse was mentioning the work done on PETs; this is typically something that can be done using federated learning, which is very applicable, and medical imagery is something that can relatively easily be anonymized without too much fear of de-anonymization. So this is a perfect example of something that leverages a new technique, federated learning, which is different from just sharing the databases: using anonymization to bring data that is normally very sensitive to something that is anonymized, to develop something that is clearly an AI application beneficial for humanity. And if I want to throw an idea in here, for people who are familiar with how organ donation functions: in most cases, when you have an accident, your organs can be used if you have opted in, to say yes, my organs can be used. In some countries, and I think it's the case in France, they've moved to an opt-out: unless you say you don't want your organs to be used for transplants, if you have an accident and they can be used, the organs are going to be used. I'm wondering whether an equivalent shouldn't be explored for medical imagery: you have the right to the personal information that is related to your medical imagery and your personal data, absolutely, but there is a global public interest in making the anonymized picture available, under certain conditions, for the training of AI. And I think this is a discussion that is typically around trust; it's about new technologies like PETs that respond to the motto I was mentioning of responsibly unlocking the value of data for all. I think we need to have a more innovative approach to how we leverage data and how we responsibly share data.
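For readers less familiar with the technique Bertrand refers to, the sketch below illustrates the basic idea of federated learning in a federated-averaging loop: training happens where the data sits, and only model parameters are exchanged, never the underlying records. It is a minimal toy in Python with synthetic numbers; the three "sites", the features and all values are illustrative assumptions, not anything drawn from the session or from any real system.

# A minimal sketch of the federated-learning idea referred to above: each
# site trains locally on data it never shares and only sends model weights
# to a coordinator, which averages them (federated averaging).
# All names, shapes, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, features, labels, lr=0.1, epochs=5):
    # One site's training pass: plain logistic regression by gradient descent.
    # Only the resulting weights leave the site, never the raw records.
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-(features @ w)))   # sigmoid
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

# Three hypothetical sites with synthetic "image feature" vectors.
true_w = rng.normal(size=16)
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 16))
    y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)
    sites.append((X, y))

global_w = np.zeros(16)
for _ in range(20):                                      # federated rounds
    # Each site computes an update from its local data only.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # The coordinator averages the updates (equal weights here).
    global_w = np.mean(local_ws, axis=0)

# Sanity check on pooled data that the coordinator never actually sees.
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
accuracy = (((X_all @ global_w) > 0).astype(float) == y_all).mean()
print(f"accuracy of the federated model: {accuracy:.2f}")

In a real medical-imaging setting the local model would of course be far richer and additional safeguards (secure aggregation, differential privacy) would typically sit on top, but the division of labour is the same: the data stays put, the parameters travel.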

Timea Suto: Thank you for that. Clarisse, you wanted to comment on that?

Clarisse Girot: Very quickly. First of all, on the comment and question of our colleague from Cambodia, I think it's very important. Cambodia sits within ASEAN, and there are lots of very interesting developments within ASEAN. I was part of an expert group working with the ASEAN Working Group on Data Governance, to put it very shortly. Together, we put together a set of contractual clauses, the ASEAN model contractual clauses, which are a sort of simplified version that works for ASEAN and, basically, the Asian region, and which can be articulated with the EU standard contractual clauses, some of which were made back in 2001. It was actually a very interesting benchmarking exercise: in ASEAN, given the state of the laws at the national level, we do not need more than this, and it actually works. It's plug and play. The local ecosystem is less used to complex data protection laws like we have them in Europe, in the US and elsewhere, and therefore it works. I was in Singapore a few weeks back, and practitioners there told me that for their ASEAN-based business, their clients actually use these ASEAN model clauses. In other words, there is no one size fits all, for sure. And there are similar initiatives in Latin America, which are extremely interesting to watch as well. I would point you to a report that we published last year called Moving Forward on DFFT, on data free flow with trust. We did a huge range of interviews at the global level, in all regions of the world, to understand what the particular challenges were. Government access is always a challenge, but generally speaking there are lots of very positive findings in there, and lots to build on. So I think there is room for cooperation here at the OECD. We're not limited by the boundaries of the 38 member countries, far from it; we work with a lot of stakeholders outside, including governments, of course, outside of the membership. Very happy to keep discussing this. It's very important not to impose the idea of harmonization. And I know we've talked a lot about the Brussels effect, which is a fact; it's true that the GDPR was, in a way, exported. But that does not mean that, beyond the principles and some key rules, like accountability, for instance, and basic data subject rights, you sometimes have to export the complexity as well, which in turn builds on a long legacy behind those principles. And I think there is a global acknowledgement of that. So that's a good point. I won't go too far into the PETs conversation because it is extremely complex. Just to say that at the OECD, we also have an important recommendation on health data governance, which looks in particular at the sharing of health data, health data being understood very broadly. And indeed, the border between personal and non-personal data can be a bit blurry, and there can be work done here. But still, there is a very clear difference with non-personal data, like in the cybersecurity space, attacks on infrastructure, et cetera, that has nothing to do with personal data at all. So we need to look at the border in the middle, like anonymized data: how anonymized is it, how easily can it be de-anonymized or re-identified? There is still a margin of maneuver here, and of cooperation between privacy regulators in particular, with the support of industry and civil society groups whose expertise is sometimes underestimated in this area. Anyway, there's too much to say in an hour and a half, but I'm happy to continue the conversation offline.
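To make the point about the grey zone of "anonymized" data a little more concrete, the short Python sketch below shows a classic linkage attack: a table with names removed can still be re-identified by joining it with a public register on shared quasi-identifiers such as birth year and postcode. Every record, field name and value here is invented purely for illustration and has no connection to the OECD work discussed.

# Minimal illustration of why "anonymized" data can sometimes be re-identified:
# joining a de-identified table with a public one on shared quasi-identifiers.
# All data below is invented for illustration only.

# A "de-identified" health dataset: names removed, diagnosis kept.
health_records = [
    {"birth_year": 1984, "postcode": "1205", "diagnosis": "asthma"},
    {"birth_year": 1990, "postcode": "8001", "diagnosis": "diabetes"},
    {"birth_year": 1984, "postcode": "3011", "diagnosis": "flu"},
]

# A public register (for example a voter roll) with names and the same attributes.
public_register = [
    {"name": "A. Example", "birth_year": 1984, "postcode": "1205"},
    {"name": "B. Sample", "birth_year": 1990, "postcode": "8001"},
]

# Linkage attack: match on (birth_year, postcode) to re-attach identities.
by_quasi_id = {(p["birth_year"], p["postcode"]): p["name"] for p in public_register}

for record in health_records:
    name = by_quasi_id.get((record["birth_year"], record["postcode"]))
    if name:
        print(f"{name} is likely the person with diagnosis: {record['diagnosis']}")

This is exactly the kind of residual risk that techniques such as k-anonymity, aggregation or the PETs mentioned in the discussion are meant to reduce.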

Timea Suto: Thank you so much, Clarisse. I think we have about three minutes left on the panel, and three panelists who haven't spoken in this last round. Any last words, key takeaways from Maiko, Dave, Robyn? Go ahead, Maiko.

Maiko Meguro: Perhaps a few words from myself. It was great to have this discussion across the private sector, international organizations and also government, which is where I sit, because we really see that we need to have the right people in the room in building DFFT, working on the real problems and setting the right questions. That has also been proved by this panel; that's how I see this panel. We also heard today from a lot of private sector voices that differences in regulation and uncertainty around government access are really the issue. And I think that through DFFT we should sit together and put the legal and technical perspectives together to identify what the real, genuine problems are that companies have, and also what the actual purpose and function of those regulations is, what they actually need to tackle. Japan is working on this sort of exercise with experts to assess more than a thousand regulations in view of the changing assumptions and conditions following digitalization. So we're really trying to work on integrating privacy-enhancing technologies with our governance, and to see where each regulation comes from, what has changed and what needs to be changed in order to adapt our society to these digitalizing realities. So we really think that DFFT is materializing this sort of approach: bringing together people from different sectors and trying to break down the silos so that we can have more innovative solutions to the new situation. One last note: it is very important to keep DFFT on the agenda for high-level political discussions like the G7, the G20 or other forums, because high-level political instruction is very important to push governments to move towards innovative approaches. The Japanese government keeps trying to politically leverage DFFT in high-level political discussions, but please remember, those of you from different sectors, that those high-level forums are really the means to set these important topics as a priority for governments. This ends my words, but thank you very much.

Timea Suto: Thank you, Maiko. Dave?

Dave Pendle: I'll just take maybe 15, 20 seconds. Cooperation on data governance requires trust, and you'll never achieve that unless you're talking to each other. So it's been really encouraging to see a lot of governments roll up their sleeves and do just that. And to make one point that I think has been made a few times but is probably the most critical: these conversations about problems must be grounded in real-world experience. It's important that policymakers not solve for misperceptions, but solve for problems and risks that are evidence-based. Bringing in the right people is key to that. So maybe I'll just close with, in the words of Clarisse, if you bring in the right people, there is hope. Thank you.

Timea Suto: Robyn?

Robyn Greene: Sure, it's hard to have to follow this group of folks wrapping up their final thoughts, but I really do agree with everything that's been said. I think the only other thing I would add is the really critical importance of keeping in mind the technical limitations, the importance of the technical compatibility of regulations with the global internet infrastructure, and the importance of ensuring that you're looking at each of these policy issues not in a silo, but in the larger context of what the policy and data governance environment looks like, and what the implications of one regulation that restricts or promotes data flows could be for other policy goals. Thank you.

Timea Suto: Thank you so much. I'm being signaled that we've run over time, so I won't take too long in wrapping this up. I would just like to highlight that we've heard quite a lot of commonalities here: the need for common principles at the top, common direction at the top, and political will at the top to want to address this. That needs to translate into a holistic view, based on understanding and evidence of what the issues actually are, and that needs to be followed up by action by experts in multi-stakeholder forums such as this one, to ensure that the will and the principles translate into actionable solutions that don't just look good on paper but are actually implementable by those who work on this on the ground. We've heard quite a few examples of this, from the work of the Internet and Jurisdiction Policy Network back in the day on e-evidence, to the work of the DataSphere Initiative, to the work by Japan and the OECD, and what companies are doing on the ground and what they need to progress on this. So if you want to hear more about what the private sector thinks, come by the ICC BASIS booth. We have a QR code there with our publications and data; please make sure to take a look. You'll also find us online. And with that, I just want to say a huge thank you to the panelists for being here and for this very rich discussion. To all of you who have stayed up late or woken up very early online, thank you as well, and to everybody who joined us here in the room. And of course a huge thanks to my team, who helped us pull this session together. So with that, thank you so much and a huge round of applause for the panelists.

Timea Suto

Speech speed: 145 words per minute
Speech length: 2281 words
Speech time: 943 seconds

Argument: Data underpins global economy but faces mistrust and restrictions

Explanation: Data is crucial for the global economy, supporting business operations and government services. However, there is mistrust in data and data-powered technologies, leading to restrictive policies and regulations.

Evidence: Concerns about national security, privacy, and economic safety being compromised if data crosses borders

Major Discussion Point: Importance and challenges of cross-border data flows

Agreed with: Bertrand de La Chapelle, Maiko Meguro, David Pendle, Robyn Greene, Clarisse Girot

Agreed on: Importance of cross-border data flows for global economy

Bertrand de La Chapelle

Speech speed: 135 words per minute
Speech length: 2162 words
Speech time: 958 seconds

Argument: Legal landscape is fragmented but internet infrastructure enables free flow

Explanation: The legal landscape for data governance is fragmented due to different laws in 190 countries. However, the technical infrastructure of the Internet inherently allows for free flow of data, creating tension between legal and technical realities.

Major Discussion Point: Importance and challenges of cross-border data flows

Agreed with: Timea Suto, Maiko Meguro, David Pendle, Robyn Greene, Clarisse Girot

Agreed on: Importance of cross-border data flows for global economy

Argument: Focus on access rights to data rather than data sharing

Explanation: The current approach to data sharing is outdated. Modern data usage involves API access and privacy-enhancing techniques rather than transferring entire databases.

Evidence: Examples of homomorphic encryption and federated learning that allow leveraging data without sharing it

Major Discussion Point: Approaches to enable trusted data flows

Differed with: Maiko Meguro

Differed on: Approach to data governance

Maiko Meguro

Speech speed: 144 words per minute
Speech length: 1816 words
Speech time: 753 seconds

Argument: Data governance must balance utilization and protection

Explanation: Data governance should consider both maximizing data utilization and ensuring data protection and security. Countries need to find a balance that aligns with their social priorities, cultures, and economic structures.

Evidence: Example of conflict between investment agreements and environmental regulations in different countries

Major Discussion Point: Importance and challenges of cross-border data flows

Agreed with: Timea Suto, Bertrand de La Chapelle, David Pendle, Robyn Greene, Clarisse Girot

Agreed on: Importance of cross-border data flows for global economy

Differed with: Bertrand de La Chapelle

Differed on: Approach to data governance

Argument: Work on concrete interoperability solutions and institutionalize processes

Explanation: Governments should focus on addressing arbitrary treatment and lack of transparency in data governance. They should work on concrete interoperability solutions and institutionalize processes for relevant actors to engage with data governance issues.

Major Discussion Point: Approaches to enable trusted data flows

Agreed with: David Pendle, Robyn Greene, Clarisse Girot

Agreed on: Need for multi-stakeholder approach and international cooperation

Argument: Keep data free flow with trust on high-level political agendas

Explanation: It's important to maintain data free flow with trust as an agenda item for high-level political discussions like G7 and G20. High-level political instruction is crucial to push governments towards innovative approaches in data governance.

Major Discussion Point: Role of international cooperation and harmonization

David Pendle

Speech speed: 164 words per minute
Speech length: 1706 words
Speech time: 624 seconds

Argument: Government access requests fuel mistrust in data flows

Explanation: Government requests for user data are a source of mistrust in cross-border data flows. This issue is driven by sovereign interests in protecting citizens and national security, leading to expanded surveillance authorities.

Evidence: Microsoft receives about 60,000 legal demands from governments worldwide for about 110,000-120,000 different users each year

Major Discussion Point: Importance and challenges of cross-border data flows

Argument: Pursue evidence-based policies reflecting nuances of digital economy

Explanation: Policymakers must work together and pursue evidence-based policies that reflect the nuances of the digital economy. It's crucial to solve for real problems and risks rather than misperceptions.

Evidence: Success of OECD Trusted Government Access Principles due to bringing together diverse stakeholders

Major Discussion Point: Approaches to enable trusted data flows

Agreed with: Maiko Meguro, Robyn Greene, Clarisse Girot

Agreed on: Need for multi-stakeholder approach and international cooperation

Argument: Non-personal data crucial for economy, research, cybersecurity

Explanation: Non-personal data is essential for the global economy, scientific research, and cybersecurity. Its ability to solve global problems depends on its capacity to flow across borders.

Evidence: Microsoft scans about 78 trillion signals every day to detect malicious cyber activity

Major Discussion Point: Differentiating personal and non-personal data flows

Agreed with: Timea Suto, Bertrand de La Chapelle, Maiko Meguro, Robyn Greene, Clarisse Girot

Agreed on: Importance of cross-border data flows for global economy

Robyn Greene

Speech speed: 152 words per minute
Speech length: 2362 words
Speech time: 932 seconds

Argument: Data flow restrictions lead to internet fragmentation

Explanation: Restrictions on data flows can lead to internet fragmentation, resulting in regional or national silos. This fragmentation has significant implications for cultural, social, and economic norms and international cooperation.

Evidence: Examples of express data localization requirements and de facto localization through regulatory benchmarks

Major Discussion Point: Importance and challenges of cross-border data flows

Agreed with: Timea Suto, Bertrand de La Chapelle, Maiko Meguro, David Pendle, Clarisse Girot

Agreed on: Importance of cross-border data flows for global economy

Argument: Attach rights and obligations to data rather than data subjects

Explanation: To ensure data protection while enabling flows, rights and obligations should be attached to the data itself rather than to data subjects. This approach ensures that protections travel with the data regardless of its location.

Major Discussion Point: Approaches to enable trusted data flows

Argument: Promote adoption of global cross-border privacy rules

Explanation: International collaboration is key to enabling trusted data flows. Promoting the adoption of global cross-border privacy rules can help achieve interoperability and harmonization across legal standards.

Evidence: Example of the Budapest Convention and its second additional protocol

Major Discussion Point: Role of international cooperation and harmonization

Agreed with: Maiko Meguro, David Pendle, Clarisse Girot

Agreed on: Need for multi-stakeholder approach and international cooperation

Clarisse Girot

Speech speed: 170 words per minute
Speech length: 2827 words
Speech time: 994 seconds

Argument: Data flows are crucial for innovation and economic growth

Explanation: Cross-border data flows are essential for innovation and economic growth. The OECD has been working on this issue for a long time, recognizing the need to balance data flows with privacy and fundamental human rights.

Evidence: OECD privacy guidelines from 1980

Major Discussion Point: Importance and challenges of cross-border data flows

Agreed with: Timea Suto, Bertrand de La Chapelle, Maiko Meguro, David Pendle, Robyn Greene

Agreed on: Importance of cross-border data flows for global economy

Argument: Bring right stakeholders together to find common ground

Explanation: To address data governance challenges, it's crucial to bring the right stakeholders together. This approach has proven successful in addressing complex issues like government access to data.

Evidence: OECD's work on the intersection of cross-border payments and data frameworks

Major Discussion Point: Approaches to enable trusted data flows

Agreed with: Maiko Meguro, David Pendle, Robyn Greene

Agreed on: Need for multi-stakeholder approach and international cooperation

Argument: Build on existing frameworks like OECD guidelines

Explanation: There are existing frameworks and communities of experts that can be built upon to address data governance challenges. It's important to recognize and leverage these resources rather than starting from scratch.

Evidence: OECD privacy guidelines and recommendation on enhanced access to and sharing of data

Major Discussion Point: Role of international cooperation and harmonization

Agreements

Agreement Points

Importance of cross-border data flows for global economy

Speakers: Timea Suto, Bertrand de La Chapelle, Maiko Meguro, David Pendle, Robyn Greene, Clarisse Girot

Arguments: Data underpins global economy but faces mistrust and restrictions; Legal landscape is fragmented but internet infrastructure enables free flow; Data governance must balance utilization and protection; Non-personal data crucial for economy, research, cybersecurity; Data flow restrictions lead to internet fragmentation; Data flows are crucial for innovation and economic growth

Summary: All speakers agreed on the critical importance of cross-border data flows for the global economy, innovation, and development, while acknowledging the challenges and risks associated with these flows.

Need for multi-stakeholder approach and international cooperation

Speakers: Maiko Meguro, David Pendle, Robyn Greene, Clarisse Girot

Arguments: Work on concrete interoperability solutions and institutionalize processes; Pursue evidence-based policies reflecting nuances of digital economy; Promote adoption of global cross-border privacy rules; Bring right stakeholders together to find common ground

Summary: Speakers emphasized the importance of bringing together diverse stakeholders, including governments, private sector, and civil society, to develop effective and balanced approaches to data governance and cross-border data flows.

Similar Viewpoints

Speakers: Bertrand de La Chapelle, Robyn Greene

Arguments: Focus on access rights to data rather than data sharing; Attach rights and obligations to data rather than data subjects

Summary: Both speakers advocated for a shift in approach to data governance, focusing on access rights and attaching obligations to the data itself rather than traditional notions of data sharing or subject-based rights.

Speakers: David Pendle, Clarisse Girot

Arguments: Pursue evidence-based policies reflecting nuances of digital economy; Build on existing frameworks like OECD guidelines

Summary: Both speakers emphasized the importance of building on existing frameworks and pursuing evidence-based policies that reflect the realities of the digital economy.

Unexpected Consensus

Importance of technical compatibility in regulations

Speakers: Bertrand de La Chapelle, Robyn Greene

Arguments: Focus on access rights to data rather than data sharing; Data flow restrictions lead to internet fragmentation

Summary: Despite coming from different perspectives (DataSphere Initiative and Meta), both speakers highlighted the importance of considering technical realities and compatibility when developing data governance regulations, which is not always a primary focus in policy discussions.

Overall Assessment

Summary: The speakers generally agreed on the importance of cross-border data flows for the global economy, the need for multi-stakeholder approaches, and the importance of balancing data utilization with protection. There was also consensus on the need for evidence-based policies and building on existing frameworks.

Consensus level: High level of consensus on core principles, with some variations in specific approaches. This suggests a strong foundation for further international cooperation on data governance, but also highlights the complexity of implementing these principles in practice across different jurisdictions and stakeholder groups.

Differences

Different Viewpoints

Approach to data governance

Speakers: Bertrand de La Chapelle, Maiko Meguro

Arguments: Focus on access rights to data rather than data sharing; Data governance must balance utilization and protection

Summary: While de La Chapelle emphasizes a shift towards access rights and privacy-enhancing techniques, Meguro stresses the need for a balance between data utilization and protection, considering social priorities and cultural differences.

Unexpected Differences

Overall Assessment

Summary: The main areas of disagreement revolve around the specific approaches to data governance, balancing data utilization with protection, and the methods for achieving international cooperation and harmonization.

Difference level: The level of disagreement among the speakers is relatively low. While there are some differences in emphasis and approach, the speakers generally agree on the importance of cross-border data flows, the need for trust, and the value of international cooperation. These minor differences in perspective are unlikely to significantly impede progress on the topic of harmonizing approaches for data-free flows with trust.

Partial Agreements

Speakers: David Pendle, Robyn Greene, Clarisse Girot

Arguments: Pursue evidence-based policies reflecting nuances of digital economy; Promote adoption of global cross-border privacy rules; Bring right stakeholders together to find common ground

Summary: All speakers agree on the need for international cooperation and harmonization, but they propose different approaches. Pendle emphasizes evidence-based policies, Greene advocates for global cross-border privacy rules, and Girot focuses on bringing diverse stakeholders together.


Takeaways

Key Takeaways

– Cross-border data flows are crucial for the global economy and innovation, but face challenges of mistrust and restrictions.

– A fragmented legal landscape exists alongside internet infrastructure that enables free data flow.

– Data governance must balance data utilization and protection.

– Restricting data flows can lead to internet fragmentation and hinder economic growth, innovation, and cybersecurity.

– International cooperation and harmonization of approaches are needed to enable trusted data flows.

– Differentiating between personal and non-personal data flows is important but not always straightforward.

– Multi-stakeholder engagement and evidence-based policies are crucial for developing effective data governance frameworks.

Resolutions and Action Items

– Continue work on formalizing multi-stakeholder processes at the international level

– Promote adoption of global cross-border privacy rules

– Keep data free flow with trust on high-level political agendas like G7 and G20

– Work on concrete interoperability solutions for data governance frameworks

– Pursue evidence-based policies that reflect the nuances of the digital economy

Unresolved Issues

– How to effectively distinguish between personal and non-personal data in practice

– How to achieve legal interoperability across different jurisdictions

– How to address the tension between data localization requirements and the need for cross-border data flows

– How to balance national security concerns with the need for cross-border data access

– How to assist developing countries in implementing effective data governance frameworks

Suggested Compromises

– Use of privacy-enhancing technologies to enable data sharing while protecting sensitive information

– Adopting a gradual approach to harmonization rather than seeking immediate global uniformity

– Focusing on interoperability of regulations rather than strict harmonization

– Using model contractual clauses adapted to regional needs, as done in ASEAN

Thought Provoking Comments

Comment: "The way it works today is through API, it's through rights of access to data. So many times the data doesn't travel really. It is just that you query it from another distant place. And even more, there are new techniques called privacy-enhancing techniques."

Speaker: Bertrand de La Chapelle

Reason: This comment challenges the common perception of data sharing and introduces new technical concepts that are reshaping how data flows work.

Impact: It shifted the discussion towards considering more nuanced and modern approaches to data flows, beyond simple data transfer models.

Comment: "So from our perspective, we must think about the effective means of having both enhancing flow, but also necessary protection according to rights and interests attached to the data."

Speaker: Maiko Meguro

Reason: This comment highlights the need for balance between data flow and protection, emphasizing the complexity of the issue.

Impact: It led to a more holistic discussion about the multifaceted nature of data governance, considering both benefits and risks.

Comment: "In a six-month period, so we're talking about 30,000 legal demands, you know, we typically get about 50 to 55 content disclosures that are cross-border. In the last reporting period, there was only one that pertained to an enterprise customer."

Speaker: David Pendle

Reason: This comment provides concrete data that challenges common perceptions about the frequency and scale of cross-border data disclosures.

Impact: It introduced an evidence-based perspective into the discussion, encouraging a more factual approach to assessing risks and concerns.

Comment: "Nothing will stifle innovation more than a patchwork of onerous and sometimes conflicting regulatory requirements across jurisdictions."

Speaker: David Pendle

Reason: This comment succinctly captures a key challenge in global data governance and its potential impact on innovation.

Impact: It reinforced the importance of harmonization and interoperability in data governance approaches, shaping subsequent discussion on policy frameworks.

Comment: "I think we need to be regulating for what the technology of tomorrow will be. AI may be one of the best current examples of that."

Speaker: Robyn Greene

Reason: This comment introduces a forward-looking perspective, emphasizing the need for adaptable regulations.

Impact: It shifted the discussion towards considering future technological developments in current policy-making, particularly highlighting AI as a key area.

Comment: "There is a global acknowledgement of that. So that's a good point. I won't go too far into the PETs conversation because it is extremely complex."

Speaker: Clarisse Girot

Reason: This comment acknowledges the complexity of privacy-enhancing technologies (PETs) while also noting global progress in understanding data governance issues.

Impact: It balanced the discussion by recognizing both progress and ongoing challenges, setting a realistic tone for future work in this area.

Overall Assessment

These key comments shaped the discussion by introducing nuanced technical perspectives, challenging common perceptions with data, emphasizing the need for balanced and harmonized approaches, and encouraging forward-looking policy-making. They collectively moved the conversation from theoretical concepts to practical considerations, highlighting the complexity of data governance while also pointing towards potential solutions and areas for future focus.

Follow-up Questions

How to implement legal interoperability in practice?

Speaker: Bertrand de La Chapelle

Explanation: The concept of legal interoperability is aspirational but its practical implementation is not clearly defined. This needs further exploration to understand how to combine overlapping legislations.

How to distinguish between personal and non-personal data in practice?

Speaker: Jacques Beglinger (audience member)

Explanation: There is difficulty in distinguishing between personal and non-personal data when it comes to regulation and handling data. This distinction is crucial for industry and consumers to know which policies apply.

How to find a middle ground between restrictive (e.g., GDPR) and more flexible data protection approaches?

Speaker: Evgenia (online audience member)

Explanation: Different countries have varying levels of data protection regulations. Finding common ground between restrictive and flexible approaches is challenging but necessary for international collaboration.

How can developed countries and organizations assist developing countries in establishing data governance frameworks?

Speaker: Rapidsun (audience member from Cambodia)

Explanation: Many developing countries lack national data governance frameworks and policymakers may not be fully aware of comprehensive data governance issues, especially regarding cross-border data flows.

How to explore innovative approaches to leveraging and responsibly sharing medical data for AI training?

Speaker: Bertrand de La Chapelle

Explanation: There is potential in using anonymized medical imagery data for AI training in diagnostics. This requires exploring new approaches, such as opt-out systems for data sharing, to balance personal privacy with public interest.

How to formalize multi-stakeholder processes at the international level for addressing data governance issues?

Speaker: Maiko Meguro

Explanation: There is a need for permanent forums where diverse stakeholders can gather to discuss and work on data governance issues over time, rather than relying on ad hoc discussions.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #49 Benefit everyone from digital tech equally & inclusively

WS #49 Benefit everyone from digital tech equally & inclusively

Session at a Glance

Summary

This workshop focused on how digital technologies can benefit everyone equally and inclusively. Speakers from various countries and backgrounds discussed challenges and opportunities in bridging the digital divide. Liu Chuang presented on using GIS technology to support sustainable agriculture in rural areas. Horst Kremers emphasized the importance of information governance and stakeholder involvement in disaster management. Xiaofeng Tao highlighted big data as a tool for reducing inequalities and supporting environmental monitoring. Ricardo Robles-Pelayo discussed challenges in closing the digital divide in Mexico and Latin America, emphasizing the need for universal internet access and digital skills training. Daisy Selematsela and Lazarus Matizirofa presented on democratizing digital scholarship and preserving cultural heritage through digitization in South Africa. Tamanna Mustary Mou focused on meaningful internet access for women, highlighting barriers such as affordability and digital skills gaps.

Key themes across presentations included the need for multi-stakeholder cooperation, infrastructure development, digital skills training, and policies to ensure equitable access. Speakers emphasized that digital technologies can support sustainable development goals but require intentional efforts to bridge divides based on geography, gender, and socioeconomic status. The discussion concluded with calls for continued collaboration and concrete actions to ensure digital technologies benefit everyone. Participants agreed that realizing the full potential of digital technologies for inclusive development requires ongoing dialogue and coordinated efforts across sectors and regions.

Keypoints

Major discussion points:

– Using digital technologies and big data to reduce inequalities and improve quality of life

– Challenges of the digital divide, especially for marginalized communities and women

– Importance of multi-stakeholder cooperation and governance in implementing digital solutions

– Role of education and capacity building in bridging the digital divide

– Potential of digital technologies to support sustainable development goals

Overall purpose:

The goal of this workshop was to explore how digital technologies can be leveraged to benefit everyone equally and inclusively, with a focus on reducing the digital divide and promoting sustainable development.

Tone:

The overall tone was collaborative and solution-oriented. Speakers shared insights and case studies from their respective fields and regions in a constructive manner. There was a sense of urgency about addressing digital inequalities, but also optimism about the potential of digital technologies to create positive change if implemented thoughtfully. The tone remained consistent throughout, with participants building on each other’s ideas in a collegial way.

Speakers

– Xiaofeng Tao: Professor, workshop moderator

– Gong Ke: Professor, Chair of CCIT

– Liu Chuang: Professor, Institute of Geography and Natural Resource Chinese Academic Science, Editor-in-chief of Global Change Research Data

– Horst Kremers: Chair of RIMA, Germany

– Ricardo Robles Pelayo: Professor at the University EBC campus, La Nepantla, Mexico

– Daisy Selematsela: Professor, University of Worcestershire

– Lazarus Matizirofa: University of Pretoria

– Tamanna Mustary Mou: PhD fellow at St. John’s University, New York

– Xiang Zhou

Additional speakers:

– Abdullah Swaham: Minister (mentioned but did not speak)

Full session report

Digital Technologies for Inclusive Development: A Comprehensive Workshop Summary

This workshop brought together experts from various countries and backgrounds to explore how digital technologies can benefit everyone equally and inclusively. The discussion focused on challenges and opportunities in bridging the digital divide, with speakers presenting diverse perspectives on leveraging digital tools for sustainable development and addressing inequalities.

Key Presentations and Themes

1. GIS Technology for Sustainable Agriculture

Professor Liu Chuang from the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, presented on the application of Geographic Information System (GIS) technology to support sustainable agriculture in rural areas. She highlighted specific examples, including:

– The use of GIS to map soil nutrient levels, enabling targeted fertilizer application and reducing environmental impact.

– Precision agriculture techniques that optimize water use and crop yields.

– Mobile apps that provide farmers with real-time weather data and crop management advice.

Liu Chuang emphasized how these technologies can significantly benefit smallholder farmers, improving their livelihoods and contributing to food security.

2. Big Data for Sustainable Development Goals (SDGs)

Professor Xiaofeng Tao from Beijing University of Posts and Telecommunications discussed the role of big data in reducing inequalities and supporting environmental monitoring. He highlighted:

– The potential of big data to contribute to the implementation of SDGs, particularly in areas such as poverty reduction and climate action.

– Challenges in realizing this potential, including the “data divide,” “computer divide,” and “algorithm divide” between developed and developing countries.

– The need for capacity building and technology transfer to address these divides.

3. Information Governance for Disaster Risk Reduction

Dr. Horst Kremers, Chair of the CODATA Germany Task Group on Methodologies of Data Handling and Knowledge Management, emphasized the importance of information governance in disaster management. Key points included:

– The need for national platforms to coordinate disaster risk reduction efforts.

– The importance of inclusive communication strategies that reach all segments of society, including vulnerable groups.

– The role of digital technologies in improving early warning systems and disaster response.

4. Digitizing Cultural Heritage and Scholarship

Dr. Daisy Selematsela and Dr. Lazaros Matizirofa from the University of South Africa presented on democratizing digital scholarship and preserving cultural heritage. Their work highlighted:

– Efforts to digitize historical papers and rock art, making these cultural artifacts accessible to a wider audience.

– The development of digital repositories and open access platforms to enhance education and research.

– Challenges in digital preservation, including funding and technical expertise.

5. Meaningful Internet Access for Women

Tamanna Mustary Mou, from the Digital Empowerment Foundation in Bangladesh, focused on barriers to meaningful internet access for women. She presented data showing that globally, men are 21% more likely to be online than women, and discussed barriers such as:

– Affordability of devices and data plans.

– Digital skills gaps and lack of relevant content.

– Social and cultural norms that limit women’s access to technology.

6. Digital Technologies in Latin America

Ricardo Robles-Pelayo discussed challenges and opportunities for digital technologies in Mexico and Latin America, with a focus on:

– The potential of ICTs to address educational equity challenges.

– Opportunities for digital health services to improve healthcare access.

– The need for policies to promote digital inclusion across the region.

Areas of Agreement and Consensus

Speakers broadly agreed on:

1. The potential of digital technologies to promote inclusive development across various sectors.

2. The existence of significant digital divides based on geography, gender, and socioeconomic status.

3. The importance of digital skills and education in bridging these divides.

4. The need for inclusive approaches in implementing digital solutions.

Key Takeaways and Future Directions

The workshop concluded with several important takeaways and suggested actions:

1. Enhance partnerships and collaboration on big data for SDG implementation.

2. Invest in digital infrastructure and skills training, especially in rural areas and for marginalized groups.

3. Develop policies to ensure universal and affordable internet connectivity.

4. Continue efforts to digitize cultural artifacts and knowledge to increase access.

5. Address specific barriers to women’s internet access and digital participation.

In his concluding remarks, Professor Xiaofeng Tao emphasized the need for ongoing international cooperation and knowledge sharing to realize the full potential of digital technologies for inclusive development.

Conclusion

The workshop highlighted both the significant potential of digital technologies to promote inclusive development and the persistent challenges in ensuring equitable access and benefits. Moving forward, realizing the full potential of digital technologies for inclusive development will require coordinated efforts across sectors and regions, with a continued focus on leaving no one behind in the digital age.

Session Transcript

Xiaofeng Tao: Yes, he'll join us. Yes, don't worry, he'll join us. I'll just continue, but he'll join us. Yes. And he will do the presentation together with you. Right. Yes. Yes. Yes. Yeah. We'll share the time slot. Yes. Yes. Hello, Daisy. Good morning. Good morning. Greetings from Riyadh. And greetings to Horst also and to everybody. Ni hao. Guten Morgen, Horst. Good morning, Daisy. After so many years of acquaintance, it's always very nice to meet friends again. Also, like Gong Ke and Liu Chuang and others. Thank you very much. Many greetings from Berlin. Hope we can meet each other in person soon. That would be a good idea, yeah. It will be early morning in Berlin, in Deutschland. Oh, well, it's not so early. You see, now I'm retired a few years already from my official work, and then that was normal time. You see, now I enjoy a cup of coffee in that region. Thank you. Yeah, it's also normal time for us here in South Africa. It's normal working time. It's the time to be at the office now. Yes, we start to work at eight. Yeah. That's why my colleague Lazarus is on his way. Yes. Okay. Thank you. So I think we should start in two minutes. Yeah. Okay. Okay.

Gong Ke: So, CCIT itself is a platform of multidisciplinary collaboration, linking the scientific and technological community with policymakers in the ICT domain in China and with international dialogues such as the IGF, WSIS, and so on. Today we gather here to discuss the topic just mentioned by Zhou Xiang: benefit everyone from digital tech equally and inclusively. I think we gather here to discuss this topic because we know, on one side, the remarkable benefits already brought by digital tech to every country, everywhere, but by far not yet to everyone. So we have to explore the ways to collaborate, in the spirit of the Global Digital Compact, to help everybody, everywhere, in every socio-economic status, to get the benefit of digital tech. Today, even though this is a small workshop, we have experts from Asia, from Europe, from Africa, and from America to join hands and to share our insights, explorations, and good practices. And we also encourage all participants in this room, on site or online, to join the discussion later. I firmly believe that with our joint discussion and joint efforts, we will make this workshop a great success and open a new opportunity for further collaboration on the inclusive and equitable development of digital technology for everyone. Thank you so much. I stop here and give the floor back to Zhou Xiang.

Xiaofeng Tao: Many thanks to Professor Gong, who gave us very inspiring remarks and also highlighted the importance of digital tech through multistakeholder cooperation. I think today we have a lot of experts and scientists from different fields to share their viewpoints. Actually, we have six invited speakers with various backgrounds from different disciplines, covering different aspects including policy, technology and also education. After the presentations from the speakers, our workshop will continue with a panel discussion with all the speakers and the on-site and online participants. I would like to remind our speakers that each of you will have 10 minutes to present your thoughts and ideas, and I will remind you when two minutes are left. So let's move to the presentations. Firstly, we have Professor Liu Chuang to give us an on-site presentation about GIS methodology and technology transformation for the SDGs. Professor Liu Chuang is from the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, and she is also editor-in-chief of Global Change Research Data, published in the repository of the World Data System. Now we welcome Professor Liu to give her presentation. You have the floor, please.

Liu Chuang: Thank you. Thank you very much. So, ladies and gentlemen, good morning. Twenty years ago we got together, in Geneva, and I think some of us were there, and then in Tunis. Also twenty years ago we discussed how to work together to set up the task group on data for developing countries, and we worked together on that. But now, twenty years later, we focus on how the data can benefit everybody. So that is our work. Okay, so the topic is GIS methodology and technology transformation for the SDGs. This comes from our experience. In China we have vast biodiversity, we are rich in biodiversity, and we have more than 3,000 geographical indications in China. However... okay, go ahead. So, next. But the problem is not only China's problem, it is the whole world's. There is the FAO initiative with the OCOP program: 85 countries have joined it, and each country has its own diverse agricultural products. However, the challenge is... go to the next one. Uh huh. Okay. Before that. Yeah. So the challenge is: a good product, good nutrition, a good environment, but the good product does not get the feedback of a good price in the market. So this is a big problem. There are three issues in meeting the challenges. One is that the products are not well geolocated, and the special geo-socio-economic assessment of the products' origin is not illustrated clearly and not well known by consumers. Also, the intellectual property or brand of these special agricultural products has not been established or well protected. So this is the challenge, and there is a solution. One very important part is cooperation. This February, the Chinese Academy of Sciences signed an MOU with FAO, linking FAO's OCOP program with the Chinese Academy of Sciences' GIS technology. That's one. The Chinese Academy of Sciences promised to work with FAO on GIS in three aspects: open science, open data and shared knowledge; technology transfer; and capacity building. Go ahead. The solution is the GIS solution: we transfer the technology and methodology to the whole world. This is an innovative solution for linking production, environment, marketing, and the consumers of special agricultural products. Go ahead. The keywords in this technology are geography, education, environment, and sustainability for the world. And what does high quality mean? We need quality, appearance, brand, modernity, and culture. In that case, we get better products, better nutrition, a better environment, and a better life, leaving no one behind. The key policies and technologies: one, we need multi-stakeholder teamwork; then we need to link science, technology, engineering, standards, management, and also culture; we need open data, open science, and open knowledge; and all the information must be traceable. Okay, go ahead. So the key technologies are big data, professional knowledge, the Internet of Things, retrieval, digital networks, and the intelligence of the farmers, consumers, and decision makers. We link all of these together: good, peer-reviewed datasets and professional knowledge published in peer-reviewed journals, all linked together. Right now, what is the impact of the solution?
In China, we now have 19 cases, and more than 600,000 farmers have benefited. For example, in the small Lanna Village, in only three years, the community income went from originally zero to 100,000 in the second year, more than half a million the next year, and then more than one million. So the local farmers benefited. This is the experience of China. Okay, next slide. And then, in Asia and the Pacific, we work with FAO, that is, in Bangladesh, in Bhutan, and also in Papua New Guinea. And now it is not only in Asia-Pacific but the whole world, including Panama and South Africa, and other countries are working on it right now. So I think this is a good solution. We need to work together, in cooperation, with open science, open data, and open knowledge, linked and networked together with the intelligence of GIS. Thank you very much.

Xiaofeng Tao: Thank you, Professor Liu, you saved us some time. Okay, let's move on to the next speaker. It will be a presentation on information governance for the implementation of the UN all-of-society principle in disaster management. The speaker is Mr. Horst Kremers, chair of RIMA, Germany. Now you have the floor, Horst, please.

Horst Kremers: Yeah, please allow me to share the screen.

Xiaofeng Tao: Okay. He wants to share a screen. Yeah. Does it work? You can share. Can you see this? Not yet.

Horst Kremers: No, no, I was unable to share a screen here. I don't have the sign on my screen, on my Zoom. Give him the rights. Okay. Okay, Horst, you can share the screen on your side. Horst, is... I don't, sorry, I don't see that line that says share screen, I don't see it. Oh, is his monitor... Maybe you can check at the bottom, Horst, the bottom of your screen, the tabs at the bottom of your screen, it will be green. Oh, not up. I apologize. Knowledge management. Knowledge management, yeah, knowledge is very important, thank you Daisy, great contribution. Okay, okay, we see your screen, okay, thank you very much. Good morning, colleagues. I'm very happy to be able to contribute to this workshop today. My aspect today is information governance for the implementation of the United Nations all-of-society principles in disaster management. For about 20 to 25 years I have moved from environmental sustainability information into disaster aspects and information management in disasters, and today I want to share about that principle of all of society. I refer to the United Nations Sendai Framework for Disaster Risk Reduction, and this Sendai Framework already indicates whom to involve in the disaster discussion, and for us, of course, the question is always: what is the information management aspect of all this? So explicitly listed in the Sendai Framework are women, children and youth, migrants, academia, business, media, and so on. When I show you this, and if you have contact with disaster management, you would see that the inclusion of indigenous people is starting, but migrants are not included in such discussions in every country, and so on. So there is a lot to do, and we try of course to improve this in working groups at UNDRR, the United Nations Office for Disaster Risk Reduction, and others. The role of stakeholders is defined in that framework, especially to engage in the implementation of local, national, regional, and global plans and strategies, contribute to and support public awareness, a culture of prevention, and education on disaster risk, and advocate for resilient communities and an inclusive and all-of-society disaster risk management. So that is not just an idea, it's a mandate from the Sendai Framework to work on this. During my work with UNDRR, and also nationally here and in Europe, I compiled a list of those stakeholders, which I also call existing pillars of societal resilience in all phases of disaster management. A lot of people focus on the core phase of rescue and first aid, but we have this whole circle of disaster management. Now I don't want to read all of these here, but for instance, Gong Ke would maybe also be interested in chambers of engineers. How are they involved? It's not only these blue-light first aid organizations, but the Salvation Army, school services, medical care organizations, amateur radio associations. When all the transmission fails, when all the internet is down or something, you have to rely on amateur radio today, and so on. So to make it short, you can read all of these by downloading my presentation with the link I give at the end. For example, all these groups, all these actors in disaster, normally should work together

Horst Kremers: in so-called national platforms, which the Sendai Framework also suggests implementing everywhere: national platforms for disaster risk reduction. I just made a short copy of one here, from the Luxembourg government in Europe. This platform constitutes a sustainable network that aims to stimulate a regular exchange and sharing of information and data held by different ministerial departments and all those involved. To give you some idea of how to work in these crises, the Swedish Civil Contingencies Agency published a brochure just a few months ago, the new version of which is titled In Case of Crisis or War. This addresses all of society. In Sweden it is translated into Arabic (these are the names of the languages, written in Swedish), English, Farsi, and even Ukrainian. Some of the names here are local languages of local ethnic groups throughout the big country of Sweden. I listed this here because of the international aspects of all the people living in a country; these are not only Swedish people, and in any other nation it is the same. This is a good example of how to include, for the information part, other languages in the country: local languages, native languages, indigenous languages, which also have a role there. What I think is one of the best and best-documented examples of community development is the one from Scotland, which deals with the whole circle of disaster management, from communication, inclusion, support, planning, and working together, to methods with an impact, so that we work together for our society’s safety and well-being. What are my short recommendations, because of time, for action and target achievements? We should review progress regularly at the local level and contribute to national and regional progress reviews by sharing information with the national government, and develop a communication strategy, internal and external, to inform local authorities, the community, and the different actors. When you see the complexity of the list of actors involved, you see the real challenge, because we are far from addressing and communicating with all of them in a standard way,

Horst Kremers: to inform local authorities, the community, and different actors about gaps, problems, and achievements. So it is not only about warnings. The warnings are absolutely important, but the whole process is about talking about gaps, problems, and so on. Put in place communication mechanisms that allow local leaders and the community to provide input, suggestions, and comments. Other recommendations for action include recording the status and situation, evaluating the documentation of previous experiences, data management plans, and so on. To be short, I don’t want to read all of this, but I will just say: from a management point of view, the question is, do we have all professions involved in this? Not only some actors or some single organization. Do we have all the professions on board that are working for society here? I think there is a long way to go to ensure that all-of-society principle in so many technical and governance ways. Very briefly, on selected aspects of governance: you would need an office, a secretariat. It is a permanent process, so it is not a working group, it is not a project, please. Governance in disaster is something that needs permanent support structures. That means you need steering committees, working committees, focused working groups, drafting teams, technical drafting teams who make proposals for standards. As we did, as Liu Chuang especially would certainly know from former times when we did so for sustainable development, environmental information, and geographic information standards, you need to sit together with colleagues from different organizations and different professions and draft standards, technical standards for meta-information, for processes, and whatever else. We are far from this in disaster management, by the way. Promote and document lighthouse realizations and feasibility studies based on these standards to be proposed, prototypes, testbeds where others can come with their data and say, let’s see what your analysis tells us about our data. Discuss and negotiate strategy at national Sendai platforms, which I mentioned already; every nation should have one. Roadmaps for objectives over two years, five years, 20 years are typical management instruments. Do you look in your country, do you see what the two-year plan is? Do you know what the five-year plan is, the 20-year plan? I think that is also something we could support. Now, I want to close my presentation by inviting you to come to Switzerland in June. In Geneva is the 8th session of the United Nations Global Platform for Disaster Risk Reduction, and one month later, WSIS in July also allows a discussion on the information society. My session proposal is on the information society in times of risk. Those who are interested and may consider joining to contribute to such a session are invited to contact me. I thank you for your attention, and you see the download link for my presentation, to read it and see the links to the documents, very interesting documents that I recommend. Thank you for the opportunity to be with you. Thank you. Okay, thank you very much

Xiaofeng Tao: and thank you for sharing very interesting information about the events to be held next year in Switzerland and about WSIS over the last 20 years. I think we will have a discussion during the panel part. So let’s move to the next speaker, myself. I will give you a brief introduction about enhancing partnership on big data for the SDGs. As we know, digital technologies like IoT, big data, and artificial intelligence have greatly changed our lives, but there are also increasing challenges and risks ahead, not only for the economy and society, but also for the environment we live in. When we talk about achieving the SDGs, we normally focus on the development of human society and on economic development, but at the same time, nature and the environment are also a concern. So this morning, I would like to talk about the importance of strengthening cooperation on tackling these environmental issues with the support of big data. Please, go ahead.

Xiang Zhou: Oh, it works. Thank you. So as you can see from the screen, our world is facing myriad global changes. For example, disasters, as Horst mentioned, occur every day around the world. Air pollution is a very severe situation in South and East Asia. Frequent flooding happens not only in rural areas; we are also experiencing more and more severe disasters in Southeast Asia, which is a big problem for sustainable agriculture and production. At the same time, South America is experiencing severe deforestation, there is extreme degradation of grassland in Central Asia, and climate change worsens the severity of wildfires worldwide. It is too much. Okay. As we all know, big data can be a key tool for supporting and evaluating the implementation of the SDGs. The new technology can not only provide powerful means to give us accurate information about the environment and human activity, but can also provide a lot of data, which becomes a source of knowledge for decision-making by government, academia, and the private sector, so that they can take action to improve our daily life, industrial development, and every aspect of our world and human life. As we all know, the SDGs have 17 goals with 169 targets, and we also have more indicators; if I remember right, more than 213. So how can we use new technologies, for example big data and IoT, to support the implementation of the SDGs? We also need to evaluate the status and progress of this implementation. So we need to improve the link between observation, computing, analysis, and knowledge discovery. As Minister Abdullah Alswaha mentioned yesterday, there are also many challenges and obstacles we need to overcome, as he addressed: the data divide, the compute divide, and the algorithm divide. So if we want to promote the application of big data, we need to think about how to make the new technology play a more important role in this process. There are several features of big data we need to improve, as a new infrastructure, to align with the objective of benefiting everyone. For example, technical reliability and stability: we need to build reliable infrastructure all over the world at the local, national, and regional levels, as Horst mentioned. Also equity and diversity, as our workshop title mentions, to help close the gap between different communities and different countries. Another thing we need to think about is responsibility and accountability, which will help promote applications and services by implementing data openness, integration, and analysis. Data security is also a very important issue to think about, because we integrate all kinds of data sources to extract information and produce knowledge for decision-making. So there are many things we need to think about, and that is why we need to enhance our multi-stakeholder cooperation mechanism, which will accelerate and enhance the role of big data in our society and life. Organizations can play more important roles in this process. For example, the research community and the commercial sector can work together to build analysis platforms that facilitate computing services, perhaps a big data computing platform. There are also application models and policy aspects where stakeholders can work together to go forward.
Here I have some cases, because personally I am from the technical community. As you know, satellites are playing more and more important roles in monitoring natural resources and the environment. But there are different capacities for earth observation and data accumulation in different countries and at different levels. So we tried to build a data hub to reduce the data divide within this kind of framework. You can see some organizations working on data integration, some working on algorithm optimization, and we also have private sector companies like Amazon that provide computing facilities. So we can strongly support different applications with this flexible framework through a multi-stakeholder cooperation mechanism. There is another case, called the knowledge hub. It provides more opportunities for us to cooperate, going from data to information for decision-making, so that algorithms can be corrected and optimized for further information extraction. In this case, we created a knowledge graph for more than 450 remote sensing satellites, which is a very important knowledge base for all kinds of applications. There are also cases of using big data for environmental monitoring and rapid response. I would not like to spend too much time speaking, but you can see that with the support of big data, we can monitor and assess global change by deriving key variables from satellite data, and we can realize real-time monitoring with the support of advanced technology. Okay, so a short summary. First, big data finds new knowledge, creates new value, and improves capabilities, and I think it has great potential as an emerging technology and data resource. Second, big data and AI not only have intensive applications in responding to environmental issues, but can also be a very powerful tool for reducing the digital divide, as we have discussed since yesterday’s forum. Third, big data governance and collaborative action will improve the ability of society to cope with viruses, public safety, and health challenges, and most importantly, it will improve the quality of economic and social life for achieving the SDGs. I think that’s all for my presentation. Thank you for your attention.
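
To make the phrase “deriving key variables from satellite data” concrete, here is a minimal, hypothetical Python sketch, not the speaker’s actual pipeline or platform. It computes one widely used key variable, the Normalized Difference Vegetation Index (NDVI), from red and near-infrared reflectance arrays; the function name and the tiny synthetic arrays are illustrative assumptions, and in practice the inputs would be raster tiles read from an earth-observation data hub.

    # Minimal sketch: one "key variable" (NDVI) derived from satellite bands.
    # The arrays below are synthetic placeholders for real red/near-infrared
    # reflectance rasters; a production pipeline would read them from a data hub.
    import numpy as np

    def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
        """NDVI = (NIR - RED) / (NIR + RED), with NaN where the denominator is zero."""
        red = red.astype(np.float64)
        nir = nir.astype(np.float64)
        denom = nir + red
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(denom == 0.0, np.nan, (nir - red) / denom)

    # Tiny synthetic tile standing in for a satellite scene.
    red = np.array([[0.10, 0.20], [0.05, 0.30]])
    nir = np.array([[0.50, 0.40], [0.45, 0.30]])
    print(ndvi(red, nir))  # values near +1 suggest dense vegetation, near 0 bare ground

The same per-pixel pattern, applied over time series of scenes, is what allows indicators such as vegetation condition to be monitored at scale.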

Xiaofeng Tao: Okay, let’s move to the next speaker. Now we welcome Professor Ricardo Robles-Pelayo from Mexico to give us a presentation. His topic is Closing the Digital Divide: Challenges and Opportunities in Mexico and Latin America. You have the floor, please. Thank you very much. I will see if that works.

Ricardo Robles Pelayo: So, good morning, everyone. I am Ricardo Robles-Pelayo, professor at the EBC University, Tlalnepantla campus. Thank you again for the invitation to the IGF 2024 here in the Kingdom of Saudi Arabia, on this occasion participating in the workshop Benefit Everyone: Digital Tech Equality and Inclusivity. I will discuss Closing the Digital Divide: Challenges and Opportunities in Mexico and Latin America. Okay, perfect. As we all know, we are living in a transformative era in which technology has become the driving force behind social, economic, and cultural change. In Mexico and Latin America, information and communication technologies have opened enormous possibilities for reducing inequalities and improving quality of life. However, this technological revolution highlights significant challenges: unequal digital access, exclusion, and widening economic and social gaps. Today, I want to share a profound reflection on how to close the digital divide in our region, analyzing its impact on key sectors such as education, health, labor, justice, and business development, while proposing transparent and sustainable solutions to address this challenge. Before finding solutions to benefit everyone through digital technology with equality and inclusion, we must look at the number of people who have first-hand access to the internet. As we observed in yesterday’s opening session, many people worldwide don’t have internet access. As we can see in the graph, leaving aside the number of people who live in each of the Latin American countries, internet access has grown significantly in Latin America, now serving as an essential tool for social and economic participation. However, this reality is not shared by everyone. Despite progress, more than 240 million Latin Americans still do not use the internet, whether due to high costs, lack of infrastructure, or insufficient technological skills; internet access remains a luxury in many rural communities, reinforcing social and economic exclusion. Internet connectivity is not just a technical matter but a fundamental right that enables access to information, job opportunities, healthcare services, and quality education. In a globalized world, technological disconnection equates to exclusion. For this reason, we need initiatives that ensure universal, affordable, and high-quality connectivity for everyone, regardless of geographic location and socioeconomic status. In Mexico, the protection of personal data, the right of access to telecommunications, and transparency are rights regulated in our constitution and secondary laws. However, we are currently experiencing political changes and constitutional reforms that threaten the application of those digital technology rights, which are considered human rights in the Mexican constitution. The digital divide is not solely a technological problem but a manifestation of pre-existing economic and social inequalities. In Mexico, this divide disproportionately affects indigenous communities, low-income households, and rural areas; while digital devices and internet connectivity are becoming more common in urban areas, these tools remain out of reach for most marginalized regions. It is essential to highlight that information and communication technologies can become a driving force for economic development. However, their limited adoption in marginalized contexts perpetuates conditions of poverty and exclusion.
Ambitious and well-designed public policies are needed to ensure access to these technologies and their effective use to generate opportunities in these communities. Education is one of the areas where ICTs can have the most significant impact, especially in Latin America, where educational equity remains a challenge. Incorporating technological tools in classrooms modernizes teaching and opens new opportunities for students otherwise excluded from quality education. However, their implementation still needs to be improved. In Mexico, students in most rural schools still lack access to essential devices such as computers, and teachers must receive the necessary training to use ICTs effectively. In addition, government digital education programs often require more transparent evaluation, which limits their impact. To harness the potential of ICTs in education, we must focus on teacher training, invest in technological infrastructure, and ensure that digital tools are accessible to all students. Health is another sector where technology can make a crucial difference. In Mexico, advanced tools such as the da Vinci robot used for high-precision surgeries represent the future of medicine. However, their availability is limited to a few hospitals in Mexico, leaving millions without access to this innovation; this unfair centralization underscores our healthcare system’s profound geographic and economic inequalities. To close this gap, we must democratize access to medical technology, ensuring that advancements reach regional hospitals and marginalized areas. To achieve this, investment in infrastructure, training of medical personnel, and policies prioritizing equity in access to health services are required. Technological change is transforming labor markets, driven by advances in artificial intelligence and robotics. According to the Economic Commission for Latin America and the Caribbean, up to 62% of jobs in Latin America are at risk of automation, mainly affecting workers in less qualified sectors. This can generate unemployment and widen labor inequalities, benefiting those with advanced skills and marginalizing others. And, to try to finish as soon as possible: digital transformation also reaches the justice system, where innovations such as electronic portals and artificial intelligence can improve efficiency and transparency, but also raise ethical challenges; ensuring that access to digital justice is universal and that decisions are understandable to all parties is crucial. Unfortunately, in Mexico a judicial reform is being implemented that will consume economic, material, and human resources, slowing down the advance of technological systems and resources that are being deployed in other parts of the world. In conclusion, there are some aspects that we must consider: guarantee universal access to the internet in rural and marginalized areas, where the government and the private sector can invest in technological infrastructure, and create educational programs to teach technological skills to students from an early age, based on inclusive digital training for teachers.
Prioritize teacher training and create inclusive digital content to encourage ethical innovation; provide tax incentives, financing, and technological training to support MSMEs with the appropriate technological tools; train workers in digital and human skills, implement collaborative technological tools, and foster a culture of continuous learning to ensure adaptation to new jobs; decentralize advanced medical technology to benefit all regions; and implement regulation to ensure transparency and equity in the use of artificial intelligence. And just to finish, I want to continue yesterday’s thought on whether we can bridge the digital divide and build a more inclusive and equitable future for people worldwide. Thank you very much.

Xiaofeng Tao: Thank you very much, Ricardo. Thanks for sharing your deep analysis and the solutions you proposed for better inclusion. Now we will have our next speakers, Professor Daisy Selematsela from the University of the Witwatersrand and her colleague Lazarus from the University of Pretoria. The topic is Centering Social Justice and Cohesion in Digital Technology Accessibility, Equality and Inclusivity. Daisy, you have the floor, please.

Daisy Selematsela: Okay, I’m trying to put it on play mode. Just a second, Tony. He has a similar program. You’ll have to help yourself. You are an expert. We believe you. Yes, I can share, but I’m trying to minimize. Okay, good morning, colleagues. We are from South Africa, and we’re happy to be joining you from afar. We will be focusing on two aspects, whilst I try to put this into slideshow mode; my screen is too big here, so I can’t put it on slideshow, but my colleague will come in shortly. So we will be focusing on centering social justice, or social cohesion, in digital technology accessibility, equality, and inclusivity. This links to what we are discussing this morning when we look at the information society in times of risk, to what Professor Zhou has highlighted about facilitating access at all levels, and to what Ricardo just highlighted regarding unequal access. We are looking at this from the Global South perspective. If we look at social integration and inclusion, I want to highlight how the Department of Sport, Arts and Culture in South Africa, to which the libraries, for example, report, looks at social cohesion. This is how it is defined by the ministry: the degree of social integration and inclusion in communities and society at large, and the extent to which mutual solidarity finds expression among individuals and communities. This leads us to how we look at social cohesion or social justice from the South African perspective, because this also impacts the Global South, and, as Professor Zhou highlighted, the SDGs are on our slide there, as you can see. For us, when we look at social cohesion, we have four elements here: libraries that stimulate social cohesion by fostering inclusivity, which is what we are discussing today as it relates to technologies; libraries advancing social cohesion by supporting the Sustainable Development Goals; libraries nurturing social cohesion through education, which was also highlighted by Professor Zhou and Professor Ricardo; and libraries facilitating social cohesion through information technology, which is what we are focusing on today. Social integration interventions to attain the SDGs in South Africa are quite key to how we want to address these things. When we look at social and economic disparities, I am glad that my colleague Professor Ricardo highlighted the issues around widening social and economic exclusion. Now I want to touch on the policy documents that relate to this in South Africa. The intention of reducing socio-economic disparities, i.e. poverty and so forth, is also stated as part of the Reconstruction and Development Programme of 1994, which is further reaffirmed by the National Development Plan of 2012, and these documents emphasize the following: no political democracy can survive and flourish if the mass of the people remains in poverty, and so forth. Then I want to move to the next slide. What, then, is the context of social cohesion? The context of social cohesion looks at poverty, inequality, and social exclusion, which have also received global attention in the post-2015 development agenda, and this is the Africa agenda.
For more than two decades, South Africa has sought to address poverty and inequality through a wide range of initiatives, and the National Development Plan 2030 predated the Sustainable Development Goals, yet it is largely aligned with the goals of addressing poverty, inequality, and exclusion. The context of social cohesion, as we can see, also links to what I am saying around policy and legislative documents on redress and the skewed distribution of social and economic opportunities. When we move to social cohesion interventions, these look at protection that aims to ensure a basic level of well-being to enable people to live with dignity, and governments tend to introduce social cohesion policies to meet social, economic, and political objectives. I want to highlight that social cohesion, as Professor Zhou highlighted earlier in one of the slides, and as Carniki and Emilio indicate, is broad in many African countries, encompassing a range of social protection interventions and societal safety nets. And I just want to jump ahead quickly, because of the time: as part of the interventions, the African Union has made the promotion of social protection and cohesion its defining principle, which is quite important to what we are talking about today on the information society in times of risk and how we deal with it. But the thrust of our talk is to focus more on how we deal with these technological access issues, and for us, myself and my colleague Lazarus, this means looking at democratizing digital technologies. What are the challenges if digital technology accessibility is left unaddressed? The role of African research in the global knowledge economy is impacted. We are grappling with what open scholarship, open science, and research agendas mean in different areas and contexts. Researchers in low- and middle-income countries face many challenges and are vulnerable, as we have also heard from Professor Ricardo, regarding the pressures of publishing and the issues around public data publishing. The global knowledge environment and local needs and impacts are not addressed properly, and institutional repositories, which are the conduit for accessing information, are undervalued. These are part of the technologies that we need: sustainable open access publishing in Africa, and addressing funding challenges for education and research growth, because if infrastructure is not up to par, it impacts all of this. The other element, when we look at democratizing digital technologies, is the set of challenges that impact on this: the widening digital technology gap, which my colleagues have alluded to; sustainable national and regional infrastructures; valuing indigenous knowledge and local languages, which are also important when we talk about access linked to digital technologies; promoting inclusivity and diversity of voices; misinformation in public policy, which is also key; opening research to the broader community; and allowing for greater cultural and linguistic diversity to support local and regional knowledge production. My colleague Lazarus will now come in on the technologies regarding digital scholarship transformation. Okay. Thank you, Daisy.

Xiaofeng Tao: OK. Welcome, Lazarus. Thank you for joining us.

Lazarus Matizirofa: Thank you, Prof. Go ahead. Yes, thank you so much. At the university, what we are trying to do is broaden the digital scholarship transformation and underpin our strategy to provide solutions for the digital divide. As you can see, my first slide illustrates how we define digital scholarship in higher education. In this slide you will notice the first planetarium of its size built in Africa, the new digital dome, which is hosted here at the University of the Witwatersrand. It is there to give our clients an in-depth understanding of the natural world as well as what lies in the sky. This is a phenomenal dome that we, as a university, are going to use to ensure that students, both within the university and in primary and secondary schools, can imagine what the world looks like from this environment. Lazarus, sorry to disturb you. If you need to play the next one, just tell me, okay? Because I’m controlling the slides on site, okay? Okay, next slide, 15. It’s a bit slow, sorry. Okay, slide 15. Yes, so, linking digital humanities to digital scholarship: this is what we are doing at the intersection of the humanities and digital technology. Digital humanities scholars usually engage with humanities topics through digital collaboration, creating digital projects and using digital tools to fuel research. This could mean utilizing materials and resources that were born digital, and we also digitize existing material objects like print books and artworks. It could also mean exploring computational techniques such as algorithms, code, data, and text-mining tools to understand how large collections of information intersect. Digital scholarship also uses timelines, maps, and data-organization tools to visualize, analyze, and interpret text and data in innovative ways. Next, please. On digital scholarship in South African higher education, these are some of the things needed to democratize digital scholarship: we need resources, and we also need people who can give instruction in how to engage with all of this, to ensure that we have both simple and complex environments, connect researchers to communities, and empower scholarship and action. Next slide, please. At Digital Humanities at Wits Libraries, we have our museums, where we are providing digital solutions and enhancing the library’s digital scholarship services through the archives and other things we can digitize in our collections and artifacts, with new tools, and also providing optical character recognition for the digitized collections so that researchers have vast amounts of textual data they can use. However, these advances are not limited to text; we also provide sound, images, and video that have been subjected to these new forms of research. Next slide, please. You will see, with our advancement on historical papers, that we have the richest archive of research papers in South Africa. Our agenda here is to digitize some of these collections, provided copyright permits us, so that most of the archived material can become accessible to our clients, including sensitive archives from the apartheid era, and to provide digital solutions for them so that they can be transformed and made accessible to other people outside the university.
And so this includes combining STEM with digital scholarship and digital humanities through our makerspace, which brings the humanities together with other disciplines and promotes cross-disciplinary collaboration, helping students and researchers move beyond the historical division between the sciences and the humanities. Next slide, please. Lazarus, because we still have to leave some time for the panel discussion, could you please finish your presentation in three minutes? Yes, thank you, sir. So most of the things that we are going to digitize are in our museums, and we have already digitized the last slides, which show the Rock Art Research Institute. This is African rock art found across most African countries, where it was collected, and we have digitized it to ensure it reaches a wider audience. So thank you, sir, I am at the end of the presentation. Yes. Yes. Yeah, so you see, these are African potteries that were created a long time ago, which we are also digitizing and providing in 3D to ensure that researchers can utilize and analyze them and write something about them. Next slide. Yes, so this is the Rock Art Research Institute, as I mentioned before, which we have digitized. Most of these materials came from different African countries; these are images that you will find on our mountains and in caves, which we are providing here as a digital archive at Wits. I think this should be the last slide. I think so. Thank you. Okay.
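
As a purely illustrative aside on the optical character recognition step mentioned above, here is a minimal Python sketch, assuming the open-source Tesseract engine with the pytesseract and Pillow packages; it is not the Wits Libraries workflow, and the file name is a hypothetical placeholder for a digitized archive page.

    # Minimal OCR sketch: turn one scanned page image into plain text for text mining.
    # Requires the Tesseract engine plus the pytesseract and Pillow packages.
    from PIL import Image
    import pytesseract

    def ocr_page(image_path: str, lang: str = "eng") -> str:
        """Return the text extracted from a single scanned page."""
        with Image.open(image_path) as page:
            return pytesseract.image_to_string(page, lang=lang)

    if __name__ == "__main__":
        text = ocr_page("scan_0001.png")  # hypothetical digitized page
        print(text[:500])                 # preview the first 500 extracted characters

Batch-running a step like this over a digitized collection is what yields the large amounts of textual data that researchers can then mine.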

Xiaofeng Tao: Very impressive presentations from Daisy and Lazarus. Thank you for your presentation. Now we have our last invited speaker, with an on-site presentation, Tamanna Mustary Mou. She is from St. John’s University, New York, and her topic is Meaningful Access and Affordable Internet for Women. She will be on stage to give her presentation. Good morning.

Tamanna Mustary Mou: Good morning, everyone. I am from Bangladesh, but this time I came from New York, because I am a Ph.D. fellow at St. John’s University, New York. I am presenting my slides in front of you; as you know, Bangladesh and China are very friendly countries, and this time I represent Asian women. That is why I am here to present these slides. My topic is meaningful access and affordable internet for women. We all know that women are somewhat behind in using the internet. I have selected this topic to let you know what the barriers are for women in using the internet, and since this is the Internet Governance Forum, I believe it is very relevant for all of us to know the problems and barriers that hinder women’s participation, especially in South Asian countries. Connectivity: I would like to focus on meaningful connectivity, on what is called meaningful, because we all know that we can use the internet anywhere and everywhere, but is that use meaningful, or is it meaningless? A fast connection, 4G and now also 5G, an appropriate device, regular internet use, and a broadband connection at home and in the workplace: those are the key points that we need to ensure for women. And since we have very limited time, the question is: what is the situation in our country in terms of progress, challenges, opportunities, and the way forward? We have meaningful connectivity when we can use the internet every day, using an appropriate device, with enough data and a fast connection. The Alliance for Affordable Internet published a report revealing that only one in ten people across nine countries in Africa, Asia, and Latin America have solid, working access to the internet, and this is very limited, because it is just one in ten people. These connectivity issues call for action to provide affordable and meaningful access. Men are far more likely to engage in a range of online activities, including posting comments about political, social, and economic issues, and the data show that men are more likely to use the internet. Yesterday, when I was in the plenary session, it was discussed that men always use the internet more than women, and this is the situation not only in Asia and Africa, but also in Europe and America. Next, please. This is the scenario in politics, in economics, in business, and everywhere. We have to admit that without the full participation of women, it is impossible for society to progress as a whole, because women are half, and maybe more than half, of society. So if women are behind in using the internet, progress in all respects will be hindered. We have some data indicating a benefit to 144 developing countries of up to 18 billion US dollars; meanwhile, 180 million women and girls would be able to generate more income, and nearly 500 million would improve their education level. Since I work in the Ministry of Education, I have found that if women can afford the internet and get the full facility of using it, the education sector will develop enormously. The previous speaker also focused on the use of the internet in education, and this is a very important part: we need to transform our education into a digitalized system. Nowadays we are using ChatGPT and technological innovation in education.
So in every sector, we need that kind of participation from women, whether in business, in education, in commerce, everywhere. Next, please. We found that affordability and the digital skills gap remain stubborn barriers to gender-equitable access to and use of the internet. Across the globe, fewer women than men use the internet, and research from the Web Foundation found that globally, men are 21% more likely to be online than women. Those are the problems. There are several barriers: lack of digital skills, and affordability, because many women have lower incomes, and since they have less income, they cannot afford the internet, which is very expensive for them. There are also privacy issues: some women are afraid about privacy or about online harassment; they are also aware of social harassment, online harassment, vulnerability, and safety and security concerns. Those are the things that interrupt women’s use of the internet, as opposed to men’s. Next, please. These are sources of the digital divide. Other barriers to ensuring meaningful access and affordable internet for women are the unavailability of broadband access, less access to public internet centers, insufficient income, the lack of suitable technological devices, and cultural norms or social barriers, which I have discussed. And there are too few gender-focused policies addressing women’s ability to access and benefit from the internet. So policymakers should be concerned about these things that we are facing nowadays, and policies should be women-friendly; the Ministry of Information and Broadcasting should also change its policies so that women can have access. Thank you. Next slide, please. We are near the end: we use the internet in education, as I have said, in economic affairs, commerce, transportation, media and communication, production, and the medical sector; we have already discussed those. Next, please. And the thing is, to ensure equality: no internet and no equality go together. So if we want to ensure equality, we have to ensure the internet for women; it is a vice versa situation. If women are far behind men in any kind of development, the internet should be assured for women so they can develop equally. This is data on online population, internet use, the gender gap, and mobile broadband penetration. These are data I have found from the Alliance for Affordable Internet on meaningful connectivity and unlocking the full power of internet access. All the data show that women are less likely to use the internet than men. Next, please. But the situation in the villages is changing. The government of Bangladesh is very much aware of development all over the country, even in the villages. Village women are now using mobile phones, smartphones, and the internet; they can talk with friends and family through video calls, chat, and everything. So things are changing. This is the situation we have found from ICT sources: women’s empowerment is very much changing compared to before. Next, please.
And this is key to achieving an international standard of communication by using the internet. Since we are here at the Internet Governance Forum, we need to change the situation as a global village; we need to change the situation for women using the internet. This is a global community, and we are here to promise to keep the internet for all citizens, all over the global village. So this is the promise, and we think that in the future, by the next IGF, we will be able to ensure the internet for women just as we can ensure it for men. That is my presentation, and thank you very much for your kindness in giving me the privilege to present my slides here. Thank you very much. Have a good day.

Xiaofeng Tao: Thank you. Thank you, Tamanna. Thank you for sharing your insights and ideas about meaningful access to the internet for women. We still have a few minutes for discussion. Maybe our on-site and online speakers can share their viewpoints in very compact words. Okay. Horst, do you have some words to say?

Horst Kremers: Thank you very much. I’m just, can you read the chat? I’m just preparing a remark on my session proposal for WSIS, I’m just writing it. Can I please complete it and come back later, just a minute?

Xiaofeng Tao: Okay. You can circulate it after you finish, okay? It’s a very good proposal, I believe. Okay.

Liu Chuang: Yeah, I think this was a very good session, covering different aspects and different regions. I think we have a common understanding for our common future. One point is that we need to work together, right? Whether for disasters, for GIS, for women, or for the law, the first thing is that we need to work together. So that is how we can think about the next step: how we can work together. The second point is that we are now moving into the intelligence era, so we need to think about how to make data and the internet benefit society, benefit everybody, and leave no one behind. So we need action. What actions should we take? We need to discuss that. So I would like to hand over to…

Ricardo Robles Pelayo: Thank you very much. I think that since yesterday’s opening session, we have had common issues to address about education, about economic issues, and about sharing knowledge around the world, and this forum, I think, is a good place to do it. So I think it is very natural to consider that around the world we have to work together, as Mr. Liu says, and we can share this space to do it as well. Thank you very much for the invitation again, and let’s work together to reach that goal. Thank you very much.

Xiaofeng Tao: Okay. So, Daisy? Do you have some words for us?

Daisy Selematsela: Yes. I think what is important in today’s discussions is that, across the different presentations, we all converge on what we are trying to put forward on access, on the use and availability of infrastructure, and also on the issues around data to address societal problems. And that is why I see the convergence: even though our presentations were different, they came together to address issues of access that link to the SDGs, issues of access that address societal impact, and how we want to see all societies being able to access information and use data in that respect. Lazarus, you can come in, thanks.

Lazarus Matizirofa: Lazarus, can you hear us? Yes, thank you, Chair. From my side, the role that we need to play, particularly in the African context, is to digitize some of the materials so that they can then be provided as information resources to a wider audience. The majority of Africa as a continent, be it universities or public institutions, has a lot of rare materials that still need to be made accessible on digital platforms, and the internet will then provide the link for everybody to have access to these collections.

Xiaofeng Tao: Thank you, sir. Okay, thank you, Lazarus. Now, Tamanna, do you have some words to say? Maybe you, sir, next. Yeah, some short words about our workshop before Horst speaks. Okay, you can go ahead if you have some words to say.

Tamanna Mustary Mou: Thank you very much. We need to ensure women’s active participation and leadership in our country, as well as in all the South Asian countries. Participation of women has already been largely ensured in the European countries. I attended the 2021 UN IGF in Poland as a UN fellow, and at that time there was a promise that women’s voices should be heard everywhere. When women are able to speak out, they can secure their own rights. So my last word is that women should speak out, women’s voices should be heard, and women are part of the world and active participants everywhere. We cannot ensure our development without women’s participation. So the last point is that we need to ensure the internet for women, as well as for men, everywhere. Thank you.

Xiaofeng Tao: Thank you, Tamanna. Horst, do you still have some words? One minute?

Horst Kremers: Yeah, thank you. To be short, I put a note in the chat with links that you may be able to see, and I will of course keep you informed. I have been looking at the situation in disaster risk reduction for a long time, I see the problem at full scale, and we can do very positive things. To underline the importance of this: now is the time when preparation slowly starts for the follow-up United Nations programme on disaster risk reduction, which will be a programme from 2030 to 2045, the next 15 years. In preparation for this, we have to argue for rewording, for improving the wording of United Nations instruments, in this case where I work with UNDRR on disaster risk reduction matters. And other colleagues here, certainly in sustainable development and elsewhere, could also contribute to the wording of the next programme. That is what I see as a general positive aspect. Thank you.

Xiaofeng Tao: Thank you very much, Horst, as the final speaker of our workshop. I think we have had an intensive and informative workshop covering key issues about digital tech for benefiting everyone. Unfortunately, due to time, we have to move to the concluding remarks. Firstly, many thanks to all the speakers; we had a great opportunity to communicate and exchange views on diverse aspects. In one and a half hours, we talked about adding value to sustainable agriculture with the support of digital tech; about information governance in disaster management, with principles, practices, and recommendations at different levels, local, regional, national, and so on; about big data as a powerful tool for reducing the digital divide in algorithms, data, and computing; and about how important internet connectivity is for fostering opportunities for rural and indigenous people through education. We talked about the importance of democratizing digital technology, digital scholarship, and the digital humanities, and Tamanna gave us very insightful understanding about meaningful connectivity and equity. Thanks to all of you for your participation in workshop 14.9. For the way forward, everyone can and should benefit from digital tech, and its implementation relies on the joint effort of all the stakeholders here today on site and all the stakeholders in our information society in future actions. Lastly, please follow the activities of CCIT and contact us if you have any suggestions on future cooperation. So our workshop ends here. Thank you all for your active participation. We are organizing a training workshop in Bangladesh next February. Yeah. I will send the agenda. Maybe we can have a picture. We have cooperation locally. You can also meet with John in Ghana about presenting something. Great. Thank you. Thank you.


Liu Chuang

Speech speed

115 words per minute

Speech length

929 words

Speech time

481 seconds

GIS methodology can benefit farmers and rural communities

Explanation

Liu Chuang argues that GIS technology can help farmers and rural communities by linking production, environment, marketing, and consumers of special agricultural products. This innovative solution aims to improve product quality, nutrition, and environmental sustainability.

Evidence

In China, 19 cases have benefited over 600,000 farmers. One example is Lanna Village, where community income increased from zero to over one million in three years.

Major Discussion Point

Digital Technology for Inclusive Development

Agreed with

Xiaofeng Tao

Daisy Selematsela

Tamanna Mustary Mou

Agreed on

Digital technologies can promote inclusive development

Differed with

Horst Kremers

Differed on

Focus of digital technology implementation


Xiaofeng Tao

Speech speed

86 words per minute

Speech length

1698 words

Speech time

1175 seconds

Big data can support SDG implementation and environmental monitoring

Explanation

Xiaofeng Tao discusses how big data can be a key tool for supporting and evaluating the implementation of SDGs. He argues that new technologies can provide accurate information about the environment and human activity, serving as a source of knowledge for decision-making.

Evidence

Examples of using big data for environmental monitoring and rapid response were mentioned, such as deriving key variables from satellites for global change assessment.

Major Discussion Point

Digital Technology for Inclusive Development

Agreed with

Liu Chuang

Daisy Selematsela

Tamanna Mustary Mou

Agreed on

Digital technologies can promote inclusive development

Lack of infrastructure hinders access in rural areas

Explanation

Xiaofeng Tao points out that there are different capacities for earth observation and data accumulation in different countries. This disparity in infrastructure can hinder access to digital technologies, particularly in rural areas.

Evidence

He mentions the creation of a data hub to reduce data divide by integrating data from various organizations and providing computing facilities.

Major Discussion Point

Challenges in Bridging the Digital Divide

Agreed with

Ricardo Robles Pelayo

Agreed on

Infrastructure gaps hinder digital access


Horst Kremers

Speech speed

118 words per minute

Speech length

1313 words

Speech time

664 seconds

Information governance is needed for inclusive disaster management

Explanation

Horst Kremers emphasizes the importance of information governance in implementing the United Nations’ all-of-society principle in disaster management. He argues that inclusive disaster management requires involving various stakeholders and addressing their information needs.

Evidence

He cites the Sendai Framework for Disaster Risk Reduction and provides a list of stakeholders that should be involved in disaster management, including engineers, medical care organizations, and amateur radio associations.

Major Discussion Point

Challenges in Bridging the Digital Divide

Differed with

Liu Chuang

Differed on

Focus of digital technology implementation


Ricardo Robles Pelayo

Speech speed

112 words per minute

Speech length

1155 words

Speech time

616 seconds

There are significant digital access gaps in Latin America

Explanation

Ricardo Robles Pelayo highlights the digital divide in Latin America, where more than 240 million people still lack internet access. He argues that this digital exclusion reinforces social and economic inequalities in the region.

Evidence

He cites data showing that internet access has grown significantly in Latin America but remains a luxury in many rural communities due to high costs, lack of infrastructure, and insufficient technological skills.

Major Discussion Point

Challenges in Bridging the Digital Divide

Agreed with

Xiaofeng Tao

Agreed on

Infrastructure gaps hinder digital access

Public policies are needed to ensure universal connectivity

Explanation

Ricardo Robles Pelayo argues for the need for ambitious and well-designed public policies to ensure access to digital technologies and their effective use. He emphasizes that these policies should focus on generating opportunities in marginalized communities.

Evidence

He mentions the need for investing in technological infrastructure, teacher training, and policies prioritizing equity in access to health services.

Major Discussion Point

Strategies for Enhancing Digital Access and Skills


Daisy Selematsela

Speech speed

128 words per minute

Speech length

1166 words

Speech time

543 seconds

Digital technologies need to be democratized to address inequalities

Explanation

Daisy Selematsela argues for the democratization of digital technologies to address social and economic disparities. She emphasizes the need to foster inclusivity and diversity of voices in the digital space.

Evidence

She mentions policy documents in South Africa, such as the Reconstruction and Development Program of 1994 and the National Development Plan of 2012, which aim to address poverty and inequality through various initiatives.

Major Discussion Point

Digital Technology for Inclusive Development

Agreed with

Liu Chuang

Xiaofeng Tao

Tamanna Mustary Mou

Agreed on

Digital technologies can promote inclusive development


Lazarus Matizirofa

Speech speed

113 words per minute

Speech length

926 words

Speech time

491 seconds

Digital scholarship and humanities can enhance education

Explanation

Lazarus Matizirofa discusses how digital scholarship and humanities can enhance education by providing innovative tools and resources. He argues that this approach can help students and researchers move beyond traditional divisions between sciences and humanities.

Evidence

He mentions the use of a digital planetarium at the University of Witwatersrand and the digitization of historical papers and African rock art collections.

Major Discussion Point

Strategies for Enhancing Digital Access and Skills

Digitization of cultural artifacts can increase access to knowledge

Explanation

Lazarus Matizirofa argues that digitizing cultural artifacts, such as African rock art and historical papers, can increase access to knowledge. This process allows researchers to utilize and analyze these materials, making them available to a wider audience.

Evidence

He mentions the digitization efforts at the University of Witwatersrand, including the Rock Art Research Institute’s collection of images from various African countries.

Major Discussion Point

Strategies for Enhancing Digital Access and Skills


Tamanna Mustary Mou

Speech speed

131 words per minute

Speech length

1472 words

Speech time

672 seconds

Meaningful connectivity is crucial for women’s participation

Explanation

Tamanna Mustary Mou emphasizes the importance of meaningful connectivity for women’s participation in various sectors. She argues that ensuring internet access for women is crucial for their empowerment and equal participation in society.

Evidence

She cites data showing that men are 21% more likely to be online than women globally, and mentions the potential economic benefits of closing this gender gap in internet access.

Major Discussion Point

Digital Technology for Inclusive Development

Agreed with

Liu Chuang

Xiaofeng Tao

Daisy Selematsela

Agreed on

Digital technologies can promote inclusive development

Digital skills gaps and affordability are barriers for women

Explanation

Tamanna Mustary Mou identifies digital skills gaps and affordability as major barriers to women’s internet access. She argues that these factors, along with privacy concerns and cultural norms, contribute to the gender gap in internet usage.

Evidence

She mentions that women often have less income, making internet access more expensive for them, and cites concerns about online harassment and security as additional barriers.

Major Discussion Point

Challenges in Bridging the Digital Divide


Gong Ke

Speech speed

102 words per minute

Speech length

230 words

Speech time

135 seconds

Multi-stakeholder cooperation is key for technology implementation

Explanation

Gong Ke emphasizes the importance of multi-stakeholder cooperation in implementing digital technologies for inclusive development. He argues that collaboration between various sectors is crucial for ensuring that everyone benefits from digital advancements.

Evidence

He mentions the participation of experts from Asia, Europe, Africa, and America in the workshop as an example of international collaboration to address digital inclusion.

Major Discussion Point

Strategies for Enhancing Digital Access and Skills

Agreements

Agreement Points

Digital technologies can promote inclusive development

Liu Chuang

Xiaofeng Tao

Daisy Selematsela

Tamanna Mustary Mou

GIS methodology can benefit farmers and rural communities

Big data can support SDG implementation and environmental monitoring

Digital technologies need to be democratized to address inequalities

Meaningful connectivity is crucial for women’s participation

These speakers agree that digital technologies, when properly implemented and made accessible, can contribute to inclusive development across various sectors and demographics.

Infrastructure gaps hinder digital access

Xiaofeng Tao

Ricardo Robles Pelayo

Lack of infrastructure hinders access in rural areas

There are significant digital access gaps in Latin America

Both speakers highlight the issue of insufficient infrastructure as a major barrier to digital access, particularly in rural and marginalized areas.

Similar Viewpoints

Both speakers emphasize the importance of collaboration and inclusive governance in implementing digital technologies effectively.

Horst Kremers

Gong Ke

Information governance is needed for inclusive disaster management

Multi-stakeholder cooperation is key for technology implementation

Both speakers highlight the importance of digital skills and education in bridging the digital divide, albeit focusing on different aspects (general education and women’s access respectively).

Lazarus Matizirofa

Tamanna Mustary Mou

Digital scholarship and humanities can enhance education

Digital skills gaps and affordability are barriers for women

Unexpected Consensus

Cultural preservation through digitization

Liu Chuang

Lazarus Matizirofa

GIS methodology can benefit farmers and rural communities

Digitization of cultural artifacts can increase access to knowledge

While focusing on different areas (agriculture and cultural artifacts), both speakers unexpectedly converge on the idea that digitization can help preserve and promote local knowledge and cultural heritage.

Overall Assessment

Summary

The main areas of agreement include the potential of digital technologies for inclusive development, the need to address infrastructure gaps, the importance of multi-stakeholder cooperation, and the role of digital skills in bridging the digital divide.

Consensus level

There is a moderate to high level of consensus among the speakers on the importance of digital technologies for development and the need to address access gaps. This consensus implies a shared understanding of the challenges and potential solutions in bridging the digital divide, which could facilitate more coordinated efforts in policy-making and implementation of digital initiatives.

Differences

Different Viewpoints

Focus of digital technology implementation

Liu Chuang

Horst Kremers

GIS methodology can benefit farmers and rural communities

Information governance is needed for inclusive disaster management

While both speakers advocate for the use of digital technologies, Liu Chuang focuses on GIS for agricultural development, while Horst Kremers emphasizes information governance for disaster management.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were in the specific focus and application of digital technologies for development and inclusion.

Difference level

The level of disagreement among speakers was relatively low. Most speakers agreed on the importance of digital inclusion and the need to address various digital divides. The differences were mainly in the specific areas of focus or application, which can be seen as complementary rather than contradictory approaches. This suggests a multifaceted approach is needed to address digital inclusion comprehensively.

Partial Agreements

Both speakers agree on the existence of digital divides, but Ricardo Robles Pelayo focuses on regional disparities in Latin America, while Tamanna Mustary Mou emphasizes gender-specific barriers for women globally.

Ricardo Robles Pelayo

Tamanna Mustary Mou

There are significant digital access gaps in Latin America

Digital skills gaps and affordability are barriers for women

Takeaways

Key Takeaways

Digital technologies have great potential to benefit everyone, but significant divides and inequalities in access still exist

Multi-stakeholder cooperation and partnerships are crucial for implementing digital technologies inclusively

Big data, GIS, and other digital tools can support sustainable development and environmental monitoring

Improving digital access and skills for women and rural communities is a key challenge

Digital technologies in education, healthcare, and cultural preservation can enhance development outcomes

Resolutions and Action Items

Enhance partnerships and collaboration on big data for SDG implementation

Invest in digital infrastructure and skills training, especially in rural areas

Develop policies to ensure universal and affordable internet connectivity

Digitize cultural artifacts and knowledge to increase access

Address barriers to women’s internet access and digital participation

Unresolved Issues

How to effectively close the digital divide between urban and rural areas

Ways to make advanced digital technologies like AI accessible to marginalized groups

Balancing open data sharing with privacy and security concerns

Funding mechanisms for digital infrastructure in developing countries

Measuring and evaluating the impact of digital inclusion efforts

Suggested Compromises

Combining open data initiatives with robust data protection policies

Balancing investment in cutting-edge technologies with basic digital access

Public-private partnerships to expand digital infrastructure cost-effectively

Adapting digital solutions to local contexts while maintaining global standards

Thought Provoking Comments

We need to work together, right? So yeah, so I think that’s how we can think about the next step, how we can work together, right? Yeah. And then the second one is that now we go to the intelligence area, so we need to think about how to migrate, to benefit the data, and the internet, and to the society, to everybody, leave no one behind, right?

speaker

Liu Chuang

reason

This comment synthesized key themes from multiple presentations and proposed concrete next steps, emphasizing collaboration and inclusivity.

impact

It shifted the discussion towards actionable steps and reinforced the overarching goal of benefiting everyone through digital technology.

Education is one of the areas where ICTs can have the most significant impact, especially in Latin America, where educational equity remains a challenge. Incorporation of technological tools in classrooms modernizes teaching and opens new opportunities for students otherwise excluded from quality education.

speaker

Ricardo Robles Pelayo

reason

This comment highlighted a specific, high-impact application area for digital technology in addressing inequality.

impact

It focused the conversation on the practical implications of digital technology for social development, particularly in education.

Across the globe, fewer women than men use the internet. And research from Wave Foundation found that globally, men are 21% more likely to be online than women.

speaker

Tamanna Mustary Mou

reason

This comment introduced concrete data on gender disparities in internet access, bringing attention to an important aspect of digital inequality.

impact

It brought gender issues to the forefront of the discussion and prompted consideration of targeted approaches to increase women’s access to digital technology.

Overall Assessment

These key comments shaped the discussion by synthesizing diverse perspectives into common themes of collaboration, inclusivity, and targeted interventions. They moved the conversation from theoretical concepts to practical applications and specific challenges, particularly in education and gender equity. The comments also reinforced the need for multi-stakeholder cooperation and data-driven approaches in addressing digital divides.

Follow-up Questions

How can we enhance multi-stakeholder cooperation mechanisms to accelerate and improve the use of big data in our society and daily life?

speaker

Xiaofeng Tao

explanation

This is important to address challenges in implementing big data for SDGs and reduce digital divides.

What specific actions should be taken to improve the link between observation, computing, analysis, and knowledge discovery for supporting SDG implementation?

speaker

Xiaofeng Tao

explanation

This is crucial for effectively using new technologies like big data and IoT to support and evaluate SDG implementation.

How can we democratize access to medical technology to ensure advancements reach regional hospitals and marginalized areas?

speaker

Ricardo Robles Pelayo

explanation

This is important to address profound geographic and economic inequalities in healthcare systems.

What initiatives can ensure universal, affordable and high-quality connectivity for everyone, regardless of geographic location and socioeconomic status?

speaker

Ricardo Robles Pelayo

explanation

This is crucial for addressing internet access as a fundamental right and reducing digital exclusion.

How can we improve the wording of United Nations instruments, particularly for the next program on disaster risk reduction from 2030 to 2045?

speaker

Horst Kremers

explanation

This is important for preparing effective future UN programs and improving global disaster risk reduction efforts.

What policies should be implemented to make internet access more women-friendly and address barriers to women’s internet use?

speaker

Tamanna Mustary Mou

explanation

This is crucial for ensuring gender equality in internet access and use, particularly in developing countries.

How can we digitize rare materials from African institutions to provide wider access through digital platforms?

speaker

Lazarus Matizirofa

explanation

This is important for preserving and sharing valuable cultural and educational resources across Africa and globally.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #134 Data governance for children: EdTech, NeuroTech and FinTech

WS #134 Data governance for children: EdTech, NeuroTech and FinTech

Session at a Glance

Summary

This discussion focused on data governance for children in the context of emerging technologies, specifically EdTech, FinTech, and NeuroTech. Experts explored the risks and benefits associated with processing children’s data in these domains, as well as governance models and regulatory frameworks.

The panel highlighted potential benefits of these technologies, such as personalized learning in EdTech and enhanced financial literacy through FinTech. However, they also emphasized risks like privacy concerns and potential exploitation of children’s data. The importance of multi-stakeholder governance models was stressed, with examples including regulatory sandboxes and public-private partnerships.

Participants discussed the challenges in implementing existing regulations and the need for better guidance for schools and teachers in choosing EdTech products. The conversation touched on the convergence of technologies and the difficulty in predicting future developments, particularly in NeuroTech.

The panel explored the global divide in both technology access and regulatory frameworks, emphasizing the need for a level playing field. They discussed the potential future implications of these technologies, including the possibility of cognitive enhancement and the integration of financial services into various digital platforms.

The discussion concluded by emphasizing the importance of maintaining a balance between innovation and protection in future regulatory approaches. Participants stressed the need for a holistic child rights approach when considering the future of technology and data governance for children.

Keypoints

Major discussion points:

– Risks and benefits of data processing in emerging technologies like edtech, fintech, and neurotech for children

– Multi-stakeholder governance models and regulatory approaches for children’s data protection

– Implementation challenges and gaps in existing legal/regulatory frameworks

– Future trends and concerns regarding these technologies and their impact on children

Overall purpose:

The goal of the discussion was to explore data governance issues related to emerging technologies that impact children, identify challenges and promising practices, and consider future implications and regulatory needs.

Tone:

The tone was primarily analytical and forward-looking, with speakers offering expert insights on complex issues. There was a sense of cautious optimism about potential benefits balanced with concern about risks. The tone became more speculative and urgent when discussing future trends and the need for proactive governance approaches.

Speakers

– Jasmina Byrne: Chief of Foresight and Policy at UNICEF

– Sabine Witting: Assistant professor for law and digital technologies at Leiden University, co-founder of TechLegality

– Emma Day: Co-founder of TechLegality

– Melvin Breton: From UNICEF

– Aki Enkenberg: From Government of Finland

– Steven Vosloo:

Additional speakers:

– Jutta Croll: From the Digital Opportunities Foundation in Germany

Full session report

Data Governance for Children in Emerging Technologies: A Comprehensive Overview

This discussion brought together experts from various fields to explore the complex landscape of data governance for children in the context of emerging technologies, specifically focusing on EdTech, FinTech, and NeuroTech. The panel, which included representatives from UNICEF, academia, and government, aimed to identify key challenges, opportunities, and future implications of these technologies for children’s rights and well-being.

Benefits and Risks of Emerging Technologies

The discussion began by acknowledging the potential benefits of these technologies for children. Emma Day highlighted the personalised learning opportunities offered by EdTech, such as adaptive learning platforms. Melvin Breton emphasised the role of FinTech in enhancing financial literacy from a young age, including through gamified savings apps. Aki Enkenberg noted the potential benefits of neurotechnology in health and education sectors, such as early detection of learning difficulties.

However, these opportunities were balanced against significant risks. Jasmina Byrne raised concerns about privacy and security risks associated with data collection, particularly the potential for data breaches in educational settings. Melvin Breton warned of the potential for manipulation and exploitation in FinTech, particularly given children’s vulnerability to persuasive design techniques. Aki Enkenberg cautioned about the risk of unconscious influencing through neurotech, especially as it moves from medical to consumer spaces.

Governance Models and Implementation Challenges

A key theme that emerged was the need for multi-stakeholder governance approaches to address the complex challenges posed by these technologies. Sabine Witting and Emma Day both emphasised this point, highlighting the importance of involving diverse stakeholders in shaping governance frameworks.

Emma Day and Melvin Breton discussed the value of regulatory sandboxes as a means of fostering innovation while ensuring compliance with regulations. Day explained that these sandboxes allow companies to test new products or services in a controlled environment, under the supervision of regulators, helping to identify potential risks and regulatory issues before full market deployment.

The discussion highlighted significant implementation challenges, particularly in EdTech. Emma Day noted that the main issue was not necessarily gaps in the regulatory framework but rather difficulties in implementing existing regulations, especially at the school level, underscoring the importance of capacity building and support for educators and administrators.

The cross-border nature of many of these technologies was identified as a particular challenge by Emma Day, highlighting the need for international cooperation in governance approaches. Additionally, the panel discussed the digital divide and its implications for data governance in different parts of the world, recognizing that approaches may need to be tailored to different contexts.

Regulatory Frameworks and Gaps

While Emma Day emphasised implementation challenges, Steven Vosloo suggested that existing laws may not fully cover new technologies, particularly in the realm of neurotechnology. This highlighted a tension in approaches to regulation, with some speakers focusing on better implementation of existing frameworks and others calling for new regulatory approaches.

Steven Vosloo recommended that countries conduct policy mapping exercises to identify regulatory gaps, particularly for neurotechnology. This proactive approach was seen as crucial given the rapid pace of technological development and the move of neurotechnology from medical to consumer spaces.

Aki Enkenberg highlighted the challenge of regulating converging technologies that cross traditional regulatory boundaries. He also provided insights into Finland’s approach to data governance for children, which includes strong protections for children’s data and efforts to promote digital literacy.

Jasmina Byrne raised the issue of global fragmentation in regulation, emphasising the need for more uniform safety standards across different jurisdictions. Emma Day noted different approaches to enforcement, with some regulators taking a more collaborative approach while others favored punitive measures.

Future Developments and Challenges

Looking to the future, the panel identified several key trends and challenges. Aki Enkenberg and Melvin Breton both highlighted the ongoing convergence of different technology domains, with FinTech expanding into new areas such as gaming, the metaverse, and NFTs.

Steven Vosloo raised the possibility of a future divide between “treated, enhanced and natural humans” as a result of neurotechnology, highlighting potential equity issues that may arise from cognitive enhancement technologies.

Emma Day noted the geopolitical influences on EdTech development, highlighting the dominance of American and Chinese companies and European efforts to develop alternatives. This geopolitical dimension was seen as a crucial factor shaping the future landscape of educational technologies.

Throughout the discussion, Jasmina Byrne emphasised the need to shape technology development with child rights in mind, calling for a holistic child rights approach when considering the future of technology and data governance for children.

Conclusions and Future Directions

The discussion concluded by emphasising the importance of maintaining a balance between innovation and protection in future regulatory approaches. The panel stressed the need for adaptive governance models that can respond to rapidly evolving technologies while ensuring robust protections for children’s rights.

Key takeaways included the need for multi-stakeholder governance models, the importance of addressing implementation gaps in existing regulations, and the value of proactive approaches such as regulatory sandboxes and policy mapping exercises.

The panel identified several unresolved issues, including how to effectively regulate converging technologies, address global fragmentation in regulation, and incorporate child rights principles into technology development.

Emma Day mentioned UNICEF’s ongoing work on case studies about innovations in data governance for children, demonstrating continued efforts to address these complex challenges.

In conclusion, the discussion highlighted the critical importance of developing comprehensive, rights-based approaches to data governance for children in the context of emerging technologies. As these technologies continue to evolve and converge, ongoing dialogue and collaboration between diverse stakeholders will be crucial to ensuring that children can benefit from technological innovations while being protected from potential harms.

Session Transcript

Sabine Witting: EdTech, FinTech and Neurotech. My name is Sabine Witzing. I’m an assistant professor for law and digital technologies at Leiden University and the co-founder of TechLegality together with my colleague here, Emma Day. And we are joined today by a variety of speakers both online and offline. And I will ask the speakers to introduce themselves when I hand over to them. And I would really like to encourage participation both online and in the room here. Be critical, ask questions. We have brilliant people here who have possibly all the answers we’ll see about that. But otherwise they will ask you more questions. So I think it will be an interesting session. So let’s get started and let me hand over straight away to Jasmina. She is online for introductory remarks and setting the scene. Jasmina, over to you.

Jasmina Byrne: Hello everyone and good afternoon. I hope you had a productive day of sessions today. Sabine, shall I just… Yeah, I’m Jasmina Byrne, Chief of Forsythian Policy at UNICEF.

Sabine Witting: We can’t hear the online speaker. Oh. Jasmina, just hold on a second. Okay, we can hear you now, please proceed.

Jasmina Byrne: Oh, good afternoon, everyone. I’m Jasmina Byrne. I’m Chief of Forsythian Policy in UNICEF. Shall I hand over to colleagues or proceed with my…

Sabine Witting: No, please go ahead. We’re welcome in setting the scene.

Jasmina Byrne: Oh, okay, all right. Thank you so much, Sabine. Well, I hope you all had a productive day at IGF and I’m really sorry I’m not there in person. This is one of my favorite conferences, but you are in really good hands with Emma, Sabine, and Steve, my colleagues. And online, we have Melvin Breton, also from UNICEF, and Arki Enkenberg from Government of Finland, who is actually our key partner in the implementation of this initiative. And this session today is about rights-based data governance for children. across three emerging domains, education technologies, neurotechnology, and financial technology. So we have been working with about 40 experts around the world to understand better how these frontier technologies impact children, and particularly how data used through these technologies can benefit children, but also if it can cause any risks and harm to children. We all know that globally EdTech has been at the forefront of innovation in education. It can help with personalized learning. We see that the data sharing through education technologies can improve outcomes in education, facilitate teacher sessions, plans, administration, and so many other things. Other innovative technologies like Neurotech are currently being tried in diverse settings, and they offer great opportunities for improving children’s health and optimizing education. Financial technologies as well allow children to take part in digital economy through digital financial services. So all of these innovative technologies have also created data-related risks, particularly in relation to privacy, security, freedom of information, and freedom of expression. At the same time, we are seeing really rapid introduction. As we see a rapid introduction of these technologies into children’s lives, the policy debate is a little bit lagging behind. So this is why we hope that this initiative and the partnership with Government of Finland will not only help us identify what are the benefits and risks for children through use of these technologies and data sharing through these technologies, but also to help us. formulate policy recommendations for responsible stakeholders. And in this case, there are ministries of education, finance, consumer protection authorities, data protection authorities and others. So I’ll hand over to Sabine now to moderate the session and I hope we are going to have a productive discussion. Thank you all.

Sabine Witting: Thanks so much, Jasmina, also for laying out the kind of three blocks that we will be discussing in the session today. So we will first look at the risks and benefits associated with processing and collection of children’s data in these three domains. Then we will look at the governance models and lastly at the regulatory and policy frameworks. So let’s dive right into the first block. And as I’ve mentioned, we want this to be an interactive session. So after each block, we will have a Q&A session. So Emma, maybe I can start with you. As Jasmina was saying, there are lots of risks and benefits associated with data processing in the context of these emerging technologies. And maybe let’s zoom into the first domain into edtech, which I think is the most obvious one when you think about data governance and children. And maybe you can tell us a little bit about the examples that you have where edtech may be used or the data governance may be used for good in the context of children. Thank you.

Emma Day: Yeah, thanks so much, Sabine. So I think you’re probably aware that there’s currently a lot of debate about the benefits that can be derived from edtech in general, including first the pedagogical benefits, so the benefits for teaching and learning. So when we think about data processing, any data that’s collected from children must be both necessary and proportionate for this to be lawful under data protection law. So for edtech to be necessary, it must first serve an educational purpose. And there’s still much debate about to what extent edtech products do serve an educational purpose and where that purpose has been identified, then it’s also not yet really clear what benefits can be derived from the data that’s processed by edtech. For example, by sharing those data with the school, with the government to analyse for more evidence-based kind of policymaking. I think there’s still a lack of clarity around exactly what data would be helpful. What are the questions that we’re seeking to answer with these data? There’s much debate about the potential for personalised learning. And this relies on algorithms which learn from individual children’s data and steer them. their learning to suit their personal learning needs. And data from these kinds of tools can also potentially be shared with teachers. And then perhaps their teachers can identify early which of their students are falling behind, particularly if they have a very large class of students, they may miss a student, but if they have this, an algorithm can show them which students in their class are falling behind the rest of them. And it may also help them to look at equity to ensure that girls, children with disabilities, and children in rural areas are receiving the same opportunities as everyone else. And then finally, on this point, there’s some interesting projects looking at how children can have more agency, so they are actually benefiting themselves and they’re able to share their data for their own benefit in privacy preserving ways. So for example, in the UK, the ICO, which is the Information Commissioner’s Office, has just started a sandbox project with the Department of Education. And this is aiming to enable children to share their education data securely and easily with higher education providers once they reach the age of 16. So I will leave it there. I’m sure there are many other benefits and we’ll let the audience come in with more a little bit later.

Sabine Witting: Thanks so much, Emma, for laying out these benefits in the context of EdTech. And Melvin, if I can hand over to you and maybe you can tell us a little bit about the benefits and risks in relation to the FinTech sector. Melvin, over to you.

Melvin Breton: Thank you so much, Sabine. I think similarly to EdTech, you can really think about all these technologies that are enabling better data processing as sort of double-edged swords. You can think about, in the application with FinTech, ways in the most obvious way in which it benefits children. is in enhancing financial literacy from a young age, right? The better, the more data, the better data collection that you carry out as some of these technologies are being used by children, you can learn about their money habits and perhaps can have personalized nudges that alert them that they’re overspending in central categories that they need to save or nudge them towards developing healthy saving habits and healthy spending patterns as well, right? So the better the processing using emerging technologies, the better this kind of ongoing feedback and real-time feedback becomes and helps kids develop good money management skills. And you can also think about at the intersection FinTech and EdTech about using this data to develop purpose-built applications for education in financial literacy. So that’s on the positive side. There are other many applications. If you think about the intersection of public policy and FinTech, you have the commission of the rights of the child establishes the right to social security and social protection. And there’s a lot of applications of FinTech in handing social security and social protection benefits and cash transfers in different contexts that are enabled by FinTech. And the better the data processing technologies become, the more efficient and agile the social protection applications of financial technology can become. We’re looking at in different parts of the world. issues with the population and you’re also looking at future, the future of labor markets and people are talking about universal basic income. How about universal child benefits? Starting there and seeing how emerging technologies can enable us to make universal child benefits universal and much more efficient. So that’s on the benefits side, many more. On the risks, there’s always the risk of exploitation, as with any technology, more information means more opportunities for bad actors to target their attacks to children, promoting on the weak side, on the downside of better spending habits, you can also promote children and young people, not just children, to overuse some of these financial technologies, sometimes to their detriment. And we’ve seen some alarming cases with, for example, trading apps, stock trading apps, addressing mental health issues and harms to children or to young people, rather. And there’s also the potential for manipulation, for making children buy things that they don’t necessarily need, making it available for them to buy products and services that are harmful. And then there’s the whole issue of facilitating addictive behaviors and through in-app purchases and things like that. So we can get into either any of those more, but I’ll just leave it there for now. for the time being, over.

Sabine Witting: Thanks so much, Melvin, for that. I think that was really interesting to see also how a technology like FinTech that we maybe might not have thought about initially when you think about children’s data also has these risks and benefits. Thanks so much, Melvin, for laying these out. Aki, maybe you can share a few examples and your experience from Finland and this area around the risks and benefits across these frontier technologies. Aki, over to you.

Aki Enkenberg: Yes, absolutely. And I’m very happy to be here. Thanks, UNICEF, for inviting me to be part of the panel. It’s quite a timely issue that does require strong multistakeholder cooperation. And the IGA is a really good platform for taking these issues, debate around these issues further. And we also have to keep in mind, and this is a broader point, that the recently approved global data digital compact puts issues around data governance for the first time firmly on the global development agenda. And we should be also mindful of systematically including a child lens in these discussions going forward. But from the Finnish standpoint, looking at what we’ve done nationally, a couple of remarks with a specific focus on the education or education system. We’ve long recognized that children and youth do need to be considered through specific perspectives in relation to digital technologies, AI and data. This kind of perspective has been part of our kind of national thinking around AI policies, data policies, and we’ve also worked together with UNICEF on these issues, both on AI and data governance with important benefits for our national policymaking. The tradition has also been that we’ve had strong multistakeholder cooperation in place at the national level to be able to uncover evidence, make informed choices, take informed action, et cetera. in our context. So this realisation that children and youth are in the forefront from the point of view of evolving use of new technology is quite crucial, especially in relation to social media. They’re often early adopters of new services but also potentially less mindful of privacy concerns, they’re less informed about their data rights, perhaps care less about those rights, etc. And in national policymaking there’s often this tendency to really prioritise the potential and promotion of technology in national AI or data strategies, for example in education or health, but a lot less focus on safeguarding rights or child rights specifically. Children and youth are faced with quite complicated legal frameworks, insufficient understanding of their own rights, social pressures that make it difficult to opt out, etc. And of course when we’re talking about young children specifically, they’re not in a position to make these choices in the first place, so they have to rely on others to make them for them. But in terms of our measures, first I’ll bring up this priority of strengthening the agency of children and youth to kind of regard them as active agents in their own right when it comes to data governance, to support their capacity and competence to act. And this is also something we’ve considered quite important from the point of view of developing democratic citizenship also in Finland. So data and AI literacy as a first step has received special attention in our case. We’ve realised the need to update media literacy education for the data and AI age. There’s a number of research and development projects focusing on developing guidance and approaches for schools and teachers, etc. in this field. And the focus most often is on making sure that child rights are integrated in how schools adapt. and use tech or digital services in their daily operations. There are some flagship projects by several universities, also by Sitra, our national innovation fund, funded by the Academy of Finland, educational authorities, et cetera. 
For example, there’s a project called Gen-I, funded by the Council of Strategic Research, which focuses on exactly this evolving landscape of data and AI literacy and what it takes to be able to understand the implications on data governance as well. But secondly, besides this, there is this realization that this ongoing datafication of schools and educational settings call for improved standards and certifications for technology. Because when you look at what’s going on in the private sector, there’s an increased focus on measuring cognitive processes, emotional response of children, behavior of them, by them in different settings, where they learn and are being taught. And of course, the key benefit there is that by automizing learning analytics, teachers can then focus on student interaction and support individual learning better. But there is this tendency of growing and continuous data gathering, where neurotechnology is also increasingly part of the problem. It provides deeper insight into processing of information, learning by children, but also raises new questions around how that data is governed. So as a response, our Finnish National Agency for Education is preparing a comprehensive package of guidance at the moment, not only focusing on what children should learn and how they should learn in the digital age, but also what kinds of tools and services should be used by schools and teachers to ensure the quality and safety of digital content and services, and to engage in regular dialogue with the actors involved in producing these contents and services. And as I mentioned in the beginning, the belief really is that none of this can be done by the governments alone or our authorities alone, but through active cooperation with research community, edtech companies, schools and parents.

Sabine Witting: Thank you. Thanks so much, Aki, for this intervention, for sharing the experience from Finland. And you provided me with the perfect segue into the kind of second block of the conversation, which is around governance models. You said that none of the stakeholders can do it alone, and I think that holds true for a lot of the topics we’re discussing at IGF, but specifically for these new forms of data governance. And you also mentioned the Global Digital Compact and how the Global Digital Compact is also encouraging this multi-stakeholder governance model. So maybe we can think a little bit about what data governance could look like for these three domains. And of course, when we think about data governance, we first think about the DPAs, the data protection authorities. But of course, this topic is much broader than only focusing on the DPAs. I would like to hear a little bit more about the multi-stakeholder models that can be deployed to govern these frontier technologies. And Melvin, maybe I can start with a question to you in the context of fintech. What are some of the multi-stakeholder governance models that are working in this particular space?

Melvin Breton: Yeah, thank you, Sabine. I think with fintech, it’s particularly complex, right, because financial services are a very established area of regulation. And fintech comes and adds the technological layer on top of that and creates intersections. I was mentioning before with edtech, but with social media and many other environments in which data is being processed. So it needs to be multi-stakeholder if we’re going to have effective governance. You can think about, there are some examples of public-private partnerships that allow companies to opt in to some sort of data, more advanced data protection regulations in the context of a regulatory sandbox to see how that might work. And there are other sort of frameworks like open banking conglomerates that allow better sharing of information between financial institutions and the government that you can also bring FinTechs into to make sure that all the information is transparent and complies with data governance regulations. So the challenge really is that as you develop these technologies, you’re creating new tools and you’re creating new data that may not be covered by existing either financial regulations or data protections and data governance regulation. And if you have a very wide ranging data governance regulation, but there’s the financial sector operating in a sort of separate environment where data is not flowing from financial systems to the broader government, then you run into a problem where you have, in principle, data regulation, but you don’t know what you don’t know, right? You don’t know what… information is being generated through the use of these fintechs necessarily that may be covered in principle by the data governance regulation but may not be visible to the regulators on the data governance side and maybe not even to the financial regulator, right? So the multi-stakeholder model since this is such an emerging and rapidly evolving area, we’re seeing the successful use of regulatory sandboxes as I was mentioning before where companies can opt in to see how these processes of sharing information and sharing data can balance issues like privacy, governance but also the efficiency and effectiveness of some of these services and when it comes to children right now we are seeing very little in terms of regulatory initiatives in fintech that take children into account specifically mostly that’s happening at the level of data governance regulations and that’s where children are protected but fintech per se is not yet perhaps because the regulatory landscape is still maturing it’s not taking steps to to protect data related to children specifically so that’s that’s something that we would like to see, open bank and conglomerates, public-private partnerships, regulatory sandboxes for fintech companies to opt in and work closely with the government to see the intersection of data governance regulations and financial regulations and fintech-generated information and data in the future. So I’ll leave it at that, over.

Sabine Witting: Thanks so much, Melvin. I think we all see this as a very complex issue, and the more we dive into it, the more complex it gets, and I think you highlighted the importance of regulatory sandboxes as an innovative data governance model, and also the importance of public-private partnerships in this context. Of course, one player that is very important, especially also at a forum as the IGF here, is the role of civil society. Traditionally, many contexts of society are upholding the importance of human rights and children’s rights in this context. And Emma, maybe you can tell us a little bit about more, what role do you see for civil society in these various multi-stakeholder models for data governance for children?

Emma Day: Yeah, great question. And before I get specifically to that, I just want to loop back to this issue of regulatory sandboxes, because I think these come from the fintech sector, as Melvin is describing, but as part of this project on data governance for children that UNICEF is leading at the moment, we’re producing a series of case studies on innovations in data governance for children. And one of those case studies is going to look specifically at the role of regulatory sandboxes in data governance for children. And I think these are a very promising model of multi-stakeholder governance that could have great potential for the education sector. Now, we see that they’re usually used a little bit more narrowly by regulators, so often data protection authorities will put out a call for applications to the private sector, and private sector companies will then work with the regulator on some of these kinds of frontier technologies like edtech or fintech, or perhaps even neurotech, where it’s not clear yet how the law or the regulation applies in practice, because this is such a new technology. And then there is a set period of time and there’s an exit report, which is publicized usually so that other people in the sector can learn, other companies can learn what are the boundaries of regulation, and the regulator can then learn how they maybe should change that regulation. and move as the tech moves also. But I think what’s most promising is what we’ve seen. There is an organization called the Datasphere Initiative, and they’re looking at the role of regulatory sandboxes much more from this multi-stakeholder perspective. So including also civil society is the missing piece in these sandboxes, working together with regulators and with the private sector on these big questions about how to govern these frontier technologies. What is still missing though is involving children. We haven’t seen an example yet of a regulatory sandbox. There are some which are about children, but there are not any which actually involve the participation of children. And the other, I think, innovative aspect of this multi-stakeholder regulatory sandbox that the Datasphere Initiative is promoting is they’re looking at cross-border sandboxes also. So many of these tools, like edtech tools in particular, are used across many different countries, often they’re multinational companies. And so it’s really not a question for one regulator. And in fact, it’s much better for everyone if these kinds of technologies are interoperable and regulators can come together and tackle these questions together as much as possible, and also involve civil society as much as possible from the regions where this edtech will be deployed. So I think this is not yet happening to our knowledge within the education sector, but it seems to be a very promising model for the future.

Sabine Witting: Thanks so much, Emma. Lots of potential, as you can hear, with the different data governance models. And maybe let us pause here for a second, because I think this was already a lot of content. And if you were listening to Emma and wondering the whole time, what is a regulatory sandbox, also please, you see, okay. So maybe before we go into the first block of Q&A, maybe Emma, a quick explanation what a regulatory sandbox is. Thank you.

Emma Day: Yeah, so a regulatory sandbox is an arrangement usually between a regulator. So it could be, often it’s a data protection authority actually, because they’re usually about data processing. And so the data protection authority wants to work with the private sector to explore how the regulation should be put into practice. So if you think about in an example from EdTech, say there was a new kind of immersive technology that suddenly became available for education where children could become avatars and they could put on a glove and feel things, there would be some risks and some benefits, and maybe the regulator would want to explore those with the company. And so there’s always this question of trust, right? where the company is worried that the regulator is just going to bring an enforcement action against them. And so within this sandbox, it’s kind of a protective framework where the companies can explain the technology they’re exploring and the regulator can then have an interaction with them and tell them if the direction they’re going in is going to be lawful or if they’re gonna end up in a risky area. It’s still, in most countries, regulators still will not allow the company to experiment with something that is not lawful or that is actually prohibited by regulation. But it’s a way for usually a product that’s still in the development phase to get the guidance from the regulator on how to navigate that space forwards. I hope that makes sense. Yeah, absolutely.

Sabine Witting: Thanks so much, Emma. Yeah, so essentially before you unleash technology on lots of people, maybe let’s first try from a compliance perspective, what is it that we can do to avoid the most severe adverse impacts? So that’s the idea to then strengthen compliance once the product is on the market. So let me stop here. And you can also ask another question on regulatory sandboxes in case that wasn’t clear. So let me maybe give the opportunity to people in the room on these first two blocks to ask any questions pertaining to what we’ve heard, risks and benefits with regards to these technologies, governance models and multi-stakeholder models. Any questions from the floor at this point in time? Yes, there in the back. Do we have a running mic? Yeah. Sorry, can I take yours? Yeah, yeah. Thank you so much. Thank you. Yeah.

AUDIENCE: Thank you. This is to Emma. Emma, you mentioned about regulatory sandboxes. Have you seen, I know, which countries or which regulators are great examples to follow?

Emma Day: Thank you. So from what I’ve seen of this particular model of multi-stakeholder governance, which includes civil society, the focus has been in Africa on health tech. And there have been cross-border regulatory sandboxes that the Data Sphere Initiative has been coordinating. And so the Data Sphere Initiative is a third party, which maybe also makes it easier that it’s not the regulator who is actually leading the sandbox, and they bring all of the different stakeholders together. The regulatory sandboxes that we… We see more within Europe, generally more just the regulator with the private sector without that civil society piece so far. But if anyone has any examples they know of that they want to share, we’d also love to hear more about those.

Sabine Witting: Thanks so much. Emma Jutta, please.

Jutta: Yes. Jutta from the Digital Opportunities Foundation in Germany. My question goes to Malcolm. Probably it’s also interesting for the person that was talking about ad tech. I just think that the data of children in the fintech sector are of huge interest because they will be the customers of the future. And we’ve been talking about privacy, but what about security of these data? How do we make sure that these data are not exploited for any purpose that we don’t want them to be? Thank you.

Sabine Witting: Thanks so much, Jutta. I think for Melvin. Melvin, maybe you want to start and then Aki, if you want to add anything to that.

Melvin Breton: Sure. That’s the million dollar question, right? I think if we knew how to prevent these data from being exploited and used for nefarious purposes, we probably would be doing it already. I think there is an intense tension between innovation and development of new technologies and new applications in the fintech sector and the protection of data related to children. It’s also not clear cut because a lot of the use of financial applications is not necessarily happening in fintech apps, but it’s happening in social media apps that have payments enabled or where you can purchase certain items. It’s happening in games where you have in-app purchases. and loot boxes and all these things that you can purchase from within the game and that don’t necessarily require multiple instances of approval from a parent. So you set it and forget it in a way and then you have the credit card data or whatever payment form that you have and then you run with it. And then there are a lot of transactions that are being carried out by children in platforms and apps that have the parent’s information data. You can think about online shopping platforms where children often have access to their parent’s account to purchase this or that item. So that’s to say the information that is generated and collected about children and that is generated from children in financial applications and financial technologies is scattered. I think regulatory sandboxes for fintech applications are a good first step to see how we can develop ways of collecting that dedicated information that’s being generated in the context of the fintech apps and services. We’ll see how that develops. Then there are, as I was saying, the other financial applications of technologies that are not necessarily fintech apps where the conversation is part of a broader conversation related to the data that’s being generated and used in those other applications. I mentioned games and I mentioned social media. There’s currently the debate about the Kids Online Safety Act in the US. What are the, I don’t know that there’s a lot of focus on the financial aspect within that legislation. How can we pay more attention to financial applications and financial transactions that kids are carrying out outside of dedicated FinTech apps at the same time as we use regulatory sandboxes to try and regulate that within the dedicated FinTech apps? I think that’s gonna be a big question. And that’s to not even mention crypto blockchain, decentralized finance, which is perhaps another kind of warms. So I’ll leave it at that for now.

Sabine Witting: Thanks so much, Melvin. I think more questions now, but I think one point was very important is that because some of you might’ve wondered like how often does a child actually make a bank transfer on an app? But I think that aspect what you were mentioning about how FinTech is embedded in typical digital environments where children are engaged. I think that was a very important point. And then to think about in a second step about data processing and also secondary data processing and all the problems that come with it. I had then two hands up on both sides. Let me give to Steve first and then to Emma. No, you didn’t want to? Oh, sorry. Okay. He’s like, Emma, please.

Emma Day: Thanks. So just, I wanted to come back on this point about cybersecurity, which I think is a really important point. There’s a big part of this discussion that what we’ve been seeing, we’ve been interviewing regulators around the world. So data protection authorities, and it’s clear that really in every country, it’s very common that at a school level, there is a big security breach and children’s data is leaked. And even at levels of ministries of education. So when we’re talking about the benefits of sharing all of this data, it’s not really something an ed tech company can necessarily, the problem may not be with them. The problem may be with the school or with the government in terms of the cybersecurity they’ve put in place. So we need to, that’s a big part of the picture to enable it to be a safe and trusted environment to implement these new technologies.

Sabine Witting: Yeah. that comes with accountability for all of the stakeholders that are involved in the deployment of these technologies and clear roles of who should be held accountable and how. So any other questions on these topics at this point in time from the floor? Also online, I don’t think we have any questions online. Any other questions from the floor? No? All right, wonderful. So then let’s move on to the next two blocks. So we spoke about the risks and benefits. We spoke about governance models. And of course, we can’t say governance without saying law and regulation. So let’s look at that next. So when we are looking at these kinds of emerging technologies, of course, the classic conflict comes up. How does law and regulation keep up with that? Technology is changing all the time. Children’s vulnerabilities in this context are changing all the time. So how can we address these? And maybe, Stephen, you can tell us a little bit about more. What do you see in the context of the legal and regulatory framework? And how does it apply to the field of neurotech, which is the kind of third domain that we haven’t spoken about yet? But Stephen, maybe before you go into the regulatory context, maybe explain quickly what neurotech is and how it impacts children.

Steven Vosloo: Thanks, Sabine. That’s a great lineup. Thank you. And good point, because not everyone knows what it is. So very quickly, neurotechnology is any technology that looks at neural or brain signals and the functioning of the brain or the neural system. So it could record those functions. It could monitor those. It could modulate or even kind of write to. I’m a computer scientist, so I must kind of write to the brain and write to brain data and make some neural changes. And so it could impact children in many ways. I’ll talk a little bit later about, let’s say, neurotechnology in the classroom to help monitor levels of concentration, for example. And so that’s kind of monitoring brain activity, and we’ve seen examples of this in some classrooms around the world. So just one other thing on that, neurotechnology is either generally, the technology itself is either invasive or non-invasive. And so the invasive side is what you may have seen with very severe neural disorders like quadriplegics, who actually have a chip implanted in the skull, kind of on the brain. And so with their thoughts, they can move a mouse or communicate or kind of interact with computers. So it gives an incredible amount of agency and autonomy to people who otherwise are physically paralyzed. The other side is non-invasive. And this is actually where the space is going to probably go more and impact children more. So this is less accurate than the very heavy kind of medical, clinical invasive side. But it’s also less invasive. You know, it could be a headband that you wear, so it’s much easier to kind of buy this technology. And again, it could look at your levels of concentration or so forth. So you asked about the laws and regulations. Neurotechnology is not advancing in a regulatory void or vacuum. We have existing regulations, existing laws, including the Convention on the Rights of the Child. The question is, do they apply to this frontier technology? And so we see, for example, in the UK, the ICO, which is the Data Protection Authority, looking, has done some research into looking at existing laws within the UK to see if they provide cover for neurotechnology. And they’re in the investigation phase. And the same is happening in Australia. The Australian Human Rights Commission has been investigating, you know, does the existing regulatory framework cover neurotechnology? So then what is the answer? And we’ve been thinking of two camps and I’ll give you some examples. In Europe, for example, the European Parliament also did an investigation and basically found that they think the existing laws and frameworks do provide enough cover. So there’s the EU Charter on Fundamental Rights and Freedoms and there’s the European Convention on Human Rights. And then what we know in the context of data governance, particularly the GDPR, which probably broadly applies to neurodata, because I should have said earlier, you know, any kind of monitoring of brain functioning translates into data essentially. That’s how you record it and that’s how you analyze it. And so there’s GDPR. There’s also the European AI Act that’s coming into effect soon, which doesn’t speak about neurotechnology directly, but for example prohibits the use of emotion detection AI in the workplace and in the classroom. And that would often be captured by a neurotechnology. And here we see, I also should have mentioned, a real convergence of technologies and that’s what complicates the space more, because neurotech is not new. 
It’s been around since the 70s, but it’s recently that it’s really made advances, and that’s in part due to advances in AI and the ability to process the large amounts of data that are getting captured. So other countries have said no, the existing laws don’t provide enough cover; they need to make some changes. And these especially come from Latin America. In Chile, for example, there was a constitutional amendment in the last two years that really picked out the sensitivity of brain and neural data. And there was a world-first case fairly recently in Chile, where there was a commercial neurotech product, and somebody bought the product and said they were not happy with the terms and conditions, where you don’t quite know where your data is going, who’s processing it and who the third parties are. And that went all the way to the Supreme Court, and the Supreme Court judged that the neurotech company needed to cease operation until it addressed that issue. Brazil is introducing a broad law that will result in 92 new articles and 35 amendments to existing laws, because they also didn’t think that the existing space provided enough cover for novel kinds of issues around neurotech; and these are health laws, these are laws across a range of sectors. And then lastly, in the US, two of the states, California and Colorado, have updated their personal data privacy and data protection regulation to really pick out neural data and brain data, and there the FTC, which is a consumer protection body, has also gone after some companies, more actually for misrepresentation, where companies say this product can read your brain data and will help you to do X, and it can’t really, it’s still too rudimentary, and so it’s misrepresentation. So I’ll close there just to say that some countries feel there’s enough cover, others don’t, and it seems to be landing in different ministries and being looked at through different lenses. Our recommendation is that all countries should do a policy mapping exercise to look at what is in place at the national level, look at the opportunities, risks and emerging use cases from neurotech, and consider whether there is sufficient protection and cover in place.

Sabine Witting: Thanks so much for that explanation and the different examples. You were also speaking about convergence of technologies, and I think it’s also a convergence of regulations that we see, right, and how they can be applied and what gaps we have. And I think, at a very practical level, what you said last is that there are different ministries involved: who is going to lead law reform, but also implementation of the laws? For example, in the context of neurotech, is it the Ministry of Health? Is it a data protection authority? Is it a communications regulator? Are these three all working together? You mentioned the AI Act in the EU, and how does the AI Act apply together with the GDPR in this context? So I think it’s exactly that: it’s that mapping exercise first, to really understand how these regulatory mechanisms all interact. Emma, maybe to say, okay, if we recognize there might be some gaps, even though we might look at convergence of different regulatory frameworks and we pull everything we have together, we still have gaps. How are we gonna fix this?

Emma Day: I think Stephen’s recommendation is a very good one, this mapping exercise, first of all, to see where the gaps are. I would say in terms of edtech, I think it’s less about gaps, actually, and more about implementation. So you can have gaps in putting the frameworks in place, but there is definitely maybe an even bigger gap in terms of implementing the regulations we already have. In the context of edtech, the edtech that’s being used at the moment is generally still covered by data protection and perhaps AI regulation, of which we now have quite a lot around the world. Maybe if we’re looking to the future, there will be neurotech embedded in the edtech, and then it’s going to become all of the issues that Stephen raised. But I think where we need to do the work at the moment is on implementation. And if you think of edtech, education in many, many countries around the world is a devolved responsibility. And when it comes to choosing edtech products to be used in schools, it’s often teachers or the school management who will choose what products are going to be used at the school level. And they need guidance to be able to make these choices: they have to think about, is this a good tool for education, what about data protection, what about cyber security, what about AI ethics. So I think, a little bit like what Aki was talking about with the guidance they’ve been developing in Finland, some of the key kinds of tools that can be used for this are procurement rules, where governments decide that if schools are going to procure edtech to use in a school, then it needs to meet certain requirements for data protection, cyber security, and even educational value. There can be, like Aki was mentioning, certification schemes, so that an edtech company has to be audited and is then certified as meeting these minimum standards. Industry can also create standards, and there can be guidance and codes of practice, and we know that some regulators are starting to work on this for schools. But this is really an emerging area, and I think it’s a gap everywhere. Maybe there’s also room for regulators not to have to start from the beginning every time; there can be some common themes, and regulators can learn from each other. For example, the Global Privacy Assembly has been working with UNICEF on this project of data governance for edtech, and different regulators from around the world are coming together through the Global Privacy Assembly to look at what the common challenges are, and maybe what some of the common solutions could be as well.

Sabine Witting: Yeah, thanks so much, and I think that’s a very important point. Usually we think, oh, there is a regulatory problem, we need law reform, but oftentimes more law and more specific laws, say a dedicated neurotech law, are not going to solve the issue, because the issue usually lies in the implementation and the application of the existing legal frameworks. And also what you said around procurement rules: looking at these different aspects, for example of edtech, one of the things would also be to say, you need to conduct a data protection impact assessment as part of that, right, for schools to really actively think about the risks associated with edtech and to point them towards those risks, because they might just not be thinking about that at all. And also, as you mentioned, the kind of joint thinking through bodies like the Global Privacy Assembly, the IGF and others, on how we can really move forward in these kinds of spaces. Jutta, I see a question. Please, please come in. Sorry, can we have a microphone? Oh, yeah. Jutta is on the move. Thanks, Jutta. Stephen is on the move. Stephen, come to the rescue. There we go. Thank you.

Jutta: Yes, I just wanted to refer to Section 508 in US law, which was introduced, I do think, 20 years ago, making accessibility a precondition for any procurement. If we had that for all the technology that we’ve been talking about, making child rights assessments or child safety assessments a precondition in procurement, that would be a good recommendation. Thank you.

Sabine Witting: Thanks so much, Jutta. Yeah, because I think it brings the problem much closer to the people who actually deal with it, right? Because it is not just an abstract data protection issue, it becomes a procurement issue. And a procurement issue is what schools deal with; they know procurement and they know the rules around it. So if you bring the abstract issue of data protection down to that level, it’s much more likely that people actually think about it. So thanks so much for that point. Any other questions on this particular block around regulation? What’s your experience in your country around that? Do you see regulatory frameworks? Do you see implementation gaps? What might be required? Any points from the floor? Otherwise, any other examples? Yep. He’s moving. Very good. Go ahead, please.

AUDIENCE: This isn’t actually an example, but it’s more just to say how challenging the space is. So I really like that point, Jutta, about bringing in a condition for procurement. And in the US, the government is such a massive buyer of edtech that this really has teeth and can move the needle. This is more of a challenge. On your last point just after me, about convergence: the thing that government ministries and regulators do so badly is work outside of their silo. We all do it badly, even within departments within UNICEF, so I’m not pointing fingers. I’m saying it’s a real challenge to all of us when you get technologies or issues like data governance that touch on neurotechnology. Is it an education issue? Is it a health issue? Is it a data governance, data protection issue? So it’s really going to challenge all of us to think outside of the box, or think outside of the silo, and work together. Yeah.

Emma Day: Just another challenge I see is that there are different challenges in different geographies of the world. There are some countries who are still struggling with access to the internet, so I think equity is a big challenge. In terms of edtech, you talk to some regulators and really they’re trying to make sure that every school has access to education and has access to the internet. And if you’re talking about immersive technologies, the reality is that there is not the infrastructure to support this in most schools in many, many parts of the world. And then many regulators are not financed; they don’t have the resources to have that kind of oversight, often over foreign companies who are deploying their products in their country, possibly financed by development aid as well. It becomes quite a complicated picture. So I think that’s where we also need to look at this multi-stakeholder governance model and think about who all those actors are who we need to include, and recognize that procurement may or may not happen at the national level in all countries; it may actually happen through a donor as well. So there are different actors who need to be brought into these discussions, I think.

Sabine Witting: And I think what we also see, I think in the Global South context, is that there are competing interests, right? From my experience, what I’ve heard from many schools is that they say, well, data protection issues, yes, there might be risks, but that’s really not something that we can prioritize, because a much more tangible issue here is access to education; that’s what we need to deal with first. And I think it always loops back to the problem that children’s data governance is an abstract issue; it’s not something that a lot of people really see or understand, and I think that’s why it’s easily pushed aside rather than really considered within the CRC, where these rights are also equally competing, right? Oh yes, yes, Jasmina, sorry, please interrupt me anytime, go ahead.

Jasmina Byrne: Thank you so much, Sabine. I was just listening to this discussion about regulatory frameworks and various stakeholders, and I wanted to say that sometimes these policies or strategies that come from different divisions and departments in the government could also help us advance any potential work on data governance. And I’m now thinking about digital public infrastructure, which is an approach being adopted by so many countries and which facilitates government services, and a layer of platforms that are set up on this digital public infrastructure, which includes financial payments, data sharing and digital IDs. When different governments, in collaboration with ministries and the private sector as well, are developing these strategies, this is where we also need to be vigilant and think about how these data sharing practices can impact children at all levels. There are currently about 54 such strategies in place, and there is a big push for the adoption of digital public infrastructure across the world. So to answer your question, Sabine, where are the good examples? I think we probably need to look much more closely at how to engage with those stakeholders who are advancing DPI in their countries and regions, so that they think about data governance as well across different domains. Thank you.

Sabine Witting: Thanks so much, Jasmina, for that intervention. There’s another question in the back. I think we don’t have a microphone. Thank you.

AUDIENCE: Thank you. Just building on from what Jasmina said, and following on from what Emma said as well, when we ask where the best practices are, that’s important. Another area that I want to emphasize is that there are operational activities, like skills and capacity building, when it comes to educators, right? How do they know what good looks like? And then when we look at the strategy, that’s at a different level altogether that we need to think about. I don’t have the answer, but it’s just an observation. And it’s different in different parts of the world. I come from Australia; well, Australia has been strong enough to advocate for child rights and to stand strong against Meta, but not all countries can do that. So it’s an interesting, or challenging, area, but I think an area where we all have to collaborate, so that collaboration piece plays a very strong role, as well as asking where the best practices are. Thank you.

Sabine Witting: Thank you so much for that intervention. And maybe, Emma, do you briefly want to speak a bit about the case studies that are looking at these kinds of innovations?

Emma Day: Yeah, I think what you’re saying is right, and again, it comes back to this question of resources, really. In no country can a regulator, like a data protection authority, have oversight over every tech company that’s operating in its country; it’s just impossible, really. But I think that’s why we’re looking at innovations in data governance, to try and see what some examples are of how you plug those gaps. So we will publish next year a UNICEF collection of innovations in data governance for children. Some examples: we had the regulatory sandboxes, but also the certification schemes. Certification schemes are generally led by a non-profit or even by a company themselves, and they are a way of, I suppose, outsourcing some of that oversight. And there is always a tension, because you can get the commercialization of the certification schemes, so it has to be done properly, and we’re trying to look at some examples, and this case study will then try to look at some of the considerations. It’s quite difficult to find shiny, best-practice examples. We often start looking for those and then we end up looking at promising practices, taking a little bit of what seems good from different examples. So in these case studies, that’s what we’ll be doing: looking around the world, and if anyone has any ideas along these themes they want to contribute, we’d love to hear from them. The other case study we’re looking at at the moment is on children’s codes. There is the UK age-appropriate design code, Ireland has produced a similar code, and then there are other codes developing in Indonesia and Australia. So this is kind of our way of looking for best practices or promising practices and getting those out there and sharing them.

Sabine Witting: Somebody said online that the captioning has stopped working.

Melvin Breton: I think it’s back; it went out for a little bit and now it’s back. Could I ask a question? So on the theme of regulatory authority, we have all these different tech domains, and we have one issue that cuts across all of them, which is data governance and data regulation. I think something that could be explored is empowering the data governance authorities a lot more within government. Because if I’m thinking about fintech, you have very strong financial regulations in many countries and strong financial regulatory bodies. It’s not so clear that they look at the advice from data governance authorities, but those data governance authorities often have such a wide remit that it’s very difficult for them to give direction that’s tailor-made for areas like fintech. So encouraging collaboration between the financial regulatory bodies and the data protection authorities, to develop more tailor-made regulations on data governance for fintech, for neurotech, for edtech, or whatever the case may be, might be a good first step. And then, once those regulations are well established, making them more binding. Because it’s one thing for the financial regulatory body to regulate fintech, but they may not be applying regulations directed at protecting children’s data beyond what I think is now accepted as the norm, which is that the data needs to be encrypted and anonymized. Beyond that, it’s not super clear that the data protection regulations are very specific to children’s needs across all these domains.

Sabine Witting: Thanks so much, Melvin, for that. And I think, yeah, I see lots of nods here, left and right. You want to add something, Emma?

Emma Day: Yeah, maybe. I think the enforcement side of things is interesting, and different regulators have very different approaches to this. Some regulators see themselves as kind of collaborators with the private sector; they’re balancing promoting innovation in their own country and their own tech ecosystem with making sure that the tech companies don’t overstep the mark too much. But often, from that perspective, the regulator will meet with the companies and kind of warn them verbally first. In other countries, the regulators take a much more punitive approach, where it’s more about bringing enforcement actions directly, and they’re not very approachable; and there are pros and cons to each. In other countries, particularly like we were discussing before, where it may even be a foreign company that’s the problem in the country, there are few resources and it’s very difficult to know how technically this would happen: where would the jurisdiction be, and how would they hold this company accountable in their own country? So there are definitely issues related to enforcement and accountability as well, which probably deserve a whole other case study just to try and unpack.

Sabine Witting: Thanks so much, Emma. I think this was a very rich discussion, a very interesting block around laws and regulation: what does a gap actually look like, is it about convergence of technologies, convergence of regulatory frameworks, or implementation problems; and then, Emma, also what you said about best practices, promising practices and maybe only practices. So we’re changing the bar, I guess, as we go, but it’s a learning space and we all need to think outside the box. So after looking at the risks and benefits, governance models, and laws and regulations, which was very much looking at the status quo, maybe we can close the session by looking ahead a little bit, at the next 10 or 15 years, and at these different frontier technologies, edtech, neurotech and fintech, and really think about what might be the upcoming issues in terms of data governance, because of course we already need to think ahead, predict things and find solutions as we go forward. Maybe, Aki, I can start with you, just some concluding thoughts on that.

Aki Enkenberg: Yes, thank you. I think it’s been a very interesting discussion so far, and already many of the issues related to the future of these fields and how they should be or could be governed have come up. So maybe we can also build on those in this final segment. I do agree with, I think, Stephen, who raised this issue of convergence earlier, which makes it quite difficult to make predictions about where neurotech or edtech or fintech will go in the next five to ten years, because they interact with each other, right? They merge into each other, and out of these combinations different fields will emerge, different problems will emerge, and so on. That’s definitely one key point to watch. Secondly, we can think about technology on its own, and often it’s very useful to make these kinds of predictions, but we should also keep in mind that it doesn’t evolve autonomously; it’s also governed and constantly being steered by governments and other stakeholders in the process. So we should definitely also think, at the same time, about where we want the technology to evolve, how we can be part of that process, and what role governance plays. On neurotech, quite an interesting field, I think we’ll see a lot of unexpected things even over the next five years. In addition to these leaps in measuring brain activity or neural activity, there will definitely be a growing focus on acting on humans, acting on the brain or stimulating the brain, and on new interfaces for doing this. Many of us have heard about Neuralink, but that’s only one example; I think there’ll be a whole explosion of these kinds of interfaces through which humans and their brains will be acted on. So definitely in the clinical field there’s a lot of potential for these technologies, and that’s already proven, but they will also trickle down to consumers eventually, in different kinds of contexts. And one discussion point today, the convergence of neurotech and edtech, will be quite important to follow: how these technologies eventually come to schools and classrooms to monitor learning or behavior, but also to stimulate learning and certain types of behavior. Quite interesting, but also quite controversial, I’m sure. The downsides also come from the interface of neurotech and AI: this risk of unconscious influencing for political purposes, for commercial purposes, for marketing, advertising, or changing people’s minds, influencing them when their brains are still evolving, in the case of children and youth, is extremely important to keep in mind. As Stephen mentioned, the EU AI Act already recognized this danger, and when it comes to regulation at this point in time, it seems wise to focus on the risks posed by specific uses of technology. It will be very difficult to govern or prohibit certain technologies or allow other technologies per se, but it will be possible to govern how they’re used and applied, and the EU AI Act’s approach is a good example of this. On fintech, finally, what I see, in my mind at least, is this kind of financialization of everything, the embedding of financial services or a financial angle into every other type of digital service we consume, into games, entertainment, social media, etc. 
So we are definitely moving from the situation where we regard fintech primarily as a new means for making payments, saving and investing, and in the future more and more about lending, to a world where financial services will be part of everything else we do. And of course, combined with the very likely scenario where everyone will be quite easily identified online through digital identity systems, this KYC or Know Your Customer problem will be less important than it is today. People can be recognized online, their identity is known, and they’re conducting financial transactions everywhere they go and through different means, not only through specific apps or banks. And then, finally, we will definitely move into a world where not only our visible behavior, choices and actions will be measured and tracked, but also, more and more, our bodily activity and brain activity. This will become a focus for data governance too. And when we think about how AI is developing, we’re trying to create these independently acting AI agents that are currently learning from what exists, the data that is available online, but in the future there will be a need for these systems to also learn from humans directly, from their activities, behaviors and thoughts. So our data, our bodily data, our brain data will become commercially crucial for this endeavor. This really highlights the role of personal data and bodily data in data governance in the future. And then finally, it was Jasmina, I think, or Emma who mentioned the global divide. Whereas in the Global North we’re trying to keep pace with technology and also develop very advanced regulations to tackle some of the issues we see, we do have to keep in mind the need to develop a level playing field globally, and to address not only the technology divide but also the regulatory divide. So these are my thoughts. Thanks.

Sabine Witting: Thanks so much, Aki. Well, Stephen, good luck following that. Maybe just some concluding thoughts after this very comprehensive analysis.

Steven Vosloo: Thank you, Aki, that was excellent. Yeah, I don’t have too much to add; Aki very eloquently highlighted the technological use cases, but also the broader issues. Maybe I’ll just pick out one quick thing. On neurotech, anyway, there is this move from neurotechnology beginning in the medical space, which is highly regulated and has ethical oversight, into the consumer space. And in many countries, consumer electronics devices aren’t subject to that level of oversight. So there’s clearly a gap there, and from a data governance and data protection perspective there’s a huge area to focus on. In terms of where the space is going on the consumer side, we will definitely see it in the education space; that’s come up a lot. And this isn’t just me speaking, this is through consultations we’ve done with neurotech experts from around the world. So in the classroom, to kind of support learning, with the opportunities and risks that come with that. But in the home space, cognitive enhancement is also an area to really watch. This is not where you have a neural disorder and get treated through neurotechnology; this is where you’re healthy, but you can perform better. In our consultations, people from certain countries that are highly competitive in terms of getting into universities and so forth said that you already pull all the levers you can to advance your child, whether it’s through tutors or through medication; you literally look at all your options. If neurotechnology promises that, it is something that people will look at. And if it works, it comes back to the equity issue Aki raised: how do you compete in the Global South against your peer in the Global North who’s just performing so much better? So that touches on not just treatment, but also enhancement. In one of the consultations, one of the folks said something that was really great. He’s from Zimbabwe, and he said you may get a future world where you have the treated, those treated with neurotechnology for disorders, and you may get the enhanced, who are healthy. And then we added in the group: the naturals. And this could be the future. Anyway, we’ll leave you on a controversial note.

Sabine Witting: Very good. Thank you so much, Stephen. Emma, controversial note.

Emma Day: Well, I think mine might be controversial in a slightly different way. I’d like to go back to what Aki said: there’s obviously a trajectory to the development of technology, but we are governing how that continues into the future. And I think sometimes there is a kind of inevitability that we hear about in the direction technology will evolve, that we’re all going to end up with chips in our brains. But I think that these are decisions that we make, and we can decide what’s in the best interests of children for their education. We can put the guardrails in place, and we can maximize some of the benefits that are being promised here. But also, we can decide not to end up with chips in our brains if we don’t want to, at the really extreme end point. I think, really focusing on edtech, some of it is also to do with the geopolitics of how this develops. We’re seeing at the moment quite a monopoly by American and Chinese tech companies. There are a couple of big tech companies who deploy their edtech, kind of more the infrastructure, around the world, and then at a national level, in most countries in the world, there is an ecosystem growing now of apps that plug into those big company platforms for things like languages and mathematics, and they’re more culturally and linguistically appropriate, and maybe those ecosystems are going to grow more. You also see within Europe the Gaia-X project at EU level, which is being led by the German government, and the aim there is to try and find European-level solutions based on secure and trustworthy exchange of educational data, so that they don’t have to use the big tech companies for edtech. So it depends how all of that plays out, and we don’t really know what direction it’s going to move in, but it’s likely to also have an influence on the kinds of technology we see and the values that underpin those technologies as well, I think.

Sabine Witting: Thanks, Emma, for that very good point. Melvin?

Melvin Breton: Sabine, thank you.

Sabine Witting: Tell us, more problems.

Melvin Breton: More problems? No, I think it’s useful maybe to think about it in terms of the extensive future and the intensive future of fintech. I think Aki already alluded to some of the extensive future, in the sense that I’m using it here, where we’re seeing fintech across an increasing range of domains. We started by just having a web or app layer on top of financial services, and now we’re seeing it getting into gaming, getting into social media, where there are obvious financial applications that are relevant for children and that we haven’t yet completely come to grips with in terms of regulation and data protection, beyond maybe encryption and anonymization, which are still not even applied across the board, but at least we know those two things are important. But then we’re getting into other things like the metaverse, which is maybe an extension of games to social life in a parallel world, where there will also inevitably be transactions, and we’re already seeing things like NFTs and digital land that you can purchase. What kind of implications does that have for children and data? You’re also seeing in social media that financial transactions are becoming public, another source of information about the lives of children that is becoming more and more prevalent. So what are we going to do about that? I think those are very much open questions, not even to mention neurotech; I think it’s scary to think about the prospect of the intersection of neurotech and fintech, but it is something that we need to keep in mind nonetheless. I think there is some good news: age detection through AI for purposes of age gating is getting a lot better. I think companies now say that they can detect a person’s age to plus or minus one year, roughly, just through the use of AI. But then that opens the question: what else does it know about you, in terms of your financial life and the transactions that you’re likely to make, and what potential does that open for manipulation and exploitation of children? There’s also the AI and fintech intersection: the algorithms are getting a lot better, for example, at deciding who to lend to, with banking services using that to be able to process more applications for loans and things like that. That has consequences for financial inclusion; it enables more financial inclusion of families that previously maybe didn’t have access to financial services. The technologies themselves allow people to be more integrated into the financial system, so that’s a plus for financial inclusion. But those same tools, if you’re thinking about AI or machine learning algorithms used to decide who gets and doesn’t get a loan, can also have another edge, which is that they can lead to financial exclusion, because it’s a lot easier to see who is at risk of becoming non-compliant. So the inequality aspect here is important. Also to mention that whatever applications require connectivity will just compound the digital divide that already exists, so something to think about there. On the positive side of applications, I think social protection and transfers, cash transfers 
for social protection, are going to benefit immensely from these new technologies, which are becoming more efficient, with fewer data requirements and more points of entry, and as things like central bank digital currencies, stablecoins and the like become more prevalent, it’s going to become a lot easier to expand and scale up social protection systems and transfers, again with the caveat that we need to be conscious of the digital divide. And then on the education front, financial education is going to become... yes, just wrapping up.

Sabine Witting: Yes, thank you.

Melvin Breton: Yes, financial education is just going to allow for a longer financial life. Starting earlier in your financial journey and becoming more savvy is going to be beneficial for children. But again, with a pinch of salt: we need to be careful about the risks. Over.

Sabine Witting: I also love how you just kindly brought in the metaverse and stablecoins. Yes, Jasmina, please answer all of our questions now in the last two minutes.

Jasmina Byrne: Thank you so much. It’s been a great pleasure listening to all of you and so many fantastic contributions and ideas. We talked about the integration of technology, about regulation, and about multi-stakeholder approaches to these issues. And when we talk about the future, obviously we need to think about how some of these technologies are going to evolve. EdTech is already much more mature; the challenge is going to be the size of the market. How do we capture everyone who is introducing edtech tools to the market, but also the piloting of new technologies that is happening? How do we work with those companies who are testing and piloting new approaches and new technologies? In the financial sector, we heard from Melvin that this also includes integrating blockchain, crypto and so on. And basically AI integration into everything, which we are going to be seeing more and more in the future. I think what is going to be a big challenge for all of us is the global fragmentation of regulation, which can lead to uneven safety standards, and standards for children in particular. That fragmentation can potentially lead to a lack of trust in these technologies and in their adoption and application for good, because as we said in the beginning, there are so many benefits. So the question for those of us who are working for children and children’s rights in the context of digital technologies is also, how do we even shape the future of technology? How do we use this knowledge and this understanding of the implications for children to shape its development? Somebody was mentioning, I think Jutta, standards or recommendations for the procurement of some of these technologies, and maybe going even further back towards the development of these technologies and the integration of child rights principles into their development. We also need to think about the future of regulation. The future of technologies is one thing, but what are going to be the future approaches to regulating technologies, and how do we strike that balance between innovation and protection? We talked a lot about benefits, we talked about risks, but we need to ensure that future regulation strategies and policies actually create and maintain that balance, and allow for innovation while at the same time safeguarding children. And I just want to end on the child rights note. We haven’t mentioned children’s rights so much, but many of you, particularly online, have worked over the past several years on really integrating child rights into any kind of tech policy. And we heard from Aki about the opportunities under the Global Digital Compact to integrate more effort in relation to children’s data governance. So I would just like to remind everyone again that children’s rights are comprehensive, but they also need to be looked at from both the positive and the protection side. And when we think about the future of tech and the future of technology, that holistic child rights approach, I think, is the best way forward. Thank you so much.

Sabine Witting: Thank you so much, Jasmina, for wrapping up, and thanks so much to the audience here in the room and online, and to the speakers, for a fantastic panel. Enjoy the rest of your day, and good evening to the people here in Saudi Arabia, I think. And we will see you all tomorrow here at the IGF. Thank you.

Jasmina Byrne: Thank you.

Emma Day

Personalized learning potential of EdTech

Explanation

EdTech has the potential to provide personalized learning experiences for students. This can be achieved through algorithms that learn from individual children’s data and tailor their learning to suit their personal needs.

Evidence

Data from these tools can be shared with teachers to help identify students falling behind or ensure equity for different groups of students.

Major Discussion Point

Benefits and Risks of Emerging Technologies for Children

Need for multi-stakeholder governance approaches

Explanation

Data governance for emerging technologies requires a multi-stakeholder approach. This involves collaboration between regulators, private sector, and civil society to address complex issues in data governance for children.

Evidence

Example of the Datasphere Initiative looking at regulatory sandboxes from a multi-stakeholder perspective, including civil society.

Major Discussion Point

Data Governance Models and Implementation

Agreed with

Sabine Witting

Agreed on

Need for multi-stakeholder governance approaches

Importance of regulatory sandboxes for innovation

Explanation

Regulatory sandboxes provide a protected framework for companies to explore new technologies under regulatory guidance. This allows for innovation while ensuring compliance with data protection and other relevant regulations.

Evidence

UK ICO’s sandbox project with the Department of Education to enable children to share their education data securely with higher education providers.

Major Discussion Point

Data Governance Models and Implementation

Agreed with

Melvin Breton

Agreed on

Importance of regulatory sandboxes

Implementation gaps in applying existing regulations

Explanation

The main challenge in edtech regulation is not necessarily gaps in the law, but rather implementation of existing regulations. This is particularly challenging at the school level where decisions about edtech are often made.

Evidence

Example of teachers or school management choosing edtech products without sufficient guidance on data protection, cybersecurity, and AI ethics considerations.

Major Discussion Point

Regulatory Frameworks and Gaps

Differed with

Steven Vosloo

Differed on

Approach to regulation of emerging technologies

Geopolitical influences on EdTech development

Explanation

The future development of EdTech is influenced by geopolitical factors. This includes the current monopoly of American and Chinese tech companies and efforts in Europe to develop alternative solutions.

Evidence

Example of the Gaia X project at EU level, led by the German government, aiming to find European-level solutions for secure and trustworthy exchange of educational data.

Major Discussion Point

Future Developments and Challenges

Melvin Breton

Speech speed: 111 words per minute
Speech length: 2374 words
Speech time: 1277 seconds

Financial literacy enhancement through FinTech

Explanation

FinTech can be used to enhance financial literacy from a young age. Better data collection and processing can provide personalized feedback to help children develop good money management skills.

Evidence

Examples of personalized nudges alerting children about overspending or encouraging healthy saving habits.

Major Discussion Point

Benefits and Risks of Emerging Technologies for Children

Potential for manipulation and exploitation in FinTech

Explanation

FinTech also presents risks of exploitation and manipulation for children. This includes the potential for bad actors to target children or promote overuse of financial technologies.

Evidence

Examples of alarming cases with stock trading apps relating to mental health issues and harms to young people.

Major Discussion Point

Benefits and Risks of Emerging Technologies for Children

Need for collaboration between financial and data regulators

Explanation

There is a need for increased collaboration between financial regulatory bodies and data protection authorities. This collaboration is necessary to develop tailored regulations for data governance in fintech, particularly concerning children’s data.

Major Discussion Point

Data Governance Models and Implementation

Agreed with

Emma Day

Agreed on

Importance of regulatory sandboxes

Expansion of FinTech into new domains like gaming and metaverse

Explanation

FinTech is expanding into new domains such as gaming, social media, and the metaverse. This expansion raises new questions about data protection and regulation, particularly for children.

Evidence

Examples of financial transactions becoming public on social media and the emergence of NFTs and digital land purchases in the metaverse.

Major Discussion Point

Future Developments and Challenges

Agreed with

Aki Enkenberg

Agreed on

Convergence of technologies creating new challenges

Aki Enkenberg

Speech speed: 137 words per minute
Speech length: 1770 words
Speech time: 770 seconds

Neurotechnology benefits for health and education

Explanation

Neurotechnology offers potential benefits in health and education sectors. It can be used to monitor learning or behavior in classrooms and stimulate learning.

Major Discussion Point

Benefits and Risks of Emerging Technologies for Children

Risk of unconscious influencing through neurotech

Explanation

Neurotechnology presents risks of unconscious influencing for political or commercial purposes. This is particularly concerning for children whose brains are still evolving.

Evidence

The EU AI Act’s recognition of this danger and its focus on governing specific uses of technology rather than prohibiting technologies per se.

Major Discussion Point

Benefits and Risks of Emerging Technologies for Children

Convergence of different technology domains

Explanation

There is an increasing convergence of different technology domains, such as EdTech, FinTech, and NeuroTech. This convergence makes it difficult to predict future developments and creates new challenges for regulation.

Major Discussion Point

Future Developments and Challenges

Agreed with

Melvin Breton

Agreed on

Convergence of technologies creating new challenges

Jasmina Byrne

Speech speed: 136 words per minute
Speech length: 1180 words
Speech time: 519 seconds

Privacy and security risks of data collection

Explanation

The collection and processing of children’s data through emerging technologies pose risks to privacy and security. These risks need to be balanced against the potential benefits of these technologies.

Major Discussion Point

Benefits and Risks of Emerging Technologies for Children

Global fragmentation of regulation as a challenge

Explanation

The global fragmentation of regulation poses a significant challenge for ensuring consistent safety standards for children. This fragmentation can lead to uneven protection and potentially undermine trust in these technologies.

Major Discussion Point

Regulatory Frameworks and Gaps

Need to shape technology development with child rights in mind

Explanation

There is a need to shape the future development of technology with children’s rights in mind. This involves integrating child rights principles into the development of technologies and future regulatory approaches.

Evidence

Mention of the opportunity under the digital compact to integrate more effort in relation to children’s data governance.

Major Discussion Point

Future Developments and Challenges

Steven Vosloo

Existing laws may not fully cover new technologies

Explanation

Current laws and regulations may not provide sufficient coverage for emerging technologies like neurotechnology. Some countries are investigating whether existing frameworks are adequate, while others are introducing new laws.

Evidence

Examples of investigations by the UK ICO and Australian Human Rights Commission, and new laws in Chile and Brazil specifically addressing neurodata.

Major Discussion Point

Regulatory Frameworks and Gaps

Differed with

Emma Day

Differed on

Approach to regulation of emerging technologies

Need for policy mapping to identify regulatory gaps

Explanation

Countries should conduct policy mapping exercises to identify gaps in their regulatory frameworks regarding emerging technologies. This would help determine if there is sufficient protection in place for children’s data.

Major Discussion Point

Regulatory Frameworks and Gaps

Potential divide between treated, enhanced and natural humans

Explanation

The advancement of neurotechnology could lead to a future divide between those treated with neurotechnology for disorders, those enhanced for better performance, and those who remain ‘natural’. This raises significant ethical and societal concerns.

Evidence

Quote from a participant from Zimbabwe during consultations on the future of neurotechnology.

Major Discussion Point

Future Developments and Challenges

Jutta Croll

Speech speed: 149 words per minute
Speech length: 150 words
Speech time: 60 seconds

Role of procurement rules in ensuring standards

Explanation

Procurement rules can play a crucial role in ensuring standards for child safety and rights in technology. Making child rights assessments or child safety assessments a precondition for procurement could be an effective approach.

Evidence

Reference to Section 508 in US law, which made accessibility a precondition for procurement 20 years ago.

Major Discussion Point

Data Governance Models and Implementation

Sabine Witting

Speech speed: 176 words per minute
Speech length: 2497 words
Speech time: 847 seconds

Need for multi-stakeholder governance approaches

Explanation

Data governance for emerging technologies requires involvement from multiple stakeholders. This is particularly important for complex issues surrounding children’s data in new technological domains.

Evidence

Reference to the Global Digital Compact encouraging multi-stakeholder governance models.

Major Discussion Point

Data Governance Models and Implementation

Agreed with

Emma Day

Agreed on

Need for multi-stakeholder governance approaches

Agreements

Agreement Points

Need for multi-stakeholder governance approaches

Speakers: Emma Day, Sabine Witting

Summary: Both speakers emphasized the importance of involving multiple stakeholders in data governance for emerging technologies, particularly for complex issues surrounding children’s data.

Importance of regulatory sandboxes

Speakers: Emma Day (Importance of regulatory sandboxes for innovation), Melvin Breton (Need for collaboration between financial and data regulators)

Summary: Both speakers highlighted the value of regulatory sandboxes in fostering innovation while ensuring compliance with regulations, particularly in the context of emerging technologies.

Convergence of technologies creating new challenges

Speakers: Aki Enkenberg (Convergence of different technology domains), Melvin Breton (Expansion of FinTech into new domains like gaming and metaverse)

Summary: Both speakers noted that the convergence of different technology domains creates new challenges for regulation and prediction of future developments.

Similar Viewpoints

Both speakers highlighted challenges in implementing and enforcing regulations, with Emma focusing on implementation gaps at the school level and Jasmina emphasizing the global fragmentation of regulation.

Speakers: Emma Day (Implementation gaps in applying existing regulations), Jasmina Byrne (Global fragmentation of regulation as a challenge)

Both speakers emphasized the importance of proactively addressing regulatory challenges, with Steven suggesting policy mapping exercises and Jasmina advocating for integrating child rights principles into technology development.

Speakers: Steven Vosloo (Need for policy mapping to identify regulatory gaps), Jasmina Byrne (Need to shape technology development with child rights in mind)

Unexpected Consensus

Importance of procurement rules in ensuring standards

Speakers: Jutta Croll (Role of procurement rules in ensuring standards), Emma Day (Implementation gaps in applying existing regulations)

Summary: While not explicitly stated by Emma, her discussion of implementation challenges aligns with Jutta’s suggestion of using procurement rules to ensure standards. This unexpected consensus highlights a practical approach to addressing implementation gaps.

Overall Assessment

Summary

The speakers generally agreed on the need for multi-stakeholder approaches, the importance of regulatory innovation (such as sandboxes), and the challenges posed by the convergence of technologies. There was also consensus on the need to address implementation gaps and shape future technology development with children’s rights in mind.

Consensus level

Moderate to high consensus on key issues, with speakers often approaching similar concerns from different angles. This level of agreement suggests a shared understanding of the complex challenges in data governance for children in emerging technologies, which could facilitate more coordinated efforts in addressing these issues.

Differences

Different Viewpoints

Approach to regulation of emerging technologies

Speakers: Emma Day (Implementation gaps in applying existing regulations), Steven Vosloo (Existing laws may not fully cover new technologies)

Summary: Emma Day argues that the main challenge in edtech regulation is implementation of existing regulations, while Steven Vosloo suggests that current laws may not provide sufficient coverage for emerging technologies like neurotechnology.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the adequacy of existing regulatory frameworks and the specific approaches to governance for emerging technologies.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of addressing data governance for children in emerging technologies, but have slightly different perspectives on how to approach regulation and implementation. These differences do not significantly impede the overall discussion on improving data governance for children, but rather highlight the complexity of the issue and the need for comprehensive, multi-faceted solutions.

Partial Agreements

Both speakers agree on the need for collaboration in governance, but Emma Day emphasizes a broader multi-stakeholder approach including civil society, while Melvin Breton focuses specifically on collaboration between financial and data regulators.

Speakers: Emma Day (Need for multi-stakeholder governance approaches), Melvin Breton (Need for collaboration between financial and data regulators)

Takeaways

Key Takeaways

Emerging technologies like EdTech, FinTech and Neurotech offer both benefits and risks for children’s data governance

Multi-stakeholder governance models are needed to address the complex challenges of regulating these technologies

There are gaps in existing regulatory frameworks to fully address new and converging technologies

Implementation of existing regulations is a major challenge, especially in resource-constrained settings

Future developments will likely see further convergence of technologies and expansion into new domains, requiring adaptive governance approaches

A holistic child rights approach is important when shaping future technology development and regulation

Resolutions and Action Items

UNICEF to publish a collection of case studies on innovations in data governance for children next year

Recommendation for countries to conduct policy mapping exercises to identify regulatory gaps for neurotechnology

Unresolved Issues

How to effectively regulate converging technologies that cross traditional regulatory boundaries

How to address the global fragmentation of regulation and create more uniform safety standards

How to balance innovation with protection in future regulatory approaches

How to incorporate child rights principles into the development of new technologies

How to address the digital divide and ensure equitable access to benefits of new technologies

Suggested Compromises

Use of regulatory sandboxes to allow innovation while exploring appropriate governance models

Development of certification schemes as a way to outsource some regulatory oversight

Incorporation of child rights assessments into procurement processes for new technologies

Thought Provoking Comments

EdTech, FinTech and Neurotech. My name is Sabine Witting. I’m an assistant professor for law and digital technologies at Leiden University and the co-founder of TechLegality together with my colleague here, Emma Day. And we are joined today by a variety of speakers both online and offline.

Speaker: Sabine Witting

Reason: This opening comment sets the stage for the entire discussion by introducing the three key technology domains that will be explored: EdTech, FinTech, and Neurotech. It establishes the interdisciplinary nature of the panel and the focus on legal and technological aspects.

Impact: This framing shaped the entire flow of the discussion, providing a structure for exploring data governance issues across these three domains throughout the session.

So we have been working with about 40 experts around the world to understand better how these frontier technologies impact children, and particularly how data used through these technologies can benefit children, but also if it can cause any risks and harm to children.

Speaker: Jasmina Byrne

Reason: This comment highlights the global, collaborative nature of the research being discussed and frames the key tension between benefits and risks of these technologies for children.

Impact: It set up the discussion to explore both positive and negative impacts, leading to a more balanced and nuanced conversation throughout.

I think there’s still a lack of clarity around exactly what data would be helpful. What are the questions that we’re seeking to answer with these data?

speaker

Emma Day

reason

This comment cuts to a core issue in data governance – the need to clearly define the purpose and value of data collection, especially for children.

impact

It shifted the conversation from general benefits to more specific considerations about data utility and necessity, encouraging more critical thinking about data practices.

You can think about the applications of FinTech: the most obvious way in which it benefits children is in enhancing financial literacy from a young age, right?

speaker

Melvin Breton

reason

This comment introduces a concrete benefit of FinTech for children that may not have been immediately obvious, broadening the scope of the discussion.

impact

It opened up exploration of specific use cases and benefits of FinTech for children, leading to a more detailed discussion of both opportunities and risks in this domain.

We’ve long recognized that children and youth do need to be considered through specific perspectives in relation to digital technologies, AI and data.

speaker

Aki Enkenberg

reason

This comment emphasizes the importance of child-specific considerations in technology governance, highlighting Finland’s proactive approach.

impact

It shifted the discussion towards more child-centric policy approaches and the need for tailored governance frameworks.

Neurotechnology is not advancing in a regulatory void or vacuum. We have existing regulations, existing laws, including the Convention on the Rights of the Child. The question is, do they apply to this frontier technology?

speaker

Steven Vosloo

reason

This comment raises a crucial question about the applicability of existing legal frameworks to emerging technologies.

impact

It prompted a deeper exploration of regulatory gaps and the need for adaptive governance approaches for frontier technologies.

I think it’s less about gaps, actually, and more about implementation. And so I think you can have gaps in putting the frameworks in place, but definitely maybe even a bigger gap in terms of implementing the regulations we already have.

speaker

Emma Day

reason

This insight shifts focus from creating new regulations to the challenges of implementing existing ones, especially in the education sector.

impact

It led to a discussion about practical challenges in governance, such as procurement rules and guidance for schools, rather than just focusing on regulatory frameworks.

So I would just like to remind everyone again that children's rights are comprehensive, but also they need to be looked at from both the positive and the protection side. And when we think about the future of technology, that holistic child rights approach, I think, is the best way forward.

speaker

Jasmina Byrne

reason

This concluding comment brings the discussion full circle, emphasizing the need for a holistic, rights-based approach to technology governance for children.

impact

It provided a unifying framework for the diverse topics discussed and reinforced the importance of balancing innovation with protection in future governance approaches.

Overall Assessment

These key comments shaped the discussion by progressively deepening the analysis of data governance issues for children across EdTech, FinTech, and Neurotech domains. They moved the conversation from general benefits and risks to specific implementation challenges, regulatory gaps, and the need for child-centric, rights-based approaches. The comments highlighted the complexity of governing frontier technologies, emphasizing the importance of multi-stakeholder collaboration, practical implementation strategies, and the need to balance innovation with protection. Throughout the discussion, there was a consistent focus on the unique considerations required for children’s data, which culminated in a call for a holistic, rights-based approach to technology governance for children.

Follow-up Questions

How can regulatory sandboxes be adapted to include civil society and children’s participation?

speaker

Emma Day

explanation

This is important to ensure a more comprehensive multi-stakeholder approach in developing and regulating new technologies affecting children.

What are some examples of successful cross-border regulatory sandboxes?

speaker

Emma Day

explanation

This could provide insights into how to regulate multinational edtech companies more effectively across different jurisdictions.

How can data protection authorities be empowered to provide more tailored regulations for specific tech domains like fintech, edtech, and neurotech?

speaker

Melvin Breton

explanation

This could lead to more effective and specific data governance regulations for children across different technology sectors.

What are the best practices for implementing existing data protection regulations, particularly in the education sector?

speaker

Emma Day

explanation

This is crucial for addressing the gap between existing regulations and their practical implementation in schools.

How can we address the equity issues arising from the potential use of neurotechnology for cognitive enhancement?

speaker

Steven Vosloo

explanation

This is important to prevent widening global inequalities in education and cognitive performance.

How can we integrate child rights principles into the development of new technologies?

speaker

Jasmina Byrne

explanation

This is crucial for shaping future technologies in a way that respects and promotes children’s rights from the outset.

What approaches can be developed to balance innovation and child protection in future regulation strategies?

speaker

Jasmina Byrne

explanation

This is important to ensure that future regulations allow for technological innovation while safeguarding children’s rights and safety.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Open Forum #15 Digital cooperation: the road ahead

Open Forum #15 Digital cooperation: the road ahead

Session at a Glance

Summary

This discussion focused on implementing the Global Digital Compact (GDC) and fostering partnerships to address digital challenges worldwide. Participants shared experiences and insights on collaborative efforts to close digital divides, promote digital literacy, and ensure secure and inclusive digital spaces.


Key themes included the importance of cross-sector partnerships, cultural adaptation of digital initiatives, and addressing common challenges across different regions. Examples were given of projects like Finland’s work on AI strategies in African countries and efforts to connect post offices globally to expand digital access. Participants emphasized the need for secure-by-design approaches in digital infrastructure and the importance of energy efficiency in connectivity projects.


Funding emerged as a persistent challenge, with many noting the difficulty of financing good ideas and initiatives. The discussion highlighted the value of platforms like the IGF for connecting actors and increasing visibility for digital cooperation efforts. Participants also stressed the importance of data governance and cybersecurity frameworks that protect all nations, not just developed economies.


The session underscored the complexity of digital cooperation, with issues ranging from cultural translation of initiatives to aligning incentives for partnerships. Ultimately, there was optimism that while challenges exist, they are not insurmountable if stakeholders work together effectively. The discussion concluded with a call for continued collaboration and implementation of agreed-upon digital cooperation principles.


Keypoints

Major discussion points:


– The importance of partnerships and collaboration in implementing the Global Digital Compact (GDC) objectives


– Challenges in finding and forming effective partnerships, including funding, cultural translation, and connecting relevant actors


– The need for platforms to increase visibility and connect potential partners


– Data governance and cybersecurity as key areas requiring global cooperation


– The complementary relationship between the GDC and existing frameworks like WSIS


Overall purpose:


The discussion aimed to explore concrete implementation challenges for the GDC objectives and gather insights on effective partnerships and collaboration strategies from various stakeholders.


Tone:


The tone was generally constructive and solution-oriented. Participants shared examples of successful partnerships and initiatives while also highlighting ongoing challenges. There was an underlying sense of optimism about the potential for collaboration to address digital development issues, even as speakers acknowledged the complexities involved.


Speakers

– Filippo Pierozzi: Moderator


– Isabel De Sola: From the Office of the Tech Envoy


– Roy Eriksson: Ambassador for Global Gateway in Finland


– Kevin Hernandez: From the Universal Postal Union


Additional speakers:


– Nandipha Ntshalbu: Online participant


– Shamsher Mavin Chowdhury: From Bangladesh, online participant


– Alisa Heaver: From the Ministry of Economic Affairs in the Netherlands


– Patricia Ainembabazi: From CIPESA (Collaboration on International ICT Policy for East and Southern Africa)


– Damilare Oydele: From Library Aid Africa


– Guilherme Duarte: From the Brazilian Association of Internet Service Providers


Full session report

Expanded Summary of Discussion on Implementing the Global Digital Compact


Introduction:


This discussion focused on implementing the Global Digital Compact (GDC) and fostering partnerships to address digital challenges worldwide. Participants from various sectors and regions shared experiences and insights on collaborative efforts to close digital divides, promote digital literacy, and ensure secure and inclusive digital spaces.


Key Themes and Discussion Points:


1. GDC Objectives and Implementation:


Isabel De Sola from the Office of the Tech Envoy opened the session with a poll on GDC objectives, highlighting the need for stakeholder-driven implementation through partnerships. She later mentioned that the UN would provide an implementation map for the GDC in the coming months, encouraging organizations to endorse the GDC vision and principles online.


2. Importance of Partnerships and Collaboration:


The discussion emphasized the crucial role of partnerships in implementing the GDC objectives. Roy Eriksson, Ambassador for Global Gateway in Finland, shared examples of knowledge sharing and capacity building across countries, particularly Finland’s work on AI strategies in African countries. He highlighted the Global Gateway initiative, which focuses on infrastructure investments and digital development projects in Africa.


3. Infrastructure Development and Connectivity:


Kevin Hernandez from the Universal Postal Union introduced the Connect.Post programme, which aims to connect all post offices globally to the internet by 2030, transforming them into hubs for digital services. Guilherme Duarte from the Brazilian Association of Internet Service Providers highlighted the role of small ISPs in connecting underserved areas.


4. Cultural Adaptation and Relevance:


Speakers agreed on the importance of ensuring digital initiatives are culturally relevant and inclusive. Audience members stressed the need for culturally relevant digital literacy programmes, while Patricia Ainembabazi from CIPESA highlighted learning opportunities across regions with similar challenges.


5. Data Governance and Cybersecurity:


Shamsher Mavin Chowdhury, an online participant from Bangladesh, raised concerns about data governance and cybersecurity frameworks for developing countries. An audience member discussed challenges in procuring secure ICTs and addressing education gaps in cybersecurity. The discussion highlighted the need for inclusive frameworks that protect developing countries’ interests in the digital space.


6. Energy Efficiency and Sustainability:


Nandipha Ntshalbu brought attention to the often-overlooked issue of energy efficiency and sufficiency in digital infrastructure development. This focus highlighted the need to consider sustainability in digital development projects and the intersection of digital and green transitions.


7. Funding and Resource Allocation:


Financing emerged as a persistent challenge. Roy Eriksson shared an example of Finland outsourcing expertise to support AI strategy development in Zambia, demonstrating innovative approaches to resource allocation in international development cooperation.


8. Platforms for Collaboration:


The discussion highlighted the value of platforms like the Internet Governance Forum (IGF) for connecting actors and increasing visibility for digital cooperation efforts. Patricia Ainembabazi mentioned the Forum for Internet Freedoms Africa (FIFA) and the African Parliamentary Network for Internet Governance (APNIC) as examples of regional platforms fostering cooperation and knowledge sharing.


9. Alignment with Existing Frameworks:


Alisa Heaver from the Ministry of Economic Affairs in the Netherlands raised questions about aligning the GDC with existing frameworks like the World Summit on the Information Society (WSIS) action lines. Isabel De Sola responded, emphasizing the importance of building on existing work and avoiding duplication.


10. Innovative Projects and Initiatives:


Several innovative projects were mentioned during the discussion:


– The Library Tracker project by Library Aid Africa, presented by Damilare Oydele, which aims to map and support libraries across Africa.


– The SYNC digital well-being program, focusing on developing preventative interventions for high schoolers in Saudi Arabia.


– The Dynamic Coalition and Cyber Security Hub video, presented by an audience member, showcasing efforts in cybersecurity education.


Agreements and Consensus:


There was broad agreement on the importance of partnerships, culturally relevant initiatives, and addressing common digital challenges across regions. Speakers from diverse backgrounds found unexpected consensus on the similarities of digital challenges across different geographical areas.


Differences and Unresolved Issues:


While there was general agreement on overarching goals, differences emerged in the focus of digital development efforts. The discussion revealed unresolved issues, such as ensuring fair and transparent data governance, addressing power imbalances created by data monopolies, and fostering global cooperation on cybersecurity for developing countries.


Conclusion and Next Steps:


The discussion concluded with a call for continued collaboration and implementation of agreed-upon digital cooperation principles. Key takeaways included the crucial role of partnerships, the importance of cultural relevance, and the need to address energy efficiency in digital infrastructure. Stakeholders are encouraged to engage with the GDC implementation process and contribute to ongoing efforts in digital cooperation.


The session underscored the complexity of digital cooperation, with issues ranging from cultural translation of initiatives to aligning incentives for partnerships. While challenges exist, there was optimism that they are not insurmountable if stakeholders work together effectively. The discussion highlighted the need for further dialogue on specific implementation strategies, prioritization of actions, and allocation of resources to ensure the successful realization of the Global Digital Compact’s objectives.


Session Transcript

Filippo Pierozzi: over to Isabel.


Isabel De Sola: Thanks, Filippo. I’m Isabel De Sola from the Office of the Tech Envoy, and I think what we can do, since we’re warmed up and rolling into the next session, is focus our thoughts now on answering some of those concrete implementation challenges. So I would like to invite Ambassador Roy Eriksson of Finland to join me on the stage, and also Thelma Kwei of Smart Africa, if you’re with us here in the room. Thelma? Okay, she’s on her way. And on the note, from our colleague who was just online asking about connectivity, I wanted to take a little poll, if you’ll bear with me, here in the room. So if you’re familiar with the GDC objectives, you know that GDC objective one is about closing all digital divides and working on connectivity. I’m thinking about those who are unconnected, either physically, because of lack of infrastructure, or because of the skill set and the affordability of infrastructure. So I want to take a little poll of the organizations that are here in the room. Could you stand up if you envision that you’re contributing to closing digital divides implementation of GDC objective one? You’re working on infrastructure, you’re working on digital skills. Anybody in the room, can you stand? Okay, excellent, nice. Now, stay standing if you’re working on objective two, the inclusive digital economy, or stand up if you’re working on tech transfers to the developing world, if you’re working on connecting businesses to the internet, if you’re selling services online. No, don’t be shy, don’t be shy, stand, stand. Okay, so slightly less, slightly less. Objective three, we’re thinking about open, safe online spaces. We’re thinking about women and girls safety online, gender-based violence. Okay, excellent. You’re thinking about misinformation, disinformation. This is your concern. Wonderful. Okay, there’s a lot of us here. Objective four, you are a company that has a lot of data, and you’re governing the data, or you have data for development. You’re thinking about how to apply the data to development challenges. Okay, one person at the back of the room, or you’re thinking about interoperability, crossing borders with data. No? Okay, this is the best student. Now, who’s worried about AI, governance of AI? Who’s working on that here, or concerned? Great. Okay, wonderful. Thank you so much for participating in that exercise. You are in the right session. You have come to the right session, and if you still haven’t made a decision, or still haven’t clarified how you’d like to participate in GDC implementation, this is also the right session. So, forgive me for some of the abstract thinking, or sorry, abstract comments from the UN at this stage. I mentioned before that GDC implementation is going to be primarily conducted by the stakeholders, so by governments, by businesses, civil society, academia, scientists, children as well. And the wonderful thing about GDC implementation is that it’s already happening. The UN will play a role by providing and opening up a platform, by convening the stakeholders, and allowing information about implementation to circulate, and we’re working on that in the form of an implementation map that more news should be coming in the month of January about how you can all get involved in that. But we do want to hear your thoughts on the design, and we’ve gotten a lot of comments in this, in the previous session on the design, and we do want to hear your thoughts about how working across sectors is going to make a real difference for that. 
So, we’ve invited a couple of guests, just two voices this morning, to tell us their thoughts on how some of those partnerships are assisting them, or will be assisting them, to take GDC objectives forward. I’d like to turn over to Ambassador Erickson first, and then Thelma from Smart Africa, and then, unfortunately, there’s no free breakfast at this session. I’m going to come into the audience. I want to hear about what you are doing, and see if that can inform the UN’s design, or the next steps that we take forward on this road to digital cooperation. So, over to Ambassador Erickson first. Thank you so much for being here at such an early start this morning, and tell us your thoughts.


Roy Erikkson: Thank you. Maybe I should first introduce myself. I am the Ambassador for Global Gateway in Finland, and Global Gateway is a EU initiative to have big infrastructure investments in the global south, or new emerging markets. And it’s interesting, because I had structured my intervention exactly the same way as you said. So, the first three goals, closing the digital divides in order to achieve the sustainable development goals, and expanding inclusion in the benefits of digital economy, and then foster an inclusive, open, safe, and secure digital space that respects and protects and promotes human rights. All these are what we are taking into consideration when we’re doing projects under the umbrella of Global Gateway. The GDC also mentions gender equality and the empowerment of all women and girls, and the full and equal participation in the digital space, which is also very important for Finland, as well as accessible and affordable data and digital technologies and services, because it’s all right to have connectivity, but you need to have access as well. So, meaningful connectivity is important. In Finland, access to the internet is considered a human right, and that’s why we are promoting through the Global Gateway connectivity issues. Global Gateway has five sectors, but digitalization is at least one of them, and we have chosen that as our focus. We work mostly in Africa, half of our investments will go to Africa, a quarter to Asia, and another quarter to Latin America. But we are not only bringing connectivity, building, for example, submarine cables, or building in the last mile connectivity. We are also looking into not only the hard infrastructure, but also the soft infrastructure, meaning capacity building and increasing digital literacy skills and capacity. And I actually have a couple of good examples what Finland is doing. Just a second, I’ll have to find it, because the page has now… There it is. We have one project that is coming to an end, but it’s continuing under a different name, but it’s African digital and green transition. And in this project, for example, we sent an expert for six months into Zambia, and they wrote the artificial intelligence strategy for the country. So, this is some sort of capacity building that we do hands-on. This partner of ours found out that there’s a lot of demand for this kind of service, so they actually wrote a book on ethics for digitalization. So, these are concrete examples of how we can help and share our knowledge with other partners. Well, maybe it’s best to give the audience the possibility to ask clarifying questions, but we want to provide the whole package to our partners. So, we build the connectivity and help with digitalization, but we also emphasize schooling, education, and skills, so that our partners have the whole package, and they can manage what the challenges are with the digital economy. Thank you.


Isabel De Sola: Thank you so much. And tell me, so you represent the government of Finland. And was it the government of Finland, the example that you gave, that went to some African countries and found the partners there? Was it an expert from within the Ministry of Foreign Affairs that helped to write this book, or how did this work get done? Because it sounds like you were working through partnerships.


Roy Erikkson: Yes, yes. We actually, we outsourced this. We found somebody who would be able to send an expert of theirs and we paid the costs for having that expert residing in Zambia and writing this strategy.


Isabel De Sola: So what I’m hearing in your story is that actually the organization where you worked act as a sort of broker of different actors who wanted to collaborate on the ground to bring ethical AI ideas to a certain African context and translate these into the local context.


Roy Erikkson: Yes, that’s correct. My work is actually like a facilitator. I find out what kind of projects there are and then reach out to companies and academia in Finland if they would be able or interested in participating in it, as well as trying to find financing for these projects. Financing seems to be a crucial point. There are lots of really, really good ideas, but finding financing for those, that is the crucial thing.


Isabel De Sola: Indeed. And just one more question and then we’ll go to the audience. Did it all work out well? Were there any challenges or bumps along the way? What did you learn from the experience that can help others who are in similar positions of trying to connect the actors in order to get things done?


Roy Erikkson: Well, this specific project that I mentioned, it’s quite surprising that the challenges are the same. You’re from north or south, east or west. It is the same challenges that you have to deal with. So there’s a lot of benefits from having this kind of technological diplomacy, sharing your experience, so that the wheel doesn’t have to be invented everywhere from scratch. You can help and give some advice, and this is something that is important. Another issue that has come up a lot is, I participated in a big conference in Latin America, and there cyber security issues are ones that need to be tackled. And there you can have a lot, because we might be a little bit more advanced, but we have the same challenges. So in order for having a secure digital space, it is to share our experience and help others to raise the standards so that they can also fight against cybercrime.


Isabel De Sola: Thank you for this reflection.


Kevin Hernandez: Hi, everyone. My name is Kevin Hernandez. I'm from the Universal Postal Union, which is the UN organization that focuses on the postal sector. And we're here to talk about the challenges that we face when it comes to connecting the actors in order to get things done. So, we're in the postal sector, and we have a program called Connect.Post that aims to connect all the post offices in the world to the Internet by 2030, and then transform them into one-stop shops where citizens can access government services, digital financial services, and also leverage them as hubs for community networks. So, this implies partnerships across governments, international donors, private companies, and it's been quite interesting. We've had some projects off the ground in several countries already. We've partnered with UN organizations, private companies, governments, of course, and across different industries and governments at different levels. And partnerships are key. There's no other way to do this other than through partnerships; this is not able to function without them. There is usually a designated postal operator in each country, and they need to be given the authority to deliver other types of services for this to be able to work, and they need to be given the legal authority to operate a community network for this to work. So, you need to facilitate a lot of discussions and also need to introduce them to a lot of people, and we need to also help them frame the way that they want to go about enabling change in a way that they're not used to doing. So, there's a lot of challenges. But anyway, we will have a session later today. So, if anyone is interested in what I'm saying, I think we're in a workshop on 10 out of 335, but the name of the session is Connect Our Posts, Connecting Communities to the Postal Network.


Isabel De Sola: I’ll just give an example of the name of the purse. Purse.


AUDIENCE: This is about finding the right cooperation and finding the right people to work with. I’ll give two examples of the work that we’ve been doing in the past two years and the reports that we produce. The first is that we looked at do governments procure their ICTs secure by design, and we found that the answer is almost zero. So, if industry doesn’t get the incentive to buy, produce secure ICTs, then we will always remain insecure. And we’ve come up with ideas how to build up capacity in that topic to come up with procurement rules, but somebody has to start listening to what we have done and that’s a major challenge already. The second is education and skills, and what we found on both topics, like Ambassador Erickson was saying, there is not that big a difference between the whole world. No one is procuring secure by design. Almost no one is procuring secure by design. In education and skills, whether you live in Papua New Guinea or in the Netherlands where I live, there’s about a 20 year gap difference in what the industry demands from their education, tertiary security education, and what is on offer, and what are the best practices in the world, and that is something that we want to find out. We’re going to present that at 12.30 at the Dynamic Coalition booth. We made a great video on that, called the Cyber Security Hub, and it’s something that we want to build and create the programs where exactly the digital compact will be about, and the sort of input that we want to deliver there. We need partners to do that, and that’s why I’m advocating ourselves here also, but we’re delivering, and that is what the Dynamic Coalition in the IGF is capable of, we can deliver on our promises.


Kevin Hernandez: So that is something which I invite you to join, is3c.org, and that’s where all the information is. Thank you for this opportunity, but also, we’re looking forward to work with you.


Isabel De Sola: Yes, thank you, and before you go, so we heard from our colleague from the Postal Union about the importance of discussions, so of bringing the actors together and then discussing when there’s new information or there needs to be new ways of working, and you’re saying that you have the great ideas, the good content, and you need a catalyst or a boost for visibility, so to connect these great ideas with the procurers, and that’s the partnership that you’re looking for. Is that right? So, the IGF is a place for visibility, I imagine, and you’re looking for others, other areas where, or platforms where you can have more visibility on these ideas?


AUDIENCE: Yes, I think that’s a great question, and I think it’s important to be able to fund the people who actually do the work, because that is the other challenge, that we need to find funding for the experts to pay them, and we have the experts, I can tell you that also.


Isabel De Sola: Thank you so much. So, funding, again, and we'll go to one more in the room and then online as well, working my way to the front of the room. Discussions... We need platforms for visibility. Tell us about your partnerships.


AUDIENCE: Okay, so SYNC digital well-being program based in the King Abdulaziz Center for World Culture. One of our projects is to develop, it would come under the heading of digital literacy, but not just how to use technology, more how to use technology in a way that is safe and health promoting. So this is to develop a preventative intervention for high schoolers across the kingdom of Saudi Arabia, so that they can engage with technology in ways that foster well-being and aren’t damaging to their health. And that’s in partnership with Johns Hopkins Bloomberg School of Public Health to develop the content, pilot the intervention, and also to make sure that it’s culturally resonant. I think that’s another huge issue in terms of moving into other territories and skills transfer, that it is culturally attuned and not dissonant with local values, things like that. So that’s been some of our experience with partnerships.


Isabel De Sola: That’s a great example. And so you found a partner that’s based in the US, and your organization is based here in Saudi, and your beneficiaries or stakeholders are Saudi youth. And so the translation from one culture to another has been part of the dynamic of your partnership. How have you made the most of the connection in the US? And then how have you landed it here in Saudi Arabia in a culturally relevant way?


AUDIENCE: So I think one, there’s many strands to ensuring it was culturally relevant. One of the partners at Hopkins is a Saudi national who grew up in KSA and studied in the US. So he’s one of the primary investigators, one of the project leads. But also we’ve done extensive stakeholder groups, stakeholder mapping with people in Saudi Arabia, teachers, parents, students.


Filippo Pierozzi: Thank you so much for that example. And if my mic… Yes? This one will go online. Let’s… Hello? Can you hear me? Okay. Yes. We’ll take one more example online and then start wrapping up. If you could introduce yourself from online, hopefully we can hear you. And I’ll ask the IT team to put you up on the screen. Thank you. I hope you can hear me.


Nandipha Ntshalbu: Can you hear me? We can hear you. Thank you. Thank you. Thank you. Probably I wouldn’t want to go to an example, but I think I would like to engage with the beautiful presentation from Inbal. And I want to highlight one area that I find missing in the discourse. With everything that we have been dealing with, both in IGF and even the compact itself, even the objectives, we seem not to want to be visible addressing the issue of energy efficiency and sufficiency. Because if we don’t address that in an objective, we will not be intentional. But if we look at fiat and non-fiat currency, all developments have pointed to not only the challenges with connectivity, they are a function of the availability of energy. And if one looks at the issues of energy, it becomes important that we address this. So that’s the first thing. The second thing is, if it could be possible for us online to be accorded an opportunity to get contacts of the colleagues that have just presented, and even the reports, because one would be interested in knowing in Africa, which states in Africa have been utilized beyond Zambia. And I’m asking this from an angle of data localization and data sovereignty, as you’re looking at ethical AI deployment. So one would be interested, Juan, in terms of how one can participate when in the initiatives from Finland, but in terms of the report, which are the African countries that have been contacted. Thank you.


Roy Erikkson: Okay, thank you. I would like to comment also on the secure by design on connectivity, because that is something that Finland especially has taken up in our discussions. It, for example, in Latin America, there is a digital alliance between European Union and Latin America and the Caribbean region. And the conference I was referencing to, we discussed the importance of having security by design, because the digital economy will be based on connections, and the connections need to be secure in order to increase public trust on digital services, but also for businesses, so that they know that their data is secure, that it doesn’t leak anywhere. So it is an issue that we are tackling and taking into consideration when we are designing projects. And it is true that energy and connectivity, they go hand in hand. In many places in Africa, for example, the communications towers are using energy provided by diesel generators. And of course, if we want to achieve our climate goals, we should try to find ways of using less and less fossil fuels. So one of our projects has been to provide solar panels to these communication towers, so that they are independent and can provide sufficient energy, so that the connectivity is actually better, because it doesn’t cut and so forth. So we are looking into that. That’s why it’s called the digital and green transformation, because we need to look at both climate issues and digital and connectivity issues. Data storage is an excellent question. We have mentioned it in other conferences in Africa, that we are building also data centers. And we want to provide, as I was referring to, the whole package, skills and the data, I mean, the hard infrastructure. But it’s also a question of who is in control of the data. And in Africa, I see that there is so much really good talent. I would say that, for example, on the financial sectors, the applications that you have invented there far exceed applications we are using in Europe. So if you can have the ownership of the data in the data centers, that could help to provide new applications using the data that the governments are gathering there. And the best way of participating in this project is to make an inquiry to the local EU delegation, and say that you would be interested in global gateway projects. And especially if you are interested in digitalization, tell that. Or if, because we don’t have embassies in all countries in Africa, you can also contact the Finnish embassy, because we are all part of the team Europe. So we work together. Thank you.


Isabel De Sola: Thank you for those inputs. And I think there’s one more person online. Let’s raise their hand. Please introduce yourself and the tech team will put your screen up so we can see you.


Shamsher Mavin Chowdhury: Hello, everyone. My name is Shamsher Mavin Chowdhury, and I'm from Bangladesh. So I have a concern. May I start?


Isabel De Sola: Yes, please.


Shamsher Mavin Chowdhury: The Global Digital Compact presents an opportunity to address these challenges through fair global governance of digital technologies. However, for this compact to be effective, it must ensure that countries like Bangladesh are not left behind. It must prioritize inclusive data governance and cybersecurity frameworks that protect all, not just a privileged few. With that in mind, I would like to pose the following questions to this distinguished assembly. How will the Global Digital Compact ensure fair and transparent data governance that protects user privacy and enables countries like Bangladesh to retain control over their national data assets? Will the compact address the power imbalance created by data monopolies, where global tech giants dominate developing economies' digital ecosystems? And talking of cybersecurity, what steps are being taken to foster global cooperation on cybersecurity so that developing countries like Bangladesh can access resources, expertise, and frameworks to combat cyber threats? Thank you.


Isabel De Sola: Thank you for those questions online. So actually, when we did our poll here in the room for objective four on data governance, our audience didn’t stand up. So there were a few of us here in the room working on data governance, which is perhaps the most ambitious of the GDC objectives. The GDC already has provisions on data governance to the person who asked this important question. It has two strands. One is to look and enhance data for development, so the data that we can use to spur and catalyze progress on the SDGs. And a second strand is on interoperability and governance of data across borders. On that note, you’ll be happy to hear that a working group on data governance, which is tasked to develop principles in the next two years for data governance, is already getting started. You can hear me? Sorry. It looks like somebody can’t hear me. It’s already getting started based out of the Commission on Science and Technology for Development in Geneva. I believe the working group will be composed and its members named in January or February of next year. And then they’ll have a year and a half to work on principles. So that’s good news, a rapid GDC implementation. On the question of cybersecurity, I’ll just mention this very fast, and then we’ll go back to the audience, that recently, a convention on cyber security on the European Budapest Convention, which has been for the last 10 or 15 years, I think, a bedrock of cybercrime work. And for a country like Bangladesh, they will have ideally participated in shaping that framework and then implementing it at the local level going forward. So I just want to summarize where we are and then maybe go back to the audience for their comments. We’ve been talking about the road ahead and partnerships. So a couple of things have popped up. One is the need for lots of discussion across partners to understand each other, the need for translation between different cultures, the difficulty of finding partners when one is based far away or, for example, participating online today, it’s much more difficult to find what you need, the utility of platforms like the IGF for connecting the actors or for getting visibility. So the supply and demand of partnerships. I have great content, but where are the clients that will use my great content? Or the ever-persistent question of financing, financing for these initiatives. I wanted to throw out into the audience a question about incentives to partner. But I also see that there’s a hand up, so let me hand you a microphone. There’s two hands up to help us keep thinking about these ideas. And if you could introduce yourself. Thank you.


Alisa Heaver: Good morning. My name is Alisa Heaver. I'm from the Ministry of Economic Affairs in the Netherlands. And I actually wanted to circle back to the question Henriette asked you, or asked in the previous session. So I won't do it with a lot of introduction. But it was basically: why doesn't the GDC link to the WSIS action lines, but does link to the SDGs? Thanks.


Patricia Ainembabazi: Hi, everyone. I am Patricia Ainembabazi from Uganda. I work in the civil society with CIPESA. CIPESA is a collaboration on international ICT policies. We work in Eastern and Southern Africa. I wanted to first talk about partnerships and the work we do around the topic. We do trainings, all things advocacy, but also we do trainings for journalists, other CSOs, as well as parliamentarians. At the moment, we do have a parliamentary track. I don’t know if any of you knows APNIC. This is the African parliamentary network for internet governance on the continent. So we work with these groups of people to front things around internet governance. You’ve talked about partnerships. We do have one with the EU. Someone from East Asia mentioned. We also work with Smart Africa. I was waiting to see Thelma here. We’ve had trainings on data governance. It’s around harmonizing or aligning the EU data policy framework with the different policies in the different countries in Africa. So we have, I would say, partnerships do work. And it’s not about, obviously, the money helps, but also it’s aligning the goals that the different countries or the different organizations want. At CIPESA, we found that the issues that we talk about or we address in Eastern and Southern Africa, they are not only limited to these regions that we’re in. They are across the sub-Saharan. They’re actually in many in Europe, but the context matters, but the issues remain the same. So partnerships do work and we welcome organizations that would want to work with CIPESA and towards the goals that we all want. Thank you.


Isabel De Sola: Thank you for that. So just one comment on your intervention, as you said that you found this hybrid, the hybrid setup, it takes some skills training to do it correctly, the hybridness of our session. Just one point about your comments that struck me, as you said, when we looked across countries and regions, we found that there’s many similarities and some of the problems are the same with the desire for partnerships are the same. So maybe if I can rephrase what you were saying is actually there were learning opportunities across the region. So looking out there and finding others in a similar objective or frame of mind was useful for your organization. Is that correct?


Patricia Ainembabazi: Yes. We do have FIFA Africa, and this has nothing to do with soccer. It is the Forum for Internet Freedoms Africa. We have this every year. This year we’re in Dakar, Senegal. So we had almost 500 participants, and not only from Africa, but also from abroad. And we always have different streams and different topics and sessions. And at the end, when you’re looking at the reports and the submissions from all the different groups, it’s the same problems. It’s the same appetite towards open internet access to like all the principles of the GHSA.


Filippo Pierozzi: Thank you for that. And there was one last comment over here, two more comments. Okay. You could just pass the mic. And thank you for introducing yourself.


Damilare Oydele: Thank you so much. My name is Damilare Olidule. I work with Library Aid Africa. We leverage data technology and common sentiment approaches to transform libraries on a vibrant basis. And as I was speaking about data infrastructure in the first minute, I was speaking about how libraries are access points to digital connectivity and access. Over time as an organization, we’ve worked collaboratively with libraries across African countries in the context of transforming these libraries into vibrant spaces. And more recently, we’re working on what we call Library Tracker. We’re tracking the libraries across African countries to understand what are they doing in these libraries, the impact they’re making, and more importantly, how many of these libraries are connected. I use this data to engage policy makers and partners to understand areas that are good for libraries. And for the users of the platform, to be able to see what libraries are around them, and assess what libraries stand to offer for them. So and I say that also working on libraries with data features. This focus really on how we work with libraries to transform these libraries into data tech hubs. Reason? Because the needs of our community is changing and evolving over time. That means our libraries also need to change in that trajectory. Okay? And I say that also, we’re also working on upskilling librarians in African countries. And data skill and tech skills for them to make libraries much more vibrant and viable and thrive. Right? Over time, we’ve worked with library partners across currently non-African countries to implement our innovation. And we’re ever looking at how can we tap into the ecosystem of data and economy and data governance partners to see how we can cross-pollinate ideas and innovation, and not just that, bring an investment into the library ecosystem, so make libraries connected. Because if libraries are connected, economies are also connected, and that transforms society where these libraries are located. Thank you so much.


Guilherme Duarte: Hello, good morning, everyone. My name is Guilherme Duarte. I'm from the Brazilian Association of Internet Service Providers. We are a membership association of small ISPs that work in Brazil. We've been attending the IGF for a few years. Our members do a lot of work in connecting schools in Brazil. We have some good experiences in public-private partnerships for building infrastructure in the Amazon and other under-assisted areas in Brazil. But we also have a good knowledge of how these small companies have been building up the infrastructure in Brazil by themselves as well, so private investment in public infrastructure as well. So mine is more of a question of how we can be more a part of the work that is being done here.


Isabel De Sola: Thank you for those last comments. No, that's okay. I'll go back to the start of our session to respond to the last question. So how to get involved? The first window to get involved would be the endorsement of the GDC. It's a window online that allows an organization to signal if they'd like to endorse the vision and the principles; that's one thing. But you also have a path where you don't endorse the vision and the principles, but you provide information on what your organization is doing and which of the five pillars of the GDC are of your greatest interest. That's a way to get started. What's coming up in the next few months is an implementation map of the GDC that the UN agencies are currently designing, in line with our role, which is to provide a platform and a space, in a sense, to convene all the actors and to make it easier for them to find each other. The implementation map has been under design since September. There should be more news about it in January, and a way for your organization, or our friend from digital libraries, or our child rights advocates, and all the different actors, if they would like to, to voluntarily signal what they're up to in GDC implementation. The utility of the map is hopefully not only for the cartographers. A cartographer is a map designer. The SG is the map designer, but it's not meant to help him. It's really meant to help the actors, so that you could come and say, okay, Liberia, objective three, open, safe online spaces, and you could see the different actors there. So watch this space. Hopefully we will have more news soon. I wanted to make sure to address the question from Henriette. I actually don't know why the GDC objectives weren't mapped against the WSIS action lines in the text. However, that exercise has taken place already, and it's available online at ungis.org, the UN Group on the Information Society, which I think has developed a map where you can see how the GDC is connected to WSIS action lines. It's been difficult to describe in what ways the WSIS and the GDC work together, and part of the task of the WSIS review is to describe how they interact. The WSIS was the starting point, and it's been the primary framework for digital cooperation over 20 years. And after 20 years, the GDC has provided, in a sense, a refresher, a little icing on the cake. So whereas the WSIS tackles the fundamental starting points of digital cooperation, so connectivity, access to information, connecting businesses to the internet, it talks as well about how ICTs could be used for sustainability. The 20-year agenda is still very relevant: we still haven't connected the entire planet, not everybody has the digital literacy and access to capacity building that they need to use the internet, and we still aren't making the most of ICTs for the environment. This agenda, the WSIS agenda, is still relevant. What the GDC does is come and add some new challenges and opportunities to this agenda after 20 years, which the member states felt was a timely moment to do so. So it adds data, DPIs, misinformation, artificial intelligence, et cetera. And the two agendas are very complementary. I hope that goes some part of the way to answering your question. It'll get very technical, and the audience might not be that familiar with Action Line B4, but I think that's the one that speaks about the ethics of ICTs.
And today we speak about human rights online. So language matters and language has changed. In 2003, 2006, 2005, forgive me, we were thinking about the ethics of ICTs, but over 20 years there have been so many risks to human rights from the use of ICTs and from lack of use of ICTs that the conversation has shifted and the GDC reflects this evolution in the language, I think. I hope that goes some way to answering your question. I believe everybody needs a coffee break, so we might wrap this up on, I’ll say, maybe one or two ideas that I’m taking away from this conversation and I invite Ambassador Erickson to do the same, but in the road ahead, partnerships are going to be key. In fact, they have been all throughout these years, as many in the audience pointed out. It sounds like there’s appetite here for finding partners, for learning from others that are in similar situations across borders, and also for recognizing the similarities of the challenges that we’re facing. So I may be in El Salvador, but I can share with somebody in Denmark the same challenge about misinformation, for example, and learning from each other is very valuable. Partnerships, however, take time. They take discussion. They take going out there and beating the pavement, looking for people that you need. And it takes funding as well and alignment of interests so that there’s incentives to collaborate. Those are some of the things that I take away. And really, thank you for all of the participation and your attention this morning. And Ambassador Erickson, the final word is with you.


Roy Erikkson: Thank you. Yeah, my takeaway from this is that I think we more or less know what the challenges are. It is now just a matter of finding the best ways of forming partnerships, working together, and implementing what we have agreed on under the Global Digital Compact. I'm quite positive and optimistic that the challenges are not insurmountable. We can do it, and we do it together.


I

Isabel De Sola

Speech speed

144 words per minute

Speech length

2669 words

Speech time

1109 seconds

Stakeholder-driven implementation through partnerships

Explanation

Isabel De Sola emphasizes that GDC implementation will be primarily conducted by stakeholders, including governments, businesses, civil society, academia, and scientists. The UN’s role is to provide a platform and convene stakeholders to facilitate information sharing about implementation.


Evidence

Mention of an implementation map that will be available in January to allow stakeholders to get involved.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Agreed with

Roy Eriksson


Kevin Hernandez


Patricia Ainembabazi


Agreed on

Importance of partnerships in digital development


R

Roy Eriksson

Speech speed

121 words per minute

Speech length

1275 words

Speech time

627 seconds

Finland’s Global Gateway initiative for infrastructure investments

Explanation

Roy Eriksson discusses Finland's involvement in the EU's Global Gateway initiative, which focuses on infrastructure investments in the global south and emerging markets. The initiative emphasizes digitalization and connectivity issues, along with capacity building and digital literacy skills.


Evidence

Example of sending an expert to Zambia for six months to write the country’s artificial intelligence strategy.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Differed with

Shamsher Mavin Chowdhury


Differed on

Focus of digital development efforts


Knowledge sharing and capacity building across countries

Explanation

Roy Erikkson highlights the importance of sharing experiences and knowledge across countries to address common challenges. He emphasizes that technological diplomacy can help countries avoid reinventing the wheel and benefit from others’ experiences.


Evidence

Mention of participating in a conference in Latin America where cybersecurity issues were discussed, noting that sharing experiences can help raise standards to fight cybercrime.


Major Discussion Point

Benefits of Partnerships in Digital Development


Agreed with

AUDIENCE


Patricia Ainembabazi


Agreed on

Need for culturally relevant and inclusive digital initiatives


K

Kevin Hernandez

Speech speed

128 words per minute

Speech length

362 words

Speech time

169 seconds

Connect.Post program to connect post offices to the internet

Explanation

Kevin Hernandez presents the Universal Postal Union’s Connect.Post program, which aims to connect all post offices worldwide to the internet by 2030. The program seeks to transform post offices into one-stop shops for government services, digital financial services, and community network hubs.


Evidence

Mention of partnerships across governments, international donors, private companies, and different industries to implement the program.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Agreed with

Isabel De Sola


Roy Eriksson


Patricia Ainembabazi


Agreed on

Importance of partnerships in digital development


A

AUDIENCE

Speech speed

142 words per minute

Speech length

580 words

Speech time

243 seconds

Need for secure-by-design ICT procurement

Explanation

An audience member highlights the lack of secure-by-design ICT procurement by governments. They argue that without incentives for industry to produce secure ICTs, digital systems will remain insecure.


Evidence

Mention of a report finding that almost no governments procure ICTs secure by design.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Importance of culturally relevant digital literacy programs

Explanation

An audience member discusses the development of a digital literacy program for high schoolers in Saudi Arabia. The program focuses on safe and health-promoting technology use, emphasizing the importance of cultural relevance in skills transfer.


Evidence

Partnership with Johns Hopkins Bloomberg School of Public Health to develop culturally resonant content and pilot the intervention.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Agreed with

Roy Erikkson


Patricia Ainembabazi


Agreed on

Need for culturally relevant and inclusive digital initiatives


Finding financing for projects and experts

Explanation

An audience member emphasizes the challenge of finding funding for digital development projects and experts. They stress the importance of being able to fund people who actually do the work.


Major Discussion Point

Challenges in Digital Cooperation



Nandipha Ntshalbu

Speech speed

134 words per minute

Speech length

255 words

Speech time

113 seconds

Energy efficiency and sufficiency in digital infrastructure

Explanation

Nandipha Ntshalbu points out the need to address energy efficiency and sufficiency in digital infrastructure development. They argue that connectivity challenges are often a function of energy availability, which should be explicitly addressed in the GDC objectives.


Major Discussion Point

Challenges in Digital Cooperation


Data localization and sovereignty concerns

Explanation

Nandipha Ntshalbu raises concerns about data localization and data sovereignty in the context of AI deployment in African countries. They express interest in understanding which African countries have been involved in initiatives related to ethical AI deployment.


Major Discussion Point

Challenges in Digital Cooperation



Shamsher Mavin Chowdhury

Speech speed

117 words per minute

Speech length

172 words

Speech time

87 seconds

Data governance and cybersecurity frameworks for developing countries

Explanation

Shamsher Mavin Chowdhury emphasizes the need for inclusive data governance and cybersecurity frameworks that protect all countries, not just privileged ones. He questions how the Global Digital Compact will ensure fair and transparent data governance for countries like Bangladesh.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Differed with

Roy Erikkson


Differed on

Focus of digital development efforts


Power imbalance created by data monopolies

Explanation

Shamsher Mavin Chowdhury raises concerns about the power imbalance created by data monopolies, where global tech giants dominate developing economies’ digital ecosystems. He questions how the GDC will address this issue.


Major Discussion Point

Challenges in Digital Cooperation



Alisa Heaver

Speech speed

152 words per minute

Speech length

68 words

Speech time

26 seconds

Aligning GDC with existing frameworks like WSIS

Explanation

Alisa Heaver questions why the GDC objectives are not linked to the WSIS action lines, while they are linked to the SDGs. This raises the issue of aligning the GDC with existing digital development frameworks.


Major Discussion Point

Challenges in Digital Cooperation



Patricia Ainembabazi

Speech speed

136 words per minute

Speech length

369 words

Speech time

161 seconds

Collaboration on internet governance policies in Africa

Explanation

Patricia Ainembabazi discusses CIPESA’s work on internet governance in Eastern and Southern Africa. They collaborate with various stakeholders, including journalists, CSOs, and parliamentarians, to promote internet governance issues.


Evidence

Mention of the African Parliamentary Network for Internet Governance (APNIG) and partnerships with the EU and Smart Africa.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Agreed with

Isabel De Sola


Roy Erikkson


Kevin Hernandez


Agreed on

Importance of partnerships in digital development


Learning opportunities across regions with similar challenges

Explanation

Patricia Ainembabazi highlights that the issues addressed by CIPESA in Eastern and Southern Africa are not limited to these regions but are found across sub-Saharan Africa and even in Europe. This presents opportunities for cross-regional learning and collaboration.


Evidence

Mention of the Forum for Internet Freedoms Africa (FIFAfrica) event, which attracts participants from Africa and abroad to discuss common internet-related challenges.


Major Discussion Point

Benefits of Partnerships in Digital Development


Agreed with

Roy Erikkson


AUDIENCE


Agreed on

Need for culturally relevant and inclusive digital initiatives



Damilare Oydele

Speech speed

189 words per minute

Speech length

332 words

Speech time

104 seconds

Transforming libraries into digital connectivity hubs

Explanation

Damilare Oydele discusses Library Aid Africa’s work in transforming libraries into vibrant digital spaces. They are developing a Library Tracker to understand the impact of libraries and their connectivity status, aiming to turn libraries into data tech hubs.


Evidence

Mention of working with libraries across African countries and developing the Library Tracker tool.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)



Guilherme Duarte

Speech speed

196 words per minute

Speech length

185 words

Speech time

56 seconds

Small ISPs connecting underserved areas in Brazil

Explanation

Guilherme Duarte discusses the role of small Internet Service Providers (ISPs) in connecting underserved areas in Brazil. These ISPs are involved in connecting schools and building infrastructure in remote regions like the Amazon.


Evidence

Mention of public-private partnerships for building infrastructure and private investment in public infrastructure.


Major Discussion Point

Implementation of the Global Digital Compact (GDC)


Public-private partnerships for infrastructure development

Explanation

Guilherme Duarte highlights the importance of public-private partnerships in developing digital infrastructure in Brazil. Small ISPs are involved in both public-private partnerships and private investments in public infrastructure.


Evidence

Examples of connecting schools and building infrastructure in under-assisted areas like the Amazon.


Major Discussion Point

Benefits of Partnerships in Digital Development


Agreements

Agreement Points

Importance of partnerships in digital development

speakers

Isabel De Sola


Roy Erikkson


Kevin Hernandez


Patricia Ainembabazi


arguments

Stakeholder-driven implementation through partnerships


Knowledge sharing and capacity building across countries


Connect.Post program to connect post offices to the internet


Collaboration on internet governance policies in Africa


summary

Multiple speakers emphasized the crucial role of partnerships in implementing digital development initiatives, sharing knowledge, and addressing common challenges across different regions and sectors.


Need for culturally relevant and inclusive digital initiatives

speakers

Roy Erikkson


AUDIENCE


Patricia Ainembabazi


arguments

Knowledge sharing and capacity building across countries


Importance of culturally relevant digital literacy programs


Learning opportunities across regions with similar challenges


summary

Speakers agreed on the importance of ensuring digital initiatives are culturally relevant and inclusive, taking into account local contexts while addressing common challenges.


Similar Viewpoints

Both speakers highlighted the importance of infrastructure investments and public-private partnerships in connecting underserved areas and promoting digital development.

speakers

Roy Erikkson


Guilherme Duarte


arguments

Finland’s Global Gateway initiative for infrastructure investments


Small ISPs connecting underserved areas in Brazil


Public-private partnerships for infrastructure development


Both speakers expressed concerns about data governance, sovereignty, and the need for inclusive frameworks that protect developing countries’ interests in the digital space.

speakers

Shamsher Mavin Chowdhury


Nandipha Ntshalbu


arguments

Data governance and cybersecurity frameworks for developing countries


Data localization and sovereignty concerns


Unexpected Consensus

Similarities in digital challenges across diverse regions

speakers

Roy Erikkson


Patricia Ainembabazi


arguments

Knowledge sharing and capacity building across countries


Learning opportunities across regions with similar challenges


explanation

Despite representing different regions (Finland and Africa), both speakers emphasized that digital challenges are often similar across diverse geographical areas, suggesting unexpected commonalities in global digital development issues.


Overall Assessment

Summary

The main areas of agreement centered around the importance of partnerships, culturally relevant initiatives, infrastructure development, and addressing common digital challenges across regions.


Consensus level

Moderate consensus was observed among speakers on key issues. This suggests a shared understanding of the importance of collaboration and inclusive approaches in digital development, which could facilitate more effective implementation of the Global Digital Compact. However, some divergent views on specific implementation strategies and priorities indicate the need for continued dialogue and negotiation.


Differences

Different Viewpoints

Focus of digital development efforts

speakers

Roy Erikkson


Shamsher Mavin Chowdhury


arguments

Finland’s Global Gateway initiative for infrastructure investments


Data governance and cybersecurity frameworks for developing countries


summary

Roy Erikkson emphasizes infrastructure investments and capacity building, while Shamsher Mavin Chowdhury focuses on data governance and cybersecurity frameworks for developing countries.


Unexpected Differences

Energy efficiency in digital infrastructure

speakers

Nandipha Ntshalbu


Other speakers


arguments

Energy efficiency and sufficiency in digital infrastructure


explanation

Nandipha Ntshalbu raised the issue of energy efficiency and sufficiency in digital infrastructure, which was not prominently discussed by other speakers. This unexpected focus highlights an often overlooked aspect of digital development.


Overall Assessment

summary

The main areas of disagreement revolve around priorities in digital development, approaches to cybersecurity, and the scope of issues to be addressed in the Global Digital Compact.


difference_level

The level of disagreement among speakers is moderate. While there are differing focuses and priorities, most speakers agree on the overall goals of digital development and cooperation. These differences in perspective can contribute to a more comprehensive approach to implementing the Global Digital Compact, but may also present challenges in prioritizing specific actions and allocating resources.


Partial Agreements

Partial Agreements

Both speakers agree on the importance of improving digital security, but Roy Erikkson focuses on knowledge sharing, while the audience member emphasizes the need for secure-by-design procurement.

speakers

Roy Erikkson


AUDIENCE


arguments

Knowledge sharing and capacity building across countries


Need for secure-by-design ICT procurement



Takeaways

Key Takeaways

Partnerships are crucial for implementing the Global Digital Compact (GDC)


There are similarities in digital challenges across different regions and countries


Cultural relevance and local context are important when implementing digital initiatives


Financing remains a persistent challenge for digital development projects


Data governance and cybersecurity are key concerns, especially for developing countries


Existing platforms like IGF are valuable for connecting actors and increasing visibility


Energy efficiency and sufficiency are important considerations in digital infrastructure development


Resolutions and Action Items

UN to provide an implementation map for GDC in the coming months


Working group on data governance to develop principles in the next two years


Organizations encouraged to endorse GDC vision and principles online


Stakeholders invited to provide information on their GDC-related activities


Unresolved Issues

How to ensure fair and transparent data governance that protects user privacy in developing countries


Addressing the power imbalance created by data monopolies in developing economies


Specific steps for fostering global cooperation on cybersecurity for developing countries


How to fully integrate small ISPs and local initiatives into global digital cooperation efforts


Detailed explanation of how GDC and WSIS action lines are interconnected and complementary


Suggested Compromises

Balancing global standards with local cultural contexts in digital literacy programs


Combining hard infrastructure development with soft skills and capacity building


Using existing institutions like libraries and post offices as hubs for digital connectivity


Thought Provoking Comments

We actually, we outsourced this. We found somebody who would be able to send an expert of theirs and we paid the costs for having that expert residing in Zambia and writing this strategy.

speaker

Roy Erikkson


reason

This comment reveals an innovative approach to international development cooperation, where a government (Finland) acts as a facilitator and broker to connect expertise with local needs.


impact

It sparked a discussion about the role of governments in facilitating partnerships and the importance of finding the right experts for specific projects. It also highlighted the need for cultural translation in such collaborations.


We have one project that is coming to an end, but it’s continuing under a different name, but it’s African digital and green transition. And in this project, for example, we sent an expert for six months into Zambia, and they wrote the artificial intelligence strategy for the country.

speaker

Roy Erikkson


reason

This comment provides a concrete example of how international cooperation can contribute to building digital capacity in developing countries, particularly in emerging technologies like AI.


impact

It led to further discussion about the importance of capacity building and knowledge transfer in digital development projects. It also raised questions about data sovereignty and localization in AI development.


We have a program called Connect.Post that aims to connect all the post offices in the world to the Internet by 2030, and then transform them into one-stop shops where citizens can access government services, digital financial services, and also leverage them as hubs for community networks.

speaker

Kevin Hernandez


reason

This comment introduces an innovative approach to leveraging existing infrastructure (post offices) to bridge digital divides and provide digital services.


impact

It broadened the discussion to include the role of traditional institutions in digital transformation and sparked interest in multi-stakeholder partnerships for digital inclusion projects.


With everything that we have been dealing with, both in IGF and even the compact itself, even the objectives, we seem not to want to be visible addressing the issue of energy efficiency and sufficiency.

speaker

Nandipha Ntshalbu


reason

This comment highlights an often overlooked aspect of digital development – the energy requirements and environmental impact of digital infrastructure.


impact

It shifted the conversation to include sustainability considerations in digital development projects and led to a discussion about the intersection of digital and green transitions.


We do have FIFA Africa, and this has nothing to do with soccer. It is the Forum for Internet Freedoms Africa. We have this every year. This year we’re in Dakar, Senegal. So we had almost 500 participants, and not only from Africa, but also from abroad.

speaker

Patricia Ainembabazi


reason

This comment introduces a significant regional initiative for internet governance and digital rights in Africa, highlighting the importance of regional cooperation and knowledge sharing.


impact

It emphasized the value of regional platforms for addressing shared challenges and learning from diverse experiences. It also underscored the global nature of digital governance issues.


Overall Assessment

These key comments shaped the discussion by highlighting the importance of multi-stakeholder partnerships, knowledge transfer, and capacity building in digital development. They broadened the conversation to include considerations of sustainability, cultural relevance, and regional cooperation. The discussion evolved from abstract concepts to concrete examples of implementation, emphasizing the need for practical, context-specific approaches to digital cooperation. The comments also underscored the global nature of digital challenges while recognizing the importance of local and regional initiatives.


Follow-up Questions

How can we address energy efficiency and sufficiency in digital development?

speaker

Nandipha Ntshalbu


explanation

This is important because energy availability is crucial for connectivity and digital development, but it’s not explicitly addressed in the current objectives.


Which African countries beyond Zambia have been involved in Finland’s digital and green transition projects?

speaker

Nandipha Ntshalbu


explanation

This information is important for understanding the scope of Finland’s involvement in Africa and potential opportunities for collaboration.


How will the Global Digital Compact ensure fair and transparent data governance that protects user privacy and enables countries like Bangladesh to retain control over their national data assets?

speaker

Shamsher Mavin Chowdhury


explanation

This is crucial for ensuring that developing countries are not left behind in the global digital landscape and can protect their citizens’ data.


How will the Global Digital Compact address the power imbalance created by data monopolies where global tech giants dominate developing economies’ digital ecosystems?

speaker

Shamsher Mavin Chowdhury


explanation

This is important for ensuring fair competition and preventing the exploitation of developing economies by large tech companies.


What steps are being taken to foster global cooperation on cybersecurity so that developing countries like Bangladesh can access resources, expertise, and frameworks to combat cyber threats?

speaker

Shamsher Mavin Chowdhury


explanation

This is essential for building a secure global digital ecosystem that includes and protects all countries, not just developed nations.


Why doesn’t the Global Digital Compact link to the WSIS action lines, but does link to the SDGs?

speaker

Alisa Heaver


explanation

Understanding the relationship between different global frameworks is important for coherent policy-making and implementation.


How can small ISPs be more involved in the work being done on digital cooperation and connectivity?

speaker

Guilherme Duarte


explanation

Small ISPs play a crucial role in connecting underserved areas and their involvement is important for achieving universal connectivity.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Networking Session #60 Risk & impact assessment of AI on human rights & democracy

Networking Session #60 Risk & impact assessment of AI on human rights & democracy

Session at a Glance

Summary

This panel discussion focused on assessing AI risks and impacts, with an emphasis on safeguarding human rights and democracy in the digital age. The speakers represented various organizations involved in AI governance, including government agencies, standards bodies, research institutions, and advocacy groups.


David Leslie introduced the Human Rights, Democracy and Rule of Law Impact Assessment (HUDERIA) methodology recently adopted by the Council of Europe. This approach aims to provide a structured framework for evaluating AI systems’ impacts on human rights and democratic values. Several speakers highlighted the importance of flexible, context-aware approaches to AI risk management that can be tailored to specific use cases.


Representatives from standards organizations like ISO/IEC and IEEE discussed their work on developing AI standards and certification processes to promote responsible AI development. Government officials from Japan and the US shared insights on their national AI governance initiatives and how these align with international frameworks. The importance of stakeholder engagement, skills development, and ecosystem building was emphasized by multiple speakers.


Industry perspectives were provided by LG AI Research, which outlined its approach to implementing AI ethics principles throughout the AI lifecycle. The role of NGOs in advocating for strong AI governance and bringing public voices into policy discussions was highlighted by the Center for AI and Digital Policy.


Overall, the discussion underscored the need for collaborative, multi-stakeholder efforts to develop effective AI governance frameworks that protect human rights and democratic values while fostering innovation. The speakers agreed on the importance of proactive approaches to identifying and mitigating AI risks as the technology continues to advance rapidly.


Keypoints

Major discussion points:


– The development and adoption of AI governance frameworks and risk assessment methodologies, like the Council of Europe’s HUDERIA


– The role of standards organizations and governments in creating AI governance guidelines and policies


– The importance of stakeholder engagement, skills development, and ecosystem building in AI governance


– Approaches to operationalizing human rights considerations in AI development and deployment


– The contributions of NGOs and civil society in advocating for responsible AI and human rights protections


Overall purpose:


The goal of this discussion was to explore various international and organizational approaches to AI governance, risk assessment, and human rights protection in the context of AI development and use. Speakers shared insights from government, industry, standards bodies, and NGOs on frameworks and best practices for responsible AI.


Tone:


The tone was largely collaborative and optimistic, with speakers building on each other’s points and emphasizing the importance of working together across sectors and borders. There was a sense of urgency about the need to develop robust governance frameworks, but also confidence in the progress being made. The tone remained consistent throughout, focusing on constructive approaches and shared goals.


Speakers

– David Leslie: Director of Ethics and Responsible Research Innovation at the Alan Turing Institute, Professor of Ethics, Technology, and Society at Queen Mary University of London


– Wael William Diab: Chair of the ISO-IEC JTC1 SC42 (AI standardization)


– Tetsushi Hirano: Deputy Director of the Iraq Digital Policy Office at the Japanese Ministry of Internal Affairs and Communications


– Matt O’Shaughnessy: Senior Advisor at the U.S. Department of State’s Bureau of Democracy, Human Rights, and Labor


– Clara Neppel: Senior Director at IEEE


– Myoung Shin Kim: Principal Policy Officer at LG AI Research, IEEE Certified AI Professor


– Heramb Podar: Center for AI and Digital Policy (CAIDP), Executive Director of ENCODE India


Additional speakers:


– Samara Jaideva: Researcher at the Alan Turing Institute


Full session report

AI Governance and Human Rights: A Multi-Stakeholder Approach


This panel discussion brought together experts from various sectors to explore approaches to AI governance, risk assessment, and human rights protection in the context of AI development and deployment. The speakers represented government agencies, standards bodies, research institutions, and advocacy groups, providing a comprehensive overview of current efforts and challenges in responsible AI development.


Key Frameworks and Methodologies


The discussion began with David Leslie introducing the Human Rights, Democracy and Rule of Law Impact Assessment (HUDERIA) methodology recently adopted by the Council of Europe. This framework aims to provide a structured approach for evaluating AI systems’ impacts on human rights and democratic values. Leslie described HUDERIA as “a unique anticipatory approach to the governance of the design, development, and deployment of AI systems” anchored in four fundamental elements. He also noted the Japanese government’s support for the Council of Europe’s work on an AI convention.


Other speakers presented complementary frameworks and standards:


1. Wael William Diab discussed ISO/IEC standards for AI systems, emphasising the importance of third-party certification and audits to ensure responsible adoption.


2. Tetsushi Hirano outlined the Japanese AI Guidelines for Business, which differentiate recommended actions from the perspective of AI actors (developers, deployers, and users).


3. Matt O’Shaughnessy highlighted the NIST AI Risk Management Framework, emphasising its flexible and context-aware application. He also discussed the White House Office of Management and Budget Memorandum, which provides guidance on AI use in the federal government.


4. Clara Neppel presented IEEE standards for ethically aligned AI design, focusing on building ecosystems to implement these standards. She also mentioned IEEE’s work on environmental impact assessment of AI.


5. Myoung Shin Kim shared LG AI Research’s approach to AI ethics and risk governance, which includes internal processes and education. She discussed their XR1 generative AI model and detailed their AI ethics implementation process.


6. Heramb Podar presented CAIDP’s advocacy work, including their Universal Guidelines on AI and efforts to promote ratification of AI treaties.


Human Rights Considerations


A significant portion of the discussion focused on incorporating human rights considerations into AI development and governance. Key points included:


1. The importance of stakeholder engagement in AI impact assessments, with multiple speakers emphasising the need to involve affected communities.


2. Data quality standards for AI systems, as highlighted by Wael William Diab.


3. The need for detailed analysis of rights holders in impact assessments, as mentioned by Tetsushi Hirano.


4. Human rights impact assessments for government AI use, discussed by Matt O’Shaughnessy.


5. Incorporating human rights principles in AI standards, as emphasised by Clara Neppel.


6. Educating data workers on human rights, a focus for LG AI Research according to Myoung Shin Kim.


7. The role of NGOs in advocating for human rights in AI governance, highlighted by Heramb Podar.


International Cooperation and Implementation


The speakers agreed on the importance of international cooperation and interoperability between different AI governance frameworks. This was evident in discussions about:


1. The Council of Europe’s work on an AI convention, mentioned by David Leslie.


2. Efforts to ensure interoperability between AI frameworks, highlighted by Tetsushi Hirano.


3. How U.S. domestic AI policies inform international work, discussed by Matt O’Shaughnessy.


4. IEEE’s global network of AI ethics assessors, presented by Clara Neppel.


5. LG AI Research’s collaboration with UNESCO, shared by Myoung Shin Kim.


6. CAIDP’s advocacy for ratification of AI treaties, mentioned by Heramb Podar.


Practical Implementation Challenges


The discussion also addressed the practical challenges of implementing AI ethics principles:


1. Matt O’Shaughnessy emphasised the need for context-aware application of risk management frameworks.


2. Clara Neppel discussed the importance of building ecosystems to implement ethical AI standards.


3. Myoung Shin Kim outlined LG’s AI ethics impact assessment process and mentioned their upcoming annual report on AI ethics implementation.


4. Heramb Podar highlighted the need for clear prohibitions on high-risk AI use cases.


5. Several speakers noted the challenge of balancing innovation with responsible AI development.


Education and Public Engagement


Myoung Shin Kim from LG AI Research emphasized the importance of education in AI ethics implementation. She discussed initiatives to educate data workers on human rights and improve citizens’ AI literacy. While other speakers touched on stakeholder engagement, Kim’s presentation provided the most detailed discussion of education efforts.


Conclusion


The discussion underscored the need for collaborative, multi-stakeholder efforts to develop effective AI governance frameworks that protect human rights and democratic values while fostering innovation. The speakers presented a range of approaches and methodologies for responsible AI development, highlighting both progress and ongoing challenges in the field. As David Leslie noted in his closing remarks, the conversation demonstrated the complexity of the issues and the importance of continued dialogue and cooperation among diverse stakeholders in shaping the future of AI governance.


Session Transcript

David Leslie: Can everyone hear me? Samara, can you hear me? Hello? Hello? Yes? Yeah? Okay. Perfect. If everyone’s ready, we can get started. I believe everyone’s joined us online. Perfect. Good evening. Thank you so much for joining us here today, this evening. We know it’s the last session, but I can promise you we have an other networking session on assessing AI risks and impacts, safeguarding human rights and democracy in the digital age. We will be moderated by Professor David Leslie, who is the Director of Ethics and Responsible Research Innovation at the Alan Turing Institute, and Professor of Ethics, Technology, and Society at Queen Mary University of London. He will be introducing the rest of us, but to everyone joined here today and online, my name is Samara Jaideva. I’m a researcher at the Alan Turing Institute, and I’m very proud to say I have supported in helping publish and develop this human rights impact assessment framework that we’ve done with the Council of Europe. So now I’ll turn it to David to introduce us to this panel. Great. Samara, can you hear me? Am I… Just give me an acknowledgement and I’ll keep going. Good? Okay. Okay, so thank you so much, Samara. I am very thrilled with you. Just to say, our team at the Turing has been really involved with this process dating back to 2020 when the Ad Hoc Committee on Artificial Intelligence was really doing the initial steps to building a feasibility study that would come to inform what now is the framework convention, the treaty that is aligning human rights, democracy, and the rule of law with AI. And I’ll just also say that really this is the adoption of the Huderia methodology, which has just happened this past month, is really a kind of historical moment in a time of change where so much of the activities in the kind of international AI governance ecosystem are yet to be decided. And so this is really a kind of path breaking outcome, I would say. And I was just thinking about it, over the years, in being at the Council of Europe plenaries, where we’ve really talked through governance measures. It was early 2021, I want to say, where we first took a question about foundation models and frontier AI. I mean, you can just imagine that rich conversation about governance challenges has been going on at the Council of Europe’s venue in Strasbourg for a number of years now. So I’ll also just quickly say the Huderia itself that has been developed through the activities of the Committee on Artificial Intelligence and all the member states and observer states, it really is a unique anticipatory approach to the governance of the design, development, and deployment of AI systems that anchors itself in basically four fundamental elements. We’ve got a context-based risk analysis, which provides a kind of structured approach initially to collecting the information that’s needed to understand the risks of AI systems, in particular the risks they pose to human rights, democracy, and the rule of law. It really focuses in on what we call the socio-technical context, so the environments, the social environments in which the technology is embedded. It also allows for an initial determination of whether the system is the right approach at all, and it provides a mechanism for triaging more or less involving governance processes in light of the risks of the systems. 
There is also a stakeholder engagement process, which proposes an approach to enable engagement as appropriate for relevant stakeholders, so impacted communities, in order to sort of amplify the voices of those who are affected and to gain information regarding how they might view the impacts, and in particular contextualize and corroborate potential harms. Then there’s the third module, if you will, or the third element, a real risk and impact assessment, which is a more full-blown process to assess the risks and impacts that are related to human rights, democracy, and the rule of law in ways that really both integrate stakeholder consultation, but also really ask the how questions and try to think of downstream effects in a much more full-blown way. And then finally, there’s a kind of a mitigation planning element, which provides steps for mitigation and remedial measures that allow for access to remedy and iterative review. And as a whole, the Huderia also stresses that there’s a need for iterative revisitation of all of these processes and all of the elements of Huderia insofar as both the innovation environment, so the way that the systems are designed, developed, and deployed, both that is very dynamic and changing, but also the broader social and legal, economic, political contexts are always changing. And those changes mean that we need to be flexible and continually revisit how we’re looking at the governance process for any given system. So with that, let me now then introduce our first panel speaker, and that is Mr. William Diab, who is chair of the ISO-IEC JTC1 SC42, so just a wonderful standards development organization or set, a group of them that are doing great work on AI standards. And he’ll address the role of AI standardization in safeguarding human rights democracy as well as cover some existing and upcoming standards on these issues. So I’ll turn it over to you, Will. Go ahead.


Wael William Diab: Thank you, David, and thank you for the warm introduction. I’d like to thank you also for the invitation to present on this panel. My name is Will, and as David mentioned, I chaired the Joint Committee of ISO and IEC on Artificial Intelligence. And so I’m going to give you a brief flavor of what we do. Just to quickly acknowledge it’s not just me that does this. We have a pretty large management team, and we’ll make all of these slides available, but in the interest of time, I’m going to just jump into just what it is that we do. And so we take a look at the full ecosystem when it comes to AI. We start by looking at some of these non-technical trends and requirements, whether it’s application domains, regulatory policy, or what’s perhaps most relevant here is emerging societal requirements. Through that, we assimilate the context of use of the technologies we cover, and then what we do is we provide what we call horizontal and foundational projects on artificial intelligence. And I’ll talk a little bit more about examples, but I want to point out that the story doesn’t stop there. We have lots of sister committees in IEC and ISO that focus on the application domains themselves that leverage our standards. We work with open source communities and others. So we are part of the ISO and IEC families. Our scope is we are the focal point for the IT standardization of AI, and we help other sister committees in terms of looking at the application side. We’ve been growing quite a bit. We’ve published over 30 standards and have about 50 that are active. We have 68 countries, so the way we develop our standards is by one country, one vote principle, and about 800 unique experts that are in our system. I would also like to note that we work extensively with others. We have about 80 liaison relationships, both internal and external, and I’ll show a slide at the end. We also run a biannual workshop. The way we’re structured is we currently have 10 major subgroups, five of which are joint with other committees, and I’ll show what we do. So the first thing that’s important about understanding AI and being able to work with different stakeholders that have different needs is to have some foundational standards, and this area covers everything from common terminology and concepts, and by the way, that is a freely available standard that we do to work using AI. A lot of the work in this area has also been around enabling what we call certification and third-party certification. So I’ll show a slide at the end. The second thing that’s important to understand is that third-party audit of AI systems, so we believe that it’s important to enable this to ensure that we have broad, responsible adoption of AI. Another big area for us is around data. So data, as many people know, is at the cornerstone of a responsible and quality AI system. This original work started by looking at big data, and we completed all those projects, and then we expanded the scope to look at anything related to data and AI. And so we’re in the process of publishing a six-part multi-series. The first three have been published, and the next three should be published in this coming year around data quality for analytics in the AI space. Some of the more recent work is around synthetic data and data profiles for AI. Trustworthiness, which is very relevant to the topic at hand, as well as enabling responsible systems, is probably our largest area of work. 
The slide is a bit of an eye chart to try and read, and the reason is that we start from the fact that they are IT systems themselves, and yet with some differences from a traditional IT system, for example, in terms of the learning. And so what this allows us to do is to build on the large portfolio of standards that IEC and ISO have developed, and then extend that for the areas that are specific to AI. So one example of the work here is our AI risk management framework. This was built on the ISO 31000 series as an implementation-specific standard. But other things that you might see bolded on this chart are things that you might hear every day, so making something controllable, explainable, transparent, and what we do is then take those concepts and translate them into technical requirements. A colleague of mine had put this together to indicate where societal and ethical issues lie in terms of the direct impact versus things that are further away, and I thought it was a great slide because everything in yellow really maps into what we’re doing today. So we address societal issues in two ways. The first is through dedicated projects that are directly around this area, and again, you know, using use cases to translate from some of these non-technical requirements down to technical requirements and prescriptions on guidance, how to address them, as well as integrating it across our entire portfolio. For instance, when we look at use cases, we ask what some of the ethical and societal issues are. We don’t do this alone; we work with a number of international organizations. In terms of use cases and applications, it’s important for us to be able to provide horizontal standards, and as I mentioned, you know, we’ve collected over 185 use cases, and we’re constantly updating this document, but we also take a look at the work from the point of view of an application developer, whether it’s at the technical development side or at the deployment side, and we have standards in this area. We’ve also started to look at the environmental sustainability aspects as well as the beneficial aspects of AI and human-machine teaming. Computational methods are at the heart of AI systems, and we have a large portfolio of work here. Our more recent work has been focused around having more efficient training and modeling mechanisms. Governance implications of AI, so this is looking at it from the point of view of a decision maker, whether it be a board or an organization, and answering some of the questions that might come up. Testing of AI-based systems, this is another joint effort for us, and we have a multi-part series focused on testing, verification, and validation. In addition to the existing work, we’re looking at new ideas around things like red teaming. Health informatics is a joint effort with ISO TC215, and this is really taking us into the healthcare space, trying to assist them in building out their roadmap. In addition to the foundational project that we’ve got, we are also looking at extending the terminology and concepts for the sector, which may serve as a model for other sectors as well, as well as looking at enabling certification for the healthcare space. In terms of functional safety, this is the work around enabling functional safety, which is essential for sectors that consider safety important. This is being done jointly with IEC SC65A.
Natural language processing is around everything to do with language, and it goes beyond just text, and this is becoming increasingly important in new deployments. Last but not least, we have started a new joint working group with the ISO CASCO group that does certification and conformity assessment to look at conformity assessment schemes. Sustainability is a big area for us, both in terms of looking at the sustainability of AI and how AI can be applied to sustainability. I’m going to skip to just this slide. One of the important things is to allow this idea of third-party certification and audit in order to ensure broad, responsible adoption. This picture shows you how a lot of our standards come together. ISO-IEC 42001, which, if you’re familiar with 27001 for cybersecurity, or ISO 9001, is built around the same concepts, allows us to do this. Just quickly wrapping up, to allow time for my other co-speakers: we’re looking at the entire ecosystem, we’re growing very rapidly, we work with a lot of other organizations, and it’s easy to join. We also run a biannual workshop that typically looks at four tracks: applications (one of our recent ones was looking at transportation), beneficial AI, emerging standards, and what some of the emerging technology and requirements are. With that, I hand it back over to the moderator.


David Leslie: Thank you very much. Thanks so much, Val. That was a brilliant presentation. It just shows how much work on the concrete side of how the devil’s in the details, and we need to really work. I would say the Huderia that we’ve just adopted, this is the methodology. And as we move on in the next year or so, we’ll be working on what we call the model, which really gets into the trenches and explores some of those areas that you just presented, thinking also about the importance of alignment and ensuring the kind of standards are aligning with the way that we’re approaching this on the international governance level. So, our next speaker is Tetsushi Hirano, the Deputy Director of the Iraq Digital Policy Office at the Japanese Ministry of Internal Affairs and Communications. And Hirano-sensei will offer us his perspective on AI and its impacts on human rights and governance, both in Japan and internationally. Tetsushi, the floor is yours.


Tetsushi Hirano: Thank you, David. I’m very pleased to participate in this important session following the successful adoption of the Huderia methodology. And I sincerely hope that this pioneering work will promote this new type of approach and facilitate the accession of the interested countries to the AI cooperation. Speaking of Japan, Japan has been developing its own AI risk management framework since 2016. And this year, we released the AI Guidelines for Business, which took into account the results of the Hiroshima AI process for the advanced AI systems as well. And I see some similarities and differences between the Japanese guidelines and the similarities. Both are based on common human-centered values and also pay attention to the different contexts of AI life cycles. While Huderia provides a model of risk analysis of the application design and development deployment context, the Japanese guideline differentiates these aspects from the perspective of AI actors. Namely, the guidelines provide a detailed list of what developers and deployers and users are recommended to do with respect to our analysis. This is one of the features of our guidelines. compared to other frameworks. But despite this formal difference, Huderia and the Japanese guidelines go in the same direction in the analysis. So we are hoping to contribute to the further development of Huderia technical document plan for 2025. And the next is the difference. And this is also a strong point of the Huderia as far as I can see. And the Huderia offers a detailed analysis of right holders and effects on them. But some Japanese experts evaluate COBRA very highly, especially in view of the COBRA, which can be seen as a threshold mechanism. And it also provides a step-by-step analysis of stakeholder involvement. And I have to admit that the stakeholder involvement process presented there is demanding if some of the steps are to be implemented precisely. But this can serve as a kind of benchmark for continuous development. And the Japanese government is future framework for domestic AI regulations. And I’m sure that Huderia will be one of the key important documents to look at, especially when developing public procurement rules, for example, where the protection of the citizens’ rights is at the core of the issue. I would also like to mention interoperability, a document of which is also planned for 2025. As we all know, there are many AI risk management frameworks under development. And for example, the reporting framework based on the Hiroshima process code of conduct or EU AI Act itself has three different type management framework, and to name but a few. The interoperability document may highlight the commonalities of these frameworks, as well as their respective strengths, which can facilitate mutual learnings between them. In particular, there are documents that only address advanced AI systems and we will have to think about what kind of impact, for example, synthetic content created by generative AI can have on democracy also in the meetings of the future meetings of the AI Convention. And finally, I would like to address the future role of conferencing parties to the AI Convention. As a pioneering work in this field, Huderia is expected to become a benchmark. However, it is also important to share knowledge and the best practices with concrete examples as this type of risk and impact assessment is not yet well known. This together with the interoperability document will help interested to join this convention.


David Leslie: Thank you. Thank you so much, Tetsushi. And I’ll just say that the support of the Japanese government across this process has been absolutely essential to the innovative nature and the success of the instrument. So, just a real deep thank you there. Speaking of which, I am now, I have the pleasure of introducing Matt O’Shaughnessy, who is Senior Advisor at the U.S. Department of State’s Bureau of Democracy, Human Rights, and Labor. And I’ll just say that the past few years have really marked major strides, one might even say quantum leaps, in these approaches that the U.S. has developed, for instance, in AI risk management and governance, with key initiatives like NIST’s AI Risk Management Framework through the recent White House Office of Management and Budget Memorandum on Advancing Governance, Innovation, and Risk Management of Artificial Intelligence. So, there’s been a lot of really excellent work coming out of the public sector in the U.S. And so, Matt, I wanted to really ask you if you could talk a little bit more about these kind of national initiatives and speak a bit about how they reflect and contribute to emerging global frameworks and shared principles for AI development and use.


Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the NIST AI Risk Management Framework and the White House Office of Management and Budget Memorandum on Government Use of AI. Maybe I’ll say a few words, kind of giving an overview of each of those, and then talk kind of about how they interact and inform our international approach to AI. So, both of these documents take a similar approach. They’re both flexible, they’re both very context-aware, directed specifically at how particular AI systems are designed and used in particular contexts. And they both aim to promote innovation, of course, while also setting out concrete steps that can help effectively manage their risks. So, I guess, let me start with the NIST AI Risk Management Framework. So, this is our general risk management framework that sets out steps that are applicable to all organizations, whether they’re private entities or government agencies who are developing or using AI. So, the AI Risk Management Framework describes different actions that organizations can take to manage risks of all of their AI activities. A lot of those are relevant to respect for human rights. So, for instance, it describes both technical and other kinds of steps that can help manage harmful bias and discrimination and mitigate risks to privacy. But it also describes a lot of more general actions, things like how to establish processes for documenting the outcomes of AI systems, processes for deciding whether an AI system should be commissioned or deployed in the first place, or policies or procedures that improve accountability or increase knowledge about the risks and impacts the application of that AI system has. So, a lot of these governance-oriented actions address many of the concepts that are set out in the Council of Europe framework. And they help lay the groundwork for organizations to better consider the risks to human rights that their AI activities pose, and also address and mitigate them. As I mentioned before, the Risk Management Framework is really designed to be applied in a flexible and context-aware manner. And that’s really important. It helps ensure that the risk management steps are both well-tailored and proportionate to the specific context of use, but also that they’re effective, and that they effectively target the most salient risks that are posed by a particular system in the particular context of its use. David, you mentioned the Huderia taking a socio-technical approach, considering the social context that an AI system is developed in and deployed in, and that’s really core to the NIST Risk Management Framework as well. And I think really important to making sure that AI risk management, more generally, is effective and effectively targets the most important risks. The Risk Management Framework sets out a lot of these kind of general steps that organizations can take to manage various risks. But as I said before, it’s most effective when it’s deployed in a very context-aware manner. And to do that more effectively, it supported the development of what it calls, quote, profiles, that describe how it can be used in specific sectors, for specific AI technologies, or for specific types of end-use organizations, whether it’s like a government agency or a specific private sector entity. So one example of that that the Department of State has developed is a risk management profile for AI and human rights. And that describes specific potential human rights impacts of AI systems.
And that can help developers of AI systems better anticipate the specific human rights impacts that their AI systems could have, and help them tailor the actions that are described in the Risk Management Framework to their specific end use. And this is also where tools like the Council of Europe’s Huderia tool, the Human Rights, Democracy, Rule of Law Impact Assessment tool, can contribute and be most effective. So, you know, a lot of the kind of key risk management steps that the Huderia sets out are similar to those in the NIST AI Risk Management Framework. But the Huderia provides more detail on actions that are particularly relevant to human rights and democracy. Things like, you know, engaging stakeholders to make sure that organizations are aware of the human rights impacts their systems could have, or establishing mechanisms for remedy. So as Tetsushi mentioned, the detailed resources that will be negotiated and developed next year will be particularly helpful in kind of offering this insight for organizations who are applying risk management tools that already exist, but are looking for more detailed references or resources to help them specifically look at human rights impacts in contexts where those are particularly salient. Okay, so that’s our framework, which again applies to kind of all organizations. And again, it’s kind of a very flexible, context-oriented tool. You also asked about our White House Office of Management and Budget memorandum on governance, innovation, and risk management for agency use of AI. So this is the set of binding rules for covered government agencies that use AI, and it sets out similarly key risk management actions that government agencies who are developing or using AI systems must follow in their AI activities. So this memo was released in March of 2024. You can look it up online. It’s M. And it was in fulfillment of the AI in Government Act of 2020. And even though it was developed by this administration, it builds on work that was started in the previous administration, such as a December 2020 executive order called Promoting the Use of Trustworthy AI in the Federal Government. So it sets out a lot of bipartisan priorities. This memo, again, kind of reflects our broader approach in the United States to AI governance. It’s meant to be tailored to advance innovation, make sure that we’re using AI in ways that benefit citizens and the public at large, but also make sure that we set the example in managing and addressing the risks of AI. This guidance aligns with a lot of the provisions that were set out in the Council of Europe’s AI Convention, and I’ll just give you a quick overview of some of those key aspects. So it establishes some AI governance structures in federal agencies, like chief AI officers or governance boards, that promote accountability, documentation, transparency. It sets out some key risk management practices, especially for AI systems that are determined to be what we call safety-impacting or rights-impacting. Those include steps for things like risk evaluation or assessments of the quality of an AI data set that’s used for training or testing, ongoing testing and monitoring steps, training and oversight for human operators, assessments and mitigations of harmful bias, and engagement with affected communities for rights-impacting AI systems. So, again, just kind of some key risk management steps that are mandated for government AI systems.
And we see those as really instrumental for managing impacts on human rights. You know, things like AI systems that are used in law enforcement contexts or related to critical government services, determining whether someone is eligible for benefits, which we would label as rights-impacting and apply these, you know, kind of key risk management steps that are set out in this memorandum. So those are kind of our two key domestic policies that set out AI risk management practices. And in terms of the international implications of these, both of them were informed by international best practices, looking to work done by other countries and international organizations. The NIST AI Risk Management Framework had extensive international multi-stakeholder consultations. It’s at version 1.0 right now and is intended to be updated over the years, so there’ll be, you know, kind of continuing conversation between these domestic efforts and best practices that are being set out and developed internationally. And in turn, both of these domestic products inform our international work. So both the Council of Europe’s Huderia and recent OECD projects have drawn from the AI Risk Management Framework. It’s informed the work of standards developing organizations like ISO and IEC. And others are continuing to work with NIST to develop crosswalks of their own domestic guidelines with the RMF, which helps ease compliance and aid interoperability. So both of these things kind of lay the groundwork for all of our international work on safe, secure and trustworthy AI, whether it’s in the Council of Europe’s AI Convention, whether it’s our UN General Assembly resolution on AI or our Freedom Online Coalition joint statement on responsible government practices for AI. And, you know, we’re looking forward, over the next couple of years, to continuing that engagement as the conversation on AI risk management continues to develop. I’ll turn it back over to you, David. Thanks again.


David Leslie: Thanks, Matt. And also just to say, Matt's presence in Strasbourg has been a huge boon as we've tried to develop the Huderia over the months and years, so thank you for that continuing commitment to the process. I think it's been really important to have everybody speak and share insights in the room at the Council of Europe. So I'd like to now introduce Clara Neppel, who is a Senior Director at IEEE. You're at the very forefront of driving initiatives that address the ethical and societal implications of emerging technologies. IEEE, one of the world's largest technical organizations, has been instrumental in developing frameworks and standards for responsible use for a number of years now, and it has always had a strong focus on risk management. IEEE's work on risk management provides practical tools and methodologies to ensure that the AI systems being developed are robust, fair and aligned with societal values. So Clara will share with us insights into this work and into how it's contributing to the broader AI governance ecosystem. And I think you're there, Clara, in person. So go ahead. Yes, yes.


Clara Neppel: Thank you. Thank you, David. Thank you also for the kind introduction. Yes, we were also very active in the Council of Europe as well as in the OECD and other international organizations. And maybe one of the critical aspects here is that IEEE is not only a standard-setting organization, but also, as you mentioned, an association of technologists, which permits us to be quite early in identifying risks. And maybe this is also the reason why we were among the first to start working on what we call ethically aligned design in 2016, which permitted us to come up with some concrete instruments, like standards and certifications, quite early. And what I would like to share with you now are some practical lessons learned, which I think are important for implementing human rights in technical systems, in AI systems. So the first lesson learned is really that we need the time and we need the stakeholders. Even if we think that some concepts like transparency or fairness are already quite well defined, you might be surprised. I'm also co-chair of the OECD expert group on AI and privacy, and both ecosystems have a very clear understanding of what transparency means or what fairness means, but these understandings are very different. For privacy professionals, for instance, transparency is about the transparency of data collection, while on the AI expert side it's really about how the decisions of the systems are made understandable. So this is just one example. And one of our most used standards right now, IEEE 7000, took this time: it took five years to develop before the standard was published. And since then, there are a lot of lessons that we would like to share, because it has really been deployed worldwide. The second lesson I would like to share with you is that we need skills. The skills that we need are not only related to technology, but also to ethics, and we were investing in this right from the beginning. We have not only systems certification, but also personal certification of assessors, and we can say now that we have more than 200 assessors worldwide that are certified by IEEE. We have a training program which reaches from Dubai, as I just heard today, to South Korea, and obviously across Europe. So we have this worldwide network of assessors that also have a certified, shared understanding of what human rights and ethics mean. And third, and I think this is the most important: once we have these standards instruments, and we have the skills and the people that can implement them, we can build very strong ecosystems. Without that, you are still working in isolation; you need these ecosystems. I can give you the example of Austria, because our European office is based in Vienna. We have now, starting from the city of Vienna, so from public services, to data hubs in Tirol, for instance, that are built on this basis, which means that already the data governance is according to ethical principles. And then all the applications that are running on this data hub are also required to fulfill the same requirements. And this permits us to have these ecosystems, which, in the end, are the foundation of what we want to achieve with human rights. As far as the Huderia methodology is concerned, the standard took a human rights-first approach. 
And this was also acknowledged by the Joint Research Center of the European Commission, which analyzed existing standards with respect to human rights and acknowledged that IEEE standards are very close to what is being required in that regard. It is about stakeholder engagement, if you want, so it's about the recipe for how to engage stakeholders and how to understand the values of those stakeholders. And I would like to bring in an aspect which I think is very often not seen. Very often we are focusing on transparency, on fairness and so on, but there are human rights that are not in the existing frameworks, like dignity. In IEEE 7000, all these aspects, all these values, are analyzed, because it's a risk-based approach. Then there is a clear methodology on how to mitigate those risks by translating them into concrete system requirements or organizational measures. So this is about the design phase, and it is complemented by a certification method, which looks at existing systems and assesses them along different aspects such as transparency and accountability. Last but not least, I would like to mention that we are now also in the process of scaling the certification system. We are working with VDE from Germany and Positive AI from France to develop an AI trust label, which would include the seven aspects of human agency and oversight, technical robustness and safety, privacy, transparency, diversity, and societal and environmental well-being. Just on the last one, environmental well-being: we have just started a working group on the environmental impact of AI to clearly define the metrics used for environmental impact, including inference costs, and covering not only energy but also, for instance, data usage. We are doing this together with the OECD. So I think that's a first overview of what we're doing. Thank you.


David Leslie: Thanks, Clara. And it's really important to note here as well that making these approaches usable for people is such a priority. One of the things that lies ahead of us is really making the full range of human rights accessible to people and being able to translate them so that people can actually pick up the various approaches to risk management and, if you will, operationalize a concrete approach to understanding and assessing the impacts on those rights. So I'll now introduce Mr. Myoung Shin Kim, who is Principal Policy Officer at LG AI Research and an IEEE-certified AI professional. LG AI Research focuses on innovation in AI that is responsible and that is developed and deployed safely and ethically. An important dimension of that is risk governance, addressing bias mitigation, and ensuring transparency and accountability. So, Mr. Kim, I'm wondering if you could share LG AI Research's perspective specifically on AI risk governance. How does your organization approach managing these risks? And what do you believe an ideal framework for AI risk governance should look like? Right.


Myoung Shin Kim: Thank you very much for inviting me to this meaningful discussion. Today, I will share how LG AI Research is translating our AI ethics principles into tangible action, focusing on AI risk governance. …about LG AI Research. Established four years ago, our mission is to provide advanced AI technologies and capabilities to LG affiliates, such as LG Electronics and LG Chemical. One of our landmark achievements is the development of XR1, a generative AI model capable of understanding and creating content in both Korean and English. XR1 has achieved performance on par with global benchmarks, demonstrating its competitive edge in the international AI landscape. Just last week, we released XR1 3.5 as an open-source language model, contributing to the development of the AI research ecosystem. Beyond AI technology, LG AI Research places a strong emphasis on adhering to AI ethics throughout the entire lifecycle of the AI system. Since XR1, LG has officially announced its AI ethics principles with five core values: humanity, fairness, safety, accountability, and transparency. But more important than principles is putting them into practice. So we employ three strategic pillars to ensure adherence to our AI ethics principles, namely governance, research, and engagement. Let me explain each in detail. First of all, we conduct an AI ethical impact assessment for every project to identify and address potential risks across the AI lifecycle. It consists of three steps: analyzing project characteristics, setting problem-solving practices, and verifying research and documentation. When risks or problems are identified, we establish specific solutions, assign responsibilities to designated personnel, and set deadlines for resolving the issues. The entire AI ethical impact assessment process and its outcomes are attached to the final report when the project closes in our project management system. A unique aspect of our approach is the involvement of a cross-functional task force. This brings together researchers in charge of technology, business, and AI ethics, each contributing their specialized knowledge and diverse perspectives. From a human rights perspective, we pay special attention to some key questions during the AI ethical impact assessment, for example, which groups are included among the affected stakeholders, and whether there is any possibility of intentional or unintentional misuse of the AI system by users. Additionally, we educate data workers about the Universal Declaration of Human Rights and the Sustainable Development Goals, providing guidelines to respect, protect, and promote human rights during data production or the data-steaming process. As you know, generative AI models sometimes produce inaccurate information, known as hallucinations, due to misinformation. To address this issue, we have developed AI models that generate answers based on factual information and evidence. Additionally, we are continually researching unlearning techniques to selectively delete personal information that was unintentionally used during the training process. Considering that AI is ultimately created by humans, I think it is also important to assess the level of human rights sensitivity among our researchers. For this reason, every spring, LG AI Research conducts an AI ethics awareness survey to assess and improve adherence to our AI ethics principles. I am personally pleased to see that the gap between awareness and practice has narrowed this spring compared to last year. 
Additionally, we hold an AI ethics seminar bi-weekly to boost interest and participation in AI ethics. For AI ethics to take root in our society, I believe citizens' AI literacy must improve. Additionally, if high-quality AI education is not evenly provided, existing economic and social disparities may widen. To address this issue, we provide a customized AI education program to over 40,000 youth, college students, and workers annually. Our curriculum includes AI ethics to help citizens grow into more mature users and also critical watchdogs in the AI market. And our efforts are expanding beyond Korea to the global level. We are collaborating with UNESCO to develop online educational content for AI ethics targeting researchers, developers, and policy makers. The final MOOC will be held worldwide by early 2026. Lastly, every January we publish a report compiling all the outcomes and lessons learned from implementing our AI ethics principles. These reports illustrate how we are implementing not only our own AI ethics principles, but also UNESCO's Recommendation on the Ethics of AI and South Korea's national AI ethics guidelines. We hope this can serve as a reference for others' AI ethics implementation approaches. The next report is scheduled to be published at the end of January, next month, and will be available on our homepage. So if you have interest, please check it out. Thank you for your attention. Thanks so much for all that great information, Dr. Kim. Now, in the interest of time,


David Leslie: I'm just going to go right to introducing Heramb Podar, who is at the Center for AI and Digital Policy, CAIDP, and is also Executive Director of ENCODE India. Now, in particular, CAIDP has been a vocal advocate for the development and implementation of strong governance frameworks that prioritize transparency, accountability, and fairness in the production and use of AI systems. It's an organization that's also deeply engaged in policy analysis and stakeholder collaboration to safeguard human rights and democratic principles in the face of rapid technological transformation. So, Heramb, given your work with CAIDP, could you share some thoughts on how NGOs can contribute to creating good governance guardrails for AI? In particular, what do you see as the critical steps for ensuring that AI systems are designed and deployed in ways that uphold societal values and human rights? And you are there in the room, if I'm not mistaken.


Heramb Podar: Yes, I am. I hope you can hear me. Thank you for the opportunity to speak. CAIDP has been, indeed, a very vocal advocate. All of the work we do is grounded in policies to uphold human rights, democracy, and the rule of law. Ultimately, for NGOs, it's all about advocacy through engagement with due process, in terms of the public voice opportunities which might come up, and bringing in as much of the public voice as possible. Just a few minutes ago, my co-speaker was speaking about how all rights are not always covered; sometimes there are contexts which are overlooked, unfortunately. So CSOs and NGOs can be that bridge between the implementation, the on-the-ground risks and how the public is feeling, and the policies that are being developed, whether at the COE or in the NIST frameworks and so on. To highlight specific actions CAIDP has taken: we have been very vocal in advocating for the ratification of the Council of Europe AI Treaty. We think it prevents global fragmentation and aligns national policies to global standards, and we have recently released statements to the South African presidency of the G20 and to the U.S. Senate urging ratification of the treaty. And bringing in voices, as I was talking about earlier: one of our key members in our global academic network is Encode Justice, which is a youth organization focused on AI risks, making sure that AI works for everyone and that AI is safe, so that future generations do not inherit malicious AI that might impact human rights. Quickly jumping to specific actions in terms of design and development, which was a very interesting question: at CAIDP, we have something called the Universal Guidelines on AI. We just recently celebrated the sixth anniversary of the UGAI principles, as we like to call them, and what we would like to see most is clear red lines in whatever policies governments put out, prohibiting use cases that are not based on scientific validity, or use cases that might adversely impact certain groups or impact human rights. We see some early examples of high-risk use cases, for example, in the EU AI Act, things like biometric surveillance or social scoring and so on. What would be exciting to see is ex-ante impact assessments, and proper transparency and explainability across the AI life cycle, from design to decommissioning. Ultimately, having whistleblower protections: we're seeing an increasing race to build better AI systems, and we find it very necessary for there to be certain guardrails and certain whistleblower protections so that people can speak their mind. And in specific use cases, like autonomous weapon systems, having termination obligations, which is another one of the cornerstones of our UGAI principles, and having human oversight. We release something called the Artificial Intelligence and Democratic Values Report on an annual basis, which is the world's most comprehensive coverage of national AI policies, and we rank countries according to their metrics. And something we saw, very interestingly, was with the UNESCO Recommendation on the Ethics of AI, where countries are really slow in implementing it, and this also brings to light the global digital divide. 
A lot of Global South countries are particularly playing catch-up. Countries are not getting to submit their readiness assessment methodologies to UNESCO, which is our key indicator for implementation. So, again, coming back to the original question, NGOs have a role to play in making sure that countries, companies, and other sectors not only make these commitments, but also follow through with action, not just rooted in words, which can be interpreted differently, but actually grounded in some sort of principles or metrics. Yeah, and I'll end it here. Thank you so much, Heramb. It's really great to hear that this needs to be a multilateral effort, and that NGOs need to play a central role as we develop the governance instruments. So, I'll just say that it's been amazing to hear


David Leslie: about all of this innovative work that's been done in standards development organizations and at the state level. The work of the Council of Europe, I think, has been out ahead on many things, and hearing about all this innovative work really reminds me that we talk a lot about "move fast and break things", right? But on our end of things, it's more about moving fast and saving things: we need to be out in front of some of the ways these technologies are developing. So, to close, I want to turn back to Smera and ask if you have any closing observations. Yes, all I would say is that it's so fantastic to hear from everyone who's joined us here today. So many excellent points about stakeholder engagement, the role of civil society being a part of it, being ahead of the curve in identifying some of those risks, and skills development as well, which was mentioned. All of this builds a really good and strong ecosystem, and when you use tools like the Huderia methodology in this space to identify risks and introduce impact mitigation measures, you can, as you said, David, move fast and save things. So, on that note, I'll hand back to you. Okay, wonderful. So, just again, one more thank you to all of our speakers. We are striving to finish on time, and thank you so much for all the important comments and information that were shared today. I wish you well from the southeast of England, and I hope those of you who are physically there in Riyadh have a nice time at the rest of the IGF. Take care. Thank you.



David Leslie

Speech speed

138 words per minute

Speech length

2145 words

Speech time

928 seconds

Huderia methodology for AI risk assessment

Explanation

The Huderia methodology is a unique anticipatory approach to AI governance. It focuses on four fundamental elements: context-based risk analysis, stakeholder engagement, risk and impact assessment, and mitigation planning.


Evidence

Adopted by the Council of Europe, includes modules for risk analysis, stakeholder engagement, impact assessment, and mitigation planning


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Stakeholder engagement in AI impact assessment

Explanation

The Huderia methodology emphasizes the importance of stakeholder engagement in AI impact assessment. It proposes an approach to enable engagement with relevant stakeholders, including impacted communities.


Evidence

Aims to amplify voices of affected communities and gain information on how they view potential impacts


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment



Wael William Diab

Speech speed

138 words per minute

Speech length

1441 words

Speech time

624 seconds

ISO/IEC standards for AI systems

Explanation

ISO/IEC JTC1 SC42 is developing standards for the full AI ecosystem. These standards cover various aspects including non-technical trends, requirements, and horizontal and foundational projects on artificial intelligence.


Evidence

Over 30 published standards, about 50 active projects, 68 participating countries, and 800 unique experts involved


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Data quality standards for AI systems

Explanation

ISO/IEC is developing standards for data quality in AI systems. This includes a six-part multi-series on data quality for analytics in the AI space.


Evidence

First three parts of the data quality series have been published, with the next three scheduled for publication in the coming year


Major Discussion Point

Human Rights Considerations in AI Development



Tetsushi Hirano

Speech speed

131 words per minute

Speech length

576 words

Speech time

262 seconds

Japanese AI Guidelines for Business

Explanation

Japan has developed AI Guidelines for Business, taking into account the results of the Hiroshima AI process for advanced AI systems. The guidelines differentiate aspects of AI from the perspective of AI actors, providing detailed recommendations for developers, deployers, and users.


Evidence

Guidelines provide a detailed list of recommendations for developers, deployers, and users


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Differed with

Matt O’Shaughnessy


Differed on

Approach to AI risk assessment frameworks


Detailed analysis of rights holders in Huderia

Explanation

The Huderia methodology offers a detailed analysis of rights holders and effects on them. It provides a step-by-step analysis of stakeholder involvement, which is seen as a benchmark for continuous development.


Evidence

Japanese experts evaluate COBRA (part of Huderia) highly, especially as a threshold mechanism


Major Discussion Point

Human Rights Considerations in AI Development


Interoperability between AI frameworks

Explanation

There is a need for interoperability between different AI risk management frameworks. An interoperability document planned for 2025 may highlight commonalities of these frameworks and their respective strengths.


Evidence

Mentions various frameworks like the Hiroshima process code of conduct and EU AI Act


Major Discussion Point

International Cooperation on AI Governance



Matt O’Shaughnessy

Speech speed

163 words per minute

Speech length

1461 words

Speech time

536 seconds

NIST AI Risk Management Framework

Explanation

The NIST AI Risk Management Framework is a general risk management framework applicable to all organizations developing or using AI. It describes actions organizations can take to manage risks of their AI activities, including those relevant to human rights.


Evidence

Framework describes technical steps to manage harmful bias, discrimination, mitigate privacy risks, and improve accountability


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Clara Neppel


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Differed with

Tetsushi Hirano


Differed on

Approach to AI risk assessment frameworks


Human rights impact assessments for government AI use

Explanation

The White House Office of Management and Budget memorandum sets out binding rules for government agencies using AI. It mandates key risk management actions, particularly for AI systems determined to be safety-impacting or rights-impacting.


Evidence

Includes steps for risk evaluation, data quality assessment, ongoing testing and monitoring, and engagement with affected communities


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Clara Neppel


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment


U.S. domestic AI policies informing international work

Explanation

U.S. domestic AI policies, such as the NIST AI Risk Management Framework, inform international work on AI governance. These domestic products have influenced international initiatives and standards.


Evidence

Council of Europe’s Huderia and OECD projects have drawn from the AI Risk Management Framework


Major Discussion Point

International Cooperation on AI Governance


Context-aware application of risk management frameworks

Explanation

The NIST AI Risk Management Framework is designed to be applied in a flexible and context-aware manner. This approach ensures that risk management steps are well-tailored and proportionate to the specific context of use.


Evidence

Framework supported by ‘profiles’ that describe how it can be used in specific sectors, for specific AI technologies, or for specific types of end-use organizations


Major Discussion Point

Practical Implementation of AI Ethics



Clara Neppel

Speech speed

133 words per minute

Speech length

932 words

Speech time

419 seconds

IEEE standards for ethically aligned AI design

Explanation

IEEE has been developing standards for responsible use of AI with a strong focus on risk management. Their work provides practical tools and methodologies to ensure AI systems are robust, fair, and aligned with societal values.


Evidence

IEEE 7000 standard took five years to develop and has been widely deployed


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Myoung Shin Kim


Agreed on

Importance of AI risk management frameworks


Incorporating human rights principles in AI standards

Explanation

IEEE standards take a human rights first approach in AI development. Their standards are acknowledged to be very close to what is required with respect to human rights.


Evidence

Acknowledgment by the Joint Research Center of the European Commission


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Matt O’Shaughnessy


Myoung Shin Kim


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment


IEEE’s global network of AI ethics assessors

Explanation

IEEE has developed a global network of certified AI ethics assessors. This network helps in implementing and assessing adherence to AI ethics principles worldwide.


Evidence

More than 200 certified assessors worldwide, training programs from Dubai to South Korea


Major Discussion Point

International Cooperation on AI Governance


Building ecosystems to implement ethical AI standards

Explanation

IEEE emphasizes the importance of building strong ecosystems to implement ethical AI standards. These ecosystems involve various stakeholders and ensure that AI systems adhere to ethical principles from data governance to application development.


Evidence

Example of ecosystem in Austria, from city of Vienna public services to data hubs in Tirol


Major Discussion Point

Practical Implementation of AI Ethics



Myoung Shin Kim

Speech speed

111 words per minute

Speech length

774 words

Speech time

416 seconds

LG AI Research’s approach to AI ethics and risk governance

Explanation

LG AI Research has developed an approach to AI ethics and risk governance based on five core values: humanity, fairness, safety, accountability, and transparency. They employ three strategic pillars: governance, research, and engagement.


Evidence

Development of XR1, a generative AI model, and implementation of AI ethics principles


Major Discussion Point

AI Governance Frameworks and Standards


Agreed with

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Agreed on

Importance of AI risk management frameworks


Educating data workers on human rights

Explanation

LG AI Research educates data workers about the Universal Declaration of Human Rights and the Sustainable Development Goals. They provide guidelines to respect, protect, and promote human rights during data production or data-steaming process.


Major Discussion Point

Human Rights Considerations in AI Development


LG AI Research’s collaboration with UNESCO

Explanation

LG AI Research is collaborating with UNESCO to develop online educational content for AI ethics. This initiative targets researchers, developers, and policymakers globally.


Evidence

Final MOOC planned to be held worldwide by early 2026


Major Discussion Point

International Cooperation on AI Governance


LG’s AI ethics impact assessment process

Explanation

LG AI Research conducts an AI ethics impact assessment for every project to identify and address potential risks across the AI lifecycle. This process involves a cross-functional task force bringing together researchers from technology, business, and AI ethics.


Evidence

Three-step process: analyzing project characteristics, setting problem-solving practice, and verifying research and documentation


Major Discussion Point

Practical Implementation of AI Ethics


Agreed with

David Leslie


Matt O’Shaughnessy


Clara Neppel


Heramb Podar


Agreed on

Stakeholder engagement in AI impact assessment



Heramb Podar

Speech speed

154 words per minute

Speech length

753 words

Speech time

292 seconds

NGO advocacy for human rights in AI governance

Explanation

NGOs like CAIDP play a crucial role in advocating for human rights in AI governance. They act as a bridge between on-ground risks, public sentiment, and policy development.


Evidence

CAIDP’s advocacy for the ratification of the Council of Europe AI Treaty


Major Discussion Point

Human Rights Considerations in AI Development


Agreed with

David Leslie


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Agreed on

Stakeholder engagement in AI impact assessment


CAIDP’s advocacy for ratification of AI treaties

Explanation

CAIDP advocates for the ratification of international AI treaties to prevent global fragmentation and align national policies with global standards. They have released statements urging various countries and organizations to ratify the Council of Europe AI Treaty.


Evidence

Statements released to the South African presidency for the G20 and to the U.S. Senate


Major Discussion Point

International Cooperation on AI Governance


Need for clear prohibitions on high-risk AI use cases

Explanation

CAIDP advocates for clear red lines in AI policies, prohibiting use cases that are not based on scientific validity or that might adversely impact certain groups or human rights. They call for ex-ante impact assessments and proper transparency across the AI lifecycle.


Evidence

Examples of high-risk use cases in the EU AI Act, such as biometric surveillance or social scoring


Major Discussion Point

Practical Implementation of AI Ethics


Agreements

Agreement Points

Importance of AI risk management frameworks

speakers

David Leslie


Wael William Diab


Tetsushi Hirano


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


arguments

Huderia methodology for AI risk assessment


ISO/IEC standards for AI systems


Japanese AI Guidelines for Business


NIST AI Risk Management Framework


IEEE standards for ethically aligned AI design


LG AI Research’s approach to AI ethics and risk governance


summary

All speakers emphasized the importance of developing and implementing comprehensive AI risk management frameworks to ensure responsible AI development and deployment.


Stakeholder engagement in AI impact assessment

speakers

David Leslie


Matt O’Shaughnessy


Clara Neppel


Myoung Shin Kim


Heramb Podar


arguments

Stakeholder engagement in AI impact assessment


Human rights impact assessments for government AI use


Incorporating human rights principles in AI standards


LG’s AI ethics impact assessment process


NGO advocacy for human rights in AI governance


summary

Multiple speakers highlighted the importance of involving stakeholders, including affected communities, in AI impact assessments to ensure comprehensive consideration of potential risks and impacts.


Similar Viewpoints

Both speakers emphasized the importance of applying AI risk management frameworks in a context-specific manner, taking into account the unique ecosystems and environments in which AI systems are deployed.

speakers

Matt O’Shaughnessy


Clara Neppel


arguments

Context-aware application of risk management frameworks


Building ecosystems to implement ethical AI standards


These speakers highlighted the importance of aligning national and international AI governance efforts to ensure consistency and prevent fragmentation in global AI governance.

speakers

Tetsushi Hirano


Matt O’Shaughnessy


Heramb Podar


arguments

Interoperability between AI frameworks


U.S. domestic AI policies informing international work


CAIDP’s advocacy for ratification of AI treaties


Unexpected Consensus

Education and skill development for AI ethics

speakers

Clara Neppel


Myoung Shin Kim


arguments

IEEE’s global network of AI ethics assessors


Educating data workers on human rights


explanation

Both speakers from different sectors (standards organization and private company) emphasized the importance of education and skill development in AI ethics, which was an unexpected area of focus given the primarily policy-oriented discussion.


Overall Assessment

Summary

The speakers showed strong agreement on the need for comprehensive AI risk management frameworks, stakeholder engagement in impact assessments, and the importance of aligning national and international AI governance efforts.


Consensus level

High level of consensus among speakers, indicating a shared understanding of key challenges and approaches in AI governance. This consensus suggests potential for collaborative efforts in developing and implementing AI governance frameworks across different sectors and jurisdictions.


Differences

Different Viewpoints

Approach to AI risk assessment frameworks

speakers

Tetsushi Hirano


Matt O’Shaughnessy


arguments

Japanese AI Guidelines for Business


NIST AI Risk Management Framework


summary

While both speakers discuss AI risk assessment frameworks, they present different approaches. Hirano focuses on the Japanese AI Guidelines for Business, which differentiates aspects from the perspective of AI actors, while O’Shaughnessy emphasizes the NIST framework’s flexible and context-aware application.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches and frameworks for AI risk assessment and governance, with different organizations and countries presenting their own methodologies.


difference_level

The level of disagreement among the speakers is relatively low. Most speakers present complementary rather than conflicting views, focusing on their respective organizations’ or countries’ approaches to AI governance. This suggests a general alignment in recognizing the importance of AI ethics and risk management, but with variations in implementation strategies. The implications are that while there is a shared goal of responsible AI development, there may be challenges in creating a unified global approach due to these differing methodologies.


Partial Agreements

Partial Agreements

Both speakers agree on the importance of implementing ethical AI standards, but they differ in their approaches. Neppel emphasizes building ecosystems and a global network of assessors, while Kim focuses on internal processes and education within LG AI Research.

speakers

Clara Neppel


Myoung Shin Kim


arguments

IEEE standards for ethically aligned AI design


LG AI Research’s approach to AI ethics and risk governance


Similar Viewpoints

Both speakers emphasized the importance of applying AI risk management frameworks in a context-specific manner, taking into account the unique ecosystems and environments in which AI systems are deployed.

speakers

Matt O’Shaughnessy


Clara Neppel


arguments

Context-aware application of risk management frameworks


Building ecosystems to implement ethical AI standards


These speakers highlighted the importance of aligning national and international AI governance efforts to ensure consistency and prevent fragmentation in global AI governance.

speakers

Tetsushi Hirano


Matt O’Shaughnessy


Heramb Podar


arguments

Interoperability between AI frameworks


U.S. domestic AI policies informing international work


CAIDP’s advocacy for ratification of AI treaties


Takeaways

Key Takeaways

Multiple AI governance frameworks and standards are being developed by different organizations globally, including Huderia, ISO/IEC, NIST, IEEE, and country-specific guidelines.


Human rights considerations are becoming increasingly important in AI development and governance, with a focus on stakeholder engagement, impact assessments, and data quality.


International cooperation and interoperability between different AI governance frameworks is crucial for effective global AI governance.


Practical implementation of AI ethics requires context-aware application of risk management frameworks, ecosystem building, and clear prohibitions on high-risk AI use cases.


NGOs and civil society organizations play a vital role in advocating for human rights in AI governance and bridging the gap between policy development and on-the-ground risks.


Resolutions and Action Items

Continue development of the Huderia technical document plan for 2025


Develop interoperability document for AI risk management frameworks by 2025


LG AI Research to publish annual report on AI ethics implementation in January


UNESCO and LG AI Research to develop online educational content for AI ethics by early 2026


Unresolved Issues

How to effectively address the global digital divide in AI governance implementation


Balancing innovation with responsible AI development and use


Addressing potential impacts of synthetic content created by generative AI on democracy


Ensuring consistent implementation of AI ethics recommendations across different countries


Suggested Compromises

Flexible and context-aware application of AI risk management frameworks to balance innovation and risk mitigation


Collaboration between public and private sectors in developing AI governance approaches


Incorporating diverse stakeholder perspectives in AI impact assessments to address varied concerns


Thought Provoking Comments

The Huderia itself that has been developed through the activities of the Committee on Artificial Intelligence and all the member states and observer states, it really is a unique anticipatory approach to the governance of the design, development, and deployment of AI systems that anchors itself in basically four fundamental elements.

speaker

David Leslie


reason

This comment introduces the core structure of the Huderia methodology, highlighting its comprehensive and forward-looking approach to AI governance.


impact

It set the stage for the entire discussion by outlining the key elements of Huderia, providing a framework for subsequent speakers to relate their work and perspectives to.


One of the important things is to allow this idea of a third-party certification and audit in order to ensure broad responsible adoption.

speaker

Wael William Diab


reason

This insight emphasizes the critical role of independent verification in ensuring responsible AI adoption, introducing a key governance mechanism.


impact

It shifted the conversation towards the importance of standardization and certification in AI governance, prompting discussion on practical implementation of ethical principles.


As a pioneering work in this field, Huderia is expected to become a benchmark. However, it is also important to share knowledge and the best practices with concrete examples as this type of risk and impact assessment is not yet well known.

speaker

Tetsushi Hirano


reason

This comment highlights both the potential of Huderia and the need for practical implementation guidance, addressing a crucial gap in current AI governance efforts.


impact

It prompted consideration of how to make abstract governance principles more concrete and actionable, influencing subsequent discussions on implementation and best practices.


We need the time and we need the stakeholders. Even if we think that some of the concepts like transparency or fairness are already quite defined, you might be surprised.

speaker

Clara Neppel


reason

This insight underscores the complexity of defining and implementing ethical AI concepts, emphasizing the need for diverse stakeholder engagement and iterative development.


impact

It deepened the conversation by highlighting the challenges in operationalizing ethical principles, leading to discussions on the importance of multi-stakeholder collaboration and ongoing refinement of governance approaches.


For AI ethics to take root in our society, I believe citizens’ AI literacy must improve. Additionally, if high-quality AI education is not evenly provided, existing economic and social disparities may widen.

speaker

Myoung Shin Kim


reason

This comment introduces the crucial aspect of public education and literacy in AI ethics, linking it to broader societal issues of equality and fairness.


impact

It broadened the scope of the discussion to include the role of public education in AI governance, prompting consideration of how to engage and empower the general public in AI ethics discussions.


Overall Assessment

These key comments shaped the discussion by progressively expanding the scope of AI governance considerations. Starting from the structural framework of Huderia, the conversation evolved to cover practical implementation challenges, the need for standardization and certification, the importance of stakeholder engagement, and the role of public education. This progression highlighted the multifaceted nature of AI governance, emphasizing the need for comprehensive, collaborative, and adaptable approaches that consider both technical and societal aspects of AI development and deployment.


Follow-up Questions

How can the Huderia methodology be further developed and refined?

speaker

David Leslie


explanation

David mentioned that as they move forward in the next year, they will be working on what they call ‘the model’, which will explore some areas in more detail. This suggests a need for further development of the Huderia methodology.


How can interoperability between different AI risk management frameworks be improved?

speaker

Tetsushi Hirano


explanation

Tetsushi mentioned the need for an interoperability document that highlights commonalities between different frameworks and their respective strengths. This is important for facilitating mutual learning and potentially easing compliance across different standards.


How can knowledge and best practices of AI risk and impact assessment be shared more effectively?

speaker

Tetsushi Hirano


explanation

Tetsushi emphasized the importance of sharing knowledge and best practices with concrete examples, as this type of risk and impact assessment is not yet well known. This is crucial for helping interested parties join the AI Convention.


How can we better address the impacts of synthetic content created by generative AI on democracy?

speaker

Tetsushi Hirano


explanation

Tetsushi highlighted the need to consider the impact of synthetic content created by generative AI on democracy in future meetings of the AI Convention. This is an emerging area of concern that requires further research and discussion.


How can we improve the implementation of AI ethics recommendations globally, particularly in Global South countries?

speaker

Heramb Podar


explanation

Heramb noted that many countries, especially in the Global South, are slow in implementing AI ethics recommendations. This highlights a need for research into effective implementation strategies and addressing the global digital divide in AI governance.


How can we develop more effective metrics for assessing countries’ implementation of AI ethics and governance frameworks?

speaker

Heramb Podar


explanation

Heramb mentioned the need for grounded principles or metrics to assess countries’ follow-through on AI ethics commitments. This suggests a need for research into developing more robust assessment methodologies.


How can we improve AI literacy among citizens to ensure they can be mature users and critical watchdogs in the AI market?

speaker

Myoung Shin Kim


explanation

Myoung Shin emphasized the importance of improving citizens’ AI literacy to help AI ethics take root in society. This suggests a need for research into effective AI education strategies for the general public.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #43 States and Digital Sovereignty: Infrastructural Challenges

WS #43 States and Digital Sovereignty: Infrastructural Challenges

Session at a Glance

Summary

This workshop focused on digital sovereignty and infrastructure challenges in the context of global digital transformation. Speakers from various countries and organizations discussed different perspectives on digital sovereignty, ranging from state-centric approaches to more inclusive, multi-stakeholder models. The discussion highlighted the importance of digital public infrastructure (DPI) in enabling countries to exercise greater control over their digital assets and services.

Key topics included the development of sovereign digital infrastructures, the role of open-source technologies, and the importance of data localization and protection. Speakers emphasized the need for countries to balance autonomy with international cooperation, particularly in regions facing infrastructure limitations. The Brazilian AI plan was presented as an example of national efforts to boost technological capabilities and reduce dependencies.

Challenges such as meaningful connectivity, especially in Global South countries, were identified as crucial factors affecting the success of digital sovereignty initiatives. The debate also touched on the role of private sector involvement in DPI development and the need for regulatory frameworks to ensure accountability and ethical use of technologies.

Participants discussed the potential for regional cooperation in building digital infrastructures, while also addressing concerns about how digital sovereignty might be used as a geopolitical tool. The importance of interoperability and cross-border collaboration was stressed, particularly in the context of emerging technologies like AI.

Overall, the workshop underscored the complex nature of digital sovereignty, highlighting the need for nuanced approaches that consider diverse national contexts while fostering international cooperation and inclusive development in the digital realm.

Keypoints

Major discussion points:

– Different conceptions and layers of digital sovereignty, from state-level to personal and common sovereignty

– The role of digital public infrastructure (DPI) in enabling digital sovereignty for countries

– Challenges around connectivity, data localization, and infrastructure development, especially for Global South countries

– Balancing national sovereignty with regional/international cooperation and interoperability

– The importance of open source technologies and multistakeholder governance models

The overall purpose of the discussion was to explore how different countries and regions are approaching digital sovereignty and digital public infrastructure development, examining both the challenges and opportunities. The speakers aimed to share perspectives from different parts of the world on these issues.

The tone of the discussion was largely analytical and informative, with speakers presenting research and case studies from their areas of expertise. There was a sense of urgency around addressing digital divides and asymmetries between countries, but also optimism about the potential for DPI and regional cooperation to enable greater digital sovereignty. The Q&A portion introduced some more critical perspectives, particularly around connectivity challenges, but the overall tone remained constructive.

Speakers

– Rodolfo Avelino: Counselor of the Brazilian Internet Steering Committee, moderator of the session

– Min Jiang: Professor of Journalism Studies at the University of North Carolina at Charlotte, CyberBRICS fellow

– Ekaterine Imedadze: Commissioner of the Georgia National Communication Commission

– Korstiaan Wapenaar: Principal at the center of digital excellence in Johannesburg, develops digital economy strategies for Africa

– Ritul Gaur: Policy Advisor at the Digital Impact Alliance, worked on DPI negotiations at G20

– Renata Mielli:

Additional speakers:

– Luca Belli: Professor at FGV Law School

– Jose Renato: Researcher at the University of Bonn Sustainable AI Lab, co-founder of LAPIN

Full session report

Digital Sovereignty and Infrastructure Challenges in Global Digital Transformation

This workshop explored the complex landscape of digital sovereignty and infrastructure challenges in the context of global digital transformation. Speakers from various countries and organisations shared diverse perspectives on digital sovereignty, ranging from state-centric approaches to more inclusive, multi-stakeholder models.

Conceptualising Digital Sovereignty and Digital Public Infrastructure (DPI)

Digital sovereignty was presented as a multifaceted concept extending beyond nation-states. Min Jiang, Professor at the University of North Carolina at Charlotte, emphasized supranational, corporate, personal, and common digital sovereignty. This broader view complements the multistakeholder model by addressing underlying power issues.

Ritul Gaur, Policy Advisor at the Digital Impact Alliance, focused on how digital sovereignty enables countries to exercise more control over critical digital assets. Gaur explained that DPI governance can vary from state-controlled to private sector-driven, highlighting the flexibility in approaches. He described DPI as “laying out the most common drill, but then allowing others to build a market economy around it,” positioning it as a foundation for broader digital development.

Renata Mielli stressed the importance of viewing digital sovereignty as complementary to cooperation between countries, arguing that cooperation is fundamental to achieving sovereignty given the different realities each country faces in digital areas.

Infrastructure and Connectivity Challenges

The workshop highlighted significant challenges in developing digital infrastructure and ensuring meaningful connectivity, especially for countries in the Global South. Ekaterine Imedadze, Commissioner of the Georgia National Communication Commission, discussed Georgia’s challenges in developing data centres and connectivity infrastructure. Imedadze also mentioned Georgia’s green energy production potential, which could support data center development.

Korstiaan Wapenaar, Principal at the center of digital excellence in Johannesburg, noted that African countries struggle with fiscal and capacity constraints for digital infrastructure. He explained that DPI enables governments to deliver services at scale and reach people in need.

Luca Belli, Professor at FGV Law School, raised a critical point about the lack of meaningful connectivity in Brazil, defining it as stable, fast enough internet access on an appropriate device with enough data. Belli stated that only 22% of the Brazilian population has meaningful connectivity, challenging the effectiveness of current digital sovereignty efforts.

In response, Renata Mielli outlined Brazil’s plans to address connectivity challenges through the PAC (Growth Acceleration Program), which aims to invest 23 billion reais (around $4 billion) over the next four years in digital infrastructure and AI development. Mielli emphasized that these efforts must be guided by reducing inequalities from the outset.

AI Development and Data Sovereignty

The discussion highlighted the importance of AI development and data sovereignty. Mielli stressed that data sovereignty is central to AI development and self-determination. She also mentioned ongoing G20 discussions on AI and the digital economy.

Min Jiang emphasized the importance of open source technologies and free software for AI sovereignty in developing countries, while Ritul Gaur advocated for DPI to be designed for cross-border interoperability.

Balancing Sovereignty and Cooperation

A key theme throughout the discussion was the need to balance national digital sovereignty efforts with regional and international cooperation. Min Jiang pointed out that small countries need to cooperate and build alliances to achieve digital sovereignty. In response to a question about regime types, Jiang noted that while democracies might be more inclined to collaborate, authoritarian regimes also engage in digital cooperation when it serves their interests.

The discussion also touched on international infrastructure projects, such as the Peace Cable that Meta is investing in, highlighting the complex interplay between corporate interests and national digital sovereignty efforts.

Unresolved Issues and Future Considerations

The workshop underscored several unresolved issues and areas for future consideration:

1. Balancing national digital sovereignty with cross-border interoperability

2. Addressing lack of meaningful connectivity while investing in advanced technologies

3. Defining the scope and governance models of Digital Public Infrastructure

4. Ensuring stability and productive management of regional digital infrastructure projects

5. Preventing the use of digital sovereignty on infrastructure with regional impact as a weapon against other countries

Conclusion

The workshop highlighted the complex nature of digital sovereignty, emphasizing the need for nuanced approaches that consider diverse national contexts while fostering international cooperation. The discussion evolved from theoretical concepts to practical challenges and potential solutions, underscoring the importance of context-specific strategies that balance national autonomy with international cooperation and equitable access. There was a general consensus on the critical role of DPI in enhancing digital sovereignty, the importance of open technologies and interoperability, and the need for both national efforts and international cooperation in achieving digital sovereignty in an increasingly interconnected world.

Session Transcript

Rodolfo Avelino: Hello, good afternoon. One, two, three. Can you hear me? Welcome, everyone, to the workshop “States and Digital Sovereignty: Infrastructural Challenges”. I am Rodolfo Avelino, Counselor of the Brazilian Internet Steering Committee, and I will be moderating this session. We also have here Juliana Oms as online moderator and Ramon Costa as rapporteur. Both are technical advisors for CGI.br. First, I would like to thank the IGF organization and everyone present today. A special thanks to our speakers, who will contribute to our debate. The development of the Internet has been marked by the consolidation of large digital platforms, datafication, and the growing use of artificial intelligence. This has caused significant changes not only in social processes, but also in essential public services such as health, education and communications, and in the state’s capacities in general. Despite technological advances, problems of opacity, national security, surveillance, and the autonomy to implement digital policies arise. These issues can be framed under the concept of digital sovereignty, a notion that can have multiple meanings and purposes and that raises themes such as the security of digital infrastructures, the security of strategic data, innovation, and the state’s capacity to guarantee fundamental rights. This session aims at discussing policies and initiatives to implement digital infrastructures in different regions and countries in light of different approaches to digital sovereignty. I hope we have a great conversation and that each experience may serve as inspiration for others. Now, I would like to give the floor to our speakers. Our first presentation will be delivered by Dr. Min Jiang, a professor of Journalism Studies at the University of North Carolina at Charlotte and also a CyberBRICS fellow. She will start our conversation by presenting the different conceptions of digital sovereignty. Dr. Jiang, you have the floor for eight minutes, please.

Min Jiang: Thank you. Thank you, colleagues at CGI Brazil, for convening the session and for inviting me to join. Can you hear me all right? Just double checking. Okay, great. Thank you so much. And greetings also to participants from around the world. My contribution to the panel is based largely on a book I co-edited with Dr. Luca Belli of FGV Law School. And can I have the slides up, please? Thank you. Can you move to the next slide, please? Yes. Thank you so much. So the book is titled Digital Sovereignty in the BRICS Countries: How the Global South and Emerging Power Alliances Are Reshaping Digital Governance, coming out in two weeks through Cambridge University Press. I will develop my remarks today in two parts. First, I will trace the development of digital sovereignty and explain why digital sovereignty is gaining currency. Second, I will offer a broad framework for conceptualizing digital sovereignty beyond a traditional normative definition of sovereignty centered around nation-states. In fact, I would argue digital sovereignty is not something that belongs to states alone. Instead, digital sovereignty, broadly conceptualized, complements the multistakeholder model by foregrounding the underlying power issues that have prevented multistakeholderism from being more widely adopted. So to start…next slide, please. Given the IGF…sorry, one slide back. To start, given the IGF is a global forum under the auspices of the UN, it’s appropriate to recognize that the UN is a post-World War II creation based on national independence and sovereignty. The idea of sovereignty, which can be traced back to the French philosopher Jean Bodin in the 16th century, as well as the 1648 Peace of Westphalia, is foundational to the modern system of nation-states. As such, states are thought to enjoy territorial integrity, legal equality, and non-interference in internal affairs. However, all of us also recognize that such normative and very idealistic notions of sovereignty are often good on paper, but not so good in practice. Sovereignty is frequently a function of power. Strong states, for example, can invade other states; think of the Iraq War and the current war in Ukraine. Weaker states often lack power to exert influence. Next slide, please. The problem of power imbalance is especially evident in the digital era, where much of the world’s digital infrastructure, data, services, and increasingly AI depends on a handful of Silicon Valley firms. Snowden’s revelation of the NSA’s global surveillance program in 2013 made it clear that the U.S. government cannot be trusted. The Facebook Cambridge Analytica scandal and the general failure of U.S. Big Tech in the 2016 U.S. presidential election also made it clear that U.S. Big Tech cannot be trusted. While internet sovereignty was once thought to be an authoritarian product shipped out of China, post-Snowden it’s not surprising why more and more countries, including the EU as a coalition of nations, are picking up the banner of digital sovereignty. And in fact, what the EU means by digital sovereignty is self-determination, to voice their dissatisfaction. It’s also not surprising why ICANN was pressured to move out of the U.S. Commerce Department in 2014. Whether the effort came from individual states, groups of states, or multi-stakeholder fora, they share one thing in common: as the global consensus around a U.S.-centered
global digital order is breaking down, national and international actors are in search of alternatives to build a new digital order. Next slide, please. The moment we’re in is not unlike the New World Information and Communication Order (NWICO) debate in the 1970s. Next slide, please. Hello, next slide, please.

Rodolfo Avelino: One minute, one minute, please.

Min Jiang: Thank you. The NWICO debate in the 1970s culminated in the MacBride Report published by UNESCO in 1980. At the time, information sovereignty and cultural sovereignty, these were the exact phrases from the report, were of concern to Global South countries, which were critical of the free flow of information agenda championed by the US and UK, seen as an instrument for information colonization and cultural imperialism. In the end, the US and UK pulled out of the NWICO debate, arguing that Global South countries used information sovereignty and cultural sovereignty as a pretext for censorship and control at home. Next slide, please. The power asymmetry then mirrors the power asymmetry today. While a state-centric perspective remains essential to understand digital sovereignty, many researchers, including myself, also recognize that digital sovereignty is a signifier, a term with many different meanings used by different actors to express their aspirations and also to assert control and power. Thus, in the book volume, we adopted a more generative definition of digital sovereignty as the exercise of agency, power, and control in shaping infrastructure, data, services, and protocols. The book project’s bottom-up efforts also led us to develop a broader framework of digital sovereignty, mapping the following seven perspectives. Next slide, please. In the state digital sovereignty perspective, nation-states exert control over digital architecture, data, protocols, and services. It can be both positive and negative. While Brazil, for example, built the PIX digital payment system during the pandemic, and India built the UPI digital payment system as digital financial infrastructure to increase independence and inclusion, Russia, on the other hand, built the RuNet for digital isolation. In the supranational digital sovereignty perspective, regional alliances like the EU develop unified digital policies as well as legal and digital infrastructure. Former German Chancellor Angela Merkel, in fact, gave a speech at the 2019 IGF defining the EU’s digital sovereignty as a form of self-determination. So for EU countries, being sovereign doesn’t mean working alone, but working together. Network digital sovereignty, something ICANN as an organization might endorse, emphasizes decentralized control, network interoperability, freedom from nation-states, and global coalition. Corporate digital sovereignty, on the other hand, tends to endorse the freedom of tech giants in driving digital economies and shaping digital norms, something scholars have critiqued as a form of surveillance capitalism or data colonialism. Personal digital sovereignty emphasizes the empowerment of individuals to control their digital identities and personhood. Post-colonial digital sovereignty highlights efforts by formerly colonized nations to reclaim autonomy, access, ownership, and control in the digital space. Finally, commons digital sovereignty emphasizes community-driven governance of shared digital resources and the production of public digital goods. This is embodied in the open source and free software movement, as well as international digital solidarity and labor movements. So for us, digital sovereignty is not something that belongs to nation-states alone. Broadly conceptualized, digital sovereignty does not replace the multistakeholder model. On the contrary, it complements it by placing it in a wider discursive field and foregrounding the underlying power issues.
Global digital sovereignty shouldn’t and cannot be determined by national governments alone, but needs a global coalition to address pressing issues of global divide in digital infrastructure, entrenched challenges of digital surveillance and censorship, as well as structural monopoly and digital inequality. Thank you for allowing me to share my perspective. Look forward to further exchanges and discussions.

Rodolfo Avelino: Thank you. Thank you, Dr. Jiang. Now let me introduce Ekaterine Imedadze, who has been a Commissioner of the Georgian National Communications Commission since 2021. The Commissioner has 15 years of professional experience in the telecommunications field. Mrs. Imedadze, could you comment on how the Georgian government adapts to ongoing digital transformations and how these projects relate to the broader European context?

Ekaterine Imedadze: Thank you very much. I think my presentation will be put on now and first of all, I want to thank hosts of this very interesting panel. Thank you very much. Thank you. interesting, very important, very, very, very much relevant topic of workshop. This is Information and Coordination Center of Brazil. Also, I think everybody will join me thanking the host country, Saudi Arabia, Riyadh, for this amazing venue for the IGF. And it’s a great pleasure to share a perspective from another part of the world about how the state can see very small, tiny state in the South Caucasus can see the challenges and can overcome the challenges related to digital sovereignty. And the previous presenter very in a best manner outlined what are the layers of the digital sovereignty and how it has evolved. And I will be speaking about the very specifically telecom layer, the most upstream perspective of the infrastructure and challenges related to my country, Georgia. And it is also related to the region itself, the South Caucasus. So my perspective as representative of the telecom regulator and state representative will be specifically as I said, to the very upstream layer of the infrastructural challenges. I’m trying to now move on. Thank you. So my friend from South Caucasus is helping me as we do usually. Thank you. Oh, thank you. Thank you. Thank you so much. So, we all know that most part of the data, as we speak about digital, most part of the data traffic goes through the submarine cables, you know that, and how important submarine cable resilience levels are. We see that this information was kindly provided by one of our important partners, which is World Bank, so I’m allowed to show this information, but actually this first slide is available information you can find on telegeography, and it’s updated on a daily basis almost, and you see this is the most upstream layer of the Internet, and this is a value chain, and this is a growing market, and this is the very, how to say, the basis of our connectivity, that enables us to exchange data, to protect data, and this many cables are also created to ensure that the data exchange is resilient, so the sovereignty information on the infrastructure level is pertained. If we can go to the next slide. It’s a bit of a challenge. Thank you. So, now let’s speak about the very tiny segment of this big map of worldwide connectivity, which is South Caucasus, where my country Georgia is located, and you can see the map, you can see the geography, from these almost 500 infrastructure connectivity routes, only one route is connecting Georgia and South Caucasus with Europe. And this connection is very important, direct connection. You understand how important it is if we speak about the infrastructure level, resilience and sovereignty of data based on that. And there are the aspects of resilience, which is providing direct international access, not through some other jurisdictions, but through the sea. This is what makes the subsea cables so important nowadays. And development of more inter-regional networks is obvious. The necessity of development of more inter-regional networks is an absolute necessity for our region. So this is the challenge we are facing now. And fortunately we have partners who are supporting us with making this connectivity and resilience really work. I will speak later on that. 
What else is happening in our region is that there is a project of trans-Caspian cables under discussion, which will connect us further to East Asia or Central Asia. We know that digital is interconnected, so we need to be part of global resilient connectivity paths. And with this, not only the infrastructure level, resilience and direct connectivity corridors come to our mind, but also the layers that were also… So, what else comes to mind after seeing that there is definite need of expanding the infrastructure level independence of the region? It comes also the layer of services, software and data protection. Data-related resilience layer, we will speak about the digital hub concept, which is also crucial for our region, which is on… Hello? Hello? Check? Some kind of check. Next slide, please. Thank you. Maybe we need some kind of inter-regional digital hub. If we speak about the concept of protecting our data, we see that concerning the geopolitical situation around South Caucasus, you can see how on the map, how important it is to have the connectivity, some kind of inter-regional connectivity hub that will enable us having the transparent data protection frameworks, which is aligned with the EU GDPRs, EU-related data protection legislation, and that will allow us to have some kind of protected sovereign data transfers throughout the region. Another important aspect why we need to have data hubs in region is that upcoming technological demands related with AI definitely requires that information should be brought closer to the customers. So this is another challenge and this is another important precondition to building the inter-regional connectivity hubs, which will involve regional countries, but which will create some kind of alternative to overcome choke points like you see in the Red Sea region. And when we see about the challenges, challenges bring usually the opportunity. So my presentation here was brought to show you that we’re trying to turn these challenges into the opportunities for our region. And as I’ve mentioned in the beginning, if we can… and move to the next slide. Thank you, Nini. So there are some kind of articles about what are the challenges, how the big techs and geopolitics are reshaping the internet plumbing now and what is going on around the world. It’s very much relevant to our region. And as I’ve mentioned, this concept of South Caucasus Digital Hub to make our data more resilient is kind of the answer to the question how we can build more robust digital service, so digital layers, starting from the upstream infrastructure up to the software and data protection layers. And you can see this Baltic Highway project, which is supported by European Union. And we also are supported by World Bank and by European Union to build similar regional connectivity corridors that will enable countries in the region to be connected safely with other worlds and to bring the data closer to our subscriber and to ensure that policies and regulations that are adopted amongst the European Union to be kind of transposed to the regional data hubs. This is how we see, answering to the challenge you would have. I’ve seen, of course, this is now in projection stage. The projection stage means that we have some concept how this data center regional hub concept should work. 
And we really hope that it will be continued and will turn into reality, because the adoption of AI and the growing demand for machine learning and for bringing more content into the digital space give us the understanding that this project should be elaborated as soon as possible. So this was what I wanted to share with you, and I’m happy to answer questions later. Thank you.

Rodolfo Avelino: Thank you. Now let me introduce Korstiaan Wapenaar. Korstiaan is a Principal at the Center of Digital Excellence in Johannesburg. He develops digital economy strategies to address Africa’s developmental challenges. Korstiaan will comment on the capacity limitations in Africa and the different dependencies on local, private, and international players, connecting them to the trade-offs of sovereignty. You have the floor.

Korstiaan Wapenaar: Thank you very much, colleagues for having me. Thank you. in and for the opportunity to participate. Can I ask? Okay, great. Thank you very much. There have been a couple of version changes this morning, so there might be a couple of edits that might not have come through, but we can run with it. So, I think the point of departure to start is that digital transformation of the public sector is a prerequisite for socioeconomic development in Africa. African states have struggled to deliver services to people and organizations at scale, and subsequently these technologies allow them to reach people in need at scale when they need. Unfortunately, despite this prerequisite, let’s call it, African countries have largely struggled to deliver on the opportunities of digital transformation. The e-Government Development Index is a useful proxy for that, and we see that there are only four African countries that have managed to achieve above the global average. And so, there are some critical underlying drivers of this underperformance. In particular, one is acknowledging that many African countries have significant fiscal constraints and significant capacity constraints in terms of their expertise, and that subsequently this has impacted the rollout of the quality of both hard infrastructure, if we think about data centers and the like, as well as soft infrastructure being the services that, the technologies that are used to deliver services through this hard infrastructure. Next slide, please. Next slide, please. If we look at data centers as a proxy for the availability of infrastructure across the continent, we see that there is a rapidly growing demand for more physical infrastructure. This estimate on screen, and apologies that there’s no access on the on this graphic, is that African countries will need to more than double their data center hosting capacity by 2030. At present, the number of these countries are underdeveloped, there’s not a lot of digital activity, and so localization requirements are hard to meet through a local data sector because it’s economically infeasible to host a center domestically just to meet those local requirements. Subsequently, we’ve seen that a number of markets across the country, governments have started or have experimented with deploying their own data centers to manage their own data and operate their own technologies and infrastructures. In many cases, though, due to the capacity limitations, these are poorly managed, they’re underutilized, and they have become what is termed economic drains, maybe one might call it a white elephant or the like. And so it leaves a little bit of a quandary for African countries that are trying to achieve localization requirements independently and autonomously. Next slide please. And so subsequently, what this means is that there is an inherent dependency in Africa on, maybe inherent is a bit of a strong term, but there is a mutual benefit between the state and the private sector in delivering this hard infrastructure, where in many cases, private sector players such as your hyperscalers are supporting governments in the operation of their own technologies. Next slide please. And so subsequently, as the value of digital public infrastructure is better understood, and is gathering steam across the globe, we likewise see increasing adoption in Africa, as we saw before with the e-government digital. or EGDI and the like, that there is, this adoption is slow or slower across the continent. 
So these principles, Dr. Min was talking about FOSS, open source and the like, these principles are arguably key mechanisms that will allow service delivery at scale by allowing governments to adopt these technologies, lead with their own interests and operate them independently and autonomously. Next slide, please. If we break away from, we’ll start to unpack the debates within the DPI realm around what is public, we see that there is room to explore the role of the private sector in supporting the design, delivery and operation of services through technology and government. And so we see in Africa that the private sector has a key role to play in many of these, in many countries in service delivery, creating a question around that P in DPI and whether or not P needs to be big P or small P for those that are participants in the debate. So we know that financial services players, telecoms, retailers, vendors and community are all supporting or bolstering government in its delivery of services. So arguably, if we think about sovereignty, and this is maybe bending the definition a little bit, there’s sovereignty in terms of government’s ability to deliver services independently or its requirements to engage the support of the private sector. And we see that in Africa, participation in the private sector may be a requirement but is not detrimental inherently. And so this might be a necessity given current limitations. Next slide, please. Likewise, when we look towards the emerging DPI. ecosystem, we see that there are a wide variety of non-government players that are offering technology and support. A couple of them are on screen there. So these entities will help governments identify what technologies to use. They will help them roll their technology out and optimize it for their local environment. Again, this is contrary to a hard line view of an independent autonomous state by drawing in the participation of these entities in their support. So these role players, arguably, as non-government role players, are critical to catalyzing digital transformation across Africa. Likewise, equivalent to the requirement of private sector players. Thank you very much.

Rodolfo Avelino: Thank you, Korstiaan. Now, Ritul Gaur is a Policy Advisor at the Digital Impact Alliance. His area of work includes research and advocacy around digital public infrastructure. In his previous role at the Ministry of Electronics and IT, Government of India, he worked on DPI negotiations at the G20, tackling questions of the why, what, and how of DPI. Ritul, given your experience in this field, could you share with us your thoughts on the connections between digital sovereignty and DPI?

Ritul Gaur: Hi, thank you so much, and a big thank you to the organizers and everyone else for attending. I wish I was there in person, but you’ll see two gentlemen in the room, Ibrahim and Talha, who are my colleagues from the Digital Impact Alliance. So if I say anything which is controversial, they are my lawyers. So I want to start, and I also have a great, great task to explain something which I spend a lot of time theorizing about, which is digital public infrastructure. Think about digital public infrastructure. Think of society in the digital age. Now, what is it that is absolutely required? It is an identity system which is secure, which can be authenticated against, something which can truly prove that you are you, and in a unique way. So an identity is an important component of it. A fast payment system, which allows you to transact both P2P and person to business, person to person, et cetera. And then data, which allows you to both store and share your data across both public and private services in order to access different services. Now, it’s not restricted to this, because DPI is still an evolving concept and there are already new DPIs in climate, in commerce, et cetera, such as ONDC. But broadly, why are we referring to this as infrastructure? Because it lays down just the common minimum rails, as roads and rails did in the 19th and 20th centuries. And then it’s for others to come in and innovate and build on top of it, developing so many other services. Now, you could ask me the question: this is how digitization happens, so what makes it new, or why are we calling this an approach? A simple answer to that is to think of DPI from three common aspects, which are tech, governance and community. Now, when you think of technology, it is an amalgamation of open source technologies using open standards and open specs to build the tech that’s required. So essentially, for your critical national digital infrastructure, you are not going for a big vendor contract, but you are actually building something from scratch. You’re using a lot of open source tools. You’re using open standards. You’re not going for proprietary standards. You’re not going for big vendors. You’re using a lot of open technologies. So first is the tech. Second is the governance. The governance of DPI is multilayered. There is a governance embedded in the protocol itself, which is safety by design, security by design. And there’s a governance of the specific aspect of DPI. Let’s say if it’s an ID, then there’ll be an ID regulation, an ID legislation. And of course, your broader umbrella data protection, GDPR kind of regulation, which also applies. So that is the tech, then there’s the governance, and then the most important part: DPI is nothing without its community. So to borrow a phrase from a professor, David Eaves calls it that DPI allows you to have shared means to many ends, because essentially it’s laying out the common minimum rails, but then allowing others to build a market economy around it, allowing others to use that ID to do a KYC to then provide services, allowing others to build that payment service app to then offer other things. So that’s the amalgamation of these three things. And as Korstiaan referred to, the two most important things about digital public infrastructure are that it has to be open for all to access and it needs to be interoperable.
It needs to be interoperable across different systems in the country, et cetera. Now, as to the element of sovereignty and how the role of DPI and sovereignty are linked, I believe DPI does empower governments and countries to exercise more sovereign control over their critical national digital assets. And we’ve seen this in the case of India. India gradually moved away from Visa and Mastercard. Now, 80% of our digital financial transactions go through our national payment infrastructure called UPI; 80% of our digital financial transactions are not going through Visa and Mastercard, but through this system. Our national ID data, which includes our biometrics, et cetera, everything is coded, homegrown, and the data does not go out of India. So I think in a lot of senses, in India, Brazil, Singapore, Togo, we’ve seen how DPI has been a critical enabler of sovereignty. But I think at this stage we need to take a step back and actually analyze what digital sovereignty is and what it means in this context. I’m going to break it down into three aspects: one being the data part of it, the other being the hardware part of it, and the third being the software part of it. The software is the easiest part. How does DPI enable sovereignty? A lot of DPIs are built on open source software. So essentially, you’re just taking something from GitHub, contextualizing it, making it in a way that is feasible for your population, contextualized for your population. Essentially, it becomes your own source code with the modulation that is required. We’ve seen this in the case of MOSIP, which is an ID provider; OpenG2P, which is a government-to-person payment service provider; Inji, which is a wallet; Mojaloop, again a fast payment system; OpenCRVS for civil registry, et cetera. So a lot of these software packages, which are out there as open digital assets, are adapted by the country, contextualized for its own economy, and then the software is housed and hosted within the premises of that country, and it’s owned by that country. So software is one aspect of DPI sovereignty. The second is the hardware. Now, a lot of DPI-related work requires you to have biometric scanners, cameras, card printers, et cetera. I think in this case the sovereignty is a bit malleable, because you still require domestic and international vendors to procure from; there are still very few companies that make these kinds of standardized hardware, et cetera, which are required to enroll large swathes of the population. And of course, there are a lot of ingenious solutions that are required based on your contextualized population. In India’s case, we have something called a sound box, which beeps every time you make a payment. So essentially, it’s building trust, et cetera. So a lot of vendor management and procurement happens in the hardware part of it, which could be both domestic and international. And then finally, the data aspect of it. So data goes in both directions in terms of DPI. It stays with the country if you have data localization norms, if you have data housed within the premises of the particular ministries, et cetera. But a lot of countries also go for cloud-based data because it’s cheaper, it’s easier to contract, you can also switch clouds, et cetera. So, to sort of summarize this and say how DPI can enable sovereignty: of course, use open technologies, I think that’s the most important thing.
But through regulation, use data localization norms, get better deals with vendor, make sure that for poorer countries, particularly, that a vendor does not come and harass you. So if you are going for a vendor, make sure that there is a high degree of vendor interoperability in your case, that if you want to move your data from Google Cloud to AWS or to Oracle, you can do it. Choices. I think at the very stage of conceptualization, design choices matter a lot. Go open by design, use open source, open standards, pick domestic vendor as much as possible. as you can. Don’t rely for the big vendors because they have a lot to service and you will be on the last tier list to be serviced. And I think the final and most important thing is the funding. Try to get neutral donors who do not try to push you a certain kind of technologies. Try to find partners who are invested in the longevity of the system and not the constituency back home that wants to sell you a certain kind of software which then the servicing of it will be super expensive. So I think with that I will conclude my statements. There is a big link and we’ve seen in countries like India, South Africa, Brazil etc where DPI is enabling high degree of sovereignty but there are multiple facets to that sovereignty that still needs to be figured out, it still needs to be tweaked, it still needs to be better managed. Thank you.
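To make the “open by design, avoid vendor lock-in” point above more concrete, here is a minimal illustrative sketch. It is not drawn from any real DPI stack such as MOSIP or UPI: the ObjectStore interface, the LocalFileStore backend, and the archive_civil_record function are all invented for this example. The point it shows is only that application code written against a neutral contract can switch storage vendors by swapping an adapter, rather than rewriting the services built on top.

```python
"""Illustrative sketch of a vendor-neutral storage interface (hypothetical names)."""

from abc import ABC, abstractmethod
from pathlib import Path


class ObjectStore(ABC):
    """Minimal contract every storage backend must satisfy."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class LocalFileStore(ObjectStore):
    """Reference backend backed by the local filesystem, e.g. an on-premise data center."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)  # allow keys with sub-paths
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


def archive_civil_record(store: ObjectStore, record_id: str, payload: bytes) -> None:
    # The service depends only on the ObjectStore contract, so moving from one
    # vendor's backend to another (or to on-premise storage) means writing a new
    # adapter class, not rewriting the services built on top of it.
    store.put(f"civil-registry/{record_id}.json", payload)


if __name__ == "__main__":
    store = LocalFileStore("/tmp/registry-demo")
    archive_civil_record(store, "rec-0001", b'{"name": "example"}')
    print(store.get("civil-registry/rec-0001.json"))
```

In practice a team might add further adapters, for example thin wrappers around a cloud provider’s SDK, behind the same interface; the design choice being illustrated is simply that the service layer never depends on a specific vendor’s API.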

Rodolfo Avelino: Thank you. So, ladies and gentlemen, now over to Renata Mielli. Renata, how would you comment on how the Brazilian plans for artificial intelligence relate to today’s challenges around digital sovereignty?

Renata Mielli: Thanks, Rodolfo. Thanks, Jeff, for this workshop. I thank Dr. Min, Catherine, Korstiaan and Ritul for bringing a very broad perspective about sovereignty and infrastructure. Dr. Min brought some concepts, Catherine the aspects of connectivity, and Korstiaan and Ritul another aspect, telling us about DPI. I will answer your question, but as I’m the last one to speak, I’m going to bring some summarized points about how Brazil and the Brazilian government are seeing this broader challenge regarding sovereignty and infrastructure. Well, I start by pointing out something that is very obvious, but nowadays we need to state the obvious: we live in a world in transition where, every day, all human activities and socioeconomic and cultural relations are mediated by information and communication technologies, by a broad digital ecosystem. The mastery of these new technologies reshapes the international geopolitical board and redefines the groups of countries that are producers and consumers of digital technologies, and this is our main concern these days. And we brought this debate, during our G20 presidency, under the AI priority we led in the Digital Economy Working Group this year. Besides AI, we had three other priority issues, meaningful connectivity, DPI and information integrity, all starting from this perspective about how we can move to address the challenge of the asymmetries we have in terms of technology and emerging technologies. Well, these asymmetries between and within countries exist in many areas and have been present for a long time. This scenario has deepened significantly with the emergence of large digital platforms which, in a way, determine the current economic model of society and set new forms of capital accumulation. A few large companies, the big techs, operate in various areas of the economy but have platforms at the core of their operations that mediate commercial transactions, the flow of information, the provision of services, and, at this very moment, all the infrastructure and knowledge about the development and deployment of AI in the world. So, regarding AI and other emerging technologies, we are facing a scenario in which, at least up to this moment, there is a deepening of divides and inequalities, particularly in the Global South. In this sense, the debate about digital sovereignty is increasing. This term refers, as Dr. Min said, among other things, to a nation’s strategic autonomy, or its capacity to develop digital tools and artificial intelligence using its own infrastructure, data sets, workforce and businesses. Furthermore, it involves the ability to independently regulate and decide on its own digital and AI path in a quest to ensure inclusive growth and sustainable development. Digital sovereignty refers to the ability of states to control their own infrastructure, emphasizing the position of each country in controlling ICTs, that is, a greater or lesser degree of autonomy to make choices and decisions in the field of technology. And, of course, in the field of cooperation with other countries, which is also very important. So, for us in Brazil, we have some key perspectives. The first is the role of the state, emphasizing the importance of government action through public policies that support the development of technological infrastructure, science and technology initiatives and industrial policies to foster innovation and reduce dependency.
This includes encouraging and promoting the use of national technologies and regulating the use of foreign technological tools. I’m going to say next about precisely about AI plan, but we have another public policies regarding industrial economic development and other initiatives that compose a very large umbrella of public policies regarding investments and in technology. The second point is sovereign digital infrastructures development, maintaining independent digital infrastructures that ensure national control and security. Meaningful connectivity. We also face profound challenges in the field of meaningful connectivity and access. This is how to reduce the prices of equipment, for example, cell phones and computers for the population. We cannot see only the access aspect. We need to see more broader aspects when we’re talking about meaningful connectivity. After all, if we are talking about leaving no one behind, we need to develop the capabilities locally to offer better services to society. For this, there must be meaningful connectivity and we need to strategically think about how to build a permanent digital training process for the entire population, from young people to the elderly, for people living both urban and rural areas. In addition to connectivity, we need to think about the availability of equipment, especially cell phones, which is the most used device to assess services that have the quality and minimal capacity to run applications and tools that use AI. Governance and regulation. And now in Brazil, we are discussing a bill regarding regulation on AI. And for us, it’s crucial to create frameworks for data governance and platform regulations also, that ensure accountability, transparency and ethical use of technology. In this scenario, we need to think about how to create a framework for where the digital system is available in the country, platforms and AI tools are mostly international, it’s necessary to discuss regulatory mechanisms that establish rules for the operation of these companies in the country with transparency obligations about their systems, granular information about aspects that have economic, social and political impacts, conduct adjustments mechanisms, among many other regulatory aspects related to social and human rights. Security and privacy and also develop sovereignty in security and privacy technologies. So for AI to have a positive impact in catalyzing innovation aimed to reducing inequalities and other social issues, its development must be guided by this proposal from the outset. This includes its conception, production, programming, the use of training data set structures to enable AI to achieve its goals with accuracy, linguistic, cultural and geographical diversity. Otherwise, AI could become yet another driver of inequality. This is why data sovereignty is central to the development, implementation and use of AI by countries that aspire to any degree of self-determination. Those was the main focus that Brazil brought to Digital and Digital Economic Working Group this year. We produced as Brazilian presidency contribution a toolkit for AI readiness assessment in partnership with UNESCO that was our knowledge partner with insights to leverage the potential of enabling a holistic and inclusive approach to the ethical and responsible development, deployment and use of AI technologies. 
And also a mapping of AI adoption for enhanced public services, with insights into systematic monitoring and relevant opportunities and challenges, supporting ethical AI applications within and by governments. Consider that DPI is, as Ritul said, a very strong and potent tool for inclusive and sovereign digital development for countries. In terms of public policies, the perspective I bring to this debate reflects our Brazilian AI plan, as Rodolfo said, led by the Ministry of Science, Technology and Innovation this year. A plan that forecasts an investment of 23 billion reais, around $4 billion, over the next four years, which for Brazil is a very large amount of money. In terms of infrastructure and sovereignty, I highlight some investments from the Brazilian artificial intelligence plan, such as the national infrastructure program for AI, around 105 million dollars; the sustainability and renewable energies program for AI, around 83 million dollars; the data and software ecosystem and structuring program for AI, 165 million dollars; the research and development program in AI, 873 million dollars… no, 144 million dollars; and the perspective of achieving an AI supercomputer that puts Brazil among the top five supercomputers in the world. These are just some highlights of our AI plan, which has five axes regarding governance, private sector investments, re-skilling and capacity building for the workforce, and also infrastructure investments. That’s it for now. Thank you very much.

Rodolfo Avelino: Thank you. Thanks a lot for the very relevant points. Now we are going to open the floor to questions of the audience inside and online. First we can start inside.

Luca Belli: Hello, good afternoon. So, Luca Belli, professor at FGV Law School, very happy to hear that the research I have been conducting with Professor Min Jiang has been presented here. And sorry if I was late; I was in another session. I was very happy also to see that a lot of the points that we raise in the research are now well integrated. But maybe some of them are not so well integrated. Let me give you a very good example, because we have been doing a lot of research on AI sovereignty over the past couple of years. And of course, connectivity is one of the points that we stress is essential to achieve AI sovereignty. And let me give you a very concrete example that also speaks to the debate on DPI that was brought here. Most Global South countries, including Brazil, do not have meaningful connectivity. We have most of the population connected through zero-rating plans, so basically to a very small selection of apps, including mainly the Meta family of apps. So to give you concrete details that friends from Cetic.br here can confirm, thanks to a very good study that they have done on meaningful connectivity this year, 78% of the population in Brazil does not have meaningful connectivity. It means that only 22% are meaningfully connected. What does it mean concretely? I think the Brazilian government is putting a lot of money, as we are analyzing, primarily into software and data with the AI plan. But even if we have the best possible language models trained with Brazilian data, if all Brazilians only access Meta AI through WhatsApp, which is zero-rated, whereas no one else will be able to access the new fantastic domestic models created thanks to the plan, that is not the very best way of directing the public investment. And this is due to the fact that access is an incredibly relevant variable in this context. As you were saying, and as we have been demonstrating with research, the fact that 78% of the population simply access Meta AI and will never access Brazilian technology, because they will keep on not having money to pay for full internet connectivity and will only be directed to Meta, Facebook and WhatsApp, is an enormous impediment to national innovation, and it frustrates a lot the very good logic of putting public money into improving national research and development, because at the end of the day the consumers will not use it and will keep on not only using non-Brazilian technology but also training it, for free of course. So I think the entire logic here is a little bit frustrated. And let me give you a very good example of an institution in Brazil that has understood this logic very well: the Brazilian Central Bank. When they introduced PIX, our UPI, our Brazilian digital public infrastructure for payments, WhatsApp wanted to launch WhatsApp payments, but they blocked it and suspended it, and the rationale was precisely that if it had been launched before PIX, everyone in Brazil would have used only WhatsApp payments. We would not be here today praising PIX as an example of a success story if the Brazilian Central Bank hadn’t blocked WhatsApp payments and hadn’t suspended it until the entry into force of PIX, because otherwise everyone in Brazil would be using only WhatsApp payments, and nobody would even know what PIX is.
So I think that these are points that if not considered I know that very well that the Brazilian AI plan does not consider connectivity but I think it’s a mistake and I think that this as you were saying is an essential point and should be brought into the picture otherwise we risk putting a lot of public money for nothing. Thank you very much.

Rodolfo Avelino: Thank you for the question, Luca. Now let’s go to the online questions. Okay, okay. Okay, thank you very much.

Jose Renato: My name is Jose Renato. I am a researcher at the University of Bonn Sustainable AI Lab and also co-founder of LAPIN, non-profit organization in Brazil. Well, thank you so much. Amazing insights. I was actually wanting to ask the presenter and speaker from Georgia. I’m sorry, I didn’t get your name. I really apologize for that.

Ekaterine Imedadze: No worries, it’s Eka. You can call me Eka.

Jose Renato: Okay, nice to meet you. I was actually wondering if you could talk a little bit more about the data center related initiatives in Georgia. And also if you could also share how are you thinking on embedding this within like energy infrastructure, water infrastructure as this has been a very wide and hot topic in the last few months, I would say. So if you could share some thoughts about that and also the Brazilian government’s thinking about this kind of thing. So I think it will be interesting to hear. Thank you very much indeed.

Ekaterine Imedadze: Thank you so much for amazing question. Thank you. Actually, you pointed out in the question, the topics that I’ve actually missed and wanted to share about. So the data center topic, what we have now under discussion is like, first of all, as a regulator and as a state representative, we are working a lot about enabling access to the existing internet infrastructure, opening market, building the IXP and neutral exchange point. This is the first step we are seeing to be, it’s ongoing process. It’s somehow already almost done. And this is the first step to enable them the real data center. Another topic is resilience of infrastructure and finding out that we’re quite small country, but still find the geography where the data center is best to be located from the, also the energy point of view. The good side of the story is that we have, Georgia is the greener energy producing country. So we can ensure that the energy produced locally will be the green energy, which is a very important, how to say, a component of building the right data center and bringing the investors to be interested in this kind of the projects. And also related to this, one thing is producing green energy. Another point is having the geographic location where the energy efficiency will be the best with it’s actually with support of, actually Amazon did some kind of research that in Georgia, this energy efficient locations are present. And this is, so this is kind of projected level, but there is a lot of stuff still ongoing to be done. Security aspect, physical security aspects are very important that still needs to be resolved. And another part is also energy prices. We’re quite competitive electricity prices. So this is, those are the different, how to say different components of the projects we need to solve and put together. Yes, and most importantly to understand financing model, which will work best for this, to make not only the local like Georgia specific project, but regional projects. So investment options are there on table, whether their state should be part of it or it should be totally public or it should be public private partnership, et cetera. Those are kind of points to be resolved still. Thank you so much.

Rodolfo Avelino: Thank you. Let’s do a round of the two more questions and the speakers will answer.

Oms Juliana: Yes, I’ll just read the online questions, and I think I have one more from the audience here, and then we make a round of the speakers answering, OK? From the Zoom questions, we have Azeem asking: he would like to learn about the PEACE Cable that Meta is investing in. So maybe again to Eka, about platforms investing in cables. Another question, from Van, who asks: digital public infrastructure, if translated into other languages, can be translated as state infrastructure, and this would then be controlled or owned by the state. Is there a clear and widely accepted understanding of what DPI is? And finally, a question to Dr. Min: examining digital sovereignty as a supranational issue, how does regime type influence collaboration? Are democratic regimes more likely to cooperate than authoritarian ones, or is this an outdated assessment? I think maybe we can do another. One more here? OK, I think this is the last one because of time.

Audience: No, it was the same. So I had two questions there. We have discussed, it was my question that you have read. Yes, it is. Can you hear me? OK, so I have here two questions. We have discussed several activities for infrastructure development, but almost all of them were for regional connectivity. So the question is: can digital sovereignty over digital infrastructure that has a regional impact be used as a weapon against other countries? And if yes, how can that be eliminated? And a small comment: many regional projects require several states to get engaged. So how can we ensure proper stability and productive management of the infrastructure, taking into account the challenges that we have discussed today about digital sovereignty and digital public infrastructures? That was the question.

Rodolfo Avelino: Thank you for the question. And now when answering the question, please give me your closing lines and the final comments. And can we start by the same order with Dr. Min?

Min Jiang: Sure. Thank you for the great question, Nada. And I think it’s a tricky question. I will make two points in relation to your question. First of all, traditional notions or definitions of sovereignty are usually predicated on nation states having a form of autonomy or self-determination, but do not take into account, in reality, the very notion of power. Small nations and small states know this very well, especially in the digital age. Big tech have power and financial power that can easily eclipse those of small nation states. In fact, if one examines the telegeography map that our previous speaker referred to early on about global undersea cables, companies like Amazon, Google, Facebook all have their own dedicated infrastructure at that level. So small countries, not only in the global South, but also, for example, EU, they recognize that in order to be sovereign, they must also cooperate and build alliances. important point to recognize. Also in a previous speaker, Ritu’s account of public digital infrastructure, he makes the case that nation states did digital development, especially those in the global south, have to draw upon open source and free software, which are very, very important notions to common digital sovereignty. So I think we need to disrupt how we think about sovereignty to begin with. And second, the question is about regime types. That’s a very, very important notion for sure. But we also need to recognize the regime types are labels that we attach to nations, but nations also change and evolve. The political system, as we have seen in the United States, in my own country, has evolved a lot. We just elected Donald Trump for a second term, right? So how do we label countries and what type of regime they are is becoming more and more challenging. And the United States is a country with great power, and with great power comes great responsibility. And what NSA, for example, implemented for a long time, and what the big tech are doing, perhaps challenge this very notion of what it means to be democratic. And I think we’re at an age where the older conceptualization and infrastructure and legal regimes to think about democratic is somewhat breaking down. And that’s why we’re seeing this resurgence of claim to digital sovereignty and different actors national or international are hoping to gain more independence, autonomy, and self-determination. So yes, I’m happy to carry on the conversation through some other means, but I will restrict my comments to the BoF for now. Thank you.

Rodolfo Avelino: Thank you very much, Min. Catherine?

Ekaterine Imedadze: Yes, sir. Very challenging questions, let me put it this way. And exactly, echoing what are the underlying challenges with sovereignty, starting from infrastructure level up to the service and data protection level. And what I wanted to outline. that yes, on the one hand side, the sovereignty can be used as some kind of weapon, some kind of strength from the one country having the totally sovereign kind of infrastructure, not giving the access, and kind of isolating from one hand side, conceptually isolating the country. On the other hand side, it requires a lot of effort when we speak about the regional perspective, putting the regional concept of sovereignty, so countries with very different political views should sit together and agree on the major terms. But I think that this debate of digital sovereignty, why it is an open debate and why it is an evolving debate, countries still are trying to understand what are these basic and minimal concepts of, on the one hand side, independence of infrastructure and data, and at the same time, the shared framework of data independence or protection of data. Without this kind of touch points among countries, between countries with very different political views or geopolitical locations, it’s impossible to let this very interconnected world work. We will need more and more interconnected data centers, otherwise it will not work at all. But at the same time, countries and regions are required to protect themselves by owning some kind of the infrastructure. So I think that this is the thin line where we need to all agree and we need to introduce some kind of frameworks. For Georgia, what I can answer is… is that the EU framework we decided to go with, existing EU framework of this sovereignty concept on data level GDPR that is provided by EU framework, legal framework for data protection is the one that is acceptable for us. And we think that this is the best model we can introduce and it should work for our region as well for the current situation. This is my answer.

Rodolfo Avelino: Thank you. Korstiaan, please.

Korstiaan Wapenaar: I’ll make my closing remarks very short. Maybe firstly, just to say thank you to everyone, to the organizers for the participation and to my fellow panelists for the interesting discussion. Without stepping ahead of the questions, pass to yourself maybe just a couple of thought provokers on the regional considerations for sovereignty, maybe just to propose the question around how one manages the aspirations of the AU to develop a continental identity system and how that would be governed and managed and the extent to which that is a risk or how to prevent exclusion across different markets. And then curious to hear from my fellow participant following me, their view on multiple definitions of DPI and what that big and small P looks like as we think about our colleague, Mr. Yves. Thank you very much, everyone.

Ritul Gaur: Thanks, thanks, Christian. I think to answer the first question, which I’m going to take away that, how do we ensure stability and management of DPI? I’m not going to do a regional, I don’t have an answer to that, but. But I can say like, just in terms of a geographical context, what we need to do is ensure that you have the highest grade data centers, therefore great data centers, you have security assessments, you have regular regular audits. And you could do similar things in cross-border context. If you have an ID payment or data sharing, which is in a regional context, we don’t have it in India’s case. But as we build, I think these are the three tech metrics that will follow. But then there will also be some non-tech, which is the governance side of it, which will also follow. Now, answering the perplexing puzzle of the DPI, which is what is the P about? Should DPIs be controlled or owned by state? I think A, to start off with, there’s no clear definition of DPI. I think it’s at a very evolving stage. The G20 definition is as confusing, as clarifying as it is. And I take some blame for it. But if you think about it, it has to be understood from a different grade perspective. Something like an identity is a very sovereign function. To say, you are you, can be trusted by a sovereign state more than any other entity. So in India’s case, ID sits out of the Ministry of Electronics and IT. It’s a statutory organization backed by a constitutional law. And the entity is posted with civil servants, et cetera. So it’s a very, very state-driven function. On the contrary, payment system is rather fluid. It is a non-profit structure. It’s a Section 8 company, which is a non-profit in India’s case. It is a conglomeration of different banks and the central bank coming together and just building the protocol. The rest of it is actually controlled by different banks who come and participate on top of it. But the role of the state in that case is the regulator. The state is only the regulators in India’s payment scenario. And similarly, the document, the data sharing wallet as well, the state, again, a Section 8 company has created it, which the state only regulates in terms of how you can share your… credentials, etc. So I think it will differ on a country to country basis. I remember some time back, I was in Ghana, and I was talking to the bureaucrat there, and he said, in our country, everything is a very private sector driven phenomenon. So how do we do it? So I think it will be a very country to country phenomenon. But in my limited experience, most ID systems, and I think Kristen would agree to it, we met last week in Bangalore, most ID systems are part of either the home ministry or the IT ministries, etc. So you will see a lot of identity function, which is so central to any targeted beneficiary delivery. It is essentially establishing your relationship with the state is done by the state. But other DPI functions can be performed by different partners. In fact, SingPass, PayNow, PromptPay, etc. These are some payment systems and other systems which are created by the private sector in conglomeration with the state. But the state’s role is at least in this case, to be a regulator, to be an observer that nobody creates disproportionate amount of monopolies, that nobody is playing, not playing by the rules. So to set the broad rules of the game, and then let the players come and come and build on the basis of what purpose does itself. 
So if it’s something which requires a high degree of trust, authenticity, et cetera, the state is the best entity to do it. If it’s something which can be created by different market players coming together, the state can be an observer or regulator. So that’s my view. Finally, on DPI and sovereignty, I think there is a very important link to be made there. My only concern is that as most countries go on the quest to build their DPIs, we should not lose sight of cross-border interoperability — interoperability of those different DPIs. So as we all go towards making our own payment systems, as we go towards making our own ID systems, et cetera, we also need to be cognizant that this on its own isn’t enough — that we are also thinking of regional blocs, that we are also thinking of cross-border interoperability, et cetera. So do not lose that. Otherwise, in the broader scheme of things, I think DPI is a big-time enabler of sovereignty. Thank you.

Rodolfo Avelino: Thank you very much, Ritul. Renata, your final answer.

Renata Mielli: Yes, thank you. Thank you very much for this interesting panel. I will start by saying that we need to see digital sovereignty as complementary to cooperation; they are not just different things. Since each country faces different realities in these digital areas, cooperation will be fundamental. Without cooperation, we are not going to achieve sovereignty. Establishing mechanisms for regional cooperation that create complementary strategies based on each country’s capabilities may be a more effective and faster path toward reducing inequalities and achieving greater autonomy for nations. I think we have to keep this in mind. Regarding the question that Luca made about connectivity in Brazil, he knows I’m profoundly and deeply critical of zero rating. But it’s important to say that in Brazil we have a huge public policy in terms of strategic government investment called PAC — how can I say PAC? — the Growth Facilitation Program. The connectivity policies are in the PAC, with 28 billion reais, something around $5 billion, to invest in building connectivity and technologies: 5G, 4G, building backhauls, backbones, school connectivity, health-system connectivity. So there is a public policy being carried out inside the Ministry of Communications. And as I see it, and as the government and my Minister of Science, Technology and Innovation see it, we cannot wait to solve the problems regarding connectivity. And I completely agree with you: Brazil doesn’t have meaningful connectivity for the whole population. But we need to start to build expertise and investments in infrastructure and in the whole economic chain of AI, because we need to start from some point. So these are two policies that need to be put in motion together. Communications is dealing with connectivity and making the investments, and we, as the Ministry of Science, Technology and Innovation, together with other ministries, are focused on how to build capabilities in terms of re-skilling, in terms of infrastructure, and in building AI applications. So that’s my point: we need to do both things together if we want to achieve some autonomy, some sovereignty, in Brazil regarding digital technology and AI. Thank you very much for the opportunity. And that’s it.

Rodolfo Avelino: Thank you to our speakers for their great contributions, and to everyone in the audience. This has been a very good workshop. We thank the IGF organizers for facilitating this valuable discussion. Thank you all.

Min Jiang

Speech speed: 137 words per minute

Speech length: 1598 words

Speech time: 699 seconds

Digital sovereignty has multiple meanings and perspectives beyond just nation-states

Explanation

Digital sovereignty is not limited to nation-states but encompasses various perspectives including supranational, network, corporate, personal, post-colonial, and common digital sovereignty. This broader conceptualization complements the multistakeholder model by highlighting underlying power issues.

Evidence

The speaker references a book she co-edited titled ‘Digital Sovereignty in the BRICS Countries’ which explores these different perspectives.

Major Discussion Point

Digital Sovereignty Concepts and Frameworks

Agreed with: Ritul Gaur, Renata Mielli

Agreed on: Digital sovereignty is multifaceted and goes beyond nation-states

Differed with: Ritul Gaur

Differed on: Role of state in digital sovereignty

Small countries need to cooperate and build alliances to achieve digital sovereignty

Explanation

Traditional notions of sovereignty based on nation-state autonomy do not account for power dynamics in the digital age. Small nations and states recognize the need to cooperate and form alliances to achieve digital sovereignty, especially in the face of big tech companies’ power.

Evidence

The speaker mentions that EU countries recognize the need to cooperate to be sovereign, and that small countries in the global South also need to build alliances.

Major Discussion Point

Digital Sovereignty Concepts and Frameworks

Open source technologies are important for AI sovereignty in developing countries

Explanation

The speaker emphasizes the importance of open source and free software for digital sovereignty, especially for developing countries. These technologies allow nations to develop their digital infrastructure independently and adapt it to their local context.

Evidence

The speaker references Ritul’s account of public digital infrastructure and the need for global South countries to draw upon open source and free software.

Major Discussion Point

AI Development and Sovereignty

Ekaterine Imedadze

Speech speed: 112 words per minute

Speech length: 1948 words

Speech time: 1038 seconds

Georgia faces challenges in developing data centers and connectivity infrastructure

Explanation

Georgia is working on enabling access to existing internet infrastructure, opening markets, and building neutral exchange points. The country is also considering factors such as energy efficiency, green energy production, and physical security for data center development.

Evidence

The speaker mentions ongoing projects to build IXPs and neutral exchange points, as well as research on energy-efficient locations for data centers in Georgia.

Major Discussion Point

Digital Infrastructure and Connectivity Challenges

Korstiaan Wapenaar

Speech speed: 139 words per minute

Speech length: 1062 words

Speech time: 457 seconds

African countries struggle with fiscal and capacity constraints for digital infrastructure

Explanation

Many African countries face significant fiscal constraints and lack of expertise, which impacts the rollout of both hard and soft digital infrastructure. This has led to underperformance in digital transformation and e-government development.

Evidence

The speaker cites the e-Government Development Index, showing that only four African countries have achieved above the global average.

Major Discussion Point

Digital Infrastructure and Connectivity Challenges

DPI enables governments to deliver services at scale and reach people in need

Explanation

Digital Public Infrastructure (DPI) allows governments to deliver services to people and organizations at scale. This is particularly important for African states that have struggled to deliver services effectively in the past.

Evidence

The speaker mentions that digital transformation of the public sector is a prerequisite for socioeconomic development in Africa.

Major Discussion Point

Role of Digital Public Infrastructure (DPI)

Agreed with: Ritul Gaur, Renata Mielli

Agreed on: Importance of Digital Public Infrastructure (DPI) for sovereignty and development

Ritul Gaur

Speech speed: 168 words per minute

Speech length: 2274 words

Speech time: 808 seconds

Digital sovereignty enables countries to exercise more control over critical digital assets

Explanation

Digital sovereignty allows countries to have more control over their critical national digital assets. This includes the ability to develop and operate their own technologies and infrastructure independently.

Evidence

The speaker cites India’s example, where 80% of digital financial transactions now go through the national payment infrastructure (UPI) instead of Visa or Mastercard.

Major Discussion Point

Digital Sovereignty Concepts and Frameworks

DPI components like digital ID and payment systems can enhance sovereignty

Explanation

Digital Public Infrastructure components such as digital identity systems and payment systems can enhance a country’s digital sovereignty. These systems allow countries to have more control over critical digital functions and reduce dependence on foreign technologies.

Evidence

The speaker mentions India’s national ID system (Aadhaar) and payment system (UPI) as examples of DPI enhancing sovereignty.

Major Discussion Point

Role of Digital Public Infrastructure (DPI)

Agreed with: Korstiaan Wapenaar, Renata Mielli

Agreed on: Importance of Digital Public Infrastructure (DPI) for sovereignty and development

The governance of DPI can vary from state-controlled to private sector-driven

Explanation

The governance of Digital Public Infrastructure can vary depending on the specific component and country context. Some DPI components, like identity systems, are often state-controlled, while others, like payment systems, may involve more private sector participation.

Evidence

The speaker contrasts India’s ID system (state-controlled) with its payment system (involving private banks but regulated by the state).

Major Discussion Point

Role of Digital Public Infrastructure (DPI)

Agreed with: Min Jiang, Renata Mielli

Agreed on: Digital sovereignty is multifaceted and goes beyond nation-states

Differed with: Min Jiang

Differed on: Role of state in digital sovereignty

DPI should be designed for cross-border interoperability

Explanation

As countries develop their own Digital Public Infrastructure, it’s important to consider cross-border interoperability. This ensures that different national systems can work together and facilitates regional cooperation.

Evidence

The speaker warns against losing sight of cross-border interoperability while countries focus on building their own DPIs.

Major Discussion Point

Role of Digital Public Infrastructure (DPI)

Renata Mielli

Speech speed: 107 words per minute

Speech length: 1651 words

Speech time: 922 seconds

Digital sovereignty should be seen as complementary to cooperation between countries

Explanation

Digital sovereignty and cooperation between countries are not mutually exclusive but complementary. Given the different realities faced by each country in the digital realm, cooperation is fundamental to achieving sovereignty.

Evidence

The speaker suggests that establishing mechanisms for regional cooperation based on each country’s capabilities may be a more effective path toward reducing inequalities and achieving greater autonomy.

Major Discussion Point

Digital Sovereignty Concepts and Frameworks

Agreed with: Min Jiang, Ritul Gaur

Agreed on: Digital sovereignty is multifaceted and goes beyond nation-states

Brazil is investing in connectivity infrastructure alongside AI development

Explanation

Brazil is implementing public policies for strategic investment in connectivity infrastructure through the Growth Facilitation Program (PAC). This includes investments in 5G, 4G, backhauls, backbones, and connectivity for schools and health systems.

Evidence

The speaker mentions a 28 billion reals (around $5 billion) investment in building connectivity technologies and infrastructure.

Major Discussion Point

Digital Infrastructure and Connectivity Challenges

Brazil is investing significantly in AI development and infrastructure

Explanation

Brazil has developed an AI plan that includes substantial investments in various aspects of AI development and infrastructure. This plan aims to build expertise and invest in the entire economic chain of AI.

Evidence

The speaker mentions a planned investment of 23 billion reais (around $4 billion) over the next four years for AI development in Brazil.

Major Discussion Point

AI Development and Sovereignty

Agreed with: Korstiaan Wapenaar, Ritul Gaur

Agreed on: Importance of Digital Public Infrastructure (DPI) for sovereignty and development

AI development must be guided by reducing inequalities from the outset

Explanation

The development of AI should be guided by the goal of reducing inequalities and addressing social issues from the very beginning. This includes considerations of accuracy, linguistic, cultural, and geographical diversity in AI development.

Major Discussion Point

AI Development and Sovereignty

Data sovereignty is central to AI development and self-determination

Explanation

Data sovereignty is crucial for countries aspiring to any degree of self-determination in AI development and implementation. Control over data is seen as a key aspect of digital sovereignty in the context of AI.

Major Discussion Point

AI Development and Sovereignty

Luca Belli

Speech speed: 162 words per minute

Speech length: 652 words

Speech time: 241 seconds

Lack of meaningful connectivity in Brazil limits access to domestic AI technologies

Explanation

Despite Brazil’s investments in AI development, the lack of meaningful connectivity for a large portion of the population limits access to domestic AI technologies. This situation may lead to most Brazilians only accessing foreign AI technologies through zero-rated apps.

Evidence

The speaker cites a study showing that 78% of the population in Brazil does not have meaningful connectivity, with many relying on zero-rating plans that primarily include Meta’s family of apps.

Major Discussion Point

Digital Infrastructure and Connectivity Challenges

Agreements

Agreement Points

Digital sovereignty is multifaceted and goes beyond nation-states

Speakers: Min Jiang, Ritul Gaur, Renata Mielli

Arguments:

– Digital sovereignty has multiple meanings and perspectives beyond just nation-states

– The governance of DPI can vary from state-controlled to private sector-driven

– Digital sovereignty should be seen as complementary to cooperation between countries

Speakers agree that digital sovereignty is a complex concept that involves various actors and perspectives, not just nation-states. It can include different governance models and requires cooperation between countries.

Importance of Digital Public Infrastructure (DPI) for sovereignty and development

Speakers: Korstiaan Wapenaar, Ritul Gaur, Renata Mielli

Arguments:

– DPI enables governments to deliver services at scale and reach people in need

– DPI components like digital ID and payment systems can enhance sovereignty

– Brazil is investing significantly in AI development and infrastructure

Speakers emphasize the crucial role of Digital Public Infrastructure in enhancing digital sovereignty and enabling governments to deliver services effectively, particularly in developing countries.

Similar Viewpoints

Developing countries and smaller nations face significant challenges in achieving digital sovereignty and building digital infrastructure, often requiring cooperation and support.

Speakers: Min Jiang, Ekaterine Imedadze, Korstiaan Wapenaar

Arguments:

– Small countries need to cooperate and build alliances to achieve digital sovereignty

– Georgia faces challenges in developing data centers and connectivity infrastructure

– African countries struggle with fiscal and capacity constraints for digital infrastructure

Unexpected Consensus

Importance of open technologies and interoperability

Speakers: Min Jiang, Ritul Gaur

Arguments:

– Open source technologies are important for AI sovereignty in developing countries

– DPI should be designed for cross-border interoperability

Despite coming from different perspectives, both speakers emphasize the importance of open technologies and interoperability in achieving digital sovereignty, which is somewhat unexpected given the potential tension between sovereignty and openness.

Overall Assessment

Summary

The speakers generally agree on the multifaceted nature of digital sovereignty, the importance of Digital Public Infrastructure, and the need for cooperation and open technologies in achieving sovereignty. They also recognize the challenges faced by developing countries in building digital infrastructure.

Consensus level

There is a moderate to high level of consensus among the speakers on the main themes. This suggests a growing understanding of the complexities of digital sovereignty and the need for nuanced approaches that balance national interests with international cooperation and open technologies. The implications of this consensus could lead to more collaborative efforts in developing digital infrastructure and policies that support both sovereignty and global interoperability.

Differences

Different Viewpoints

Role of state in digital sovereignty

Speakers: Min Jiang, Ritul Gaur

Arguments:

– Digital sovereignty has multiple meanings and perspectives beyond just nation-states

– The governance of DPI can vary from state-controlled to private sector-driven

Min Jiang emphasizes a broader conceptualization of digital sovereignty beyond nation-states, while Ritul Gaur focuses more on the varying degrees of state control in DPI governance.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the role of the state in digital sovereignty, the balance between national autonomy and international cooperation, and the effectiveness of current connectivity initiatives in developing countries.

Difference level

The level of disagreement among speakers is moderate. While there are some differing perspectives on specific aspects of digital sovereignty and infrastructure development, there is a general consensus on the importance of these issues for national development and the need for some form of cooperation. These differences highlight the complexity of implementing digital sovereignty in practice, especially for developing countries balancing national interests with global technological trends.

Partial Agreements

Both speakers acknowledge the importance of connectivity for AI development in Brazil, but disagree on the effectiveness of current approaches. Mielli emphasizes ongoing investments, while Belli argues that these efforts are insufficient to address the lack of meaningful connectivity.

Speakers: Renata Mielli, Luca Belli

Arguments:

– Brazil is investing in connectivity infrastructure alongside AI development

– Lack of meaningful connectivity in Brazil limits access to domestic AI technologies


Takeaways

Key Takeaways

Digital sovereignty has multiple meanings and perspectives beyond just nation-states, including supranational, corporate, personal, and common digital sovereignty.

Digital infrastructure and meaningful connectivity remain major challenges for many countries, especially in the Global South.

Digital Public Infrastructure (DPI) is seen as an important tool for enhancing digital sovereignty and delivering services at scale.

AI development and data sovereignty are increasingly important for countries seeking technological autonomy.

Cooperation between countries and open technologies are crucial for achieving digital sovereignty, especially for smaller nations.

Resolutions and Action Items

Brazil plans to invest 23 billion reais (around $4 billion) in AI development over the next four years

Georgia is working on enabling access to existing internet infrastructure and building neutral exchange points as steps towards data center development

Unresolved Issues

How to balance national digital sovereignty efforts with the need for cross-border interoperability

How to address the lack of meaningful connectivity in many countries while simultaneously investing in advanced technologies like AI

The exact definition and scope of Digital Public Infrastructure (DPI) and its governance models

How to ensure proper stability and productive management of regional digital infrastructure projects

Suggested Compromises

Adopting a broader framework of digital sovereignty that includes multiple perspectives beyond just nation-states

Using open source technologies and open standards to build critical national digital infrastructure

Balancing state control and private sector involvement in DPI development based on the specific function and country context

Pursuing digital sovereignty efforts alongside regional cooperation and alliance-building

Thought Provoking Comments

Digital sovereignty as broadly conceptualized complements the multistakeholder model by foregrounding the underlying power issues that have prevented multistakeholderism to be more widely adopted.

Speaker: Min Jiang

Reason

This comment reframes digital sovereignty not as opposed to multistakeholderism, but as complementary to it. It suggests that digital sovereignty can address power imbalances that have limited multistakeholder approaches.

Impact

This set the tone for considering digital sovereignty as a nuanced concept that goes beyond just state control, influencing subsequent speakers to discuss various dimensions and stakeholders involved in digital sovereignty.

DPI allows you to have shared means to many ends because essentially it’s laying out the most common drill, but then allowing others to build a market economy around it, allowing others to use that ID to do a KYC to then provide services, allowing others to build that payment service app to then offer other things.

Speaker: Ritul Gaur

Reason

This comment provides a clear explanation of how Digital Public Infrastructure (DPI) can enable both public and private sector innovation, highlighting its role in fostering a digital ecosystem.

Impact

It shifted the discussion towards considering DPI as a foundation for broader digital development, rather than just a government-controlled system. This influenced later comments on the role of private sector and community in DPI.

78% of the population in Brazil does not have meaningful connectivity. It means that only 22% are meaningfully connected. What does it mean concretely? I think the Brazilian government is putting a lot of money. Actually, we are analyzing this primarily in software and data with the AI plan. But even if we have the best possible language models trained with Brazilian data, if all Brazilians only access meta AI through WhatsApp that is zero rated, whereas no one else will be able to access the new fantastic domestic models created thanks to the plan, that is not the very best way of directing the public investment.

Speaker: Luca Belli

Reason

This comment highlights a critical gap between infrastructure development and actual access, challenging the effectiveness of current digital sovereignty efforts.

Impact

It prompted a response from the Brazilian representative about ongoing connectivity efforts and sparked a discussion about the need to address both infrastructure and access simultaneously in digital sovereignty initiatives.

We need to see digital sovereignty as complementary with cooperation. It’s not just different things. Since each country faces different realities in these areas, in digital areas, cooperation will be fundamental. Without cooperation, we are not going to achieve sovereignty.

Speaker: Renata Mielli

Reason

This comment synthesizes the discussion by emphasizing that sovereignty and cooperation are not mutually exclusive, but rather interdependent in the digital realm.

Impact

It provided a concluding perspective that tied together various threads of the discussion, emphasizing the need for both national efforts and international cooperation in achieving digital sovereignty.

Overall Assessment

These key comments shaped the discussion by expanding the concept of digital sovereignty beyond state control to include multistakeholder approaches, the role of digital public infrastructure, the importance of meaningful connectivity, and the need for international cooperation. They challenged simplistic notions of sovereignty and highlighted the complex interplay between national interests, private sector involvement, and global collaboration in the digital realm. The discussion evolved from theoretical concepts to practical challenges and potential solutions, emphasizing the need for nuanced, context-specific approaches to digital sovereignty that balance national autonomy with international cooperation and equitable access.

Follow-up Questions

How can meaningful connectivity be improved to ensure wider access to national AI technologies?

Speaker: Luca Belli

Explanation

This is important because without meaningful connectivity, investments in national AI technologies may not reach the majority of the population, who may only have access to foreign technologies through zero-rating plans.

How are data center initiatives in Georgia being integrated with energy and water infrastructure?

Speaker: Jose Renato

Explanation

This is important for understanding the holistic approach to infrastructure development and its environmental impact.

What are the details of the Peace Cable that Meta is investing in?

Speaker: Azeem

Explanation

This information is relevant to understanding private sector investments in digital infrastructure.

Is there a widely accepted understanding of what Digital Public Infrastructure (DPI) is?

Speaker: Van

Explanation

A clear definition is important for consistent policy-making and implementation across different countries.

How does regime type influence collaboration on digital sovereignty issues?

Speaker: Unnamed participant

Explanation

Understanding this could provide insights into international cooperation patterns in digital governance.

How can digital sovereignty on infrastructure with regional impact be used as a weapon against other countries, and how can this be prevented?

Speaker: Audience member

Explanation

This is important for understanding potential geopolitical implications of digital infrastructure development.

How can proper stability and productive management of regional digital infrastructure projects be ensured, given the challenges of digital sovereignty?

Speaker: Audience member

Explanation

This is crucial for successful implementation of cross-border digital infrastructure initiatives.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.