Main Session 4: Looking back, moving forward – how to continue to empower the IGF’s role in Internet Governance

Session at a Glance

Summary

This discussion focused on the role and future of the Internet Governance Forum (IGF) within the context of the World Summit on the Information Society (WSIS) Plus 20 review and the Global Digital Compact (GDC) implementation. Participants emphasized the IGF’s unique position as a multistakeholder platform for inclusive dialogue on internet governance issues. They highlighted its contributions over the past 19 years, including fostering global awareness of critical digital issues, developing intersessional work, and nurturing national and regional IGFs.

Key points of discussion included the need for the IGF to evolve and adapt to new challenges, produce more tangible outcomes, and enhance its inclusivity, particularly for underrepresented regions and communities. Participants stressed the importance of strengthening partnerships with UN agencies, the private sector, and other stakeholders. The role of national and regional IGFs in localizing internet governance discussions and driving grassroots engagement was emphasized.

Many speakers advocated for making the IGF a permanent institution within the UN system, with adequate funding and resources. They also called for improving the IGF’s ability to communicate its outcomes to relevant policymaking spaces and interfacing more effectively with governments and intergovernmental processes.

The discussion highlighted the need for the IGF to balance innovation with inclusivity and privacy concerns, address the digital divide, and focus on emerging technologies like AI. Participants agreed that the IGF should play a crucial role in implementing the GDC and contributing to the WSIS Plus 20 review process. The overall consensus was that the IGF remains a vital platform for shaping the future of internet governance, but it must continue to evolve to meet new challenges and increase its impact.

Key points

Major discussion points:

– The role of the IGF within the WSIS framework and how to enhance it

– Institutional improvements needed for the IGF, including making it permanent

– The importance of the multi-stakeholder model and inclusivity in internet governance

– The need for more tangible outcomes and actionable recommendations from the IGF

– The critical role of national and regional IGF initiatives (NRIs)

The overall purpose of this discussion was to reflect on the IGF’s contributions over the past 19 years and explore ways to strengthen its role and impact as it approaches its 20th anniversary and the WSIS+20 review. Participants aimed to identify priorities for improving the IGF’s institutional structure and its place within the broader internet governance ecosystem.

The tone of the discussion was largely constructive and forward-looking. There was a sense of pride in the IGF’s accomplishments, but also recognition of the need for evolution and improvement. The conversation became more urgent and action-oriented as it progressed, with many participants emphasizing the need for concrete steps to enhance the IGF’s relevance and effectiveness. Overall, the tone reflected a shared commitment to the IGF’s mission and a desire to see it adapt and thrive in the face of new challenges.

Speakers

– Carol Roach: MAG Chair for IGF 2024

– Gbenga Sesan: Executive Director of Paradigm Initiative, co-moderator

– Vint Cerf: Internet pioneer

– Christine Arida: Board Member of the Strategic Advisory to the Executive President, National Telecom Regulatory Authority of Egypt

– Timea Suto: Global Digital Policy Lead for the International Chamber of Commerce

– Valeria Betancourt: Internet Governance Lead, Association for Progressive Communications (APC)

– Kurtis Lindquist: President and CEO, Internet Corporation for Assigned Names and Numbers (ICANN)

– Jorge Cancio: Representative from Swiss government

– Nigel Hickson: Works for the Department of Science, Innovation, and Technology on Internet Governance issues

– Bertrand de La Chapelle: Executive director of the Internet and Jurisdiction Policy Network

– Juan Alfonso Fernández González: From the Ministry of Communications of Cuba

Additional speakers:

– Nthati Moorosi: Minister of ICT Science and Innovation from Lesotho

– Manal Abdel Samad: Public policy advisor from Lebanon

– Khaled Fattah: Expert in cyber security

– Israel Rosas: From the Internet Society

– Annaliese Williams: Part of the technical community, involved in Australia’s national IGF

– Nnenna Nwakanma: From “the internet”

– Christine Amesson: From the Ministry of Economy and Finance in Benin

Full session report

The Internet Governance Forum (IGF) Discussion: Reflecting on the Past and Shaping the Future

This discussion focused on the role and future of the Internet Governance Forum (IGF) within the context of the World Summit on the Information Society (WSIS) Plus 20 review and the Global Digital Compact (GDC) implementation. Participants from various sectors and regions reflected on the IGF’s contributions over its 19-year history and explored ways to strengthen its impact and relevance moving forward.

1. The Role and Contributions of the IGF

Speakers unanimously recognised the IGF’s unique position as a multistakeholder platform for inclusive dialogue on internet governance issues. Timea Suto, representing the International Chamber of Commerce, highlighted the IGF’s role in fostering global awareness of critical digital issues and developing a vibrant intersessional work ecosystem. Valeria Betancourt from the Association for Progressive Communications emphasised the IGF’s function as a platform for debate on public policy issues across WSIS action lines, allowing different stakeholders to share challenges and solutions.

Kurtis Lindquist, President and CEO of ICANN, underscored the IGF’s centrality to the WSIS framework and internet governance, noting its role in shaping narratives and informing policymaking through open discussions. The IGF was praised for its ability to bring together diverse stakeholders, including governments, civil society, and the private sector, to address complex internet governance challenges.

Juan Alfonso Fernández González played a crucial role in motivating the audience, raising important questions about stakeholder representation and IGF attendance frequency. This highlighted the ongoing need to ensure diverse and consistent participation in the forum.

2. The Future of the IGF and WSIS+20 Review

As the IGF approaches its 20th anniversary, participants discussed its future role and potential improvements. There was broad agreement that the IGF should play a crucial part in implementing the Global Digital Compact (GDC) and contributing to the WSIS Plus 20 review process.

Timea Suto argued that the IGF should serve as a foundational resource for GDC implementation and maintain momentum for the WSIS+20 review. Valeria Betancourt echoed this sentiment, calling for the integration of the WSIS framework and GDC within the IGF’s work. She also stressed the need to operationalise the IGF’s vision for more impactful outcomes.

Several speakers, including Vint Cerf and Bertrand de La Chapelle, advocated for making the IGF a permanent institution within the UN system. They suggested that this would require revising the IGF’s mandate and improving its institutional structure to ensure adequate funding and resources. This proposal aimed to enhance the IGF’s stability and long-term impact.

Kurtis Lindquist emphasised the need for the IGF to adapt to remain relevant in a changing world, calling for strengthened partnerships with other UN agencies and processes. This sentiment was shared by Christine Arida, who suggested that the IGF should move to its next phase with more tangible outcomes and stronger linkages to governments.

3. Improving IGF Outcomes and Impact

A recurring theme throughout the discussion was the need for the IGF to produce more tangible outcomes and actionable recommendations. This was seen as crucial for enhancing the forum’s relevance and impact on global internet governance.

Valeria Betancourt highlighted the need to strengthen the IGF’s ability to communicate its messages to relevant policymaking spaces. Kurtis Lindquist agreed, stating that the IGF should focus on outputs that translate to actions. An audience member suggested that the IGF should generate more concrete recommendations and outcomes.

Nthati Moorosi, Minister of ICT Science and Innovation from Lesotho, proposed that the IGF should have a special forum with the private sector to address connectivity challenges. She also suggested that the IGF could track country progress on inclusivity goals, providing a more systematic approach to monitoring and evaluating the impact of IGF initiatives.

However, Vint Cerf offered a nuanced perspective, arguing that while the IGF can make strong, evidence-based recommendations, its primary strength lies in formulating problems or questions and suggesting where they should be addressed. This view highlights the ongoing debate about the IGF’s role in problem-solving versus problem identification and direction.

4. Enhancing IGF Inclusivity and Representation

Improving the IGF’s inclusivity, particularly for underrepresented regions and communities, was a key point of discussion. Kurtis Lindquist stressed the need to enhance inclusivity, especially for voices from the Global South, and to bring in more youth and marginalised communities. Carol Roach, the MAG Chair for IGF 2024, echoed this sentiment, calling for improved engagement with underserved communities.

The role of national and regional IGFs (NRIs) in localising internet governance discussions and driving grassroots engagement was emphasised. Christine Arida suggested leveraging the NRI network to shape the IGF’s renewed mandate. Israel Rosas from the Internet Society called for increasing the visibility of partnerships promoted by NRIs at national and regional levels to demonstrate the IGF’s tangible impact.

Valeria Betancourt highlighted the IGF’s unique ability to facilitate difficult conversations between stakeholders and governments. Audience members noted the IGF’s contribution to education and capacity-building through Internet Governance schools and its reach to grassroots levels and marginalised societies.

5. Balancing Innovation and Inclusivity

Participants stressed the need for the IGF to balance innovation with inclusivity and privacy concerns. The discussion touched on emerging technologies like artificial intelligence (AI) and their implications for internet governance. Speakers agreed that the IGF should play a crucial role in addressing these new challenges while ensuring that the benefits of digital innovation are accessible to all.

An interesting analogy was shared, suggesting that multistakeholder and multilateral processes need to “hold hands and dance together” rather than just shaking hands, emphasizing the need for deeper collaboration and integration.

Conclusion

The discussion revealed a strong consensus on the IGF’s importance and the need for its evolution. Participants agreed that the IGF remains a vital platform for shaping the future of internet governance, but it must continue to adapt to meet new challenges and increase its impact. Key areas for improvement include producing more tangible outcomes, enhancing inclusivity, strengthening partnerships with other stakeholders, and better integrating with other UN processes.

As the IGF approaches its 20th anniversary, the discussion highlighted the need for a clear vision and action plan for its future role. This includes working towards making the IGF a permanent operation within the UN context, developing strategies to improve engagement with underserved communities, and enhancing the IGF’s ability to produce and communicate actionable recommendations to policymaking spaces.

The creation of a compendium of the IGF’s achievements over the past 19 years was suggested as a way to showcase its impact and inform future directions. While there was broad agreement on these key issues, some questions remain unresolved, such as the specific mechanisms for improving the IGF’s tangible outcomes and impact, and how to effectively balance its role as an open forum for discussion with the need for more concrete outputs. These challenges will likely form the basis for ongoing discussions as the IGF continues to evolve and adapt to the changing landscape of global internet governance.

Session Transcript

Carol Roach: Internet Governance. I am Carol Roach, the MAG Chair for IGF 2024. So this Looking Back and Moving Forward is under the theme Improving Digital Cooperation for the Internet that we want. Over the past 20 years, the IGF has played a significant role in the Internet and the digital ecosystem. It has evolved to meet the exponential growth of the Internet and the digital technologies, leveraging a multi-stakeholder model to bring together experts, communities, and users to address innovations, opportunities, and risks. With this in mind, the session today will focus on the IGF’s role in the Global Digital Compact implementation in the context of the WSIS plus 20 review as well as enhancing the IGF presence in the WSIS architecture and IGF institutional improvements ahead of the WSIS plus 20 review process. So joining me on stage and online, we have Timea Suto, who is the Global Digital Policy Lead for the International Chamber of Commerce. We have Valeria Betancourt, Internet Governance Lead, Association for Progressive Communications, or APC. Christine Arida, Board Member of the Strategic Advisory to the Executive President, National Telecom Regulatory Authority of Egypt. And we have Kurtis Lindquist, President and CEO, Internet Corporation for Assigned Names and Numbers, ICANN. And my co-moderator is Gbenga Sesan, Executive Director of Paradigm Initiative. Welcome, everybody. So we’re going to jump straight into it and my first question goes to Timea. How did the IGF contribute to the 20-year implementation of the WSIS Action Lines, and what are the substantive contributions the IGF, mainly its reports, intersessional work, can bring to the GDC as we move into its implementation and look towards the WSIS Plus 20 review and beyond? It’s a big question, but I know you can handle that.

Timea Suto: Thank you very much, Carol. Good morning, everyone. It’s nice to see many of you here still on the last day of the IGF and with us in this session, and thank you to everyone for inviting me and ICC to share a few words. For those of you who don’t know us, the International Chamber of Commerce is a global business organization. We represent a network of companies of all sizes and sectors of over 45 million in number in more than 170 countries around the world, so we aim to bring their voice into their conversations here today. About the IGF, ICC was also very much involved in the WSIS process. We were the interlocutor for WSIS back in Geneva and Tunis phases 20 years ago almost, and since then we have an initiative that’s called Business Action to support the information society that looks at the outcomes of the WSIS process and tries to bring the business voices into this conversation. So it’s not unusual that we are always at the IGF and we’re trying to bring businesses into the IGF to support, of course, these meetings at the annual level, but also the work that happens intersessionally. So what has the IGF done in the past 20 years that is relevant to WSIS and the GDC? First of all, it has been instrumental in fostering inclusive multi-stakeholder dialogue on internet governance, of course, bringing together governments, businesses, civil society, academia, and the technical community, but also on a number of issues that are related to the internet. I like to say the technologies that either enable or are enabled by the internet. So the IGF has had the conversations in this multi-stakeholder setting in all these areas, and it has established itself as the premier global platform for open and constructive discourse on these issues. So for me, that is one element, the multi-stakeholder conversation and convening power. 
Another one of the IGF’s major contributions is that it really builds global awareness of the critical digital issues, whether we’re talking about access to digital technologies or inclusion or cybersecurity or emerging technologies. And I think that’s really important. It encourages dialogue, shared understanding, and collaboration, which I think is a great contribution of the IGF. Over the years, the IGF has developed, as you said, vibrant intersessional work and an ecosystem through initiatives like best practice forums, the dynamic coalitions, the policy networks that have allowed these stakeholders from all around the world to coalesce and collaborate on specific issues on a year-round basis. And they have produced a lot of interesting reports and outputs on issues like cybersecurity, meaningful connectivity, AI, internet fragmentation, and many others. So while these outputs, yes, they are not binding, they provide valuable insights and practical guidance sometimes for policymakers and practitioners, and that is one more element that I think that we need to highlight from the past 20 years’ contributions of the IGF. And then, last but not least, it has fostered a network and the growth of the multi-stakeholder idea through the national and regional IGFs in local and regional communities. And this really helps localize the internet governance discussions that we have at this international level and really bring diverse stakeholders into the conversation to ensure that the regional priorities and voices are fed into the global debates, while also driving grassroots engagement on the digital issues that are pressing for a particular time and place.
So these developments all together have been instrumental in identifying some actionable solutions and fostering alignment across sectors and regions, and therefore enabling a more cohesive approach to implementing the WSIS action lines, and I think should be leveraged to channel the voice of the multi-stakeholder community in the review process of the WSIS. But, of course, also very relevant for the GDC. As we consider the implementation of the Global Digital Compact, the IGF can really serve as a foundational resource, in my opinion, that capitalizes on this unique convening power that the IGF has to share insights and expertise that reflect the realities and aspirations of the multi-stakeholder community and to exchange best practices and forge partnerships to foster and further the GDC implementation. It really is that unique forum that brings us all together where we can discuss, okay, what have you done on GDC, what am I doing, how do we move that forward and make sure that that voice of the community is really part of the discussions around how to implement the GDC as well. And last but not least, I think the IGF’s convening power can also help maintain momentum for these, both the WSIS Plus 20 review and the GDC. We tend to think of them as a moment in time, but I think the IGF has the power to carry that momentum further for the years to come and offer space for dialogue and monitoring and accountability on what we’ve committed ourselves to, whether that’s the WSIS Action Lines, whether that’s the GDC, or perhaps bringing the two together. So I’ll leave it at that, and turn it back to you, Carol.

Carol Roach: Thank you, Timea. I agree with you that we have such a community here, and we’ve already been in the space of discussions, and adding another discussion with regards to the GDC is something we’re able to handle. You’re quite right, thank you. So I’m now going to hand the same question over to Valeria. Thank you.

Valeria Betancourt: Absolutely. Thank you so much, Carol. Good morning, everyone. Very happy to see you all here, and also thank you for the opportunity to share the perspective of the Association for Progressive Communications, a global network of civil society groups working with the internet and other information and communication technologies to improve people’s lives. The Internet Governance Forum has been for years a platform for debate and dialogue on public policy issues that cut across multiple action lines of the WSIS. It complements both the WSIS forum’s role in monitoring progress regarding the WSIS action lines and also the CSTD, which serves as a forum for intergovernmental discussions. The Internet is at the heart of the WSIS vision of a people-centered, inclusive information society. Its development, its availability, how the Internet is used, how affordable and meaningful accessing it is, what languages one can use or read on the Internet, its potential for good but also its potential for harm. All of these Internet governance issues also relate to the WSIS action lines. And for the past 19 years, the IGF has given people from different parts of the world, with different perspectives and from different sectors and groups, the opportunity to share challenges and solutions. The IGF Best Practice Forum on Gender and Access, for instance, became the first global multi-stakeholder space for exploring gender-based online violence. The recommendations that emerged from this Best Practice Forum continue to inform efforts to make the Internet a space that is safe and secure for women and gender minorities. Another example, the IGF Dynamic Coalition on Community Connectivity. It marked the first steps to recognition of the need to diversify solutions, models, markets and partnerships for empowering the unconnected to connect themselves.
It has been mentioned already by Timea, but the regional and national IGFs are spaces for different stakeholders and for civil society to sit alongside governments and have difficult conversations about media freedom, human rights, accountability and digital justice. In the case of the LAC IGF, it has allowed us to build synergies with the e-LAC process, the regional digital agenda that resulted from the WSIS. So, as all aspects of society have become permeated by digitalisation, the issues that are addressed, and the way in which they are addressed, have evolved, and, obviously, the Global Digital Compact updates the framework for shaping our digital future and processes and proposes a set of shared principles and commitments for a more collaborative digital future. In that sense, it complements well the WSIS framework, and both should be integrated. The IGF is a central space to discuss ways to respond to gaps in the implementation of the WSIS goals, and tackle new issues addressed by the GDC, and in that way to contribute to reducing the fragmentation that permeates the global governance ecosystem of digital technologies. As the IGF Vision Beyond 2025 document that the IGF Strategy Working Group has put together states, there is no dichotomy between internet governance and digital governance, and the IGF, within its existing mandate, is a suitable platform for addressing the challenges and opportunities presented by the new and emerging digital technologies that shape our information society, and the related digital policy processes, and, at the same time, for continuing to analyse and propose solutions for the never-ending goals of the WSIS.
I would like to call on the MAG and the IGF community to implement the actions proposed by the IGF Strategy Working Group in order to operationalize the vision for a more impactful IGF towards a people-centered and planet-centric digital policy, and also for democratic, inclusive, accountable and transparent governance of digital technologies. Last but not least, the IGF, we should recall, it also facilitated the IANA transition and also the NetMundial process emerged from the IGF. So there is a lot to build on the IGF, and the IGF is still playing a critical role to shape the digital future that will serve people’s lives in the best possible way.

Carol Roach: Thank you, Valeria. I do believe and agree with you that the IGF is well-positioned to look at the opportunities and the innovations as well as the threats and the risks, and we’re good at finding gaps and addressing them. So thank you very much. We are poised and ready to be impactful for the GDC as well as our communities. So we’re now going to move on and end this session with the audience. We want to make it very interactive. I just want to remind the audience that you have a two-minute limit for your intervention. Please, this is not the time for advertising. Let’s just try to stick with the topics. I’m going to ask Jorge Cancio from Ofcom to start us off with his thoughts. Jorge, you want to take the mic?

Jorge Cancio: Oh, you’re on this side. Welcome. So, hello, do you hear me okay? Good. I don’t see you very well, but I hope you hear me. I’m hearing myself, so it must be working. So thanks for giving me this opportunity. I’m Jorge Cancio from the Swiss government. I was, I have to confess, I was in Tunis almost 20 years ago. When we agreed on the WSIS second phase, the Tunis Agenda for the Information Society. So I was part of that process. And if we look back, it’s not only 20 year old documents that we agreed on. We agreed on a vision of a people-centered, development-focused, human-based, human rights-based information society, what is a digital society. And we also agreed on means to make that happen with the so-called action lines, where the different agencies from the UN and also many other stakeholders would work towards that vision. We also agreed on some governance structures, the WSIS action lines would be discussed every year in what became the WSIS Forum. Member states would look into the progress in the Commission for Science and Technology for Development. And we would have the most creative kid on the block, the IGF, to look into emerging topics and have digital governance discussions. So as everything that becomes reality, it hasn’t been perfect. Many things have been imperfect, but we have achieved a lot. Many hundreds or thousands of millions of dollars have been spent in implementing those action lines. Many thousands of hours have been spent in discussing, in exchanging, in networking. So there’s something that became reality from the ideas that were discussed in 2003 and 2005. Now we have the challenge of a newer kid on the block, the global digital compact, which brings some important impulses on data governance, AI governance, on human rights online, on also what connectivity means today. And we have to look how do we implement those new goals. 
And we think that if we look at the wider WSIS family, so the architecture I mentioned before, where the IGF is a part of it, we have good means to do that, but we need a serious discussion amongst all of ourselves, being able to find a positive sum game in these discussions during the so-called WSIS plus 20 review, to see how this architecture, including the IGF, can deliver on a fairer digital present and future that addresses the needs of everybody, be it on the global north or the global south. So I think we have a good basis. We have to try to strive for consensus, for common grounds, not being too much attached to flags or to words with many connotations, but really work towards an inclusive multi-stakeholder approach. So I hope this discussion helps in this, as seven years ago, the IGF helped in bringing about the beginning of the global digital compact process by recommending that the Secretary General should start discussion at the highest international level on the future of digital cooperation. Thank you.

Carol Roach: Thank you, Jorge. The floor is open for questions. You can just make your way down. Introduce yourself, please. Go ahead.

Audience: I’m Kim from the Telecommunication Directorate of Cambodia. Just a quick question. Connect from the previous panel to this panel. My question to all the panelists is how can we connect from the Internet fragmentation to the Internet governance? That’s my short question. Thank you.

Carol Roach: Any of the panelists want to take that? Okay. Sorry. Can you repeat the question again, please? Just give us a brief introduction of what the previous panel was, so we can make the connection.

Audience: Okay. In the previous panels, we heard a lot about the Internet fragmentation, the cause and effects, and we did not really hear much about how to fix it. So in this panel, it’s about Internet Governance. So what I want to hear is how can the panellists in this discussion connect from the Internet fragmentation to the Internet Governance? So it seems like how can we fix the Internet fragmentation by using the Internet Governance? That’s how I can emphasize my question. I hope that all panellists are clear with what I want to ask. Is it okay with that or not really?

Timea Suto: Thank you, sir. Yes, the question is clear.

Audience: Excellent.

Timea Suto: Thank you very much for repeating that and connecting us to the previous panel. And this is what IGF should be about, by the way, having conversations across panels as well, not just in sessions that we are sitting in. So Internet fragmentation, the way I see it, how it connects to Internet Governance: Internet Governance discussions here and generally policy discussions on digital issues should be the way through which we actually make sure that we collaborate to avoid a fragmented Internet. We are all here talking about an open, interconnected, interoperable, resilient Internet. If we approach these issues, the issues of the Internet or the issues on the Internet, in silos or we try to fix them in our own bubbles, in our national context or regional context or in our own stakeholder groups, we are already creating fragmentation. The Internet is something that transcends borders and that is its beauty and I think the greatest challenge. We cannot address what we want the Internet to look like or how the Internet should work without bringing the same idea of working like the Internet into the Governance conversations. My friend Bertrand says that it was a quote from Kofi Annan: we need to be as innovative as the people who invented the Internet if we want to have the conversations about the Internet. So we need to be able, in the Internet governance conversation, the digital policy conversations, to bring ideas, laws, regulations, whatever might be necessary, that connect, rather than break up the various parts that we want to fix.

Valeria Betancourt: Yes, in addition, I think from the governance point of view, it is very important to avoid the proliferation of processes, and to facilitate the integration of processes, so they can work together in a synergetic way, cross-fertilizing not only policy dialogue, but also policy development towards common ends. I think the challenges that we have to face really demand all the stakeholders to work around common goals, and the proliferation of processes, siloing of issues, does not contribute to precisely providing all the necessary solutions and responses that have to be consistent in terms of the different levels that we have to address. In that sense, I think the IGF is a very unique space to precisely consolidate the conversation, the analysis to bring together, and to also identify specific key messages and recommendations that could go to the different decision-making spaces, including at national level. So in that sense, I think fragmentation could be avoided also by integrating the different processes that are relevant for the governance and the policies of the Internet and digital technologies.

Carol Roach: Yes, I think also it’s one of the reasons why the IGF found it necessary to bring, to add the parliamentary and the judiciary track so that we can take that from discussion into policy, bringing the policymakers, the decision-makers into the conversation so that it’s just not talk, that we could move it forward in this collaborative space. Thank you. Vint?

Vint Cerf: Thank you very much, Carol. So far, this discussion has been extremely helpful. I just wanted to make a couple of observations that drive my conclusions. The first thing is that we’ve all seen the Internet evolve over time, and today it does things that we weren’t doing 30 years ago or 20 years ago. The important point is that the IGF needs to evolve as well, and it needs a permanent presence, because the Internet is not a static thing. The IGF has to adapt to the Internet’s evolution, and so we need an IGF there at all times. It should become a permanent part of the DESA operation. It should have mainline funding at roughly $3 million a year, and we need to activate our ability to assess the state of the Internet collectively. There are examples of metrics of the condition of the Internet; UNESCO has its Internet universality indicators, for example. We might want to work together to assemble a picture of the Internet on an annual basis in order to present policymakers with a sense for where the Internet is going and where there is need for additional governance activities. I would love to see us harness even more the NRIs and the annual IGF meetings and the intersessional activity in order to assess the implementation of the GDC. So we have a big opportunity, I think, as the IGF, to contribute to the Internet’s evolution and continued utility. And as many of the speakers have pointed out, the Internet’s fragmentation is in some ways the antithesis of its original design, which was to connect everything together, and so we have to be continually concerned about that fragmentation and about abuse of the Internet. We need policies and implementations that will create a safer environment, one which is more productive for everyone. So thanks so much for letting me intervene, Carol. I’m eager to hear what others have to say.

Carol Roach: Somebody online needs a mic open.

Audience: Hello, everybody. My name is Muriel Alapini, and I’m from Benin. Please allow me to speak French, because I knew I would be a little emotional during my speech, so I will go with my native language. Thank you. Good morning, everyone. This is Muriel Alapini from Benin. I am intervening as a user of the Internet, and I would like to thank each and every person present here physically and remotely for allowing us to have the Internet and to do whatever we are doing today. In Africa, the Internet is improving, and today, thanks to the IGF, you have no idea how many people are learning every day to use the Internet. This is allowing them to change their lives and to have a new perspective on the world. The number of people who are participating in associations related to Internet governance, et cetera, is huge. So we have reached a culminating point today, where I would like to ask all of you, decision makers and representatives of the different stakeholders in the IGF, to really look at the human side, the human beings who, through the Internet, are thinking about the role of associations. What are these associations doing? What is the IGF doing? Governments in Africa are interested in the Internet; this was not the case years ago. You have students, you have children, and today everyone is involved in this collaborative work. Therefore, I would like to call upon you: beside discussions and diplomacy and words, please take into consideration that humanity today depends on this tool and on the work done by the associations in the field, which are bringing along a new layer of the population. And this new layer of the population is interested in these issues. Fragmentation and division should not exist. Despite everything, we have people who are interested in the Internet and all the relevant issues. Of course, nothing is perfect, and we need action. We need tangible actions to continue to work with you.
And we hope to improve. And we hope that you will be proud of us, because there will be new people contributing to your work. So thank you so much for your attention.

Carol Roach: Thank you. Timea or Valeria, do you want to add to the comments? No? Thank you very much for your observations. The national and regional IGFs are doing tremendous work on the ground, and that is why we can see a lot of movement and improvements, as well as from the DCs, the dynamic coalitions. We have a question on the floor.

Audience: I’m going to speak in Arabic. In Arabic: as the ambassador of goodwill and peace in Yemen, we are suffering from the worst humanitarian crisis, and Internet blocking has affected the role of women. So what are the right procedures by which we can use the Internet to help set free the detained women? And why do the international organizations not enhance Internet connectivity in Yemen, which suffers from a lack of information and of Internet governance? And how can we protect our data and our services in the Internet space?

Carol Roach: Very good question. Valeria? Timea? Vint, any input?

Vint Cerf: I think the question, if I’ve understood the question correctly, how do we deal with the inequities that appear, especially in the Internet environment for various groups, women for example, children and others? As you all know, our purpose in the Internet is to be an inclusive and supporting environment where everyone has useful access. The only thing that we can do as we see these inequities happening is to make recommendations for more inclusive capacity, more inclusive practices. Perhaps the best way we can do that is to give examples where these problems have been addressed successfully and to explain what went into those solutions. Some of these problems are not technical. They’re social, they’re economic, they’re cultural. And this is where we have to understand that not all of the solutions are going to admit of a technical response, which makes it harder to address. But recognizing that these solutions require a variety of different responses across social, technical, and economic spheres, I think is an important first step.

Carol Roach: Thanks, Vint. A question from the floor.

Audience: Thank you very much. My name is Martina Legal Malakova. I’m from Slovakia. I’m a MAG member and I represent the private sector. We always see the same countries active in the digital or green transition. My question is: what is the action plan so that other countries will also be active in the IGF and in implementing the GDC, so that we achieve a digital future that is human- and planet-centric? Thank you.

Carol Roach: Okay. Sorry, Martina, can we have the question again? Sorry.

Audience: Sorry. So my question is: what is our action plan so that other countries will be much more active in the IGF, and so that we ensure they will be active in implementing the GDC, so that we reach our human- and planet-centric digital future?

Valeria Betancourt: Just a brief response to that. As I was saying, I think the vision that the IGF is crafting for the future of the IGF also includes a set of suggested actions for the IGF community, to shed some light on how to refresh the role of the different stakeholders towards precisely that type of digital future. So I could call again on the IGF community and the MAG to put into action some of those recommendations, which I think will put us on the right track towards integrating processes and tackling the challenges that we have.

Kurtis Lindquist: And I think also we need to work closer with the community that the IGF has built through the national and regional networks, and also, just as individuals, all of us should try to connect the conversations that we have here at the IGF at the international level to the realities that matter in those countries and to the stakeholders in those countries, so it doesn’t just become an ivory tower of IGF conversations, but they are actually connected to those realities. Not everything will be relevant, or relevant in the same way, to everybody, but I think we need to do a better job of translating what our work here means to those who are working on the ground. We cannot see the right side of the floor from up here on the stage, but I understand there are people on the right side, so if I can maybe ask people to use the left side, I know it’s a bit of a trek, but then the moderator can see you; otherwise we can’t see you because of this podium on the side of the stage. So I’m sorry, I just realized that somebody was there. Bruna and Bertrand are on that side. Maybe use the left side, please, because then we can see you. I’m sorry.

Carol Roach: I just want to interject. I think from the previous speaker and Martina, what they’re asking us is to improve our outreach and, as Timea says, to come up with an action plan to make sure that the outcomes that we have here get some kind of traction within the countries that need it. So I think a better outreach program is required. I don’t know who got to the stage first. Okay, everybody’s moved to this side. So go ahead, speaker on the floor. Go ahead on the floor. Mic on the floor, please.

Audience: Thank you very much, chair, for giving me the floor. My name is Mary Uduba. I’m from Nigeria. I coordinate the West Africa Internet Governance Forum. I’m part of the Africa Internet Governance Forum and have been part of the WSIS since 2005 in Tunis. The IGF has progressed. We have seen participants from government and business, I mean the private sector, and now we are including the parliamentarians and the judiciary. My question is: when and where will the multi-stakeholder process and the multilateral process have a handshake? Will it be the implementation of the GDC? We need that handshake. We need it to be smooth enough for us to speak from the same angle and read from the same perspective. The same pitch. Vint, you can answer this question. Thank you.

Vint Cerf: I’m sorry that I was distracted taking notes and I’m not sure that I have the, if I was being asked to answer the question, I think I need a little summary of what the question was.

Audience: Can I say it again?

Carol Roach: Yes, go ahead.

Audience: I’m asking when and where will the multi-stakeholder process, which the IGF is, and the multilateral process, which the governments are in, shake hands, have a handshake, smoothing things out? And will the GDC implementation help to do this? That’s what I’m asking.

Vint Cerf: Yeah, that’s a wonderful formulation. Thank you for the summary and for the question. You know, I think part of the answer is right here. This is where the multilateral and the multistakeholder processes should come together. Remember, multistakeholder includes government. And as Carol has pointed out, we’ve created tracks that include jurists and legislators and regulators to participate in the conversations. And I think you’ll also notice that in some of the other parts of the internet infrastructure, think about ICANN and its Governmental Advisory Committee, we have places where these kinds of connections can be made. Even in the technical community, the Internet Engineering Task Force has government representatives showing up to consider technical matters. So your point, which I’d like to emphasize, is that we should be looking for places where the governments and the rest of the multiple stakeholders can work together. And the IGF is certainly one of them, which is why I think it should become a permanent part of the UN landscape.

Carol Roach: Thank you, Vint. I’d also like to point out that the IGF works closely with the inter-parliamentary group, and so we are handshaking in that way as well. From the floor, question? Question on the floor, mic, please. Oh, now it’s working.

Audience: Thank you. Thank you, Carol. Now, I just wanted to bring us back to the previous session, because during the Policy Network on Internet Fragmentation session, we discussed specifically Article 29C of the GDC, and really asked everyone a question as to how other stakeholder groups besides governments think we could act and work together in order to implement, and not just implement, but also analyze, and maybe even establish KPIs for analyzing, the implementation of the GDC. And I think, to be honest, this is the main question in the room, right? And we still have a lot of questions in the open as to how the GDC communicates with the WSIS Plus 20 process, what’s the role of the IGF, and how we can help steer this process. And maybe, looking into next year, it would be really important for us to come to Norway, or to the Norway preparation, leaving here with a very clear picture as to how the IGF should help in these discussions, and perhaps develop our own sort of KPIs, together with the São Paulo Multistakeholder Guidelines, to analyze how both processes will go and how we can improve this space as well. That’s all. Thanks.

Carol Roach: Thank you. Very good input there. Anybody online? I see Nigel Hickson, so we will take one from the online. Go ahead, Nigel.

Nigel Hickson: Yes, good morning. I hope you can hear me clearly. Thank you. I wish I was there with you. I’m Nigel Hickson. I work for the Department for Science, Innovation and Technology on Internet governance issues, and I think this has been a really inspiring discussion. I think that the questions we had earlier on what the IGF can do for marginalized constituencies and marginalized people are just so important, because if the Internet Governance Forum and other bodies cannot work to, if you like, improve processes and build capacity in different countries, then I really do think we’ve lost an opportunity. And I think my good friend Mary from Nigeria put her finger on it earlier, in that we have to, if you like, work together as governments in these processes. Often the multi-stakeholder and multilateral words are conflated, and what really matters is not the terminology; it’s the ability of people to work together and to cooperate and to coordinate. And in that spirit, just two remarks on what the IGF can perhaps do better. I think what it can do better is not just, if you like, having topics to discuss, which is important, of course. It’s important that we discuss AI. It’s important that we discuss new and emerging technologies. But it’s also critical that we help communities in their capacity building. And there, I think the IGF could do a lot more. We could respond to the comments and the needs and the messages we get through the NRI network, and we could respond at the IGFs with a certain amount of capacity building: case studies where people in other countries have solved problems that countries still have with connection, with competition policy, and many other issues where we could perhaps do more to enable people to get online in an affordable way.
So I do think there’s more we can do, and we look forward, of course, to the IGF’s mandate being renewed, which will give it new spirit, which will give it new dynamism to work on these critical issues. So thanks for this panel, it’s really great. Carol, you’re an inspiration, as always.

Carol Roach: Thank you very much, Nigel. We’ll just take one more question in this segment, and, oh, sorry, I want to make it equal, so we’ll do Nnenna and then one more from the floor. Nnenna? Oh, sorry, I think I pronounced your name wrong. Yes, I can’t see that well from this side. Nnenna, go ahead. Remember, two minutes, please. Can everyone hear me? Yes, Nnenna, go ahead.

Audience: Thank you very much. This has been a very interesting one. I had to wake up early to follow. So, I’m going to talk about the importance of Internet Governance schools. The IG schools are very critical in building capacity on one end, but also in inducting people into the multi-stakeholder model of governance, not just on Internet governance, and that’s why I’m very happy that we have the judiciary joining us after the parliamentarians. For those of us who are faculty, we have seen that those individuals, from whatever stakeholder group they’ve come from, who have gone through the Internet Governance schools are more holistic in their approach. I think the IG schools have been around for 12 to 15 years, and I believe that there is a generation from the Internet Governance schools, graduates who are now the decision-makers, whom I would like to thank. And I would like to put it on our table as we renew the mandate, and, by the way, I applaud the support of the UK Government for this mandate, because this is where the national and regional initiatives actually come together. The schools are born from the NRIs. The schools help induct people into the multi-stakeholder framework. The schools help us to raise holistic stakeholders, and they are more engaged because they’re coming through the right way. So I just want to lay this on our tables, pay homage to all of those who’ve been here from the WSIS era, but encourage the newer generation like Muriel and the younger ones who come after us. So there is Vint’s generation, there is the Nnenna generation, there’s the Muriel generation. And I believe that as we work together to build more, the schools are critical. And as Mandela says, education is still the strongest weapon we have for development.
And it applies in the IG space: education through the IG schools may be our greatest weapon to ensure a multi-stakeholder and united global digital community. My name is Nnenna. I come from the internet, and it’s been an honor following every step of the way from here in Abidjan. Merci beaucoup.

Carol Roach: Thank you very much, Nnenna. We have one last question from the floor. We do have another segment to get into, another exciting one. So last question from the floor. Thank you.

Bertrand de La Chapelle: Thank you. And thank you for making this a real interactive session. And apologies for my Martini voice, which is a strange thing to have in Saudi Arabia. My name is Bertrand de La Chapelle. I’m the executive director of the Internet and Jurisdiction Policy Network. Like Jorge, I was in the WSIS when people who were extremely innovative, including Nitin Desai and Markus Kummer, created the innovation that is the IGF, on which we continue to run. The IGF has developed and added innovations as well, and one of the most important is the creation of the national and regional IGFs, which came up bottom-up, the way they should. The key problem we have today is that the IGF is caught in a catch-22 situation: it doesn’t have the resources to fulfill all its potential, and it cannot get the resources unless it charts a vision of what it wants to achieve and what is the potential it wants to accomplish. We have the WSIS Plus 20 review process, and what I’m going to say is not the only thing that we should discuss at the WSIS Plus 20 review. There are many other topics, and Jorge mentioned them among others. But there’s one thing that we should definitely put on the agenda as quickly as possible: beyond the prolongation or the permanent nature of the mandate or the existence of the IGF, we need to seriously discuss the revision of its mandate and the improvement of its institutional structure. I do not know how to do it. I may have ideas. Others may have ideas. We will have to discuss this during that period. The suggestion I’m making is that we collectively launch a consultation with all the national and regional IGFs, in the perspective of the IGF in Norway in June next year and beyond, to ask all of them at least this question, maybe others, but this question: how do you see the improvements, the new mandate for the IGF, how to improve its institutional structure, and most importantly, how should we discuss this?
Should we have a new WGIG? Should we have another group? Should we have something more than all the reviews that have already been made? But this question of the new mandate is more important than just the question of renewal. Thank you.

Carol Roach: Thank you. This is a very good opportunity to invite persons to go onto the IGF website and review the vision document that the working group on strategy produced and the MAG has given a nod to. It gives an outline of much of what was said just now with regards to a way forward. And with that, I’m going to hand over to Gbenga.

Gbenga Sesan: It’s all yours, sir, thank you. Thank you, Carol, and thank you, Timea. Thank you, Valeria. I think that was a good one. Now, of course, I will say thank you to Christine. Christine is with us online; thank you for being patient. But the first conversation feeds into the second, and thanks, Kurtis, for being patient. Now we will move to have a conversation on two specific areas. One is: how do we enhance the IGF within the WSIS architecture? And the other, which a few people have actually touched on as we were getting comments and questions, is the institutional improvement of the IGF. We’re talking about a renewal of the mandate, making it permanent, but what would that look like institutionally? And so I have two questions for both Christine and Kurtis, and you could decide to answer just one of the questions or both. The first is: how does the IGF currently fit into the structure of implementing the WSIS outcomes, and what role should the IGF play as we approach the WSIS Plus 20 review and beyond? That’s the first question. And the second is: taking into account all the contributions of the IGF over the last 19 years from all stakeholders, and the role of the IGF within the UN ecosystem, what do you consider the key priorities that we should focus on in strengthening the IGF in institutional and other areas? I guess we can start with Christine, whose mic is already ready to go.

Christine Arida: Thank you very much, can you hear me well?

Gbenga Sesan: Yes.

Christine Arida: Okay, great. Maybe I should start my intervention by congratulating the Kingdom of Saudi Arabia for such a successful and great IGF; even though I’m participating online, it’s obvious and clear, and I’m really proud that the IGF is again in the Arab region. Also, let me say that I very much appreciate the opportunity to join this session, and I really thank the MAG for organizing this timely discussion on the future of the IGF and how to continue to empower its role in the digital governance space. I think I need to begin, like all the other people who talked in the session before me, by acknowledging the longstanding contribution that the IGF has made to the internet governance space. It has evolved into this unique platform that we all cherish because it provides open dialogue for everyone on equal footing. But not only that, it’s also this space where emerging policy challenges have arisen in a bottom-up manner and are being analyzed and discussed in a multi-stakeholder setup, and where in-depth policy options are developed and put forward to the wider community. I think we all appreciate that very much and have been around long enough to see that as well. So it’s the time now to look at the future and address the gaps in order to advance the IGF’s mandate within the upcoming WSIS Plus 20 review. And I really like the interventions that were made by all the different participants from the floor on how we have to look at that. I particularly like Bertrand’s intervention that we really should look at the how, not just at the renewal, but at what we should be doing to continue to build on this unique role. So I would like in my intervention to focus on three main points which I believe are critical to the empowerment of the IGF moving forward. And they are, in my view, all equally important, in no particular order. So number one, and this was mentioned also by others, is the extended network of national, regional, and youth IGFs.
I think this grassroots network of NRIs, which has grown organically over the years, reflects the specific diversity of the IGF community. NRIs reach out to their multi-stakeholder communities, and they’re so well-positioned to reach out to policy-making bodies. And I say not only policy-makers, but also policy-making bodies within their respective territories and regions. And therefore, I think they have kept the IGF well-aligned with the WSIS vision of being people-centered, inclusive, and development-oriented. And they provide a source of innovation: we’ve seen so many different modalities, emerging policy topics, and different output shapes come innovatively out of NRIs and be injected into the global IGF. So I think we would be losing a lot if we do not leverage the NRIs’ extended network ahead of the WSIS Plus 20 review to shape up the renewed mandate of the IGF. I think this is where we should look, this is where we should have the discussions, and then we should have coordinated work that would come through the NRIs to identify the gaps and set directions for evolution and development, possibly at the upcoming IGF in Norway. The second point I want to tackle is the importance of creating linkages and building channels beyond the IGF community. The IGF has 20 years of policy dialogue wisdom, and it is a cornerstone of the digital governance landscape, with no doubt. But it is also one component of the bigger WSIS family; Jorge talked about that. It has its unique role, which complements the action line facilitators, the WSIS follow-up and monitoring processes, or even intergovernmental discussion venues. But with that holistic view in mind, I think we should really recognize that there is no dichotomy between Internet governance and digital governance.
The mandate of the IGF has been so broad that it has really put forward all the rapidly evolving challenges of digital policies, and this is clearly visible in its agenda over the past years, in its intersessional work, and also in the wealth of output it produces. I think what we’re missing, or what we need to do more, and this has been said, is to communicate this more effectively and efficiently beyond the IGF’s own community. And in order to do that, linkages need to be made, more institutional coordination needs to happen, and to do that we need adequate and efficient resourcing and institutionalization of the IGF, which is also imperative if the IGF is to play the role we want it to play in the implementation and follow-up of the GDC. My third and last point is about the role of the IGF in inspiring multilateral processes and helping them evolve in the spirit of multistakeholder principles. The NETmundial Plus 10 multistakeholder statement talks about the importance of including stakeholder voices in multilateral decision-making processes, and that is important. Why? Because without that we wouldn’t have effective solutions to the challenges that we face, and solutions wouldn’t be implementable. So the IGF has the beauty of both worlds. It is the innovative kid on the block, the innovative multistakeholder process within the multilateral UN system, and so it is best fitted to address the gap between discussion and action, between dialogue and recommendations. Therefore, I think the IGF should dedicate a track, not only at the annual event but also intersessionally and within NRIs, to pool, analyze, and help evolve the different flavors of multistakeholder approaches that are out there, and then echo them to inspire multilateral processes; and obviously the São Paulo Multistakeholder Guidelines would serve well in that respect.
To that end, I think we should formalize the IGF’s evolution in order to give room for ongoing and continuous innovation and experimentation within the IGF framework. We should be bold enough to harness its potential to deliver actionable and tangible outcomes, such as evidence-based policy recommendations or policy testbeds, for example. And finally, I think we should be really mindful of the growing, increasingly fragmented digital governance ecosystem, and make sure the IGF plays the role we want it to play in the coordination of this space by providing cohesion and inclusive and diverse participation of stakeholders, especially from developing countries and the global south. With that, I go back to you, and thank you very much for listening to me.

Kurtis Lindquist: Thank you. I’d also like to start by thanking the Kingdom of Saudi Arabia for a very successfully hosted IGF. You asked the question: how does the IGF fit into the current institutional structure of the WSIS outcomes? The IGF is really central to the WSIS framework and to WSIS internet governance. WSIS recognized that multistakeholder collaboration is a necessity for successfully implementing the goals of the WSIS process. In that spirit, the UN and the UN Secretary-General established the IGF as the platform to facilitate discussion of Internet-related public policy issues. In this space, the multistakeholder approach, which is very core to the WSIS outcomes, actually is brought to life and becomes concrete. For the last 19 years, the IGF has provided this platform for government, civil society, business, academia, and the technical community to engage with each other in these open discussions that we’re having here, and have had for the last 19 years, on internet governance and the future of the internet. And the real value in this is fostering this dialogue and building consensus on these topics, which then shapes the narratives and informs policymaking at national and international levels as we go forward. And as we come to WSIS Plus 20, the IGF, in order to remain relevant, really needs to change. The world around us is changing as well; it’s not a static world, and we’re not in the same world as we were 19 years ago. The IGF needs to facilitate the new discussions and amplify them to the decision makers, so that there is a real outcome taken away from the discussions here. The IGF should continue to strengthen partnerships with other UN agencies, topical and specialist agencies, and their processes. And there is really a need for coordinated global governance more than ever before, as, again, the world has evolved from 19 years ago when we came together in Tunis.
And the IGF was really already recognized in the GDC as the primary multi-stakeholder platform for discussion of internet governance issues. And we really should make sure that we don’t undermine that role and importance as expressed in the GDC. Your second question was about the substantive contribution of the IGF over 19 years, and I think there are three key areas to focus on. The first is to strengthen the outputs. The IGF has been exceptionally effective in generating rich and meaningful discussions on a wide range of topics relating to internet governance, but we need those discussions to translate into tangible and actionable outcomes that, as I said, we can take with us and bring back into policy-setting discussions. This should also help to bring impact in bridging the digital divide, as we heard in some of the questions in the previous session, and make sure it helps to connect the unconnected. The second key area is to enhance inclusivity, which was also a topic of the previous part. We really need to bring more voices into the discussions, especially from the global south, the youth, and marginalized communities. I mean, this was, I think, two or three of the questions or points made earlier, and I completely agree with that. There is a diversity of perspectives here that makes the IGF quite unique; again, it’s one of the outcomes of the multi-stakeholder process, right? That’s why we’re here. And it’s really important that those unique perspectives are shared with these underrepresented regions, that they are heard and brought into these discussions, so that we really have the broadest stakeholder base that we can have in the multi-stakeholder process.
And I think it’s also important to say that representation matters, because the governance framework we are driving toward really must reflect the realities and needs of all Internet users around the world, so that they also have confidence in the process. The last area is to adapt to new challenges. The IGF has evolved over the years. It’s not the same IGF as 18 years ago, so there has been an evolution. There are new challenges and new topics, and this has really been the platform for the multi-stakeholder process and for participation in discussions around these emerging technologies and emerging policy challenges. We need to make sure that we continue to bring those in and to adapt. If we don’t address these priorities, I think the IGF risks losing credibility, but also relevance; there is a risk that those are undermined. And last, of course, there are also geopolitical issues that could deepen global divides, and we have to be very careful to address those too and provide a platform for that, to ensure that we have, as Vint talked about, a continued unified and open Internet that’s globally reachable. That’s a very important part, as we heard in the previous session as well.

Gbenga Sesan: Thank you, Kurtis. And the interesting thing is you’ve both emphasized the multi-stakeholder nature of the IGF, the lessons we’ve learned, and how that helps with a lot of diversity. We will come to the room now. This room is full of experts, and I’m sure that we will benefit from the contributions and conversation. To get us started, I am sure we have Juan in the room to open these twin questions. First, what is the role of the IGF within the WSIS framework? And secondly, how do we improve the institutionalization of the IGF as we go into the next phase of the work?

Juan Fernandez: Good morning. I’ve been asked to motivate the audience for this second round of interactions and questions and ideas. But in order to motivate the audience, I need to know the audience. I will introduce myself and, if you allow me, I will ask just a couple of questions to get a feel for the audience. My name is Juan Fernandez. I’m from the Ministry of Communications of Cuba. That means that I represent government. And you know, here there are different stakeholders. So my first question. I’d rather, with that light in my face, I don’t see, so I will move over there. Well, maybe I will take the microphone so I can move freely. Can you please raise your hand, those in the audience who represent government stakeholders? As usual, a minority here. And please, can you raise your hand, those who represent civil society? More or less the same. And private sector companies? Well, and the fourth is the technical community. More or less the same. It’s more or less an equal distribution. So thank you very much. And my second question: for whom of you is this the first IGF? Who is on their second? And more than two? Oh, the majority. So we have a very faithful community, Carol, that keeps on coming. And so, before I pass the floor back to you for the questions, I just want to elaborate on that. In my view, you know, I said I was government. I was in Tunis when the IGF was agreed, you know, the proposal to have an IGF, and I’m going to be very honest with you. At that moment, I really did not understand why we needed a new forum; you know, we have a very tight schedule all year. So I really was a little bit skeptical about the IGF. But you know what? 
After the years have passed and the IGF has evolved into what it is now, it is not only a yearly meeting: we have all these national and regional IGFs, and we have a lot of intersessional things going on, dynamic coalitions and best practice forums, working all year round, voluntarily, by people who are really interested in these things. So, you know, I fell in love with the IGF, and that’s why I’m here now. I think the IGF has this unique characteristic, among all the fora in the UN system and maybe beyond, that the program and the issues discussed are proposed by you, by the community that comes to the IGF. The topics of the workshops, even the main themes every year, are selected through a bottom-up consultation. So I think this is the beauty of the IGF. And now that we have reached the 20th anniversary next year, and there’s a big question to the international community whether the IGF should continue or not, I would like to ask you a very concrete question. What would you say to the officials that are considering whether to recommend if the IGF should continue or not? What would you say to them? And with that, I open the floor to answer this one question. Thank you.

Gbenga Sesan: Thank you so much. Yes, I think that was a good opener for this second segment of questions, and I’m sure you gave some work to the cameraman there. All right, I see we have a queue already. Please be reminded, we’ve got two minutes for each speaker, preferably answering the question of how we institutionalize the IGF within the framework of WSIS plus 20, and we’ll focus on the second segment. Thank you.

Audience: Hello everyone, this is Manal Abdussamad from Lebanon. I’m a public policy advisor based here in Riyadh. This is my first IGF. I have two viewpoints that focus on two key ideas. First of all, I believe that the IGF should focus on balancing innovation with inclusivity, with privacy, and with equitable access for all, particularly for underserved communities and vulnerable people. And secondly, the major question and the major viewpoint: after 19 years, why isn’t the IGF generating outcomes and recommendations? We know very well that having this open-ended, continuous dialogue without tangible recommendations can lead to nothing and risks having no impact. Thank you.

Gbenga Sesan: Thank you so much. Let’s have the, if there’s anyone to my right, I can’t see you. Okay, there’s no one. Okay, so thank you so much. Everyone is using the left. Please go ahead. The next comment.

Audience: My name is Khaled Fattah. First of all, hello Vint. I’m sure we’re keeping you up late. This is Khaled speaking, and hello Christine. These are colleagues from the days when, as I say, dinosaurs roamed, before the IGF. I can’t remember who the person speaking earlier was, but I was also there when the IGF was created. One of the things I want to point out, taking up the question of what we do next: by the next IGF, we will no longer be a teenager, so we won’t have the excuse of still experimenting. With the advent of technology, and with the ability of AI to democratize so much of the risk, through cyber, through the internet, to societies all over the world, it is imperative, and here I will draw on the question from the last intervention, that we find a way, and I’ve been saying this for many years, Vint, if you recall, we must find a way of making conclusions so that we serve society. Both of us, you and I, have had this journey for a long time to try and make the internet a better place, not the OK Corral that it is today. So I think we need to really get out of our comfort zone. Without getting out of our comfort zone, we will, unfortunately, continue to remain a talking shop, and that in itself isn’t sufficient to come up with answers. Because today, just by doing this and leveraging AI, a 15-year-old could shut down a city or a country, and I’m speaking now as an expert in the cyber space. So I think we need to get out of our comfort zone and find a way forward, and perhaps be more creative in what we need to do moving forward. Otherwise, and where is the ICANN CEO? He made a comment earlier in his presentation that was very, very vital: we risk losing value and losing our raison d’être. And I want to see this succeed, because this is also my baby, just like it’s yours, Vint.

Carol Roach: Thank you. So I’ll go online. I see two hands: Nnenna from the internet, and Vint. So I’ll take Nnenna, then we’ll have Vint, and then I’ll come back on-site. If Nnenna is not ready we can take Vint, and when she’s ready I’m sure I’ll see that on the screen. Vint, please.

Vint Cerf: I just wanted to respond to Khaled’s question. Of course the IGF can’t possibly solve all problems, so my recommendation in response to what we should do is to focus not only on the problems, which we talked a lot about, but also on where they could be addressed. The IGF should not necessarily try to solve all problems itself. Surely it can make strong and evidence-based recommendations for some of them, but one of the strongest things we can do is to formulate the problem or the question and then suggest where that question should be addressed.

Carol Roach: Okay, thank you, Vint. Nnenna, are you ready or should I go on?

Audience: Yes, I am. I needed to be unmuted. Thank you, Gbenga. I want to speak to the questions that were asked: what am I going to tell someone are the reasons we should make the IGF permanent or renew its mandate? I have seven reasons. The first is that this is the one true global multi-stakeholder space we have, so that is one reason: to preserve the multi-stakeholder model. The second is that we have all the instances: the global, the regional, the national, the sub-regional, and I don’t know of any other forum so far that has all of those instances. So that is another reason to keep the IGF. Third, the IGF is an instance that does not heavily depend on the UN for its financing. I mean, it’s very cheap, by the way, to have these conversations, to have these convenings. Fourth, I earlier talked about the Internet Governance Schools: the IGF allows us to have education with capacity-building packaged into it. So I believe a holistic governance capacity-building package is built into the IGF. Fifth, there is the cost of changing what we have now. It is true we don’t have any other thing, but what is the cost of abandoning it to seek something else? I think that cost is way more. Sixth, I think that the IGF, because of the instances that we have, because of its nature, because of what it already gives us, is a great place to serve the GDC, which is what we are working on now. So as a vehicle, as an implementation vehicle, as an avenue for the GDC, the IGF is a great place. And finally, one thing: emerging technologies. 10 years ago we used the phrase emerging technologies generally, but in the past few years AI has come into force, and there will be more in the future. I think the IGF space is a good place for us to have conversations on emerging technologies. My name is Nnenna. 
I come from the internet, and I think this will be my last intervention for the day. Thank you very much.

Gbenga Sesan: Thank you, Nnenna.

Audience: I will speak in French. Can I proceed, please?

Gbenga Sesan: Go ahead.

Audience: My name is Christine Amesson. I come from Benin, from the Ministry of Economy and Finance. We had a great session, but I would like to ask about the repercussions. In 2003 and 2005, with the World Summit on the Information Society, we put new measures in place. Why don’t we change the paradigm so that the IGF becomes an observatory, and allow each stakeholder, including the private sector, to actually take part? Each can participate, bring their expertise and progress, and say whether something is doable or not. What about the internet belonging to everyone? And what do you think about this new idea that I have brought?

Gbenga Sesan: Thank you so much.

Audience: Hi, I am Israel Rosas from the Internet Society. First of all, I also want to thank you for organizing this session in this format. I think it’s part of the value of the IGF. One of the contentious topics is whether the IGF provides tangible outcomes. And I think that we, this community, should think about how to increase the visibility of the partnerships that the NRIs are promoting and using at the national and regional level, because that’s where the multi-stakeholder model translates into tangible, concrete outcomes, where all stakeholders can participate and create concrete solutions for the benefit of people. Thank you.

Gbenga Sesan: Okay. We have two comments online and, because of time, we’ll close the queue now; we have four people. So let’s have two more comments in the room, then we’ll take the two comments online, and we’ll come back to the two last comments in the room. And at this point, if you could keep to a minute, I’ll get to everyone, I promise.

Audience: Thank you. I’ll be very brief. My name is Annalise Williams. I’m part of the technical community and I’m very involved in Australia’s national IGF. I moderated a session yesterday on evolving the IGF that generated many ideas that will be put forward in the session report. Briefly, they included closer engagement with the NRIs, incorporating national and regional discussions into the program of the global IGF in a more coordinated way, and perhaps, looking back, some sort of compendium of the 19 years of the IGF, its achievements and outcomes and the things that have been discussed here. In response to the question about what we would say to decision makers, the DNS Research Federation report on the IGF’s achievements said that if the IGF didn’t exist, we’d have to create it. But the IGF does exist, it’s very valuable, and I think it should continue and its mandate should be permanent. And I think we can perhaps think about how the IGF works in accordance with its mandate in the Tunis Agenda in relation to identifying emerging issues, bringing them to the attention of relevant bodies and the general public, and where appropriate making recommendations. Thank you.

Carol Roach: Thank you so much. The next comment from the room.

Nthati Moorosi: Thank you very much. I will try to keep to a minute. My name is Nthati Moorosi. I am the Minister of ICT, Science and Innovation from Lesotho. I’ve been sitting here, this is the first time I attend the IGF, and I must say it’s a very good forum because it brings everyone to the table. But I’ve been really bugged by the objective of the IGF regarding leaving no one behind and inclusivity, looking at the challenges that I shared when I was on the panel about some of the challenges we have in Africa regarding inclusivity, especially the price of gadgets and the price of data. As governments we’ve done a lot to connect people with infrastructure, but a lot of people are not connected to the internet because of the gadgets. I think it’s about time we ask the IGF to have a special forum with the private sector to come up with solutions for us regarding some of these special challenges, and maybe for the IGF to track all the countries to see how we are doing from time to time on the objective of inclusivity and leaving no one behind. Thank you. Thank you so much.

Gbenga Sesan: We have Anup and one other online. Can we take those two comments?

Audience: Thank you for the opportunity to let me speak. I would just like to share my views: I stand in solidarity with the IGF continuing. From a personal perspective, room should be made for innovation and inclusion, reaching widely to grassroots societies, social societies, and civil societies, and the people of the world, for a better governance forum and interaction. So I believe that the IGF has the full potential to reach the grassroots levels. I was attending the cybersecurity meeting today, and there also we spoke about how the IGF can reach marginalized societies and help people who have not yet learned about internet governance to do so. And thus, I would also like to say that rethinking governance and systems is crucial with the evolving nature of technology. I also discovered the Digital Peace Day, which I would like to celebrate with the IGF further. Thank you.

Gbenga Sesan: I’m sure if we did a word map of this session, one huge word would be NRIs, and I think that is really critical. Let me give our panellists the chance, in two minutes each, to give us your last word and a recommendation or a best practice that you see coming out of the conversations we’ve had. Let me start with you, Timea, if you don’t mind.

Timea Suto: Okay. Thank you. Thank you, Gbenga. I’ll try to be brief in responding to a very large question and to the very rich discussion that we’ve had today. I want to go to the comment that Mary raised from Nigeria about how the multistakeholder and the multilateral need to shake hands. I don’t think they only need to shake hands; they need to hold hands and go along the road together. It’s not an either-or, and I think it’s about time we realised that. We don’t need to be married to words; it’s not that we say this word or the other and we’ve solved the issue. We do need to think about implementation, and I think implementation means bringing these two ideas into one governance conversation where we truly have everybody at the table and make recommendations together that resonate with local realities. And that can only happen if we have governments, businesses, civil society, the technical community, and academics all around the table, bringing their own perspectives. It’s not about everybody having a vote or everybody negotiating an outcome, but everybody having a voice, for that voice to actually be heard, and having a dialogue. And that is what I mean by the multi-stakeholder and the multilateral needing to hold hands as we move along the road together.

Gbenga Sesan: I like the picture I’m seeing there. They shouldn’t just shake hands. We need to hold hands and maybe take a walk. Valeria.

Valeria Betancourt: Yes, I cannot agree more with Timea. And I would say that the IGF also has to strengthen its capacity not only to produce messages, but to communicate those messages to the relevant policymaking spaces. I think that’s essential. Having the messages is obviously necessary, but the IGF has to strategically identify the different spaces, processes, and forums where to take those messages and help shape the decisions, for instance, around what is being decided at the moment on artificial intelligence governance or data governance. And this should also include national-level processes. So in that sense, I think the ability to communicate and connect with governments is something that has to be strengthened. In other words, interfacing with governments, with decision-making spaces, and with intergovernmental processes is key. And the IGF should continue to evolve in that direction.

Gbenga Sesan: Thank you so much.

Kurtis Lindquist: At the risk of taking the analogy slightly too far, I don’t think you just need to hold hands. I think you need to dance together, because they are very much linked and intertwined. The IGF is uniquely positioned within the WSIS framework and the UN ecosystem, and I don’t think there’s any going back on that, even if the format changes. I think it’s one of the few spaces, again, where all the stakeholders can really come together and tackle the complexities of internet governance. But as I said in my intervention, to fulfill this potential, it also needs to adapt. We need to evolve it. As we heard from the speakers, it needs to really focus on outputs that we can translate into actions. It has to be a platform that’s truly inclusive, and we need to expand on this. And the process has to be flexible enough to address the challenges that are coming. Again, this space is not static. We’re not back where we were 19 years ago.

Gbenga Sesan: Thank you so much. Christine.

Christine Arida: Thank you. I think I want to echo what the Minister from Lesotho said, because at the IGF we’ve had, for years and years, so many discussions about meaningful access, about connecting the next billions, and the points that were raised by Her Excellency the Minister are points that were really mentioned in all those discussions, in that dialogue. So, to echo what every second person in this session has been saying, it’s about having more actions, more recommendations, and linkages to governments. I think we just need to move with the IGF to the next phase. We need to figure out how we can use the potential that the IGF has, even under its current mandate, to deliver on tangible outcomes. And we need to innovate to do that: innovate in terms of frameworks, in terms of modalities. The second point I want to make is about the NRIs. So much has been said about the NRIs in this session; we need to capture that and move towards implementing it. And I will bring an example from my region, the Arab region. We had a collaborative session of different national, regional, and subregional IGFs just a few weeks ago, and one of the recommendations that came out of it was that our NRIs collectively should inspire decision-making in multilateral and intergovernmental processes within the region. I think we should encourage that in other regions as well, and move to that on the global level.

Carol Roach: Thank you. Thank you. Vint, you’re literally next to Christine on the big screen in this main room. So your closing thoughts.

Vint Cerf: Oh my. Well, let me just be very brief and say that in this session and in the other sessions that I’ve attended, I reached the conclusion that IGF should become a permanent operation within the UN context and that we should work towards that outcome as we prepare for the IGF meeting in Norway and the subsequent WSIS plus 20. I think Bertrand makes the good point that we should consider moving in that direction by laying out what issues we should be addressing and how we should address them. Keep in mind that the IGF and the NRIs are not the only place where solutions might be found. So let’s be expansive as we think about problem formulation. Let’s think about where those problems could best be addressed. This has been a very stimulating session. I took a lot of notes and I’m looking forward to the closing question coming up later today.

Gbenga Sesan: Thank you, Vint. Thank you, Christine. Thank you, Kurtis. Thank you, Valeria. And, of course, thank you, Carol. I guess this literally brings us to the end of this session, but I see your mic, so something is coming.

Carol Roach: Sorry. So I’m going to take 15 seconds. As was shared, this is my takeaway and something that we will bring to the Secretariat: the strengthening of partnerships with other UN agencies, the strengthening of partnerships with the private sector and business community, and the handshakes and the hand-holding, we need more of that. We’ve started with the Inter-Parliamentary Union, and I think there’s also something similar with the diplomatic corps or state missions, so we probably could extend that. And we must, must improve engagement and outreach with the underserved. Thank you. Thank you to the organizers for this fantastic session.

T

Timea Suto

Speech speed

155 words per minute

Speech length

1356 words

Speech time

524 seconds

IGF fosters inclusive multi-stakeholder dialogue on internet governance

Explanation

The IGF has been instrumental in bringing together diverse stakeholders including governments, businesses, civil society, academia, and the technical community. It has established itself as the premier global platform for open and constructive discourse on internet governance issues.

Evidence

The IGF has had conversations in this multi-stakeholder setting on a wide range of internet-related topics.

Major Discussion Point

Role and Contributions of the IGF

IGF builds global awareness of critical digital issues

Explanation

The IGF plays a crucial role in raising awareness about important digital issues worldwide. It covers a broad range of topics from access to digital technologies to cybersecurity and emerging technologies.

Major Discussion Point

Role and Contributions of the IGF

IGF has developed vibrant intersessional work and ecosystem

Explanation

The IGF has created a robust ecosystem of initiatives that allow stakeholders to collaborate on specific issues year-round. These include best practice forums, dynamic coalitions, and policy networks.

Evidence

These initiatives have produced reports and outputs on issues like cybersecurity, meaningful connectivity, AI, and internet fragmentation.

Major Discussion Point

Role and Contributions of the IGF

IGF has been instrumental in identifying actionable solutions across sectors and regions

Explanation

The IGF has played a key role in finding practical solutions and fostering alignment on digital issues across different sectors and regions. This has enabled a more cohesive approach to implementing the WSIS action lines.

Major Discussion Point

Role and Contributions of the IGF

IGF should serve as foundational resource for GDC implementation

Explanation

The IGF can be a crucial resource for implementing the Global Digital Compact (GDC). It can leverage its unique convening power to share insights, exchange best practices, and forge partnerships to further GDC implementation.

Major Discussion Point

Future of the IGF and WSIS+20 Review

IGF should maintain momentum for WSIS+20 review and GDC

Explanation

The IGF has the potential to sustain momentum for both the WSIS+20 review and the GDC beyond their initial moments. It can offer a space for ongoing dialogue, monitoring, and accountability on commitments made in these processes.

Major Discussion Point

Future of the IGF and WSIS+20 Review

V

Valeria Betancourt

Speech speed

122 words per minute

Speech length

1098 words

Speech time

538 seconds

IGF provides a platform for debate on public policy issues across WSIS action lines

Explanation

The IGF serves as a forum for discussing public policy issues that span multiple WSIS action lines. It complements the roles of other WSIS-related forums in monitoring progress and facilitating intergovernmental discussions.

Major Discussion Point

Role and Contributions of the IGF

IGF allows different stakeholders to share challenges and solutions

Explanation

The IGF provides an opportunity for people from various parts of the world, with different perspectives and from different sectors, to share challenges and solutions related to internet governance. This fosters a global dialogue on critical issues.

Evidence

Examples include the IGF Best Practice Forum on Gender and Access addressing gender-based online violence, and the Dynamic Coalition on Community Connectivity promoting diverse solutions for connecting the unconnected.

Major Discussion Point

Role and Contributions of the IGF

IGF should integrate WSIS framework and GDC

Explanation

The IGF should work towards integrating the WSIS framework and the Global Digital Compact. This integration can help address gaps in WSIS goal implementation and tackle new issues addressed by the GDC, reducing fragmentation in the global digital governance ecosystem.

Major Discussion Point

Future of the IGF and WSIS+20 Review

IGF vision should be operationalized for more impactful outcomes

Explanation

The IGF community should implement the actions proposed by the IGF Strategy Working Group to operationalize the vision for a more impactful IGF. This would contribute to shaping a people-centered and planet-centric digital policy, and promote democratic, inclusive, accountable, and transparent governance of digital technologies.

Major Discussion Point

Future of the IGF and WSIS+20 Review

IGF needs to strengthen ability to communicate messages to policymaking spaces

Explanation

The IGF should enhance its capacity to not only produce messages but also communicate these messages to relevant policymaking spaces. This is essential for shaping decisions on issues such as artificial intelligence governance or data governance.

Major Discussion Point

Improving IGF Outcomes and Impact

Agreed with

Kurtis Lindquist

Christine Arida

Agreed on

IGF should produce more tangible outcomes and recommendations

IGF allows stakeholders to sit with governments and have difficult conversations

Explanation

Valeria Betancourt highlights that the IGF, particularly through its regional and national initiatives, provides a space for different stakeholders to engage with governments on challenging topics. This facilitates important discussions on issues such as media freedom, human rights, and digital justice.

Evidence

Betancourt mentions the example of the LAC IGF, which has allowed for building synergies with the e-LAC process, the regional digital agenda resulting from WSIS.

Major Discussion Point

Enhancing IGF Inclusivity and Representation

K

Kurtis Lindquist

Speech speed

161 words per minute

Speech length

1239 words

Speech time

461 seconds

IGF is central to WSIS framework and internet governance

Explanation

The IGF plays a crucial role in the WSIS framework and internet governance. It embodies the multistakeholder collaborative approach recognized as necessary for successfully implementing WSIS goals.

Evidence

The UN Secretary General established the IGF as the platform to facilitate interrelated public policy discussions.

Major Discussion Point

Role and Contributions of the IGF

IGF provides platform for open discussions shaping narratives and informing policymaking

Explanation

For the past 19 years, the IGF has provided a platform for various stakeholders to engage in open discussions on internet governance and the future of the internet. These discussions shape narratives and inform policymaking at national and international levels.

Major Discussion Point

Role and Contributions of the IGF

IGF needs to adapt to remain relevant in changing world

Explanation

As the world has evolved significantly since the IGF’s inception, the forum needs to adapt to facilitate new discussions and amplify outcomes to decision-makers. This adaptation is crucial for the IGF to remain relevant in addressing current global challenges.

Major Discussion Point

Future of the IGF and WSIS+20 Review

IGF should strengthen partnerships with other UN agencies and processes

Explanation

The IGF should continue to strengthen partnerships with other UN agencies and topical specialist agencies. This is necessary to address the increasing need for coordinated global governance in the evolving digital landscape.

Major Discussion Point

Future of the IGF and WSIS+20 Review

IGF should focus on outputs that translate to actions

Explanation

While the IGF has been effective in generating rich discussions on internet governance, it needs to focus on translating these discussions into tangible and actionable outcomes. This is crucial for maintaining the IGF’s relevance and credibility.

Major Discussion Point

Improving IGF Outcomes and Impact

Agreed with

Valeria Betancourt

Christine Arida

Agreed on

IGF should produce more tangible outcomes and recommendations

IGF should enhance inclusivity, especially voices from Global South

Explanation

The IGF needs to bring in more voices, especially from the Global South, into its discussions. This inclusivity is crucial for ensuring that the governance framework reflects the realities and needs of all internet users worldwide.

Major Discussion Point

Enhancing IGF Inclusivity and Representation

Agreed with

Carol Roach

Agreed on

IGF should enhance inclusivity and representation

IGF should bring in more youth and marginalized communities

Explanation

The IGF should make efforts to include more youth and marginalized communities in its discussions. This diversity of perspectives is what makes the IGF unique and valuable in the internet governance landscape.

Major Discussion Point

Enhancing IGF Inclusivity and Representation

Agreed with

Carol Roach

Agreed on

IGF should enhance inclusivity and representation

IGF needs to be truly inclusive platform

Explanation

For the IGF to fulfill its potential, it needs to be a truly inclusive platform. This means expanding participation and ensuring the process is flexible enough to address emerging challenges in the digital space.

Major Discussion Point

Enhancing IGF Inclusivity and Representation


Vint Cerf

Speech speed

134 words per minute

Speech length

953 words

Speech time

423 seconds

IGF should become permanent part of UN landscape

Explanation

Vint Cerf concludes that the IGF should become a permanent operation within the UN context. This conclusion is based on the discussions in this session and others he attended at the IGF.

Major Discussion Point

Future of the IGF and WSIS+20 Review

Agreed with

Bertrand de La Chapelle

Agreed on

IGF should become a permanent part of the UN system


Bertrand de La Chapelle

Speech speed

145 words per minute

Speech length

397 words

Speech time

164 seconds

IGF mandate should be revised and institutional structure improved

Explanation

Bertrand de La Chapelle suggests that beyond prolonging the IGF’s existence, there needs to be a serious discussion about revising its mandate and improving its institutional structure. This is seen as crucial for the IGF to fulfill its potential and address current challenges.

Major Discussion Point

Future of the IGF and WSIS+20 Review

Agreed with

Vint Cerf

Agreed on

IGF should become a permanent part of the UN system


Nthati Moorosi

Speech speed

146 words per minute

Speech length

203 words

Speech time

83 seconds

IGF should have special forum with private sector on connectivity challenges

Explanation

Nthati Moorosi, Minister of ICT Science and Innovation from Lesotho, suggests that the IGF should organize a special forum with the private sector to address connectivity challenges. This is particularly important for addressing issues like the high cost of devices and data in Africa.

Evidence

The speaker mentions challenges in Africa regarding the price of gadgets and data, which prevent many people from connecting to the internet despite infrastructure improvements.

Major Discussion Point

Improving IGF Outcomes and Impact

IGF should track country progress on inclusivity goals

Explanation

Moorosi proposes that the IGF should track the progress of all countries in achieving inclusivity goals. This would help in monitoring efforts to leave no one behind in terms of internet access and use.

Major Discussion Point

Improving IGF Outcomes and Impact


Christine Arida

Speech speed

156 words per minute

Speech length

1354 words

Speech time

520 seconds

IGF should leverage NRI network to shape renewed mandate

Explanation

Christine Arida emphasizes the importance of leveraging the network of National and Regional IGF Initiatives (NRIs) in shaping the renewed mandate of the IGF. The NRIs are seen as crucial for reaching out to local communities and policy-making bodies.

Evidence

Arida mentions that NRIs reflect the specific diversity of the IGF community and are well-positioned to reach out to policy-making bodies within their respective territories and regions.

Major Discussion Point

Enhancing IGF Inclusivity and Representation

IGF should move to next phase with more tangible outcomes and government linkages

Explanation

Arida suggests that the IGF needs to move to its next phase, focusing on producing more tangible outcomes and strengthening linkages with governments. This involves innovating in terms of frameworks and modalities to deliver on concrete results.

Evidence

She cites an example from the Arab region where a collaborative session of different national and regional IGFs recommended that their collective NRI should inspire decision-making in multilateral and intergovernmental processes within the region.

Major Discussion Point

Improving IGF Outcomes and Impact

Agreed with

Valeria Betancourt

Kurtis Lindquist

Unknown speaker

Agreed on

IGF should produce more tangible outcomes and recommendations


Carol Roach

Speech speed

116 words per minute

Speech length

1259 words

Speech time

649 seconds

IGF should improve engagement with underserved communities

Explanation

Carol Roach emphasizes the need for the IGF to improve its engagement and outreach with underserved communities. This is seen as a crucial step in making the IGF more inclusive and representative.

Major Discussion Point

Enhancing IGF Inclusivity and Representation

Agreed with

Kurtis Lindquist

Unknown speaker

Agreed on

IGF should enhance inclusivity and representation

Agreements

Agreement Points

IGF should produce more tangible outcomes and recommendations

Valeria Betancourt

Kurtis Lindquist

Unknown speaker

Christine Arida

IGF needs to strengthen ability to communicate messages to policymaking spaces

IGF should focus on outputs that translate to actions

IGF needs to generate tangible recommendations and outcomes

IGF should move to next phase with more tangible outcomes and government linkages

Multiple speakers emphasized the need for the IGF to produce more concrete, actionable outcomes and effectively communicate these to relevant policymaking bodies.

IGF should enhance inclusivity and representation

Kurtis Lindquist

Unknown speaker

Carol Roach

IGF should enhance inclusivity, especially voices from Global South

IGF should bring in more youth and marginalized communities

IGF reaches grassroots levels and marginalized societies

IGF should improve engagement with underserved communities

Several speakers agreed on the importance of making the IGF more inclusive, particularly by involving underrepresented groups such as those from the Global South, youth, and marginalized communities.

IGF should become a permanent part of the UN system

Vint Cerf

Bertrand de La Chapelle

IGF should become permanent part of UN landscape

IGF mandate should be revised and institutional structure improved

Both speakers advocated for the IGF to become a permanent fixture within the UN system, with suggestions for revising its mandate and improving its institutional structure.

Similar Viewpoints

Both speakers emphasized the importance of the IGF in implementing and integrating the Global Digital Compact (GDC) with existing frameworks like WSIS.

Timea Suto

Valeria Betancourt

IGF should serve as foundational resource for GDC implementation

IGF should integrate WSIS framework and GDC

These speakers highlighted the importance of the IGF’s ecosystem, including its intersessional work and National and Regional Initiatives (NRIs), in shaping discussions and informing policy.

Timea Suto

Kurtis Lindquist

Christine Arida

IGF has developed vibrant intersessional work and ecosystem

IGF provides platform for open discussions shaping narratives and informing policymaking

IGF should leverage NRI network to shape renewed mandate

Unexpected Consensus

Need for IGF to adapt and evolve

Kurtis Lindquist

Christine Arida

Bertrand de La Chapelle

IGF needs to adapt to remain relevant in changing world

IGF should move to next phase with more tangible outcomes and government linkages

IGF mandate should be revised and institutional structure improved

There was an unexpected consensus among speakers from different backgrounds on the need for the IGF to evolve and adapt its structure and processes to remain relevant and effective in the changing digital landscape.

Overall Assessment

Summary

The main areas of agreement centered around the need for the IGF to produce more tangible outcomes, enhance inclusivity, strengthen its role in implementing the GDC, and evolve its structure and processes to remain relevant.

Consensus level

There was a high level of consensus among speakers on these key issues, suggesting a shared vision for the future of the IGF. This consensus implies a strong foundation for potential reforms and improvements to the IGF’s structure and processes, which could lead to more effective internet governance discussions and outcomes in the future.

Differences

Different Viewpoints

IGF’s role in producing tangible outcomes

Unknown speaker

Vint Cerf

IGF needs to generate tangible recommendations and outcomes

The IGF should not necessarily try, in itself, to solve all problems. Surely it can make strong and evidence-based recommendations for some of them, but one of the strongest things we can do is to formulate the problem or the question and then suggest where that question should be addressed.

While one speaker argues for the IGF to produce concrete recommendations and outcomes, Vint Cerf suggests that the IGF’s role should be more focused on problem formulation and directing issues to appropriate bodies for resolution.


Overall Assessment

Summary

The main areas of disagreement revolve around the extent of the IGF’s role in producing concrete outcomes versus facilitating discussions, and the specific ways in which the IGF should evolve to meet current challenges.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental importance of the IGF and the need for its evolution, with differences mainly in the nuances of how this should be achieved. This suggests a generally unified vision for the future of the IGF, which could facilitate productive discussions on its development and role in internet governance.

Partial Agreements


All speakers agree that the IGF needs to evolve and adapt to new challenges, particularly in relation to the Global Digital Compact (GDC) and WSIS framework. However, they differ slightly in their emphasis: Timea Suto focuses on the IGF as a resource for GDC implementation, Valeria Betancourt emphasizes integration of WSIS and GDC, while Kurtis Lindquist stresses the need for overall adaptation to remain relevant.

Timea Suto

Valeria Betancourt

Kurtis Lindquist

IGF should serve as foundational resource for GDC implementation

IGF should integrate WSIS framework and GDC

IGF needs to adapt to remain relevant in changing world

Takeaways

Key Takeaways

The IGF plays a crucial role in fostering inclusive multi-stakeholder dialogue on internet governance issues

The IGF has contributed significantly to building global awareness of critical digital issues over the past 19 years

There is broad agreement that the IGF’s mandate should be renewed and made permanent within the UN system

The IGF needs to evolve to remain relevant, focusing on producing more tangible outcomes and recommendations

Enhancing inclusivity, especially for voices from the Global South and marginalized communities, is seen as critical for the IGF’s future

The national and regional IGF initiatives (NRIs) are viewed as a key strength to be further leveraged

There is a need to better integrate the IGF with other UN processes and improve communication of IGF outcomes to policymakers

Resolutions and Action Items

Work towards making the IGF a permanent operation within the UN context

Develop a vision and action plan for the IGF’s future role ahead of the WSIS+20 review

Strengthen partnerships between the IGF and other UN agencies, private sector, and civil society

Improve engagement and outreach with underserved communities

Enhance the IGF’s ability to produce and communicate actionable recommendations to policymaking spaces

Unresolved Issues

Specific mechanisms for improving the tangible outcomes and impact of the IGF

How to effectively balance the IGF’s role as an open forum for discussion with the need for more concrete outputs

Detailed plans for enhancing inclusivity and representation, particularly from the Global South

The exact nature of potential revisions to the IGF’s mandate and institutional structure

Suggested Compromises

Balancing the need for more tangible outcomes with maintaining the IGF’s open, non-binding nature

Integrating multistakeholder and multilateral approaches in internet governance, described as needing to ‘hold hands and dance together’

Finding ways to make IGF discussions more relevant to local realities while maintaining a global perspective

Thought Provoking Comments

The IGF has the beauty of both worlds. It is the innovative kid on the block, the innovative multistakeholder process within the multilateral UN system, and so best fitted to address the gap between discussion and action, between dialogue and recommendations.

speaker

Christine Arida

reason

This comment insightfully positions the IGF as a unique bridge between multilateral and multistakeholder approaches, highlighting its potential to drive concrete outcomes.

impact

It shifted the discussion towards how to leverage the IGF’s unique position to produce more tangible results and recommendations.

IGF should continue to strengthen partnerships with other UN agencies and topical and specialist agencies and their processes. And there is really a need for the coordinated global governance more than ever before.

speaker

Kurtis Lindquist

reason

This comment emphasizes the need for greater coordination and collaboration in global internet governance, recognizing the evolving complexity of the digital landscape.

impact

It prompted further discussion on how the IGF can better integrate with other governance processes and agencies to increase its impact.

We must find a way of making conclusions so that we serve society… Without getting out of our comfort zone, we will continue, unfortunately, to remain a talking shop, and that in itself isn’t sufficient to come up with answers.

speaker

Khaled Fattah

reason

This comment challenges the status quo and pushes for more concrete outcomes from the IGF, highlighting the urgency of addressing emerging digital challenges.

impact

It sparked a debate about the need for the IGF to evolve beyond discussion to produce more actionable results.

The IGF should not necessarily try, in itself, to solve all problems. Surely it can make strong and evidence-based recommendations for some of them, but one of the strongest things we can do is to formulate the problem or the question and then suggest where that question should be addressed.

speaker

Vint Cerf

reason

This comment provides a nuanced perspective on the IGF’s role, suggesting it should focus on problem formulation and directing issues to appropriate bodies rather than trying to solve everything itself.

impact

It reframed the discussion about the IGF’s purpose and potential impact, leading to more focused ideas about its future role.

I think it’s about time we ask IGF to have a special forum with the private sector to come up with solutions for us regarding some of these special challenges and maybe the IGF to track all the countries to see how we are doing from time to time in the objective of bringing everyone and inclusivity and leaving no one behind.

speaker

Nthati Moorosi

reason

This comment from a government minister highlights the need for more concrete action on digital inclusion, particularly in developing countries, and suggests a new role for the IGF in tracking progress.

impact

It brought attention to the practical challenges of digital inclusion and prompted discussion on how the IGF can facilitate more targeted solutions and accountability.

Overall Assessment

These key comments shaped the discussion by pushing it beyond general praise for the IGF towards a more critical examination of its future role and potential for impact. They highlighted the need for the IGF to evolve, produce more tangible outcomes, and better integrate with other governance processes while maintaining its unique multistakeholder character. The discussion moved from celebrating the IGF’s past achievements to envisioning how it can adapt to meet current and future challenges in global internet governance, with a particular emphasis on inclusivity, concrete problem-solving, and bridging the gap between dialogue and action.

Follow-up Questions

How can we connect from Internet fragmentation to Internet governance?

speaker

Kim from the Telecommunication Directorate of Cambodia

explanation

This question aims to understand how Internet governance can address the issue of Internet fragmentation, which was discussed in a previous panel.

How can we ensure the IGF generates tangible outcomes and recommendations?

speaker

Manal Abdel Samad

explanation

This question addresses the concern that after 19 years, the IGF may not be producing concrete results, which could risk its impact and relevance.

How can the IGF get out of its comfort zone and find more creative ways to address current challenges?

speaker

Khaled Fattah

explanation

This question suggests the need for the IGF to evolve and become more action-oriented in the face of rapidly advancing technologies and their associated risks.

How can the IGF increase the visibility of partnerships promoted by NRIs at national and regional levels?

speaker

Israel Rosas from the Internet Society

explanation

This question aims to highlight the concrete outcomes of the multi-stakeholder model at local levels and demonstrate the IGF’s tangible impact.

How can the IGF incorporate national and regional discussions into the program of the global IGF in a more coordinated way?

speaker

Annalise Williams

explanation

This question suggests a need for better integration of local and regional perspectives into the global IGF agenda.

Can the IGF create a special forum with the private sector to address challenges related to device affordability and data costs?

speaker

Nthati Moorosi, Minister of ICT Science and Innovation from Lesotho

explanation

This question addresses specific barriers to internet inclusivity in Africa and suggests a more targeted approach to problem-solving within the IGF framework.

How can the IGF track countries’ progress on inclusivity and leaving no one behind?

speaker

Nthati Moorosi, Minister of ICT Science and Innovation from Lesotho

explanation

This question proposes a more systematic approach to monitoring and evaluating the impact of IGF initiatives on global internet inclusivity.

How can the IGF strengthen its capacity to communicate its messages to relevant policymaking spaces?

speaker

Valeria Betancourt

explanation

This question addresses the need for the IGF to more effectively influence decision-making processes, particularly in areas like AI and data governance.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #35 Advancing Online Safety Role Standards


Session at a Glance

Summary

This discussion focused on applying human rights standards to online spaces and emerging technologies, particularly artificial intelligence (AI). Experts from various Council of Europe bodies discussed how existing conventions and recommendations address online safety, violence against women and children, and discrimination risks in AI.

The panelists emphasized that human rights apply equally online and offline, but acknowledged challenges in implementation. They highlighted the importance of both legal and non-legal measures, including education, awareness-raising, and multi-stakeholder cooperation. The Lanzarote Convention on protecting children from sexual exploitation and the Istanbul Convention on violence against women were cited as key frameworks that have been adapted to address online dimensions.

Regarding AI, the discussion explored both risks and opportunities. Concerns were raised about AI potentially amplifying existing biases and creating new forms of discrimination. However, panelists also noted AI’s potential to identify patterns of discrimination and improve safeguards. The need for transparent, auditable AI systems and updated non-discrimination laws was stressed.

The experts called for greater collaboration between governments, civil society, and tech companies to ensure online platforms uphold rights. They emphasized the importance of political prioritization and moving from rhetoric to action in addressing online harms. The discussion concluded that innovation and human rights protection are not mutually exclusive, but require clear standards, commitment, and cooperation across sectors.

Keypoints

Major discussion points:

– Applying human rights standards to the online/digital space

– Challenges and opportunities of AI for human rights, especially regarding discrimination and vulnerable groups

– Need for comprehensive legal and non-legal approaches to protect rights online

– Importance of multi-stakeholder collaboration between governments, civil society, and tech companies

– Balancing innovation with human rights protections in developing new technologies

Overall purpose:

The goal was to explore how established human rights standards can be understood and applied in the online space and with new digital technologies, with a focus on protecting vulnerable groups like women and children.

Tone:

The tone was primarily informative and analytical, with speakers providing overviews of relevant conventions, recommendations, and challenges. There was an underlying sense of urgency about the need to take action, but the tone remained measured and solution-oriented throughout. Towards the end, some speakers emphasized the need to move from rhetoric to concrete action in a slightly more forceful tone.

Speakers

– Menno Ettema: Moderator, Council of Europe, Hate Speech, Hate Crime and Artificial Intelligence

– Octavian Sofransky: Council of Europe, Digital Governance Advisor

– Camille Gangloff: Council of Europe, Gender Equality policies

– Naomi Trewinnard: Council of Europe, Sexual violence against children (Lanzarote Convention)

– Clare McGlynn: Professor at Durham Law School, Expert on violence against women & girls 

– Ivana Bartoletti: Member of the Committee of Experts on AI, Equality and Non-Discrimination of the Council of Europe, Vice President and Global Chief Privacy and AI Governance Officer at Wipro

– Charlotte Gilmartin: Council of Europe, Steering Committee on Anti-Discrimination, Diversity and Inclusion (CDADI)


Full session report

Expanded Summary of Discussion on Human Rights in the Digital Age

Introduction

This discussion, moderated by Menno Ettema of the Council of Europe’s Anti-Discrimination Department, explored the application of human rights standards to online spaces and emerging technologies, with a particular focus on artificial intelligence (AI). Experts from various Council of Europe bodies examined how existing conventions and recommendations address online safety, violence against women and children, and discrimination risks in AI.

Key Themes and Arguments

1. Applying Human Rights Standards Online

The panellists unanimously agreed that human rights apply equally online and offline. Menno Ettema framed the central question of the discussion: “How can well-established human rights standards be understood for the online space and in new digital technology?” This set the agenda for exploring specific ways in which existing frameworks are being adapted to digital contexts.

Octavian Sofransky presented the Council of Europe’s digital agenda, emphasizing the organization’s commitment to protecting human rights in the digital environment. A Mentimeter poll conducted during the discussion showed that participants felt some or all human rights are more difficult to apply online, underscoring the complexity of the issue.

Naomi Trewinnard emphasised the importance of the Lanzarote Convention in setting standards to protect children from sexual exploitation online. She also mentioned a background paper prepared for the Lanzarote Committee on emerging technologies. Similarly, Clare McGlynn discussed how the Istanbul Convention, adopted in 2011, addresses the digital dimension of violence against women, with a General Recommendation on this topic adopted in 2021. These examples illustrated how existing legal frameworks are being adapted to address online harms.

2. Artificial Intelligence and Human Rights

The discussion explored both the risks and opportunities presented by AI in relation to human rights. Ivana Bartoletti provided a critical perspective, stating, “AI does threaten human rights, especially for the most vulnerable in our society. And it does for a variety of reasons. It does because it perpetuates and can amplify the existing stereotypes that we’ve got in society.” She also raised concerns about new forms of algorithmic discrimination created by AI that may not be covered by existing laws.

Naomi Trewinnard noted that AI is being used to facilitate sexual abuse of children online, highlighting the urgent need for updated protections. However, Bartoletti also emphasised AI’s potential for positive impact, stating, “We can leverage AI and algorithmic decision-making for the good if we have the political and social will to do so.” This balanced view led to a discussion of specific ways AI could be used to promote equality and human rights, given proper guidance and political commitment.

Octavian Sofransky highlighted the Council of Europe’s work on the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, demonstrating the organization’s proactive approach to addressing AI-related challenges.

3. Collaboration to Protect Rights Online

A recurring theme throughout the discussion was the need for multi-stakeholder collaboration to effectively address online human rights issues. Naomi Trewinnard highlighted the importance of cooperation with the private sector to obtain electronic evidence in cases of online child exploitation. She also emphasised the critical need for global collaboration and mentioned the annual Awareness Raising Day about sexual abuse of children on November 18th.

Ivana Bartoletti suggested the use of regulatory sandboxes to allow governments, companies, and civil society to work together on AI governance. She also discussed the EU’s Digital Services Act (DSA) as an example of regulatory efforts in this space. Clare McGlynn called for greater political prioritisation and action from tech platforms to address online harms.

4. Balancing Innovation and Human Rights Protection

The experts grappled with the challenge of balancing technological innovation with human rights protections. While recognising the potential benefits of AI and other emerging technologies, they stressed the need for transparent, auditable systems and updated non-discrimination laws.

Clare McGlynn emphasised the societal and cultural dimensions of online violence, stating, “If we’re ever going to prevent and reduce violence against women and girls, including online and technology facilitated violence against women and girls, we need to change attitudes across all of society and including amongst men and boys.” This comment broadened the scope of the discussion to include education and awareness-raising as key strategies alongside legal and technological approaches.

5. Defamation Laws and Human Rights Defenders

In response to a question from audience member Jaica Charles, Menno Ettema addressed the issue of defamation laws being misused against human rights defenders online. He highlighted the Council of Europe’s work on Strategic Lawsuits Against Public Participation (SLAPPs) and emphasized the need to protect freedom of expression while combating online hate speech, referencing Recommendation CM/Rec(2022)16 on combating hate speech.

Conclusions and Unresolved Issues

The discussion concluded that innovation and human rights protection are not mutually exclusive but require clear standards, commitment, and cooperation across sectors. The experts called for a move from rhetoric to concrete action in addressing online harms and protecting human rights in digital spaces.

Several unresolved issues emerged, including:

1. How to effectively balance innovation with human rights protection in AI development

2. Addressing new forms of algorithmic discrimination not covered by existing laws

3. Ensuring transparency and auditability of AI systems used by private companies

4. Protecting human rights defenders from misuse of defamation laws to silence them online

The discussion highlighted the need for ongoing dialogue, research, and collaboration to address these complex challenges at the intersection of human rights, technology, and governance. The use of Mentimeter questions throughout the session encouraged active participation and provided valuable insights into audience perspectives on these critical issues.

Session Transcript

Menno Ettema: Hi Imano, she’s just joining now. Perfect, good. Then I will slowly kick off. You should be hearing on channel one. Can you hear me now Imano? No? Yes, great. Okay, good. Good morning everyone. Good afternoon for those in other parts of the world, or good evening. Good night. We are here at an open forum for one hour, a short timeline to discuss quite a challenging topic, which is how to advance online safety and human rights standards in that space. I will shortly introduce myself first. I’m Menno Ettema. I work for the Council of Europe in the Anti-Discrimination Department, working on hate speech, hate crime and artificial intelligence. And I’m joined by quite an extended list of speakers and guests. I’m joined here by Clare McGlynn, a professor at Durham Law School and an expert on violence against women and girls online. We are also joined here in the room by Ivana Bartoletti, member of the Committee of Experts on AI, Equality and Non-Discrimination of the Council of Europe, and also Vice President and Global Chief Privacy and AI Governance Officer at Wipro. Also with us is Naomi Trewinnard, Council of Europe, working on sexual violence against children (the Lanzarote Convention). As online moderator, we have with us Charlotte Gilmartin, who also works in the Anti-Discrimination Department and is secretary to the expert committee on AI, non-discrimination, and equality. And Octavian Sofransky, digital governance advisor, also at the Council of Europe. The session is about human rights standards and whether they also apply online. And I think it’s important to acknowledge that the UN and regional institutions like the Council of Europe, but also the African Union and others, have developed robust human rights standards for all their member states, which also extend to other key stakeholders, including business and civil society. 
The UN and the Council of Europe have clearly stated that human rights apply equally online as they do offline. But how can well-established human rights standards be understood for the online space and for new digital technology? So that's the question of today. I would like to give the floor first to Octavian, who will provide us a little bit of information about the Council of Europe's digital agenda, just to set the frame for our institution, and then we will broaden the discussion from there, or actually narrow it into really working on the anti-discrimination field. Octavian, the floor is yours.

Octavian Sofransky: Ladies and gentlemen, dear colleagues, I'm greeting you from Strasbourg. The Council of Europe, the organizer of this session, remains unwavering in its commitment to protecting human rights, democracy, and the rule of law in the digital environment. This dedication was reaffirmed by the Council of Europe's Secretary General during the European Dialogue on Internet Governance in Vilnius last June. The Secretary General emphasized that the digital dimension of freedom is a priority for the Council of Europe. Our organization has always recognized the importance of balancing innovation and regulation in the realm of new technologies. In reality, these elements should not be viewed as opposing forces but as complementary partners, ensuring that technological advancements genuinely benefit our societies. A Council of Europe Committee of Ministers declaration on the WSIS Plus 20 review was issued this September, advocating for a people-centered approach to internet development and the multi-stakeholder model of internet governance, and supporting the extension of the IGF mandate for the next decade. Moreover, we are proud to announce the adoption of the pioneering Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law last May. This landmark convention, which was opened for signature at the Conference of Ministers of Justice in Vilnius on 5 September very recently, is the first legally binding international instrument in this field and has already been signed by 11 states around the world. Sectoral instruments will complement this convention, possibly including one on online safety, the topic of today's session.
As a long-time supporter of the IGF process, the Council of Europe has prepared several sessions for this year's edition of the IGF, including on privacy, artificial intelligence and indeed the current session on online safety, a topic that remains a top priority for all European states and their citizens. Thank you.

Menno Ettema: Thank you, Octavian, for elaborating on the Council of Europe's work and the reason for this session and a few others. Can I ask all the speakers that are joining us online to switch on their cameras, because it makes it a little bit more lively for us here in the room, but also for those joining online. Thank you very much. I would like to go over to Naomi, because the Lanzarote Convention on sexual violence against children has long-term experience with the topic, it's a very strong standard, and it recently published a new document on the digital dimension of sexual violence against children. Naomi, I give the floor to you to introduce the convention and the work that it does.

Naomi Trewinnard: Thank you, Menno. Good morning, good afternoon, everybody. I'm very pleased to be joining you today. I'm a legal advisor at the Lanzarote Committee Secretariat, and that's the committee of the parties to the Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse. So, as Menno mentioned, this is a really comprehensive international treaty that is open to states worldwide, and it aims to prevent and protect children against sexual abuse and to prosecute those who offend. So I wanted to briefly present some of the standards that are set out in this convention. Firstly, on prevention, it requires states to screen and train professionals, ensure that children receive education about the risks of sexual abuse and how they can access support if they're a victim, as well as general awareness raising for all members of the community, and also preventive intervention programmes. When it comes to protection, we're really trying to encourage professionals and the general public to report cases of suspected sexual abuse, and also to provide assistance and support to victims, including through setting up helplines for children. When it comes to prosecution, it's really essential to ensure that perpetrators are brought to justice. And this comes through criminalising all forms of sexual exploitation and sexual abuse, including those that are committed online, for example solicitation or grooming of a child, offences related to child sexual abuse materials, also called child pornography, and also witnessing or participating in sexual acts over a webcam. The Convention also sets out standards to ensure that investigations and criminal proceedings are child-friendly, where the aim is really to avoid re-victimising or re-traumatising the child victim, and also to obtain best evidence and uphold the rights of the defence.
So in this respect the Lanzarote Committee has recognised the Children's House or Barnahus model as a promising practice to ensure that we obtain good evidence, perpetrators are brought to justice and we avoid re-victimising children. These standards and safeguards apply equally to abuse that is committed online and to contact abuse that is committed offline. The Treaty really emphasises the importance of multi-stakeholder coordination in the context of combating online violence, and this Convention specifically makes a reference to the information and communication technology sector, and also the tourism and travel and banking and finance sectors, really trying to encourage states to coordinate with all of these private actors in order to better protect children. The Lanzarote Committee has adopted a number of different opinions, declarations and recommendations to clarify the ways in which this Convention can contribute to better protecting children in the online environment, for example by confirming that states should criminalise the solicitation of children for sexual offences even without an in-person meeting, so when this is done in order to commit sexual offences online. Also, given the dematerialised nature of these offences, multiple jurisdictions will often be involved in a specific case. We might have the victim situated in one country, electronic evidence being stored on a server in a different country, and the perpetrator sitting in another country, committing this abuse over the internet. Therefore, the committee really recognises and emphasises the importance of international cooperation, including through international bodies and international meetings such as this one. The convention is also really clear that children shouldn't be prosecuted for generating images or videos themselves.
We know that many children are tricked or coerced or blackmailed into this or, you know, generate an image thinking it's going to be used for a specific purpose within a consensual relationship, and then it gets out of hand. So the committee has really emphasised that we should be protecting our children, not criminalising or prosecuting them. In terms of education and awareness raising, the committee really emphasises that we need to ensure that children of all ages receive information about children's rights, and also that states are establishing helplines and hotlines, like reporting portals, so that children have a safe place to go to get help if they're becoming a victim. In that context, it's also really essential to train persons working with children about these issues so that they can recognise signs of abuse and know how to help children if they're a victim. So I've put some links to our materials on the slides and I'll hand back to Menno now. Thank you for your attention.

Menno Ettema: Thank you very much, Naomi. It is quite elaborate work to be done. But what I think the convention really outlines is that it takes legal and non-legal measures, and it's the comprehensive approach and the multistakeholder approach that are really important in addressing sexual exploitation of children or violence against children. In that line of thought, I want to give the floor to Claire, who can speak on the work around the Istanbul Convention, particularly because it recently published General Recommendation No. 1 on the digital dimension of violence against women, which I think is a very important document to share here today.

Clare McGlynn: Yes, good morning, everybody, and thank you very much. So I'm Clare McGlynn. I'm a professor of law at Durham University in the UK, and I'm also a member of the Council of Europe's expert committee on technology-facilitated violence against women and girls. So I'm going to briefly talk today about the Istanbul Convention that's just been referred to, which was adopted in 2011. There are four key pillars that make this a comprehensive piece of law: prevention, protection, prosecution, and integrated policies. Now, the key theme of the Istanbul Convention is that violence against women and girls must be understood as gendered. Violence against women and girls is perpetrated mainly by men. It is also experienced because women and girls are women and girls. The monitoring of that convention is done by the body called GREVIO, the independent expert body which undertakes evaluations of state compliance, as well as preparing various thematic reports. And as already mentioned, in 2021 GREVIO adopted its general recommendation on the digital dimension of violence against women and girls. This general recommendation offers an interpretation of the Istanbul Convention in light of the prevalence of, and growing concern about, the harms of online and technology-facilitated violence against women and girls. It provides many detailed explanations as to how the convention can be interpreted and applied in light of the prevalence of online abuse, including things like reviewing relevant legislation to check whether it accounts for areas where the digital dimension of violence against women and girls is particularly acute. We see this particularly in the area of domestic violence, where some legislation does not account for the fact that, in reality today, most forms of domestic abuse involve some element of technology and online elements. It also talks about incentivizing internet intermediaries to ensure content moderation.
The point here is about how women’s human rights are being inhibited and affected by online abuse. And regulation, such as content moderation, is necessary to protect those rights. In other words, regulation frees women’s speech online by ensuring we are more free and able to talk and speak online rather than self-censoring in the light of online abuse. It also talks, for example, about the importance of undertaking initiatives to eradicate gender stereotypes and discrimination, especially amongst men and boys. If we’re ever going to prevent and reduce violence against women and girls, including online and technology facilitated violence against women and girls, we need to change attitudes across all of society and including amongst men and boys. Thank you very much.

Menno Ettema: Thank you very much, Claire. I really like the general recommendation because of how it portrays the offline forms of violence against women and harassment, in all the different ways and shapes they take, and how that is actually mirrored in the online space. So it's a very clear explanation of how the online and the offline are the same, even though we might call it something different or it might be presented slightly differently because of the online context. But the dynamics are very similar. Thank you. Content moderation is an important part here as well. And working again with stereotypes and attitudes is a challenge. So, again, the legal but also the non-legal approaches are very important. Thank you very much. Ivana, can I give the floor to you? Because one new area is, of course, AI. Octavian already mentioned the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, just adopted. Can you say a few short words on how these human rights standards apply in the AI field? Then we'll give the floor to the rest of the audience, and later we'll come back a little bit more on the discrimination risks when it comes to AI, including gender equality.

Ivana Bartoletti: Thank you so much. So AI, of course, is one of the most talked-about topics at the moment; at the IGF here we've been talking about AI a lot, and about what the impact of AI is on existing human rights and civil liberties. Obviously, artificial intelligence has been capable of doing so many excellent and good things over recent years. Can you hear me? Is that OK? OK. There's been a big push recently, especially around generative AI, the AI we've seen that can generate images, and that is another area of discussion. Now, AI does threaten human rights, especially for the most vulnerable in our society, and it does so for a variety of reasons. It does because it perpetuates and can amplify the existing stereotypes that we've got in society, crystallizing them into representations. So, what you were mentioning earlier, you were asking, how do we change these stereotypes beyond the legal side? Well, there is an issue here, because the use of big data and machine learning can amplify the existing stereotypes and crystallize them. And on the other hand, it lowers the bar of access to tools, such as, for example, generative AI tools that can generate deepfake images. Whether this is in the space of fake information, or in the space of depicting real women in fake pornography, what we are seeing is that lowering the bar of access to these tools can have a detrimental impact on women especially. And if you think about privacy, for example, and what Claire was saying, that a lot of domestic abuse is enabled by technology: AI plays a big part in it, because of the enablement of tools that can turn into monitoring tools.
And these monitoring tools can turn into real tools of suppression. So we are very firm on this, and the convention is wonderful in the sense that it's the first truly international convention; yes, you have the EU AI Act, which is limited to Europe, but the convention is international, alongside many other things that have happened. For example, the Global Digital Compact at the UN, which frames human rights in the digital space, and the declarations that have been made. So there is definitely a discussion happening globally on how we protect, safeguard and enhance human rights in the age of AI, but it's not an easy task, and it is one that needs to see all actors involved.

Menno Ettema: Thank you very much. This is only just a small start of the discussion on AI, so we'll come back to that in a second round. But what we're trying to do here is to present a few of the various conventions that exist related to discrimination and the protection of groups that are particularly targeted: the Istanbul Convention, the Lanzarote Convention. But I also wanted to engage with the audience here in the room and online. We launched a little Mentimeter, and I'll ask Octavian to put the Mentimeter online, because for us it's very evident that human rights apply equally online as offline. But maybe we're wrong. I was wondering what others think about this. So I have a little Mentimeter quiz to just put a finger on the pulse. Octavian, are you there? Can you put the Mentimeter on, please?

Octavian Sofransky: The Mentimeter is on.

Menno Ettema: We can’t see it. You have to change screens. Okay. I can assure you we tested this yesterday and it worked perfectly. But when the pressure is on, there’s always a challenge. Okay, well, Octavian is dealing with the technical challenge. And maybe I can give this floor first to Charlotte. Maybe there are already some questions from the audience online. And then I’ll go to the here’s the Mentimeter. Sorry, Charlotte. So you can scan the QR code or go to menti.com and then use the code. that is mentioned there, 29900183. So if you scan it or type it in, yes, I see people going online. Great. Then we can go to the next slide, Octavian, for the new first quiz. So are there specific human rights that are more difficult to apply online? There are four options, so please answer. Meanwhile, Charlotte, maybe I can give you the floor while people cast their votes. From online, were there any questions or comments that we should take in here in the room?

Charlotte Gilmartin: For now, there's one comment from Peter King Quay from Liberia IGF. Their question is: what are some of the key interventions that can be suggested to address this topical problem in West Africa, especially in the MRU region of Liberia, Sierra Leone, Guinea, and Ivory Coast, vis-à-vis these conventions and norms on violence against women and girls, especially the Istanbul Convention? Thank you very much.

Menno Ettema: Can I give the floor to Claire on this question?

Clare McGlynn: Yes, what I would add is that, as the colleague is possibly aware, the African Commission a couple of years ago did adopt a specific resolution on the protection of women against digital violence in Africa. And the Special Rapporteur on the Rights of Women in Africa has done a lot of work around the online dimension. So both that specific resolution and the work of the Special Rapporteur are likely to perhaps provide some further help and guidance on the particular issues and problems and challenges and opportunities arising in Africa.

Menno Ettema: Thank you very much, Claire. And I would also say that GREVIO's General Recommendation No. 1 under the Istanbul Convention gives very practical suggestions on what can be addressed, and I think these can be adapted to local contexts, of course, always, that's everywhere, including the European continent. But I think there are many guidelines or suggestions there that would be equally applicable in other parts of the globe. I see that overall there's a tendency to say yes, all human rights, or some human rights, are more difficult to apply online. It's an interesting result: so yes, all human rights apply online, but it's sometimes more difficult. So maybe some people want to respond to that. I would like to go to the second question of the Mentimeter. What should be done more to ensure human rights online? So if some human rights are more difficult to apply online, what could be done? What do you think could be done? And meanwhile I want to check the audience here if there are any questions or statements that they would like to share. Yes, in the back of the room. Could you please state who you are, just for the audience? It should work. Can you hear? Yes. Very well.

Audience: My name is Jaica Charles. I work with the Kenyan Section of the International Commission of Jurists in Nairobi. So, first of all, thank you for these wonderful presentations from various stakeholders. We appreciate you a lot. It's actually something that we are very much interested in as digital rights experts and human rights activists, sorry, human rights defenders, and especially on digital rights related to AI. So my question is: especially in the African context, there has been a lot of fighting by the authorities against human rights defenders in the name of defamation. I hope we all understand defamation. So defamation has been used against human rights defenders online whenever they try to pinpoint issues regarding human rights online. They are mostly being charged under defamation, and you'll see aspects of abductions and suchlike things, and it has happened recently in the African context. For example, in Kenya there was the Gen Z movement, which was well known all over. So how can we approach that, or how can we tackle that, especially in the context of AI, to prevent suchlike things from happening? How can we protect human rights defenders online from being charged under defamation, or rather from defamation being used as a tool to prevent them from doing their human rights work? Thank you.

Menno Ettema: Thank you very much. Just looking at my speakers to see who would like to pick up this question. Maybe I'll give it a go first myself and then the other colleagues can contribute. It's a very pressing question. Within the European scope, maybe, if I may translate it to that area where I'm more knowledgeable: within the European scope, the Council of Europe and national authorities are moving away from defamation laws and legislation. I think this is also echoed in the UN, that defamation laws are not particularly helpful, because of the way they are often formulated and applied. There are questions now about hate speech legislation, for example. And the Council of Europe has adopted a recommendation in 2022, Recommendation CM/Rec(2022)16 on combating hate speech, if you want to check it out. It specifically explains and argues why defamation laws are not up to the task of actually dealing with hate speech. And hate speech is a real problem for societies: it undermines the rights of the persons or groups that are targeted and undermines cohesion in communities. And I think well-crafted hate speech laws may function quite well. But well-crafted also means that we need to acknowledge the different levels of severity of hate speech. So you have hate speech that is clearly criminal and falls under criminal responsibility. This should be a very restrictive category, so it should be very clearly defined and explained what we understand by it and which grounds are protected under hate speech in criminal law. Then you have other forms of hate speech that could be addressed through administrative law and civil law, for example self-regulatory mechanisms with the media or political parties that have administrative rules in place. And that is a less severe intervention when it comes to freedom of expression, Article 10 of the European Convention, for example. And it's this balancing act.
And then we have other forms of hate speech that cannot be restricted through legislation but are still harmful, so we need to address them. So I would really argue for taking inspiration from the recommendation, for example, to really engage in a national dialogue on reforming the legislative situation and to really abide by a very narrow understanding of the hate speech that falls under criminal law. And in the recommendation, we also refer to international UN standards and conventions that specify what falls under that. And then there are other forms of reaction you could take, including non-legal measures: education, awareness raising, counter-speech, etc. And this would be a much better response. Defamation laws should not be used in such a way; they can be very easily misused. Well-construed hate speech laws should help. There's also the work on SLAPPs, strategic lawsuits against public participation, that might also give some guidance on what could be done to address the misuse of legislation for silencing a group. So, on SLAPPs, there's a recommendation on SLAPPs, and that's quite an interesting document that could guide you in your work in that sense. Thank you. Naomi, please.

Naomi Trewinnard: Thank you. Yeah, I just wanted to share some insights into something parallel that we've dealt with at the level of protecting children from sexual abuse. So the convention is quite clear that professionals, and all those who have a reasonable suspicion of sexual abuse in good faith, should report it to the appropriate authorities, such as child protection authorities or the police, but also that people who report in good faith should be protected from criminal or civil liability, so also protected against claims of defamation. And actually the Lanzarote Committee is looking at this question at the moment, looking at how to reinforce protections for professionals so that they can respect their duties of confidentiality and their obligations to keep information safe, but also their duties to protect children. And I think it's a really fine balancing act, but certainly clear guidance from states and policymakers, setting out the ways in which people should be protected from consequences when they're denouncing or reporting something, can be very helpful as well.

Menno Ettema: Thank you, Naomi, for that addition. Just going to the Mentimeter, I see several suggestions, thank you for that: education, more education, content moderation, more research and data privacy laws, working on safety at all levels, physical and online, and strengthening frameworks and their interpretation. So it's quite an array, but I see education mentioned by quite a few people. Thank you. I would like to open a second section of the discussion and go back to AI, because it's the big elephant in the room. And the question is whether human rights standards, in the area, for example, of gender equality, non-discrimination and the rights of the child, are delicate porcelain that will soon encounter an elephant stampede, or whether there are actually opportunities in the use of AI and we should not be so worried about the human rights of these groups when it comes to the deployment of AI. Ivana, you already mentioned some aspects: AI and human rights are slowly coming together, we need to be cautious, there are risks, but maybe there's some more to add specifically in the area of non-discrimination and equality, also from your work in the expert committee.

Ivana Bartoletti: Octavian, next slide, yes, thank you. So, I mean, AI enables a lot of this. The question that we just had, for example, about human rights defenders, the same with journalists: there is also a gender dimension to it, because what happens often is that it is women who are targeted the most, and the elephant in the room is AI, because AI has made a lot of this very much available. So, if you think about artificial intelligence and algorithmic decision-making, first we have to distinguish, it's very important: one is so-called discriminative AI, which is machine learning, what we use more traditionally, although it's not really traditional, but in essence. So what is happening in that space, especially around algorithmic decision-making, is that we are seeing women, and especially the intersection between gender, race and other dimensions, we have seen women often being locked out of services, being discriminated against. It happened a lot, for example, with facial recognition, it happened a lot with banking services, education. Now, this is because AI needs data, data is central, data is scarce, and often data comes from the Western world. And this bias exists because it exists in society, so, to an extent, there is little technological solution to a problem which is a societal problem. With generative AI we have seen another set of issues, in the sense that these products are also the product of the scraping of the web, which means taking language as it is, bringing a whole set of new issues, like: the language that we are all talking about and learning from these tools, is it inclusive or not?

So, I think there is an understanding that has become more mainstream around all of this, around the fact that discriminative AI and generative AI, in combination, can perpetuate and softwarise existing inequalities into systems that make decisions and predictions about tomorrow. However, there is also a positive use of these tools, where we can leverage AI to try and address some of these issues. For example, leveraging big data to understand the root causes of these inequalities; understanding that there are links between sectors and areas of discrimination, looking at big data that we wouldn't be able to look at through human eyes; using artificial intelligence and algorithmic systems to set a higher bar on, for example, how many women we want to work in a business, by manipulating the data, using synthetic data, by creating data sheets that enable us to improve the outputs. What I'm trying to say is that we can leverage AI and algorithmic decision-making for the good if we have the political and social will to do so. Because if we leave it to the data alone, it's not going to happen, because data is simply representative of the world. And I think there is an understanding in the study on the challenges and opportunities that we've done, and I encourage everyone to read it. It's important because we provide an understanding of where bias comes from, the fact that this bias is detrimental to women's human rights, that this discrimination is dangerous. We provide a set of recommendations for states to say: how can we challenge this? How can we look at existing non-discrimination laws and see if they're fit for the age of AI? For example, if a woman is discriminated against and is not getting access to a service because she is a woman and also a black woman, how are we going to ensure that this intersectional source of discrimination is addressed by existing non-discrimination law?

And furthermore, who is going to have the burden of proof? Because there is the big problem that we have, the unspoken figure, which is the asymmetry between us as individuals and the data extractivism and the complexity of what some call surveillance capitalism. In this big asymmetry, it can't be left to the individual, who is already vulnerable, to say: I am going to challenge this. So this also means that there has to be strong regulation in place to make sure that the onus is on the company to provide the level of transparency, challengeability, clarity and auditability of the systems that they're using, so that the onus is not just left on the individual to challenge, but these systems can be open to question by civil society, governments and institutions. Business can play a big part in it. So what I'm trying to say here is that AI, and especially responsibly automated bots, can be great in supporting the public sector and the private sector to develop and create AI which is inclusive. We can use AI and big data strategies to really understand where the bias may come from. We can look at big data analytics and really identify patterns of discrimination. There is a lot that can be done in this space, but there has to be that willingness to do so. So I'm really hoping that in a space like this, a document like that one, which brings all this together, can be leveraged beyond the Council of Europe, because it's really important that we understand that existing legislation around discrimination law and privacy laws may need to be looked at in order to be able to cater for the harms that come from algorithmic decision-making or generative AI.

Menno Ettema: Thank you very much, Ivana. That’s quite an elaborate and detailed analysis of the challenges that lie ahead, but also of the opportunities and possibilities. Can I give the floor to Naomi, for the perspective of the risks that the use of AI poses to children’s safety?

Naomi Trewinnard: Sure, and thank you for the floor. In terms of AI, the Lanzarote Committee has been paying particular attention to emerging technologies, especially over the last year or so. The committee has recognised that artificial intelligence is being used to facilitate sexual abuse of children. Ivana mentioned generative AI models: we know that generative AI is being used to make images of sexual abuse of children, and also that large language models are being used to facilitate grooming of children online and the identification of potential victims by perpetrators. Generative AI is also being used to alter existing materials of victims. I know of cases where a child has been identified and rescued, but the images of the abuse are still circulating online, and now AI is being used to alter those images to create new images of that child being abused in different ways. We also know that AI is being used to generate completely fake images of a child, and that in some cases those fake images of a child naked or being sexually abused are used to coerce and blackmail the child into making images and videos of themselves. Sometimes it is used to blackmail children in order to get contact details of their friends, so that the perpetrator can have a wider group of victims, and in other cases we know of fake images being used to blackmail children for financial gain. All of these different forms of blackmail and abuse have been recognised as a form of sexual extortion against children by the Lanzarote Committee. As Menno mentioned at the beginning of our session, the Lanzarote Committee held a thematic discussion on this issue in November. 
So just a few weeks ago in Vienna, the committee adopted a declaration which sets out steps that states in particular can take to better protect children against these risks of emerging technologies, such as criminalising all forms of sexual exploitation and sexual abuse facilitated by emerging technologies. That means looking at legislation and making sure regulation is in place, including for AI-generated sexual abuse material, and also ensuring that sanctions are effective and proportionate to the harm caused to victims. Historically, we have seen sanctions in criminal codes being much lighter for, for example, a child sexual abuse material offence where there is no contact with the victim, so it is worth really looking at those codes to see whether that is still effective and proportionate, given the harm that we know is being caused to children today by these technologies. On the screen there, you have a link to a background paper that was prepared for the committee, which explores in detail the risks and the opportunities of these emerging technologies. And just to close, I wanted to mention that criminalising these behaviours is not enough. The committee has also called on states to make use of these technologies: as Ivana mentioned, there is a great opportunity here to leverage them to help us better identify and safeguard victims, and to detect and investigate perpetrators. This really requires cooperation with the private sector, especially as regards preserving and producing electronic evidence that can then be used in court across jurisdictions, and the Cybercrime Convention and its Second Additional Protocol also provide really useful tools that states can use to better obtain evidence. So I just wanted to close by saying we’re really grateful to have this opportunity to share this with you, and we’re really interested in exchanging further with those in the room about how to cooperate to better protect children. 
And perhaps just lastly, to mention that the 18th of November is the annual Awareness Raising Day about sexual abuse of children, and it’s really an invitation to all of you to add that date to your calendars and to do something on the 18th of November each year to raise awareness about sexual abuse so that we can better promote and protect children’s rights. Thank you.

Menno Ettema: Thank you, Naomi, and also for mentioning the international day, because awareness raising and education are a key part of resilience, and of making parents and others aware so that they can support children who are a possible target. I’ll soon give the floor again to the audience, but I first want to give the floor to Claire on violence against women and AI. Ivana already addressed some of these points, but I’m sure Claire has contributions to add, also from GREVIO’s perspective.

Clare McGlynn: So yes, I don’t know if the slide I prepared is going to come up, but I actually just need to be very brief, because what I wanted to say follows on from Ivana and in fact refers to, and provides the link to, the report that she and Raphaële Xenidis wrote about the opportunities of AI as well as the challenges, particularly drawing out what states could be doing, things like reinforcing the rights and obligations around taking positive action in terms of using AI to eliminate inequalities and discrimination. But the one point I’ll add is that Ivana’s report refers to the possibility that in the future there will be other vulnerable groups that are not necessarily covered by existing anti-discrimination laws. So we have to be very open to how experiences of inequality and discrimination might shift in the world of AI, be alive to that, and be ready to take steps to help protect those individuals. Thank you.

Menno Ettema: Thank you. Octavian, the next slide didn’t come up; maybe you could work on that. Because, yes, exactly, it’s very important to encourage people to take a quick picture. The report that Claire refers to, which Ivana also worked on, is particularly useful for understanding the risks of AI when it comes to discrimination, in particular to gender equality and violence against women, and the steps that can be taken. I think the point here is that there are new groups, or new grounds or characteristics as we sometimes call them, that come up because of AI: the intersection of data or data points creates new, how do you call it…

Ivana Bartoletti: Algorithmic vulnerability, yeah. The point here is that when you think about non-discrimination laws, you think about specific grounds, right? You say you can’t be discriminated against because of this ground, religion or whatever. The problem with AI is algorithmic discrimination, which is created by the AI itself, because it can discriminate against somebody because they go on a particular website, or because of the intersection between going on a website and doing something else; this is big data, right? Algorithmic discrimination may not overlap with the traditional grounds of discrimination on which people were protected. So somebody may suffer an algorithmic discrimination that does not map onto the traditional protected grounds. This lack of overlap is what Claire is referring to, and it is something we need to think about, because we may need to look beyond the way we have looked at discrimination law until now.

Menno Ettema: Yeah, thank you very much. I want to go back to the audience for a last round, and also launch another little quiz on the Mentimeter. So I’ll ask Octavian to change the screen to the Mentimeter. Octavian, can you manage? Well, Octavian is trying that out. Maybe, Charlotte, can I give you the floor first: are there any further comments or questions from the online audience?

Charlotte Gilmartin: Not just at the moment, no, no further questions. But I have put the links to the documents that all the speakers have discussed in the chat, so if any participants want to find the links, they should all be there.

Menno Ettema: That’s great. I take this opportunity to mention to everybody in the room that the recordings will be online later on the IGF’s YouTube channel. There you can then also find all the links, because the chat will also be visible in the recordings. Octavian, are you with us? Do you manage with the Mentimeter? Octavian? Yes, there you go. So it’s the same quiz, but in case you lost connection, you can scan the QR code and use the numbers. I see people registering again. First question: as I stated at the beginning, is AI the elephant stampede trampling over gender equality, non-discrimination and the rights of the child? Yes, nothing is holding them back, the AI of course. No, elephants are skilful animals and human rights are not fragile. Or maybe, but let’s not blame the elephants. Meanwhile, are there any questions in the audience in the room? Just checking quickly. There you go. Yes. Can you hear me? Yes.

Audience: Ivana and Naomi both mentioned collaboration. So how can governments, civil society and tech companies collaborate more effectively to ensure that online platforms are protecting and upholding rights? Can I ask who you are? Sorry, yes. I’m Mia McAllister. I’m from the US.

Menno Ettema: Great. Thank you. The question was to Claire and Ivana. Claire, would you like to start?

Clare McGlynn: No, I’m happy for Ivana to take it; she has probably got more expertise in this particular aspect.

Ivana Bartoletti: So, thank you for the question. There are several aspects here. First of all, there is responsibility on platforms and the private sector, which is very important. For example, in the European Union, the DSA goes in that direction: content moderation, requiring transparency, requiring openness, requiring auditability. One of the provisions of the DSA, and I’m brushing over things here, is that data can be accessed by researchers, so that they can understand what some of the sources of online hate could be. So there is an onus that must be placed on companies, and that is important. Then there is AI literacy that needs to happen in education settings. I always say we need people to develop a distrust by design as a way to grow with these technologies but also challenge them; we need to tell people that they have to challenge all of this. It’s really important also to look at new regulation, but it is also, in my view, important that we create safe environments for companies and governments to experiment together. The sandboxes, for example, are very good. There are different kinds of sandboxes, regulatory and technical, and they matter because some things in this field are very hard to tackle, especially with generative AI; some of them can be at odds with the very nature of generative AI. Having these sandboxes where government and civil society can work together to look into a product and influence that product is really, really important. So I would push towards this kind of collaboration.

Menno Ettema: Thank you very much. Octavian, could you launch the last question, just to gather some further thoughts on what more can be done to ensure human rights in the use of AI? I just wanted to ask if there are any other questions from the audience or online? No? Then while people answer this question, maybe a last word, a recommendation for us to take forward. We have a minute or two left, so maybe Naomi, a final word of wisdom.

Naomi Trewinnard: Well, thank you. I think just to reiterate, the key is really collaboration and dialogue, so this is an excellent opportunity at the IGF to have this dialogue. For those that are interested in collaborating with the Lanzarote Committee, please do get in touch; our details are on there. We also regularly have stakeholder consultations at the Council of Europe in the context of developing our standards and recommendations, so, tech companies, please do engage with us and let’s have a constructive dialogue together to better protect human rights online.

Menno Ettema: Thank you, Naomi. Claire, a last word of wisdom?

Clare McGlynn: Yes, I think what we need to see is greater political prioritisation and a move, basically, from rhetoric to action. For me, that means demanding that the largest tech platforms actually act to proactively reduce the harms online. There is a lot of very positive rhetoric, but we have yet to see an awful lot of action and actual change.

Menno Ettema: Thank you. Ivana?

Ivana Bartoletti: Yeah, for me it’s very much about breaking that innovation-versus-human-rights, privacy-versus-security-versus-safety argument we sometimes hear: on one hand the argument that we’ve got to innovate, we have to do it fast and quickly, and to do so we may have to sacrifice something. Well, that is an argument that doesn’t stand, and this is where Claire is right: this is where we need more action. We need to do it all,

Menno Ettema: and it’s possible to do it all, through cooperation, clear standards and clear commitment. Legal and non-legal measures: those are the key takeaways and the key words I want to take forward. I thank my panellists, and also my colleagues Charlotte and Octavian for their support. Thank you everyone for attending this session, and if there are any other questions, please be in touch with us through the forums on the Council of Europe website, or directly; you have our details on the IGF website. Okay, thank you very much, and thank you, technical team, for all the support.

M

Menno Ettema

Speech speed

147 words per minute

Speech length

2883 words

Speech time

1174 seconds

Human rights apply equally online and offline

Explanation

Menno Ettema asserts that human rights standards should be applied in the same manner in both online and offline contexts. This implies that the protections and freedoms guaranteed by human rights laws should extend to digital spaces.

Major Discussion Point

Human Rights Standards Online

Agreed with

Octavian Sofransky

Agreed on

Human rights apply equally online and offline

Multi-stakeholder collaboration and dialogue is key

Explanation

Menno Ettema emphasizes the importance of collaboration and dialogue among various stakeholders to effectively address online human rights issues. This approach recognizes that protecting rights in the digital space requires input and action from multiple sectors.

Major Discussion Point

Collaboration to Protect Rights Online

Agreed with

Naomi Trewinnard

Ivana Bartoletti

Agreed on

Need for collaboration to protect rights online

O

Octavian Sofransky

Speech speed

139 words per minute

Speech length

307 words

Speech time

131 seconds

Council of Europe has developed robust human rights standards for member states

Explanation

Octavian Sofransky highlights that the Council of Europe has established comprehensive human rights standards that apply to its member states. These standards are designed to protect human rights, democracy, and the rule of law in the digital environment.

Evidence

The adoption of the Framework Convention on Artificial Intelligence and Protecting Human Rights, Democracy and the Rule of Law in May, which was opened for signature in September.

Major Discussion Point

Human Rights Standards Online

Agreed with

Menno Ettema

Agreed on

Human rights apply equally online and offline

N

Naomi Trewinnard

Speech speed

160 words per minute

Speech length

1710 words

Speech time

637 seconds

Lanzarote Convention sets standards to protect children from sexual exploitation online

Explanation

Naomi Trewinnard explains that the Lanzarote Convention establishes standards for protecting children from sexual exploitation and abuse, including in online contexts. The convention requires states to implement measures for prevention, protection, and prosecution of offenders.

Evidence

The convention criminalizes various forms of online sexual abuse, including grooming, and emphasizes the importance of international cooperation in addressing these issues.

Major Discussion Point

Human Rights Standards Online

AI is being used to facilitate sexual abuse of children online

Explanation

Naomi Trewinnard points out that artificial intelligence is being utilized to enable and exacerbate the sexual abuse of children in online environments. This includes the use of AI to generate abusive content and facilitate grooming.

Evidence

Examples include the use of generative AI to create images of sexual abuse of children and large language models being used to facilitate grooming of children online.

Major Discussion Point

Artificial Intelligence and Human Rights

Cooperation with private sector needed to obtain electronic evidence

Explanation

Naomi Trewinnard highlights the necessity of collaboration between governments and private sector companies to access and preserve electronic evidence. This cooperation is crucial for effectively investigating and prosecuting online crimes, particularly those involving child exploitation.

Evidence

Reference to the Cybercrime Convention and its Second Additional Protocol as tools for obtaining evidence across jurisdictions.

Major Discussion Point

Collaboration to Protect Rights Online

Agreed with

Menno Ettema

Ivana Bartoletti

Agreed on

Need for collaboration to protect rights online

C

Clare McGlynn

Speech speed

132 words per minute

Speech length

795 words

Speech time

359 seconds

Istanbul Convention addresses digital dimension of violence against women

Explanation

Clare McGlynn discusses how the Istanbul Convention has been interpreted to address the digital aspects of violence against women and girls. The convention recognizes that online and technology-facilitated violence are forms of gender-based violence that require specific attention and action.

Evidence

The adoption of a general recommendation on the digital dimension of violence against women and girls by GREVIO in 2021.

Major Discussion Point

Human Rights Standards Online

AI creates new forms of algorithmic discrimination not covered by existing laws

Explanation

Clare McGlynn points out that AI systems can create new forms of discrimination that may not be covered by traditional anti-discrimination laws. This algorithmic discrimination may affect groups that are not typically protected by existing legislation.

Major Discussion Point

Artificial Intelligence and Human Rights

Differed with

Ivana Bartoletti

Differed on

Effectiveness of existing laws in addressing AI-related discrimination

Greater political prioritization and action from tech platforms needed

Explanation

Clare McGlynn calls for increased political focus and concrete actions from major technology platforms to address online harms. She emphasizes the need to move beyond rhetoric to implement effective measures for protecting rights online.

Major Discussion Point

Collaboration to Protect Rights Online

A

Audience

Speech speed

63 words per minute

Speech length

308 words

Speech time

289 seconds

Some human rights are more difficult to apply online

Explanation

The audience response indicates a perception that certain human rights may be more challenging to implement or enforce in online contexts. This suggests that the digital environment presents unique challenges for human rights protection.

Major Discussion Point

Human Rights Standards Online

I

Ivana Bartoletti

Speech speed

139 words per minute

Speech length

2037 words

Speech time

876 seconds

AI can perpetuate and amplify existing stereotypes and biases

Explanation

Ivana Bartoletti explains that AI systems can reinforce and magnify existing societal biases and stereotypes. This occurs because AI models are trained on data that reflects historical inequalities and discriminatory patterns.

Evidence

Examples of bias in facial recognition systems and banking services that disproportionately affect women and minorities.

Major Discussion Point

Artificial Intelligence and Human Rights

Differed with

Clare McGlynn

Differed on

Effectiveness of existing laws in addressing AI-related discrimination

AI can be leveraged to address inequalities if there is political will

Explanation

Ivana Bartoletti argues that AI technologies can be used positively to identify and address societal inequalities. However, this requires intentional effort and political commitment to harness AI for social good.

Evidence

Suggestions include using big data analytics to identify patterns of discrimination and leveraging AI to set higher standards for diversity in businesses.

Major Discussion Point

Artificial Intelligence and Human Rights

Regulatory sandboxes allow government, companies and civil society to work together

Explanation

Ivana Bartoletti proposes the use of regulatory sandboxes as a collaborative approach to addressing challenges in AI governance. These sandboxes provide a safe environment for experimentation and dialogue between different stakeholders.

Evidence

Mention of different types of sandboxes (regulatory, technical) as spaces for collaboration.

Major Discussion Point

Collaboration to Protect Rights Online

Agreed with

Menno Ettema

Naomi Trewinnard

Agreed on

Need for collaboration to protect rights online

Agreements

Agreement Points

Human rights apply equally online and offline

Menno Ettema

Octavian Sofransky

Human rights apply equally online and offline

Council of Europe has developed robust human rights standards for member states

Both speakers emphasize that human rights standards should be applied consistently in both digital and physical spaces, with the Council of Europe playing a key role in developing these standards.

Need for collaboration to protect rights online

Menno Ettema

Naomi Trewinnard

Ivana Bartoletti

Multi-stakeholder collaboration and dialogue is key

Cooperation with private sector needed to obtain electronic evidence

Regulatory sandboxes allow government, companies and civil society to work together

These speakers agree on the importance of collaboration between various stakeholders, including governments, private sector, and civil society, to effectively address online human rights issues and challenges in AI governance.

Similar Viewpoints

Both speakers highlight the importance of specific conventions addressing digital dimensions of violence and exploitation, particularly for vulnerable groups like children and women.

Naomi Trewinnard

Clare McGlynn

Lanzarote Convention sets standards to protect children from sexual exploitation online

Istanbul Convention addresses digital dimension of violence against women

Both speakers point out that AI systems can reinforce and create new forms of discrimination, potentially affecting groups not typically protected by existing legislation.

Ivana Bartoletti

Clare McGlynn

AI can perpetuate and amplify existing stereotypes and biases

AI creates new forms of algorithmic discrimination not covered by existing laws

Unexpected Consensus

Positive potential of AI in addressing inequalities

Ivana Bartoletti

AI can be leveraged to address inequalities if there is political will

Despite the discussion largely focusing on the risks and challenges of AI, Ivana Bartoletti unexpectedly highlights the potential for AI to be used positively in addressing societal inequalities, given the right political commitment.

Overall Assessment

Summary

The main areas of agreement include the application of human rights standards online, the need for multi-stakeholder collaboration, the importance of specific conventions addressing digital violence, and the recognition of AI’s potential risks and opportunities.

Consensus level

There is a high level of consensus among the speakers on the importance of protecting human rights online and the need for collaboration. This consensus implies a strong foundation for developing and implementing effective strategies to address online human rights issues and AI governance challenges. However, there are nuanced differences in approaches and emphasis, particularly regarding the potential of AI to address inequalities.

Differences

Different Viewpoints

Effectiveness of existing laws in addressing AI-related discrimination

Clare McGlynn

Ivana Bartoletti

AI creates new forms of algorithmic discrimination not covered by existing laws

AI can perpetuate and amplify existing stereotypes and biases

While both speakers acknowledge AI’s potential for discrimination, Clare McGlynn emphasizes the inadequacy of existing laws to address new forms of algorithmic discrimination, whereas Ivana Bartoletti focuses on how AI amplifies existing biases without explicitly stating that current laws are insufficient.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the effectiveness of existing legal frameworks in addressing AI-related discrimination and the specific approaches to leveraging political will and tech platform action.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental issues but have slightly different emphases or approaches. This suggests a general consensus on the importance of addressing human rights in the digital space and the challenges posed by AI, but with some variation in proposed solutions or areas of focus. These minor differences do not significantly impede the overall discussion on enhancing online safety and human rights standards.

Partial Agreements

Both speakers agree on the need for political action to address AI-related challenges, but Ivana Bartoletti emphasizes leveraging AI positively to address inequalities, while Clare McGlynn focuses on demanding action from tech platforms to reduce online harms.

Ivana Bartoletti

Clare McGlynn

AI can be leveraged to address inequalities if there is political will

Greater political prioritization and action from tech platforms needed


Takeaways

Key Takeaways

Human rights apply equally online and offline, but some are more difficult to enforce in the digital space

Existing human rights conventions like Lanzarote and Istanbul need to be adapted for the online context

AI poses both risks (amplifying biases, facilitating abuse) and opportunities (addressing inequalities) for human rights online

Multi-stakeholder collaboration between governments, tech companies, and civil society is crucial for protecting rights online

There is a need to move from rhetoric to concrete action in enforcing human rights standards online

Resolutions and Action Items

States should criminalize all forms of sexual exploitation and abuse facilitated by emerging technologies

Governments and companies should create regulatory sandboxes to experiment with AI governance

Tech platforms need to take more proactive measures to reduce online harms

Stakeholders should engage in dialogue and consultations to develop better online protection standards

Unresolved Issues

How to effectively balance innovation with human rights protection in AI development

How to address new forms of algorithmic discrimination not covered by existing laws

How to ensure transparency and auditability of AI systems used by private companies

How to protect human rights defenders from misuse of defamation laws to silence them online

Suggested Compromises

Using AI and big data analytics to identify patterns of discrimination while ensuring privacy protections

Developing narrowly-defined hate speech laws instead of broad defamation laws to protect freedom of expression

Balancing content moderation to protect vulnerable groups while preserving free speech online

Thought Provoking Comments

The UN and the Council of Europe has clearly stated human rights apply equally online as it does offline. But how can well-established human rights standards be understood for the online space and in new digital technology?

speaker

Menno Ettema

reason

This framed the key question for the entire discussion, setting up an exploration of how existing human rights frameworks can be applied to rapidly evolving digital spaces.

impact

It set the agenda for the session and prompted speakers to address specific ways human rights standards are being adapted for online contexts.

The committee really recognises and emphasises the importance of international cooperation, including through international bodies and international meetings such as this one.

speaker

Naomi Trewinnard

reason

This highlighted the critical need for global collaboration in addressing online safety and rights issues that transcend national borders.

impact

It shifted the conversation to focus on international cooperation and multi-stakeholder approaches throughout the rest of the discussion.

If we’re ever going to prevent and reduce violence against women and girls, including online and technology facilitated violence against women and girls, we need to change attitudes across all of society and including amongst men and boys.

speaker

Clare McGlynn

reason

This comment emphasized the societal and cultural dimensions of online violence, moving beyond just technical or legal solutions.

impact

It broadened the scope of the discussion to include education and awareness-raising as key strategies alongside legal and technological approaches.

AI does threaten human rights, especially for the most vulnerable in our society. And it does for a variety of reasons. It does because it perpetuates and can amplify the existing stereotypes that we’ve got in society.

speaker

Ivana Bartoletti

reason

This introduced a critical perspective on AI, highlighting its potential to exacerbate existing inequalities and human rights issues.

impact

It sparked a more nuanced discussion about both the risks and potential benefits of AI in relation to human rights and online safety.

We can leverage AI and algorithmic decision-making for the good if we have the political and social will to do so. Because if we leave it to the data alone, it’s not going to happen because data is simply representative of the world.

speaker

Ivana Bartoletti

reason

This comment provided a balanced view on AI, acknowledging its potential for positive impact while emphasizing the need for intentional human guidance.

impact

It led to a discussion of specific ways AI could be leveraged to promote equality and human rights, shifting the tone from purely cautionary to also considering opportunities.

Overall Assessment

These key comments shaped the discussion by framing it within the context of applying existing human rights frameworks to digital spaces, emphasizing the need for international cooperation, highlighting societal dimensions beyond technical solutions, critically examining the impact of AI on human rights, and exploring the potential for AI to be leveraged positively with proper guidance. The discussion evolved from a general overview of online human rights issues to a nuanced exploration of specific challenges and opportunities, particularly in relation to AI and international collaboration.

Follow-up Questions

How can we protect human rights defenders online from being charged under defamation laws?

Speaker: Jaica Charles

Explanation: This is important because defamation laws are being misused to silence human rights defenders, particularly in the African context.

How can existing non-discrimination laws be adapted to address algorithmic discrimination that may not align with traditional protected grounds?

Speaker: Ivana Bartoletti

Explanation: This is crucial as AI systems can create new forms of discrimination that current laws may not adequately cover.

How can we leverage AI and big data to understand and address root causes of inequalities?

Speaker: Ivana Bartoletti

Explanation: This represents an opportunity to use AI for positive social impact and to combat discrimination.

How can governments, civil society, and tech companies more effectively collaborate to ensure online platforms are protecting and upholding rights?

Speaker: Mia McAllister

Explanation: Effective collaboration between these stakeholders is crucial for addressing online safety and human rights issues.

What are some key interventions to improve online safety for women and girls in West Africa, particularly in relation to the Istanbul Convention?

Speaker: Peter King Quay

Explanation: This highlights the need for region-specific strategies to implement global human rights standards in the digital space.

How can AI literacy be improved through education to help people critically engage with these technologies?

Speaker: Ivana Bartoletti

Explanation: Developing ‘distrust by design’ and critical thinking skills is important for navigating the challenges posed by AI technologies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #50 Digital Innovation and Transformation in the UN System


Session at a Glance

Summary

This session focused on digital innovation in the United Nations system, featuring presentations from representatives of UNHCR, UNICEF, the UN Pension Fund, and UNICC. The speakers discussed various digital transformation initiatives aimed at improving services and operations within their respective organizations.

UNHCR’s presentation highlighted their digital strategy, which includes efforts to empower refugees through digital skills and access, as well as initiatives to improve internal operations. Key focus areas included digital inclusion, protection, and innovation, with examples such as a refugee services mobile app and efforts to combat misinformation.

UNICEF shared their approach to digital resilience for children, emphasizing the importance of protecting children’s data rights and digital security. They outlined a framework encompassing data protection, information security, and responsible data use for children.

The UN Pension Fund presented their innovative digital identity solution for proof of life verification, utilizing blockchain, biometrics, and AI technologies. This system aims to streamline the process of confirming beneficiaries’ status while ensuring security and privacy.

UNICC, as the UN’s shared IT services provider, showcased various digital projects supporting multiple UN agencies. These included AI-powered chatbots, the UN digital ID platform, and cybersecurity initiatives, demonstrating UNICC’s role in facilitating digital transformation across the UN system.

The discussion highlighted common themes of collaboration, efficiency, and the responsible use of technology to support UN mandates. Speakers also addressed questions about accessibility, education for refugees, and the potential for sharing UN-developed solutions with external entities.

Key Points

Major discussion points:

– Digital innovation and transformation efforts across UN agencies (UNHCR, UNICEF, UN Pension Fund)

– Use of emerging technologies like blockchain, biometrics, and AI to improve services and operations

– Importance of data protection, privacy, and ethical use of technology

– Collaboration and shared solutions across the UN system

– Accessibility and inclusion considerations in digital initiatives

The overall purpose of the discussion was to showcase how different UN agencies are leveraging digital innovation and emerging technologies to improve their operations and better serve their constituents, whether refugees, children, or retirees. The speakers aimed to highlight both agency-specific initiatives and collaborative efforts across the UN system.

The tone of the discussion was largely informative and positive, with speakers enthusiastically sharing their agencies’ digital transformation journeys and achievements. There was an underlying tone of collaboration, with multiple speakers emphasizing the importance of working together and sharing solutions across UN agencies. The Q&A portion at the end introduced a slightly more critical tone, with audience members raising questions about monitoring fund recipients and ensuring accessibility for all users.

Speakers

– Dino Cataldo Dell’Accio – Chief Information Officer of the United Nations Pension Fund

– Michael Walton – Head of Digital Services at UNHCR (UN Refugee Agency)

– Fui Meng Liew – Chief of Digital Center of Excellence at UNICEF

– Sameer Chauhan – Director of the United Nations International Computing Center (UNICC)

– Sary Qasim – Representative of the Government Blockchain Association in the Middle East

Additional speakers:

– Nancy Marango – Chairman of an organization in Kenya

– Audience member – From University of Ghana and Internet Society Ghana chapter

Full session report

Digital Innovation in the United Nations System: A Collaborative Approach

This session, held in the context of the Internet Governance Forum (IGF), showcased digital innovation initiatives across various United Nations organizations. Representatives from UNHCR, UNICEF, the UN Pension Fund, and UNICC presented their agencies’ efforts in leveraging technology to enhance services and operations.

Key Themes and Initiatives

1. UNHCR’s Digital Strategy

Michael Walton, Head of Digital Services at UNHCR, outlined the agency’s digital strategy focusing on:

a) Digital Inclusion: Ensuring refugees have access to digital tools and skills

b) Digital Protection: Safeguarding refugees’ digital rights and privacy

c) Digital Innovation: Leveraging technology to improve service delivery

Walton highlighted initiatives such as a refugee services mobile app and efforts to combat misinformation, emphasizing the importance of digital inclusion across age and gender divides.

2. UNICEF’s Digital Resilience and Public Goods

Fui Meng Liew, Chief of Digital Center of Excellence at UNICEF, presented the organization’s approach to digital resilience for children and its digital public goods initiatives. Key aspects included:

a) Digital public infrastructure development

b) A database of digital interventions for children

c) Efforts to protect children’s data rights and digital security

Liew emphasized UNICEF’s work on creating digital public goods and identifying digital solutions for children with disabilities.

3. UN Pension Fund’s Digital Identity Solution

Dino Cataldo Dell’Accio, Chief Information Officer of the UN Pension Fund, introduced an innovative digital identity solution for proof of life verification, utilizing blockchain, biometrics, and AI. The system’s four-proof framework includes:

a) Proof of identity at enrollment

b) Proof of authentication for subsequent interactions

c) Proof of liveness to prevent fraud

d) Proof of life to confirm beneficiary status

Dell’Accio highlighted the use of permissioned blockchain to balance security with organizational control.
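The four-proof framework above can be sketched in code. This is a minimal, hypothetical illustration, not the Pension Fund's actual implementation: all class and function names are invented, the "ledger" is a simple hash chain standing in for a permissioned blockchain, and biometric matching is reduced to an exact hash comparison (real systems use fuzzy template matching and dedicated liveness detection).

```python
import hashlib
import hmac

class ProofOfLifeLedger:
    """Append-only, hash-chained record store standing in for a
    permissioned blockchain: only the organization appends blocks."""

    def __init__(self):
        self.blocks = []  # each block: (payload, prev_hash, block_hash)

    def append(self, payload: dict) -> str:
        prev_hash = self.blocks[-1][2] if self.blocks else "0" * 64
        block_hash = hashlib.sha256(
            (prev_hash + repr(sorted(payload.items()))).encode()
        ).hexdigest()
        self.blocks.append((payload, prev_hash, block_hash))
        return block_hash

    def verify_chain(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for payload, prev_hash, block_hash in self.blocks:
            expected = hashlib.sha256(
                (prev_hash + repr(sorted(payload.items()))).encode()
            ).hexdigest()
            if prev_hash != prev or expected != block_hash:
                return False
            prev = block_hash
        return True


def enroll(ledger, beneficiary_id, biometric_template):
    # Proof of identity: bind a (hashed) biometric template to the ID once,
    # at enrollment. Only the hash is recorded, never the raw biometric.
    template_hash = hashlib.sha256(biometric_template).hexdigest()
    ledger.append({"event": "enroll", "id": beneficiary_id,
                   "template": template_hash})
    return template_hash


def proof_of_life(ledger, beneficiary_id, template_hash,
                  presented_biometric, liveness_ok, timestamp):
    # Proof of authentication: the presented biometric must match the one
    # bound at enrollment. (Exact-hash matching is a simplification; real
    # biometric systems score fuzzy similarity between templates.)
    presented_hash = hashlib.sha256(presented_biometric).hexdigest()
    if not hmac.compare_digest(presented_hash, template_hash):
        return False
    # Proof of liveness: a separate check (e.g. challenge-response) must
    # pass, to prevent replayed photos or recordings.
    if not liveness_ok:
        return False
    # Proof of life: record the successful, timestamped verification.
    ledger.append({"event": "proof_of_life", "id": beneficiary_id,
                   "time": timestamp})
    return True
```

In this sketch only the organization can append blocks, mirroring the point about permissioned blockchain: the Fund keeps operational control, while the hash chain still makes any after-the-fact tampering with recorded verifications detectable.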

4. UNICC’s Shared Digital Solutions

Sameer Chauhan, Director of UNICC, showcased various digital projects supporting multiple UN agencies, including:

a) AI-powered chatbots for enhanced user interactions

b) Cybersecurity initiatives coordinating threat intelligence across the UN system

c) Support for the UN digital ID platform

Chauhan emphasized UNICC’s role in facilitating digital transformation and collaboration across UN agencies.

5. UN Digital ID Project

Multiple speakers highlighted the UN Digital ID project as a significant cross-agency initiative, demonstrating the collaborative nature of digital innovation within the UN system.

Common Themes

Throughout the presentations and subsequent Q&A session, several common themes emerged:

1. Collaboration and knowledge sharing across UN agencies

2. Emphasis on data protection, privacy, and responsible use of technology

3. Focus on digital inclusion and accessibility, particularly for vulnerable populations

4. Adoption of emerging technologies like blockchain, AI, and biometrics

5. Development of scalable and cost-effective solutions

Audience Engagement

The session included a Q&A period where audience members raised questions about accessibility efforts for people with disabilities and the potential for sharing UN-developed systems with external entities. Speakers addressed these questions, highlighting ongoing initiatives and challenges in these areas.

Conclusion

The session provided a comprehensive overview of digital innovation efforts across the UN system, demonstrating a unified approach to leveraging technology for improved service delivery and operational efficiency. While each organization has its unique focus areas, the discussion revealed a strong foundation for collaboration and knowledge sharing in digital transformation efforts. As UN agencies continue to navigate the complexities of digital innovation, their collective efforts promise to enhance the impact and reach of their vital work worldwide, while addressing shared challenges such as digital inclusion, data protection, and the ethical use of emerging technologies.

Session Transcript

Dino Cataldo Dell’Accio: Welcome to Session 50 on Digital Innovation in the United Nations System. Good morning, good afternoon, good evening, depending on where you are located. Welcome to our audience here in Riyadh and welcome to our audience online. My name is Dino Dell’Accio. I’m the Chief Information Officer of the United Nations Pension Fund. And here today with my colleagues and friends from UNHCR, UNICEF, and UNICC, we are going to present the digital innovation in our respective organizations. Here in Riyadh, I’m with Mike Walton, the Head of Digital Services in UNHCR. And online, we are joined by Fui Meng Liew, the Chief of the Digital Center of Excellence of UNICEF, and Sameer Chauhan, Director of the United Nations International Computing Center. The sequence of presentation will be as follows. Mike will present the experience and the strategy in UNHCR. We’ll then go online with Fui Meng. She will be presenting the UNICEF experience. And then I will make a presentation on the experience of the UN Pension Fund. And last but not least, Sameer, as Director of UNICC, is going to present how UNICC is actually supporting us and the entire UN system in our efforts to implement digital innovation and transformation in the UN system. So without further ado, I’ll give the floor to Mike. And I will be monitoring also the chat for any questions or comments that you may have. Thank you. Mike?

Michael Walton: Thank you, Dino. And good morning, everybody. I was told that I spoke a little bit too fast when I did another session. So please wave at me if I’m speaking too fast and I’ll slow down. So I’m Mike, I head up the digital service at the UN Refugee Agency, and good to see you all today. Really glad that we can share the experiences. We’ve been through a bit of a journey since lockdown where we started defining our digital strategy. So we were working on our digital strategy all through the pandemic. And when we decided to do it, one of the things the High Commissioner said to us was this has to be designed from the ground up, has to be designed by the regions, and it has to be a framework. It can’t be a very restrictive strategy because every region is different. So we had a fantastic response from all of the regions. They ran workshops locally, and we really feel that at the end of it, we got something that was very locally relevant to people. So, I won’t play the video. But just to remind people about really the last 10 years in terms of the number of forcibly displaced, just to kind of state UNHCR’s mandate here, since I started in 2011, we’ve seen many different conflicts around the world. And this is everywhere from the Americas over to the Asia region. And you can see there the number of forcibly displaced has just grown really hugely. And I know we’re all aware of that because it touches on all of our countries and all of our lives. So there’s a huge need out there, a huge need to meet the needs of these people and to work with them to help them rebuild their lives. This is Jessie, she’s from DRC. I met her in Kakuma Refugee Camp. This was about five or six years ago. She’s now moved on, but she was running a coding center in Kakuma with very little resources. She had a generator. She had a team. She had some laptops, and they were developing Android coding apps for the local community in Kakuma Refugee Camp.
And it was really amazing just to see what they’ve been able to do with such little resource. It’s actually grown much bigger now, and now there are big coding centers. And there is an initiative called I Am Code, which focuses on female coding skills. And it’s growing from strength to strength. But this is in its minority. So we need to see many more examples of local skills and investment. So don’t worry about reading this slide. It’s our digital strategy on a page. But I’ll talk you through it, just to say. But what we tried to do was a cross-cutting digital strategy for every part of the organization. And really, again, although we talk to regions, we also talk to all of the divisions at HQ to make sure that their needs were met. So I’ll walk you through this just very briefly. The first part of the strategy is around refugee empowerment and making sure that refugees have the digital skills and the agency and tools to engage in the digital world. Many of us take the digital access for granted, and we can do many things. But for many, the gap still exists. So how can we address that gap? Really, to rebuild lives. And rebuilding lives is something that’s so important. Refugees spend many of their years, many of their youthful years, in refugee camps. And things like education are so important to help them build the skills further. We also had two pillars of the strategy which were looking internally, which are how do we work better and how can we be more effective as an organization? And I’ll go into those two pillars. But it was important for us to match what we were saying externally with what we were doing internally, too. At the end, maybe we have some discussion time about the commonality, because there’s so much common need across all of the conversations I’ve heard. I’ll focus on a couple in a minute. But just to say, I’ve heard so much about capacity building over the last few weeks. 
How do we work together, multi-stakeholder, to deliver the best possible outcomes? And how do we work together to make sure that we’re deliver a capacity-building approach across all of our users? How can we be efficient in our digital tools that we produce? How can we be more effective and make sure that we reuse, we share and we don’t waste valuable resources? And how can we make sure that we don’t, with one hand, provide good, but with the other hand, inadvertently cause harm, perhaps by using climate-heavy, carbon-heavy approaches? So how can we really be good at making sure that we do no harm? And ethics is going to be a really important part of our focus next year. And if anyone here is interested in being involved in that with us, we’re really looking at how we can have an ethical approach to the use of tech in the humanitarian sector, working with other stakeholders. With accessibility, we’ve worked a bit with Microsoft and Google, and we’ve met with a group of refugees in London at Google headquarters. So thanks to them for helping facilitate that meeting. But as well as Google and Microsoft, there are so many other applications being developed. How can we make sure that the disabled users, whether it’s those with visual impairment, those with hearing impairments or other forms of disability can be properly catered for in the digital world? So we’re really focusing on that. And we’ve done a big piece of work on accessibility. It’s a common need. It’s come up again and again. How can we share? How can we have a joint approach and a joint center of excellence to something like digital accessibility so we can be more effective and not reinvent the wheel? My team’s mantra is engage, engage, engage, engage with the communities, make sure that we’re delivering something that is relevant to them. 
Two of my team members have just been to Iraq and to Rwanda and have sat with our new refugee gateway to make sure that actually the prototype is meeting their needs and meeting their expectations. So really important that we design for user feedback. Digital inclusion is a critical first pillar of the strategy. I won’t go on too much about digital inclusion because it’s been such a key focus of this week. You know, I’ve heard almost every session about digital inclusion. It starts with connectivity and we have a big refugee connectivity for refugees initiative. But it also goes on to when you get that connectivity, what can you do with it? How can you learn? How can you train? How can you get work? How can you be financially included? So for us, that inclusion is very all encompassing. Connectivity is always talked about. But what about all of the other things that come? I’ve heard one or two, one conversation actually on age. And I sat with my 94 year old father in law before I came here trying to get him to use his mobile phone so he could access. some of the services and call for safety if something happened to him, he wouldn’t have been able to do that without support. So I think when we’re talking about inclusion, we’re also talking about inclusion of elderly people who may not have the skills, or people who don’t have the access, and also make sure that that’s completely equal across the gender divide as well. So the second piece is digital services. So what we said was, okay, so when people get connectivity, the ability for us to provide services to individuals is going to be much more effective and much more efficient. So we’re now building a digital gateway, which is a one-stop shop for refugees to register, access appointments, find work, and find essential services. So that’s very much in its first stage of development. However, and we’ve heard a lot about risks, I think, in this week, with connectivity comes the risks and the threats of being online. 
So we have a huge program of work on digital protection, and I’ll talk a little bit more about that, but that really covers everything from hateful content. Imagine if you’re a refugee coming to a new community, whether that’s in the UK or anywhere else, and you’re met online with hateful comments or with misinformation, and it’s a real issue for us. So we want to make sure that we find a way of addressing that. Communicating, how we communicate with each other, how we communicate with our partners is another pillar. I won’t go into too much detail on that today, but that’s very much about how do we use all of the new communication tools to be more effective, and how do we work digitally? This may not look like a digital picture, but we were using Internet of Things monitoring for our water deliveries to make sure that we knew what the water levels were, we knew when deliveries needed to be made, and we’re using Internet of Things technology to help us inform and send data back for that. So really, really useful, interesting examples. Dino, please tell me if I need to speed up. You know, we talked about innovation, and what we have is a fantastic innovation team in UNHCR. They have a digital innovation fund and actually there’s a refugee-led fund as well which goes into it and a data fund too but let’s talk about the digital fund. We have a certain amount of money allocated to the innovation fund and refugee-led organizations can apply too and it’s really about if you have a good idea and you want to test it, how can they get funding for it and how can they do a pilot and a prototype. It’s a really good mechanism and we find that we have maybe 50 or so, 100 or so different applications across the funds every year and some of them will fail, that’s the nature of innovation and some of them will succeed and that can help us move forward. This is just a picture of our refugee services mobile app that I mentioned that’s being tested in Rwanda and Iraq.
So you can see it’s online services, you’ll be able to arrange appointments, refugees will be able to access documents, they’ll be able to register. Imagine the huge queues that happens when there is a sudden onset of an emergency and the sudden need to be registered as a refugee. With a mobile app we can help relieve that. We’re not trying to take away face-to-face contact, that will always be our number one priority but we are looking at how can we actually speed up registration so people can access essential services and information as quickly as possible. This is our help website. When I arrived we didn’t have a help website, now we have 14 million forcibly displaced visiting our help website for critical information every year and it’s really important, it’s a lifeline for people. They can find out how to access, how to go through the asylum process, how to access essential services that in the country they’re in and it’s becoming, we now have pretty much all country operations covered by this and when things suddenly change, for example in the Syria situation or in Ukraine situation, it’s really important to be able to provide information quickly to people, quickly to refugees and as well as this we have a WhatsApp service which is about engaging refugees who are using WhatsApp but when Ukraine happened, we realized that WhatsApp wasn’t really the primary tool of communication in Ukraine, it was it was Viber and we hadn’t really explored Viber before so we really needed to kind of look at what are all of the different messaging platforms that we should be working on and how can we be more effective. We work really closely with partners, some of whom have been here this week so with Meta, with Google, with the EU, how can we really work with them and also how can we work to adapt business practice as well. so that actually business practice changes for some of these companies so that they also do no harm when they’re operating. 
Just a little thing around hate speech really, you know, it’s we’ve done a two-year project that’s been funded by the European Union and it’s about now we’re really talking about information integrity and trusted information. How can we be sure that trusted information is is accessible at all times and how can we make sure that that access to information is not restricted? So we’ve been working a lot with UNESCO, with ICRC and with the Norwegian government, with the Swiss government on really looking at ways that we can make sure that there is a safe and trusted information world out there. I don’t think I can play this video but if it’s circulated, is it possible to play that video on the You don’t necessarily need the sound, that’s fine. But just as an example, this is a AI-generated video. Don’t worry about the sound. An AI-generated video that was playing out in the Rohingya crisis and it was, if you can tell, it was by AI because actually AI is quite bad at creating the letters and the acronyms of UNHCR, so it’s quite wrong. But it’s basically showing UNHCR, bearing arms and holding content, which actually is never the case. We only act in peace and it’s completely fabricated content. So how do we really work against AI to make sure, work with AI and restrict AI to make sure that we don’t get false content out there? This was also, you know, a quite heavy issue for us in terms of impact on our operations and impact on refugee safety in Indonesia. So we had to be really careful at the time, too. Okay, so and then survey results, we actually looked at some of, we asked a survey of some of the questions that we wanted to ask refugees. And actually, you know, the many many, Many refugees have faced hate speech, have seen it on their own channels and have been disturbed and affected by that. So how can we really make sure that we’re really addressing that hate speech and it really isn’t causing harm? 
And you can see there, psychological harm, social harm, financial harm. We all reported this when we went out and we did a survey. Coming to a close now, but just some of the things that we’re seeing is borders are increasingly becoming digitized. And if you’re a refugee seeking safety, how do we make sure that actually safety is really there and that the technology that can be used for good at borders is also not used in the wrong way at borders? So we’re really looking and keeping an eye on the technology. Looking at what might happen here, biometric travel documents, apps and websites that are being used. So bots. It’s only being tested. Data is really essential. I went to a great presentation on data and movements and how can we really ensure that we can do good predictive analysis there. Again, I won’t play this video, but again, if you have the deck afterwards, you can see that. And finally, just to say, as I said at the beginning, how can we work in the spirit of the digital compact on common areas? Ethics, a common approach to ethical, ethical approaches. Gender, how can we really make sure that we are including both gender, but also age, as I mentioned at the beginning. Accessibility, let’s not reinvent the wheel and create many different training courses. On accessibility, there only needs to be one that we can share or maybe several that we can share. These things can be adapted. And then policy, again, lots of discussion on policy today and standards. So that’s it for me, Dino, over to you.

Dino Cataldo Dell’Accio: Thank you very much, Mike, for the very insightful, comprehensive presentation. Very impressive, the amount of technologies and the scope of your purview. Fantastic. Thank you so much for sharing. So, we are now going to pass the floor to our colleague, Fui Meng Liew, Chief of the Digital Center of Excellence at UNICEF. She’s based in New York. Fui Meng, if you can please turn on the video, as you already did. Thank you so much. And also share your screen for the presentation. Fui Meng is going to make a presentation on digital resilience for impact for children. The floor is yours, Fui Meng. Thank you.

Fui Meng Liew: Thank you, Dino. First of all, can you hear me loud and clear?

Dino Cataldo Dell’Accio: Yes, we can hear you very well. But if you can please share your screen for the presentation. It’s not shown on the main screen.

Fui Meng Liew: Let me do that now, because I have to replace Mike’s sharing right now. And please let me know if you can see it.

Dino Cataldo Dell’Accio: Not yet.

Fui Meng Liew: What about now? Okay. Let me see. Yeah. Okay, perfect. It’s clear now. Yeah, it’s clear?

Dino Cataldo Dell’Accio: Yes, it is. Thank you.

Fui Meng Liew: Thank you, Dino. Dino, because we are hearing the room a bit choppy online, please feel free to stop me if in the room you don’t hear me very clearly. First of all, I want to thank Mike for doing such a good presentation about digital transformation and innovation in the impact for refugees. And now I’m going to take us to another turn, in terms of how we see digital innovation and transformation for the impact for children. As an introduction, my name is Fui Meng Liew. I am the Chief of the Digital Center of Excellence of UNICEF, and I’m based in Nairobi. Good morning, good afternoon, good evening to everyone today. So as some of you might know, UNICEF is a child rights organization. From our perspective, we look at the holistic view of how UNICEF as a UN agency can actually propel and deliver impact for children around the world. And today, in this conversation that I’m having with you all, we’re going to focus on the digital resilience framework that UNICEF put together to really look at, in the digital age, how we make sure that the rights of children, in terms of accessing data and their digital rights, are being protected along the way. So it goes without saying that digital is intertwined in UNICEF’s DNA. In UNICEF, we are actually guided every four years by our strategic plan. Digital transformation is in our current strategic plan of four years from 2022 to 2025. What does that really mean? That really means that the entire organization looks at digital as a very important change strategy for us to change our way of delivering services and impact for children. And it’s not only that. In the UN organization, most recently in September, we also saw the approval and launch of the Pact for the Future, which actually triggered a lot of conversations. Most importantly, the Global Digital Compact actually also binds us with the member states in our ambition on digital. So digital is not only really happening in
So this is not digital is not only really happening in. UNICEF and it is really happening around the world and we are living in a digital world. What we are also seeing most importantly is that our stakeholders, over 190 governments and territories, countries and territories that UNICEF work very closely with, we’re seeing a sea change of them coming out and telling us the fact that they need us to be also stepping up on how to do digital delivery for services and any of the results that we deliver for children. Just a little bit of data. Over the last two years, UNICEF around the world, we actually successfully got a sneak peek about what are the digital interventions that we are hearing from our country offices. As I mentioned, we have a global footprint. We have country offices in over 160 countries and we’re in seven regions. What we’re seeing is that through our knowledge management platform, we’re looking at over 1,600 digital interventions and this can be digital innovations, digital ideas and this can also be digital interventions that scale massively in the country with the government. This is not a small number in all ways and means and this really strengthens our belief that digital is intertwined in UNICEF. How do we see digital is important for child rights, being a child rights organization? The way that we look at it is that it’s the whole journey of a child. In UNICEF, our mandate is covering from health, education, sanitation and so forth. Really, that reflects into a journey of a child from birth to a child getting registered with a legal ID. a child being vaccinated, they receive social protection, and they receive the right education, and they receive the protection all the way from zero age, all the way to when they become a young adult. And there are some disturbing numbers that we’re seeing so far is that, let’s say, let’s take the example from legal identity perspective. 
A child is born, and we know from our interventions and data collection with government agencies that in sub-Saharan Africa, just as an example, over 90 million children are not registered. If a child is not legally registered, that truncates the child’s potential: he or she will not be able to get vaccinated, most likely will not get a proper education, and may fall into a social protection gray zone where we cannot protect the child. So the fact that a child is registered and gets the right services is really, really important to us. And what does this mean for digital? With governments getting more and more digital, and a proliferation of digital solutions and digital public goods in the market, we see a strong need for UNICEF to work with the likes of these digital solution providers: DHIS2, the District Health Information System 2, which is managed by the University of Oslo; OpenSPP, for social protection, which is championed by some of the big member states such as Germany; Primero, a case management system; Learning Passport; and Giga. Giga is a very interesting one, because the way we see digital also includes the importance of having connectivity in every school and every primary healthcare center where we provide services, directly or indirectly, to children and families. Giga is a flagship partnership of UNICEF with ITU to connect every school. That is a very ambitious goal, but it goes to show that digital is important for child rights, and we see that we have to come out and really support this cause.
I want to spend a little time not only on why digital is important for child rights, but on how UNICEF walks the talk on enabling digital public goods in the digital ecosystem to deliver results for children. A little background on digital public goods: there is a Digital Public Goods Alliance, which UNICEF co-founded three years ago. We have been working with key partners, from Norway to UNDP and others, to realize the key principles of DPGs: making sure the solutions give countries greater control over how they build and enhance their digital public infrastructure; offering cheaper and faster implementation (I hope the room can hear me; okay, I’ll continue), cheaper and faster implementation than proprietary solutions, so that governments do not get into vendor lock-in situations; and, last but not least, catalyzing the local tech ecosystem, so that locally there can be a vibrant ecosystem to sustain the work on digital public goods. On the right-hand side, you see how UNICEF walks the talk, investing not only in the key principles but in some of these digital public goods, ranging from RapidPro, a real-time messaging platform used in more than 100 countries (last year we sent more than 1.2 billion messages globally), to Primero, Oky, Bebbo and Yoma, which cover different sectoral needs of UNICEF in delivering impact for children. Switching gear a little: we live in a digital world, and by design we are vulnerable. The way we see it, cyber attacks are everywhere, and they impact UN agencies as much as they impact private sector or government agencies.
So we take it very seriously that it is our accountability and responsibility to make sure children’s data and rights are fully respected. In that spirit, we started the digital resilience framework, whose core objective is to protect data rights, keep personal data secure, and use it ethically. Here we are talking about children’s data as well, and children can be the most vulnerable group, because most of the time they are not the ones making decisions about their own data. (I’m still hearing some sound from workshop 7 here, so I just want to make sure everyone can hear me.) How does digital resilience work? It has three key pillars. One is data protection. The second is information security. The third is responsible data for children. In data protection, we care about the right to know why personal data is being collected and to give consent for its use. In a lot of countries there are clear data protection laws; from UNICEF’s perspective, we want to keep the standard high and make sure data protection is clearly adhered to in UNICEF and in the work we do on the ground. Information security is about introducing technical controls on who can access the data and when, and protecting the data in our custody. The third pillar, responsible data for children, is about ensuring the adoption of the highest ethical standards. We are talking about data for children and of children, and this can be really sensitive data if we do not do this right. (The workshop 7 room is unmuted and creating background noise; it needs to be muted.)
The fact that we need to prevent any misuse of data, and use data to its maximum potential, sits in the pillar of responsible data for children. And how do we put the framework to use? There are multiple channels. First, we are integrating resilience into the Technology Playbook. The Technology Playbook is UNICEF’s comprehensive guide for our programme and technology-for-development colleagues on the ground to implement digital programming solutions. We have made a very strong effort to integrate the digital resilience framework as an assessment into the playbook, so that it is fully utilized in all initiatives at the country level. That is one part of putting it to use. The second part is amplifying the need for it by strengthening the digital resilience framework through global partnerships. We have been working concertedly with different partners outside of UNICEF, and we are open to more feedback from this group as well. We really seek collaboration with governments and donor agencies on implementing capacity building for the information security, data protection and responsible data pillars of the framework. With that, I’m going to pause. Thank you all for letting me have the stage to share what UNICEF is doing in digital transformation and innovation and, more importantly, our digital resilience work. I want to hand it back over to the room. Dino, over to you. Dino, I cannot hear you; I think workshop seven is muted.
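The three pillars described above (data protection, information security, responsible data for children) can be read as a gate that any proposed use of children’s data must pass. The sketch below is purely illustrative: the function, field names and roles are invented for this example and are not UNICEF’s actual playbook tooling.

```python
# Illustrative sketch of the digital resilience framework as a three-pillar
# gate. All names here are hypothetical, not UNICEF's real assessment.
def resilience_check(request: dict) -> list[str]:
    failures = []
    # Pillar 1: data protection - a stated purpose and recorded consent
    if not (request.get("purpose") and request.get("consent_recorded")):
        failures.append("data protection: missing purpose or consent")
    # Pillar 2: information security - technical access control on who may see data
    if request.get("requester_role") not in {"case_worker", "data_steward"}:
        failures.append("information security: role not authorized")
    # Pillar 3: responsible data for children - ethical review for sensitive uses
    if request.get("sensitive") and not request.get("ethics_review_passed"):
        failures.append("responsible data: sensitive use lacks ethical review")
    return failures  # an empty list means the request may proceed

ok = resilience_check({
    "purpose": "vaccination reminder", "consent_recorded": True,
    "requester_role": "case_worker", "sensitive": False,
})
```

The point of the sketch is only that all three pillars are checked together; a request that fails any one of them is blocked, not merely logged.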

Dino Cataldo Dell’Accio: Can you hear me now?

Fui Meng Liew: Yes, I was able to hear you. Handing it back over to you.

Dino Cataldo Dell’Accio: Thank you very much, Fui Meng. I really liked your presentation, and in particular the fact that you gave some concrete examples of digital public goods, because there is a lot of talk about DPI but it is not always clear to many in the audience how this translates into concrete applications. So thank you very much for sharing UNICEF’s examples. With that said, we are now switching to my organization, the UN Pension Fund, where I will walk you through the innovation we implemented in digital identity for proof of life, or proof of existence. I’m going to share my screen and make the presentation. In the case of the UN Pension Fund, we are focused on a particular demographic, basically what we refer to as the aging population. The UN Pension Fund has 86,000 retirees and beneficiaries. They live across the globe in more than 192 countries, and one of the problems the UN Pension Fund historically had to address was how to confirm that these individuals who receive periodic benefits from the Fund are indeed still alive. I will share with you how we addressed this problem using emerging technologies. Very briefly, some indicators about the UN Pension Fund. We strive to meet all the benchmarks defined for our operations and to pay 100% of benefits. The market value of our assets as of today is around $100 billion; in 2023 this was valued at $84.4 billion. It is basically one of the biggest financial assets of the United Nations globally. And we are fully funded: we are one of the few defined-benefit pension funds in the world that is completely funded. In 2023 we were 117% funded. We serve 25 member organizations. The 83,000 periodic benefits a year you see on this slide have since increased to 86,000, and the 143,000 participants have increased to 150,000.
These are some of the emblems and logos you will probably recognize, of the member organizations we serve. One of our most important partners, who will present shortly after me, is the United Nations International Computing Centre, which developed, technologically speaking, all of our most innovative applications and solutions, including the one I am going to speak about today. Sameer, the Director of the United Nations International Computing Centre, will then give a much wider presentation of what UNICC does, not only for the UN Pension Fund but also for UNHCR, UNICEF and the entire ecosystem of the UN system. Here is the problem I was alluding to before. For more than 70 years, the UN Pension Fund had the issue of serving these 86,000 individuals across more than 190 countries and confirming whether they were alive. The way this was done was by mailing a paper form through 195 postal services, asking them to sign it and return it through the same 195 postal services. As you can appreciate, this process was highly prone at the very least to delays (and indeed we had to perform it twice a year), but in some cases also to loss of paper; in extreme cases, because of the rules and regulations of the Pension Fund, we were forced to suspend payments. And that, as you can appreciate, has very serious negative impacts on the household. So how did we address this problem? We transformed and digitalized, hence digital innovation and transformation, a paper-based and mail-based process into a solution that uses emerging technologies: a mobile application, blockchain, but also biometrics, artificial intelligence and the Global Positioning System.
And this is the actual screenshot of our intranet posting in 2021, when we went live with the application and announced the deployment of the solution to all our stakeholders and clients. On the right side you see a screenshot of the mobile application that each of our clients and users can download on their device, and on the left side a screenshot of our intranet posting. When we were given this problem, we had to translate it into its main logical components, and the way we did this was to identify the proofs we needed to provide. One is proof of identity, at the very beginning of the process, when we enroll our client into the solution. Then proof of authentication, every time the application is used. As I alluded to before, this is not just a digital identity solution but a digital identity solution for proof of life, demonstrating and confirming that the intended recipient is indeed still alive, and therefore we had to provide a proof of existence. Then a proof of transaction, in order to create an independently auditable and traceable record that can be verified and validated by external parties. And ultimately a proof of location, because in some cases our beneficiaries can receive, if they wish, payment in local currency, hence the need to determine that they indeed reside in the country for which they elected to receive the benefit. Of course, there were implicit but also explicit expectations of benefits, such as security, reliability, transparency, accountability and, ultimately, attribution of the transactions.
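The proofs enumerated above (identity, authentication, existence, transaction, location) can be pictured as one evidence record per check-in, with each record committing to the previous one so that the sequence is independently auditable. This is a minimal sketch with hypothetical field names, not the Pension Fund’s actual schema; note that, as in the talk, no raw biometric data appears in the record, only the on-device match result.

```python
# Hypothetical model of one proof-of-life evidence record, hash-chained so
# that each entry commits to its predecessor, as on an append-only ledger.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ProofOfLifeRecord:
    beneficiary_id: str            # pseudonymous ID, established at enrollment
    proof_of_authentication: bool  # on-device biometric match result only
    proof_of_existence: bool       # liveness check passed (real person, not synthetic)
    proof_of_location: str         # country code, captured only when required
    prev_hash: str                 # links this transaction to the prior ledger entry

    def record_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

genesis = ProofOfLifeRecord("B-001", True, True, "CH", prev_hash="0" * 64)
entry2 = ProofOfLifeRecord("B-001", True, True, "CH",
                           prev_hash=genesis.record_hash())
```

An external auditor can recompute each `record_hash` and follow the `prev_hash` links, which is what makes the record "independently auditable and traceable" in the sense used above.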
In doing so, we were inspired by the 2018 UN Secretary-General’s Strategy on New Technologies, by the Sustainable Development Goals and, most recently, by the UN 2.0 initiative of the UN Secretary-General. How did we do it? As I alluded to before, we adopted emerging technologies: blockchain, biometrics (specifically facial recognition), artificial intelligence and geolocation. Blockchain served as the technology that allowed us to create an immutable, independently auditable, traceable, triple-entry general ledger, where each transaction that occurs is recorded in an immutable manner. Why blockchain? I added this slide because, especially at the beginning, in 2021 when we went live, I was often asked: why did you use blockchain? Is it because it is now a trendy thing to do? No. First and foremost, as Chief Information Officer, I wanted to make sure we used a technology that would also prevent any potential form of collusion, primarily involving my own staff, because technologically speaking the solution could also be implemented using distributed encrypted databases. A database, by definition, requires a database administrator, who by default has super-user access and could potentially manipulate the database. I wanted to make sure that, in order to protect my staff, myself and of course the organization, we adopted a technology that would prevent and detect any potential case of collusion. By adopting blockchain, we adopted a technology without any central control, where however, in our case, using a permissioned blockchain, we could determine who could participate. In our case the scope of the application is well defined and limited to the 86,000 beneficiaries of the UN Pension Fund.
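The collusion argument above (a database administrator with super-user access can silently rewrite records, while an append-only hash-chained ledger cannot be rewritten undetected) can be demonstrated in a few lines. This toy verifier is illustrative only; a real permissioned blockchain adds consensus and distribution on top of the same idea.

```python
# Toy hash-chained ledger: altering any past entry breaks every later link,
# so even a super-user cannot rewrite history without detection.
import hashlib

def entry_hash(data: str, prev_hash: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64
    for data in entries:
        h = entry_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev or entry_hash(e["data"], prev) != e["hash"]:
            return False
        prev = e["hash"]
    return True

ledger = build_chain(["payment-2024-Q1", "payment-2024-Q2"])
assert verify_chain(ledger)
ledger[0]["data"] = "payment-2024-Q1-tampered"  # a DBA-style edit
assert not verify_chain(ledger)                 # detected downstream
```

In a mutable database the administrator could update both the record and any audit column; here the tamperer would have to recompute and replace every subsequent hash, which the permissioned, distributed copies of the ledger make infeasible.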
It also supports privacy through specific techniques such as zero-knowledge proofs, and lets us create, maintain and audit an immutable ledger. There are documents, issued for example by the World Economic Forum, that help organizations determine when and whether blockchain is indeed the suitable technology. Having adopted a technology based on biometrics and facial recognition, we very soon realized that we were going to be exposed to new vulnerabilities, such as the threats presented by artificial intelligence and deepfakes. We therefore decided to fight AI with AI, by embedding into the solution an AI module that verifies whether the person on the other side of the camera is a real person and not a synthetic or artificially generated image. The facial recognition data is stored only on the device of the user; we do not transmit or store the biometric profile of the user at any given time. It remains on their phone in order to authenticate them and to provide the input for a specific event, such as “authentication has occurred”, which is recorded on the blockchain. Finally, GPS captures the location when and if needed. I have been working for the United Nations for 25 years, and most of my career was spent in IT auditing. So one of the immediate questions I wanted to address, after designing and implementing this application with the subject matter expertise of our colleagues at UNICC, was: how can I go in front of my governing bodies and confirm that a solution that makes use of emerging technologies is secure, credible, trustworthy and sustainable? I started my quest to find criteria, standards and best practices, especially international best practices, that could demonstrate most effectively that the application was credible and reliable.
One of the first things I did, in the absence of specific standards for blockchain, was to adopt, for those familiar with cybersecurity standards, ISO 27001, a set of international best practices on information security management systems. The scope of the certification was focused on the application itself. We have been certified since 2021, and we are subject to surveillance audits on a yearly basis. Second, dealing with biometrics and the potential bias of the technology, I also subjected the solution to an independent assessment to confirm and demonstrate that there was no bias in authentication and identification. I also followed, of course, technical specifications issued by colleagues at the ITU, the International Telecommunication Union, as well as additional standards and best practices issued by ISO. And I submitted the application to specific cybersecurity assessments, including on data privacy. The UN is not subject to, for example, the GDPR, but the United Nations adopted the same principles and issued its own data privacy principles, which align with the GDPR and similar regulations, and therefore we conducted an assessment of data privacy. Right now I am in the process of having the application also tested against a very recent ISO standard for presentation attack detection: ISO came up with the ISO/IEC 30107 standard, a set of tests that can be conducted to confirm whether an application that uses biometrics is indeed resistant to potential attacks generated using artificial intelligence. That is in progress; I hope to have it completed by the first quarter of 2025.
As for our solution, the UN as a whole, at the highest level, through a body called the Chief Executives Board, decided to adopt the same solution to create a digital identity for all staff members of the United Nations. This is the document, which is publicly available, in which the CEB, the Chief Executives Board of the United Nations, launched the UN Digital ID project, defined the terms of reference of the product, and appointed, again, UNICC, the United Nations International Computing Centre, as the technical support and subject matter expert for the implementation of this solution, not only with the UN Pension Fund, as in my case, but now across the entire UN system. These are some screenshots of the application itself, which our colleagues, myself included, will be able to download on our phones and use for specific services related to us as staff members. A few words about acknowledgement and recognition. In 2021, when we went live, we received an award from the UN Secretary-General on innovation and sustainability. Shortly after, the Government Blockchain Association, which is present here in Riyadh under the banner of the IGF Dynamic Coalition on Blockchain Assurance and Standardization, gave us an award on social impact. We also became a case study by Gartner, a world leader in IT advisory services and market research; they documented and issued a case study on our solution. And a trade journal specializing in investments and pensions recognized our efforts. And here we are in Riyadh, at our booth in the IGF Village. I created a specific Dynamic Coalition and brought to the attention of all our stakeholders the importance of assurance of blockchain and, more broadly, of emerging technologies. Thank you very much for your attention.
I will now pass the floor to Sameer Chauhan, the Director of UNICC, which led the technical implementation of this sophisticated solution, and who can speak more broadly about its support to the whole UN system. Thank you very much. Sameer, the floor is yours.

Sameer Chauhan: Thank you, Dino. I will turn my camera on; hopefully you can see me. (Yes, we can see you.) Great. And I will try to share this presentation; please let me know if you can see it. (Yes, if you can put it in slideshow mode.) Perfect. Thank you. It is an honor to come after the other colleagues from UNHCR, UNICEF and yourself, Dino, and thank you for having me join this IGF forum. As you mentioned earlier in your presentation, we are designed to support all of you, our partners, in your digital transformation journeys. I will walk us through a few slides about what we do, and there are some really good connections to what you shared earlier, Dino, so I think it will be useful to present that from our point of view. Just a bit of background: we were established through a General Assembly resolution to provide technology support to the rest of the UN system, and at this point we are the largest strategic technology partner for anything digital, as well as cyber, for the UN system. We have over 100 client and partner organizations, so essentially the entire UN system works with us in some way, shape or form on their cyber and digital efforts. We have been around for 54 years now; in fact, it was 54 years as of yesterday, so we celebrated our 54th birthday. We offer a whole diversity of digital services and solutions, and we currently operate out of five locations. I think you shared a version of this map also, Dino. We support everybody across the UN in their digital journeys, and we try to look at the work we do from the perspective of which SDGs it supports. Since we work with the entire UN system, when we did a mapping we realized that our work and our projects support all 17 SDGs. One more point to mention here is that our board, which represents the entire UN system, just approved, a few months ago, a new corporate strategy for UNICC.
It is essentially a strategic framework that allows us to build digital foundations that all UN partners can use for their digital transformation journeys and for developing the digital solutions they need to achieve their organizational outcomes. There are five pillars to the strategy, covering everything from where the solutions are built and run, i.e. the infrastructure; to the actual digital tools and solutions we can help our partners with; to securing those solutions; to data and AI services that derive insights from the solutions partners have built; and finally to providing expertise and insights, i.e. experts who can provide support and guidance to our partners on their digital transformation journeys. I’ll spend a few minutes sharing some examples of the kinds of work we do. It spans a very wide spectrum: on average we take on about 150 to 200 new digital projects a year for our partners, and we manage more than nine petabytes of data in our data centers, as well as supporting partners on the public cloud. It is a very vast portfolio of technology services, but here are some snapshots to give you an idea. The big buzzword today is generative AI, and many UN partners are implementing Gen AI solutions for their specific business needs and mandates, for the work they do in the field, whether for refugees, children, women, health and so on. What we are doing behind the scenes is building common solutions that multiple partner organizations can use, and this is a great example: UNHCR built a solution to centralize HR policies and provide a Gen AI chatbot. For example, you can ask: how much uncertified sick leave do I have? How do I get parental leave? Can I request to work 90% of the time?
For those kinds of policies and guidelines, what we did is take the solution UNHCR had built and expand it to work for 13 different organizations. We sift through all of the HR policies from these organizations, train the AI on them, and make the chatbot available to all of the employees of these organizations. This is being scaled up, with more and more partner organizations continuing to join the platform. Another example, which Dino spoke about earlier, is that we built the core technology used by the Pension Fund for the certificate of entitlement Dino shared. Since this was championed by the CEB, the Chief Executives Board of the UN system, they asked us to build, on top of that Pension Fund solution, a platform that will be the UN Digital ID. Today six organizations, the UN Secretariat, UNDP, UNHCR, UNICEF, the Pension Fund and WFP, have launched this, and the first phase of the system is live. The goal is that multiple capabilities and features will be built onto this platform, using the same blockchain and biometric solution Dino spoke about earlier. This will eventually allow all UN organizations to use this one common platform to facilitate the exchange and interoperability of staff data. And the data will be controlled by the staff members themselves: they will get to choose which information they share with which stakeholder and when, and will be able to revoke that access after sharing. So this is another example of a shared, common, collective solution we have built. When it comes to digital innovation, solutions can be big or small, very complex or fairly simple, but as we work with each of our partners, our goal is to make sure the solution serves their needs and is actually impactful.
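The shared HR-policy chatbot idea above (one corpus, many organizations, one question answered from the best-matching policy) can be sketched with simple keyword retrieval. The snippets and the scoring below are invented for illustration; the production system described here uses generative AI trained over the full policy corpus, not this toy overlap score.

```python
# Toy retrieval over a multi-organization policy corpus: score each snippet
# by keyword overlap with the question and return the best match.
def best_snippet(question: str, corpus: dict[str, str]) -> str:
    q_terms = set(question.lower().split())

    def score(text: str) -> int:
        return len(q_terms & set(text.lower().split()))

    return max(corpus, key=lambda key: score(corpus[key]))

# Hypothetical snippets standing in for policies from different organizations.
corpus = {
    "UNHCR/leave": "uncertified sick leave up to seven days per cycle",
    "UNICEF/parental": "parental leave entitlements and how to request them",
    "WFP/parttime": "requests to work part time at 90 percent",
}
answer = best_snippet("how do I get parental leave", corpus)
```

Indexing every organization’s policies into one corpus is what lets a single chatbot serve 13 organizations: the retrieval step, not the model, decides whose policy applies.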
Here is a great example of what we built for PAHO, the Pan American Health Organization, which is the WHO regional office for the Americas. They asked us to build RPA solutions, which they call MAX and MIA, to handle their manual procurement processes. MAX and MIA are now live: they are bots running at PAHO that automate all of the purchase orders for health supplies. During COVID this was a huge blessing for them, because they were handling massive volumes of vaccines and supplies being shipped all over the Americas. MAX uses AI and machine learning, so it is a much more advanced bot than MIA, to automatically create advance shipping notifications. There is much more complexity there; it is typically something a human would do, but with constant training it is able to generate these shipping notifications and save thousands of hours of human time every week. Another example I’d like to share is work we have done with UNDP, through a partnership UNDP has with the EU on elections assistance. They created a joint task force, the EU and UNDP, and asked us to build an early warning and early response system, now called the iReports platform. What the iReports platform does is gather real-time information on the ground and provide all the relevant national authorities and independent monitoring authorities with risk reports; it flags any incidents of electoral violence or gender-based violence, and it allows for coordinated nationwide responses around these elections. The value UNICC brings is that the platform is being built and enhanced election after election, so more and more functionality and capability is added to make sure it is meaningful and can help ensure elections are handled seamlessly, without violence or interruptions. We also have a large cybersecurity practice.
What I am demonstrating here is, again, collective collaboration across the UN system: we have brought the entire UN system together to analyze threat intelligence so that we can make collective decisions about how best to defend ourselves from cyber risks and attacks. We gather information from all of our partner agencies across the UN system, and we enrich it with additional information from commercial security firms, service providers, member state government agencies, law enforcement and other trusted sources. Those insights are then refined, and up-to-date, timely information is presented to the entire UN community: how do you identify potential anomalies that could be risks or attacks, how do you respond to them, and what kind of mitigating actions do you take? That is an example of shared collective responsibility in action. This might be my last example, but what I wanted to highlight here is that we also take on solutions built by other partners. In this instance, the solution was jointly built by UNICEF, UNHCR and the World Food Programme, and they asked us to take it on board. The idea was to have one common platform through which all the civil society organizations, the third-party organizations at the country level that these three organizations work with, could come in and interact with these UN organizations: to build one common interface.
That is, a partner portal through which they can all come in and interact with these UN organizations. That platform was handed over to us by UNICEF and the others, and we have scaled it, enhanced it and built more capabilities into it. In the meantime, more and more organizations have joined; at this point I believe 12 or 13 UN organizations use it, and it has become one common way through which, I think, 40,000 civil society organizations can now interact, partner and work with the entire UN system. As you can see, there is massive functionality in there, including chatbots and PSEA modules to make sure that any civil society organization the UN system works with has gone through some level of screening and background checks. These are examples of the kinds of work we do at UNICC. Each of these projects is designed for scale and for use by multiple organizations, and, as always, we have to be cost-effective and respect the principles of the UN family, making sure we are neutral and unbiased in the work we do. With that, I’ll stop, Dino, and hand the floor back to you. Thank you again from my side for giving us a chance to speak about what we do.
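The collective threat-intelligence model Sameer describes (indicator feeds from many agencies and trusted sources merged, deduplicated and enriched before being shared back out) can be sketched as follows. The source names and indicators here are hypothetical, purely to show the aggregation step.

```python
# Sketch of multi-source threat-intelligence aggregation: merge indicator
# feeds, deduplicate, and count how many independent sources reported each
# indicator of compromise (IoC). All feed names and IoCs are made up.
from collections import defaultdict

def merge_feeds(feeds: dict[str, list[str]]) -> dict[str, int]:
    seen = defaultdict(set)
    for source, indicators in feeds.items():
        for ioc in indicators:
            seen[ioc].add(source)  # a set deduplicates repeat reports
    return {ioc: len(sources) for ioc, sources in seen.items()}

feeds = {
    "agency_a": ["203.0.113.9", "evil.example"],
    "agency_b": ["203.0.113.9"],
    "vendor_x": ["evil.example", "198.51.100.7"],
}
merged = merge_feeds(feeds)
# Indicators corroborated by two or more independent sources get priority.
high_confidence = {ioc for ioc, n in merged.items() if n >= 2}
```

The enrichment value lies in the corroboration count: an indicator reported by several agencies independently is a stronger signal than any single feed, which is the rationale for pooling intelligence across the UN system.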

Dino Cataldo Dell’Accio: Thank you very much, Sameer, for joining us and for sharing these very meaningful applications that you have built for the benefit of the UN. I really like that quote you used, “built by the UN, for the UN”; I think it is very indicative of the spirit we all follow collaboratively in sharing our problems and finding common solutions. So thank you very much; it was very insightful and meaningful. With that said, I think we are perfectly on time with our schedule. We now have approximately 10 to 15 minutes for any questions you may have. I see three hands raised. If I can ask to pass the mic: first, over here, please.

Sary Qasim: Hello, good morning everyone. It is really an incredible effort that the UN is making in digital transformation, with the applications you have built. My question is: would it be possible to offer such applications to cities and governments around the world, for them to use as something from the UN, through a good public-private partnership? Such systems are very important for the community, but they are expensive as well, and some governments avoid implementing them because of the cost. Is that something you may think about in the future? Thank you.

Dino Cataldo Dell’Accio: Can you please just mention your name and which organization you represent?

Sary Qasim: My name is Sary Qasim. I’m from the Government Blockchain Association, and I represent the GBA in the Middle East.

Dino Cataldo Dell’Accio: Thank you. Maybe this is a question for Sameer, who can elaborate a little bit more on the mandate and mission of UNICC vis-à-vis potential interaction beyond the UN and with other entities.

Sameer Chauhan: Thank you, Dino, and thank you for your question. Two things to mention here. First, yes, UNICC was designed to work with the UN system and beyond, so we can work with other non-profit organizations worldwide. And since the whole premise of creating UNICC was efficiencies of scale and reuse, anything we’ve built for our partners, as we did for Dino, for UNHCR, for UNICEF, and others, we can, under certain guidelines, make available to others, including member states. For example, we have initiatives with CITES, where we are working with partners like ITU. So it’s typically with a UN partner, but we do collaborate, and any knowledge or expertise we have, we want to make available to the broader community outside the UN. The second point is about our partners themselves. As you saw in the examples from Mike, from UNICEF, and from Dino, what we try to do across the entire UN ecosystem is make sure what we’re building is relevant and applicable in the wider world. Many digital solutions being built by our partners, with UNICC or without, are designed for use at a country level. The examples you heard about DPIs and DPGs are designed by UNICEF, UNHCR, and others precisely so that they can be made available to countries. I hope that answers your question.

Dino Cataldo Dell’Accio: Thank you very much, Sameer. Thank you for the question from the audience. Next question, please; if you could also say your name and your organization.

Audience: Thank you very much for this interesting presentation and the good things you are doing in the way of exciting innovation. My name is Youssef Amadou from the University of Ghana. I’m also part of the Internet Society Ghana chapter. My question goes to you, about your fund. First, who qualifies for the fund? Second, how do you do due diligence to find out whether the grantees applying for the fund are the right people? And finally, how do you monitor the grantees? I have an experience: I’m part of a World Bank project where we give funds to organizations and individuals for capacity building and job creation. We went for an observation visit just last week and found that one grantee had closed down his organization and changed his number, and we cannot find him. Although we have a fund management system on an application, and we have monitors who go to see what grantees are doing, this person has still vanished. Yours is a remote one. How do you monitor to find out that the grantees are real people, that they exist, and that they are at locations where you can easily find them, apart from using the technology? And my last question goes to the refugee organization. I design online content for my university, and I really care about the formal education you provide to refugees. Do you have a common platform where refugees across the globe can take formal courses at basic, high school, or university level, as well as professional courses? Do you have something like that, or the intention to do it? Thank you very much.

Dino Cataldo Dell’Accio: Thank you for your question. I’ll try to be very brief, because there are many details in the process. First and foremost, the answer to the first question, who qualifies: as I alluded to, the UN Pension Fund serves 25 organizations. The majority of them are UN entities, so all the UN entities in the system, plus additional international organizations that, although not part of the UN system, meet certain requirements and parameters and therefore submit a request to be accepted into the UN Pension Fund. We have a board that meets regularly every year, goes through this process, and ultimately decides whether or not they can be included. Second question: how do we vouch, how do we validate, how do we monitor? In order to become a retiree, someone had to be a former staff member, which means we have a history that the member organization has to provide to us through data interfaces. We have two main data interfaces: the financial data interface and the human resources data interface, through which we collect, on a regular basis, the entire history of each staff member. Before they reach the normal retirement age, we get in touch with them and ask them for certain documentation that proves and validates who they are and where they are going to live. The process of digital identification for proof of life and proof of existence starts with an onboarding process; I made that distinction before between identification and authentication. The first step is to identify who can be admitted into the digital identity solution of the UN Pension Fund. In the onboarding process, we verify. Originally, we wanted to do this in person, but that happened exactly during the pandemic.
So we had to convert it to a digital process, with video onboarding conducted by our call center, where we require the person who decides to be onboarded to present their national ID. We match that national ID with the document on file, on record, and we confirm, again using AI in that instance as well, that the person on the other side of the camera indeed matches what we have on file. Once we have done that, we authenticate the person. There is an additional question that you did not ask, but that is commonly asked: how do we take the aging process into account? Because, unfortunately, we get older, so how do we make sure the biometric profile remains valid? Every time the user utilizes the application, at least once a year, their biometric profile is updated, so we take the aging process into account. I hope I addressed your question. If not, we can take it offline. Thank you. Mike?

Michael Walton: Great, thank you. On education, we have quite a vast connected education program with lots of partners. We don’t develop courses ourselves, but we have many partnerships with different universities across the world and with online course providers like Coursera and LinkedIn, et cetera. We’re about to start creating a new application, a new site called Opportunities. And we work really closely with universities where we’re funding scholarships via the German government and other partners as well. So we don’t develop the courses ourselves, but in every country we try to gather together the places people can go, both for tertiary education and for other educational and training opportunities. I’m happy to speak to you more about that, but yes, this connected education piece, and how you can provide remote learning, is really a core part of our program. Thanks.

Nancy Marango: And the last question. Thank you very much for the floor. My name is Nancy Marango. I serve as chairman of an organization in Kenya. My question and concern is about accessibility: do we have a standardized system for integrating the middle-aged and the elderly in terms of access to information and opportunities within the system? Thank you.

Dino Cataldo Dell’Accio: Is that question directed to pension fund, UNHCR, UNICEF, all of us?

Nancy Marango: To all of you, because disability does not choose and everyone is a candidate.

Dino Cataldo Dell’Accio: Thank you. Very briefly, I’ll respond for the Pension Fund. Our demographics are, by definition, quite particular, as I alluded to in my title: we are dealing with an aging population. So accessibility issues are definitely taken into account, both on our website and in the instructions, and also in allowing those who have limitations to be assisted by legal guardians. We allow for that as well. Thank you.

Michael Walton: I’ve done a lot of work on disability over the last few years, really looking at all of our applications, because what we want to ensure is that when somebody wants to work for UNHCR, they have the trust and the belief that they can come to a workplace where those applications can be used. From a refugee perspective, 100%, we are working with refugees with disabilities to make sure of that. But I think your question is wider, which is: how can we make sure this happens across the UN? I think there is real potential here for a joint centre of excellence approach on accessibility, so that everybody can benefit from it. I 100% agree with that priority. Thank you.

Dino Cataldo Dell’Accio: Thank you, Mike. Fui Meng, would you like to also provide a perspective from UNICEF?

Fui Meng Liew: Yes, sounds great. Internally, UNICEF takes accessibility very, very seriously. Just recently, all our websites were verified to be accessible. And accessibility is not only about age; there are also disabilities and similar considerations: can people who are colourblind actually access the information on our internal systems? So that’s one angle. The other angle is about children. Many children who have a disability and need better digital accessibility functions in the tools we provide are not identified. So we are also working on identifying that group, children, more intentionally, so that we can provide digital interventions to help them learn and get through the journey better. So those are two different angles from which we look at accessibility, and both are very important for our work. Thank you for the question.

Dino Cataldo Dell’Accio: Thank you, Fui Meng. And Sameer, would you also like to add your perspective?

Sameer Chauhan: Yes, please. Thank you. Absolutely; as you heard from the other speakers, this is front and center in the work we all do. Across the UN ecosystem there is a group called the DTN, made up of the chief technology officers of the entire UN system. They meet on a regular basis, and accessibility guidelines and standardization are a key topic they discuss at that level, to make sure everybody is using the same approach. I can speak for UNICC: in all the systems we build, we make sure we adhere to those UN system-wide accessibility guidelines. But additional guidelines come into play as well, for example, the human rights guidelines issued by the Secretary-General. We want to make sure those are taken into consideration in all of our digital work, along with any gender considerations, et cetera. And as Dino alluded to, in the solution the Pension Fund built, which we worked on with them, we also had to make sure there is no bias from a regional or geographical perspective. So there are many, many factors to take into consideration here, but it is absolutely something we are all very conscious of and working on addressing every day. Thank you.

Dino Cataldo Dell’Accio: Thank you very much, Sameer. We are perfectly on time. Thank you very much for your participation. I don’t see any questions in the chat, so I think we can thank the audience, both in person and online, for their active participation, and thank my colleagues, friends, and speakers, Michael Walton, Fui Meng Liew, and Sameer Chauhan, for participating. The recording will be available on the website along with the presentations. Thank you very much. Have a good day. Bye-bye.

Michael Walton
Speech speed: 175 words per minute
Speech length: 3034 words
Speech time: 1038 seconds

Digital strategy focused on refugee empowerment and internal effectiveness

Explanation: UNHCR developed a digital strategy aimed at empowering refugees and improving internal organizational effectiveness. The strategy includes pillars for refugee empowerment, digital services, and internal efficiency.
Evidence: Development of a digital gateway for refugees to access services, and implementation of Internet of Things technology for water delivery monitoring.
Major Discussion Point: Digital Innovation in UN Organizations
Differed with: Fui Meng Liew, Dino Cataldo Dell’Accio
Differed on: Approach to digital innovation

Need to address digital inclusion and accessibility

Explanation: UNHCR recognizes the importance of digital inclusion and accessibility for refugees. This includes providing connectivity and ensuring digital services are accessible to all, including those with disabilities.
Evidence: Refugee connectivity initiative and work on digital accessibility with partners like Microsoft and Google.
Major Discussion Point: Challenges and Considerations in Digital Transformation
Agreed with: Fui Meng Liew, Dino Cataldo Dell’Accio, Sameer Chauhan
Agreed on: Importance of digital inclusion and accessibility

Multi-stakeholder partnerships for capacity building

Explanation: UNHCR emphasizes the importance of partnerships for capacity building in digital innovation. They collaborate with various stakeholders to deliver the best possible outcomes and share resources efficiently.
Evidence: Partnerships with Meta, Google, and the EU for digital initiatives.
Major Discussion Point: Collaboration and Knowledge Sharing
Agreed with: Fui Meng Liew, Sameer Chauhan
Agreed on: Collaboration and shared digital solutions across UN agencies

Digital services improving refugee assistance

Explanation: UNHCR has implemented digital services to improve assistance to refugees. These services aim to make registration, access to information, and service delivery more efficient and effective.
Evidence: Development of a help website with 14 million annual visitors and a WhatsApp service for information dissemination.
Major Discussion Point: Impact and Future Directions

Fui Meng Liew
Speech speed: 141 words per minute
Speech length: 2193 words
Speech time: 928 seconds

Digital resilience framework to protect children’s data rights

Explanation: UNICEF developed a digital resilience framework to protect children’s data rights and ensure secure and ethical use of personal data. The framework includes pillars for data protection, information security, and responsible data use for children.
Evidence: Implementation of the framework in UNICEF’s technology playbook and strengthening through global partnerships.
Major Discussion Point: Digital Innovation in UN Organizations
Differed with: Michael Walton, Dino Cataldo Dell’Accio
Differed on: Approach to digital innovation

Importance of data protection and responsible use of children’s data

Explanation: UNICEF emphasizes the critical nature of protecting children’s data and using it responsibly. This includes ensuring the highest ethical standards in data collection and use.
Evidence: Development of a responsible data for children pillar in the digital resilience framework.
Major Discussion Point: Challenges and Considerations in Digital Transformation

Shared digital public goods across UN agencies

Explanation: UNICEF promotes the development and use of digital public goods across UN agencies. This approach aims to provide countries with greater control over their digital infrastructure and catalyze local tech ecosystems.
Evidence: Investment in digital public goods like RapidPro, Primero, and Giga.
Major Discussion Point: Collaboration and Knowledge Sharing
Agreed with: Michael Walton, Sameer Chauhan
Agreed on: Collaboration and shared digital solutions across UN agencies

Digital solutions supporting child rights and protection

Explanation: UNICEF develops digital solutions to support child rights and protection. These solutions aim to address various aspects of a child’s journey, from birth registration to education and social protection.
Evidence: Development of digital solutions for birth registration, vaccination tracking, and education access.
Major Discussion Point: Impact and Future Directions
Agreed with: Michael Walton, Dino Cataldo Dell’Accio, Sameer Chauhan
Agreed on: Importance of digital inclusion and accessibility

Dino Cataldo Dell’Accio
Speech speed: 128 words per minute
Speech length: 3492 words
Speech time: 1626 seconds

Blockchain-based digital identity solution for UN pensioners

Explanation: The UN Pension Fund developed a blockchain-based digital identity solution for proof of life verification. This solution aims to replace the traditional paper-based process with a more efficient and secure digital method.
Evidence: Implementation of blockchain, biometrics, and AI technologies in the digital identity solution.
Major Discussion Point: Digital Innovation in UN Organizations
Differed with: Michael Walton, Fui Meng Liew
Differed on: Approach to digital innovation

Security and privacy concerns in digital identity systems

Explanation: The UN Pension Fund addresses security and privacy concerns in its digital identity system. This includes measures to protect personal data and ensure the system’s integrity.
Evidence: Adoption of ISO 27001 certification and compliance with UN data privacy principles.
Major Discussion Point: Challenges and Considerations in Digital Transformation
Agreed with: Michael Walton, Fui Meng Liew, Sameer Chauhan
Agreed on: Importance of digital inclusion and accessibility

Adoption of international standards and best practices

Explanation: The UN Pension Fund adopts international standards and best practices in its digital innovation efforts. This ensures the credibility, reliability, and sustainability of its digital solutions.
Evidence: Certification with ISO 27001 and adherence to ITU technical specifications.
Major Discussion Point: Collaboration and Knowledge Sharing

Innovation in pension management and verification

Explanation: The UN Pension Fund has innovated in pension management and verification through digital solutions. This includes the development of a digital identity system for proof of life verification.
Evidence: Implementation of a mobile application for pensioner verification using blockchain and biometrics.
Major Discussion Point: Impact and Future Directions

Sameer Chauhan
Speech speed: 168 words per minute
Speech length: 2358 words
Speech time: 839 seconds

Shared digital solutions and platforms for UN agencies

Explanation: UNICC develops shared digital solutions and platforms for UN agencies. This approach aims to increase efficiency and collaboration across the UN system.
Evidence: Development of a Gen AI chatbot for HR policies used by 13 organizations and a UN digital ID platform for six organizations.
Major Discussion Point: Digital Innovation in UN Organizations
Agreed with: Michael Walton, Fui Meng Liew
Agreed on: Collaboration and shared digital solutions across UN agencies

Ensuring solutions are scalable and cost-effective

Explanation: UNICC focuses on developing scalable and cost-effective digital solutions for UN agencies. This approach aims to maximize the impact of digital innovations across the UN system.
Evidence: Development of shared platforms like the iReports election monitoring system and the UN Partner Portal.
Major Discussion Point: Challenges and Considerations in Digital Transformation
Agreed with: Michael Walton, Fui Meng Liew, Dino Cataldo Dell’Accio
Agreed on: Importance of digital inclusion and accessibility

Common digital foundations for UN partners

Explanation: UNICC builds common digital foundations that UN partners can use for their digital transformation journeys. This includes infrastructure, digital tools, security measures, and data services.
Evidence: Implementation of a new corporate strategy with five pillars covering various aspects of digital foundations.
Major Discussion Point: Collaboration and Knowledge Sharing

Expanding shared UN digital solutions to other organizations

Explanation: UNICC aims to expand shared UN digital solutions to other organizations beyond the UN system. This includes making solutions available to non-profit organizations and member states.
Evidence: Initiatives to collaborate with CITES and make expertise available to the broader community.
Major Discussion Point: Impact and Future Directions

Agreements

Agreement Points

Importance of digital inclusion and accessibility
Speakers: Michael Walton, Fui Meng Liew, Dino Cataldo Dell’Accio, Sameer Chauhan
Arguments: Need to address digital inclusion and accessibility; Digital solutions supporting child rights and protection; Security and privacy concerns in digital identity systems; Ensuring solutions are scalable and cost-effective
Summary: All speakers emphasized the importance of making digital solutions accessible and inclusive for various user groups, including refugees, children, the elderly, and those with disabilities.

Collaboration and shared digital solutions across UN agencies
Speakers: Michael Walton, Fui Meng Liew, Sameer Chauhan
Arguments: Multi-stakeholder partnerships for capacity building; Shared digital public goods across UN agencies; Shared digital solutions and platforms for UN agencies
Summary: Speakers agreed on the importance of collaboration and sharing digital solutions across UN agencies to increase efficiency and maximize impact.

Similar Viewpoints

Speakers: Fui Meng Liew, Dino Cataldo Dell’Accio
Arguments: Digital resilience framework to protect children’s data rights; Security and privacy concerns in digital identity systems
Summary: Both speakers emphasized the importance of data protection and security in digital systems, particularly for vulnerable groups like children and pensioners.

Speakers: Michael Walton, Fui Meng Liew
Arguments: Digital services improving refugee assistance; Digital solutions supporting child rights and protection
Summary: Both speakers highlighted how digital solutions can improve service delivery and support the rights of vulnerable populations like refugees and children.

Unexpected Consensus

Adoption of emerging technologies across different UN agencies
Speakers: Michael Walton, Dino Cataldo Dell’Accio, Sameer Chauhan
Arguments: Digital strategy focused on refugee empowerment and internal effectiveness; Blockchain-based digital identity solution for UN pensioners; Shared digital solutions and platforms for UN agencies
Summary: Despite their different focus areas, all three speakers showed a strong commitment to adopting emerging technologies like blockchain, AI, and biometrics in their respective agencies, indicating a broader trend of technological innovation across the UN system.

Overall Assessment

Summary: The main areas of agreement among speakers included the importance of digital inclusion and accessibility, collaboration across UN agencies, data protection and security, and the adoption of emerging technologies to improve service delivery.
Consensus level: There was a high level of consensus among the speakers, suggesting a unified approach to digital innovation across different UN agencies. This consensus implies a strong foundation for future collaboration and knowledge sharing in digital transformation efforts within the UN system.

Differences

Different Viewpoints

Approach to digital innovation
Speakers: Michael Walton, Fui Meng Liew, Dino Cataldo Dell’Accio
Arguments: Digital strategy focused on refugee empowerment and internal effectiveness; Digital resilience framework to protect children’s data rights; Blockchain-based digital identity solution for UN pensioners
Summary: While all speakers focused on digital innovation, they had different primary focuses based on their organizations’ mandates. UNHCR emphasized refugee empowerment, UNICEF prioritized children’s data rights, and the UN Pension Fund focused on digital identity for pensioners.

Unexpected Differences

Overall Assessment

Summary: The main areas of disagreement were in the specific focus and approach to digital innovation, based on each organization’s mandate and target population.
Difference level: The level of disagreement was relatively low, with speakers mostly presenting complementary rather than conflicting approaches. This suggests a cohesive overall strategy for digital innovation across UN organizations, with each entity adapting to its specific needs and challenges.

Partial Agreements

Speakers: Michael Walton, Fui Meng Liew, Dino Cataldo Dell’Accio, Sameer Chauhan
Arguments: Multi-stakeholder partnerships for capacity building; Shared digital public goods across UN agencies; Adoption of international standards and best practices; Shared digital solutions and platforms for UN agencies
Summary: All speakers agreed on the importance of collaboration and shared resources, but had different approaches. UNHCR focused on multi-stakeholder partnerships, UNICEF on digital public goods, the UN Pension Fund on international standards, and UNICC on shared platforms across UN agencies.


Takeaways

Key Takeaways

UN organizations are implementing various digital innovation initiatives to improve their operations and services

Digital strategies focus on empowering beneficiaries (e.g. refugees, children) as well as improving internal effectiveness

Data protection, privacy, and responsible use of technology are key priorities across UN digital initiatives

There is a strong emphasis on collaboration and shared digital solutions across UN agencies

Digital inclusion and accessibility remain important challenges to address

Resolutions and Action Items

UNICC to continue developing shared digital platforms and solutions for use across UN agencies

UN organizations to adhere to common accessibility guidelines and standards in digital solutions

Explore creation of a joint center of excellence on digital accessibility across UN system

Unresolved Issues

How to make UN digital solutions more widely available to cities and governments

Addressing challenges of monitoring and verifying grantees/beneficiaries in remote settings

Fully integrating elderly and disabled populations into digital systems

Suggested Compromises

Balancing need for in-person verification with digital onboarding processes during pandemic

Using AI and regular biometric updates to account for aging in digital identity systems

Thought Provoking Comments

“We had to articulate and basically translate the problem into its main logical components. And the way we did this is that we identified the need to provide four proofs. One is the proof of identity at the very beginning of the process, when we enroll our user, our client, into the solution. And then proof of authentication every time the application is utilized.”
Speaker: Dino Cataldo Dell’Accio
Reason: This comment provides a clear framework for approaching the complex problem of digital identity verification, breaking it down into distinct components. It demonstrates a systematic and logical approach to solving a technical challenge.
Impact: This comment set the stage for a deeper discussion on the technical aspects of digital identity solutions. It led to further explanations about blockchain, biometrics, and AI, showing how these technologies address each ‘proof’ in the framework.

“By adopting blockchain, I adopted technology that did not have any type of central control. However, in our case, utilizing a permission-based blockchain, we could determine who could participate.”
Speaker: Dino Cataldo Dell’Accio
Reason: This comment highlights a key benefit of blockchain technology in ensuring security and preventing collusion, while also explaining how it can be adapted for specific organizational needs. It shows a nuanced understanding of the technology’s capabilities.
Impact: This insight sparked further discussion on the use of emerging technologies in UN systems, leading to explanations of how AI and biometrics are integrated into the solution.

“We have a large cybersecurity practice. And what I’m demonstrating here is, again, that collective collaboration across the UN system, where we have brought the entire UN system together to make sure we look at and analyze all the threat intelligence so that we can all make collective decisions on how best to defend ourselves from cyber risks or cyber attacks.”
Speaker: Sameer Chauhan
Reason: This comment emphasizes the importance of collaboration and shared resources in cybersecurity, highlighting a unified approach across the UN system. It demonstrates how individual organizational efforts can be amplified through collective action.
Impact: This comment shifted the discussion towards the broader implications of digital transformation across the UN system, emphasizing the importance of shared solutions and collaborative efforts.

“I think when we’re talking about inclusion, we’re also talking about inclusion of elderly people who may not have the skills, or people who don’t have the access, and also making sure that that’s completely equal across the gender divide as well.”
Speaker: Michael Walton
Reason: This comment broadens the discussion on digital inclusion beyond just connectivity, highlighting the importance of considering age, skills, and gender in digital access. It brings attention to often overlooked aspects of the digital divide.
Impact: This comment led to a more comprehensive discussion on accessibility and inclusion, with other speakers addressing how their organizations consider these factors in their digital solutions.

Overall Assessment

These key comments shaped the discussion by moving it from a general overview of digital transformation in UN agencies to a more nuanced exploration of specific challenges and solutions. They highlighted the complexity of implementing digital solutions in a global context, emphasizing the need for security, accessibility, and collaboration. The discussion evolved from technical aspects of digital identity to broader considerations of inclusion and shared resources across the UN system, demonstrating the multifaceted nature of digital innovation in international organizations.

Follow-up Questions

How can we have an ethical approach to the use of tech in the humanitarian sector?
Speaker: Michael Walton
Explanation: This was mentioned as an important focus area for the coming year, with interest in involving other stakeholders.

How can we create a joint center of excellence for digital accessibility across the UN system?
Speaker: Michael Walton
Explanation: This was suggested as a way to share resources and avoid reinventing the wheel on accessibility issues across UN organizations.

How can we address the digital inclusion gap for elderly people and ensure equal access across the gender divide?
Speaker: Michael Walton
Explanation: This was highlighted as an important consideration in digital inclusion efforts beyond just connectivity.

How can UN digital solutions be made available to cities and governments worldwide through public-private partnerships?
Speaker: Sary Qasim
Explanation: This was raised as a potential way to make UN-developed systems more widely accessible and affordable for governments.

How can we create a common platform for refugees to access formal education courses at various levels across the globe?
Speaker: Audience member (name not provided)
Explanation: This was suggested as a way to provide more comprehensive educational opportunities for refugees.

How can we better identify and provide digital interventions for children with disabilities?
Speaker: Fui Meng Liew
Explanation: This was mentioned as an important area of focus for UNICEF to improve digital accessibility for children with disabilities.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #9 Digital Technology Empowers Green and Low-carbon Development

Session at a Glance

Summary

This open forum focused on how digital technology can empower green and low-carbon development. Experts from various fields discussed the intersection of digitalization and environmental sustainability. The speakers emphasized that digital transformation should be people-centered and aimed at achieving sustainable development goals. They highlighted the need for coordinated efforts between digital and green transformations to address climate change and other environmental challenges.

Several key points emerged from the discussion. First, while digital technologies offer immense potential for sustainable development, they also have their own environmental footprint that needs to be managed. The lifecycle approach to assessing the environmental impact of digitalization was emphasized, covering production, use, and end-of-life stages. Second, the importance of capacity building, especially in developing countries, was stressed to ensure they can benefit from digital technologies while managing associated environmental costs.

Speakers also highlighted the role of education in promoting digital literacy and environmental awareness. Examples were shared of how AI and other digital technologies are being used in schools to foster understanding of low-carbon practices. The potential of AI in modeling clean energy transitions and improving climate models was also discussed.

The forum concluded with insights from industry representatives on practical applications of digital technologies in reducing energy consumption and promoting circular economy principles. Overall, the discussion underscored the need for collaborative efforts across sectors and countries to harness digital technologies for sustainable development while mitigating their environmental impacts.

Keypoints

Major discussion points:

– The importance of coordinating digital technology development with green/low-carbon goals

– The need for capacity building, education, and talent cultivation to support digital and green transformations

– The environmental impacts and footprint of digital technologies themselves

– The potential for AI and digital tech to model and support clean energy transitions

– Practical examples of using digital tech for sustainability in areas like education and data centers

Overall purpose:

The purpose of this forum was to explore how digital technologies, especially AI, can be leveraged to support green and low-carbon development goals. Speakers discussed both the opportunities and challenges of aligning digital and environmental agendas.

Tone:

The overall tone was optimistic and forward-looking, with speakers highlighting the potential for digital tech to drive sustainability. There was also a sense of urgency about addressing environmental challenges. The tone remained consistent throughout, with all speakers adopting a similar professional and solution-oriented approach to the topic.

Speakers

– Xue Lan: Senior Professor, Tsinghua University; Dean, Schwarzman College; Member of the UN Internet Governance Forum (IGF) Leadership Panel

– Long Kai: Deputy Director, Cyber Information Development Bureau, Cyberspace Administration of China

– Peng Gang: Vice President and Provost of Tsinghua University

– Gong Ke: Former President of the World Federation of Engineering Organizations; Executive Director of Chinese Institute for New Generation Artificial Intelligence Development Strategies

– Torbjorn Fredriksson: Director of the Office of ICT Policy at the United Nations Conference on Trade and Development (UNCTAD); Head of UNCTAD’s work on e-commerce and digital economy

– Su Jun: Dean of the Institute for Intelligent Society Governance, Tsinghua University

– Dou Guimei: Principal of Tsinghua University Primary School; Co-director of the National Experimental Base for Intelligent Society Governance

– Eduardo Araral: Former Vice Dean and Director at the Lee Kuan Yew School of Public Policy, National University of Singapore

– Zhou Chaonan: Chairperson of Range IDC; Vice-chairman of Henggang

Additional speakers:

– Professor Salva

– Professor D’Souza

Full session report

Digital Technology Empowering Green and Low-Carbon Development: A Comprehensive Forum Summary

This open forum brought together experts from academia, government, and international organisations to explore the intersection of digital technology and sustainable development. The discussion centred on how digital innovations, particularly artificial intelligence (AI), big data, and 5G, can be leveraged to support green and low-carbon initiatives while addressing the environmental challenges posed by digitalisation itself.

Key Themes and Agreements

1. Digital Technologies as Enablers of Sustainable Development

There was broad consensus among speakers that digital technologies have significant potential to accelerate the achievement of sustainable development goals. Torbjorn Fredriksson, Director at UNCTAD, emphasised this point, which was echoed by other participants including Gong Ke, Long Kai, and Eduardo Araral. They agreed that when properly implemented and coordinated with green initiatives, digital technologies can improve environmental outcomes.

Fredriksson highlighted the potential to accelerate sustainable development goals and introduced the Global Digital Compact, a UN initiative aimed at outlining shared principles for an open, free, and secure digital future for all. Gong Ke stressed the importance of a people-centred approach to digital technology, underscoring the complexity of balancing technological advancement with human and environmental needs.

2. Environmental Impacts of Digitalisation

Speakers acknowledged the significant environmental challenges posed by the digital sector. Fredriksson pointed out that the ICT sector generates substantial greenhouse gas emissions, with the production of digital devices requiring large amounts of raw materials. He emphasised the need for a lifecycle approach to understand the full environmental impact of digitalisation, highlighting the growing energy and water consumption of data centres, as well as the increasing problem of e-waste, particularly in developing countries.

Zhou Chaonan, representing the industry perspective, discussed innovative solutions to mitigate these impacts, particularly focusing on improving the energy efficiency of data centres through advanced cooling systems and reducing Power Usage Effectiveness (PUE). This partial agreement between Fredriksson and Zhou Chaonan illustrates the industry’s recognition of the problem and efforts to address it.
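Power Usage Effectiveness, the metric Zhou Chaonan cited, is defined by The Green Grid as total facility energy divided by the energy consumed by the IT equipment alone, so the ideal value is 1.0. A minimal sketch of the calculation follows; the energy figures are hypothetical, chosen only for illustration:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness over a period:
    total facility energy / IT equipment energy (ideal value: 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example: a data centre drawing 1.2 GWh in total
# to deliver 1.0 GWh to its servers.
print(round(pue(1_200_000, 1_000_000), 2))  # 1.2
```

A PUE of 1.2 means that for every kilowatt-hour reaching the servers, a further 0.2 kWh goes to cooling, power distribution, and other overhead; driving this ratio down through better cooling is the efficiency lever the discussion refers to.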

3. Strategies for Green Digital Development

Several strategies were proposed to align digital development with environmental sustainability:

a) Circular Digital Economy: Fredriksson emphasised the need to move towards a circular digital economy to reduce the environmental footprint of digital technologies. This approach involves designing products for longevity, reuse, and recycling, as well as implementing effective e-waste management systems.

b) Interdisciplinary Research and Education: Peng Gang, Vice President of Tsinghua University, stressed the importance of interdisciplinary research and education on AI and sustainability. This view was shared by Su Jun, who highlighted the need to study the social impacts of AI through large-scale experiments, introducing the AIC (Artificial Intelligence Social Impact Implementation) initiative.

c) AI in Education: Dou Guimei provided insights on using AI to create smart, low-carbon campuses and enhance environmental education. She discussed initiatives such as AI-powered energy management systems and personalised learning platforms that reduce resource consumption while improving educational outcomes.

d) Energy-Efficient Infrastructure: Zhou Chaonan discussed improving the energy efficiency of data centres, highlighting the industry’s role in sustainable digital development through innovations in cooling technologies and renewable energy integration.

e) AI for Clean Energy Transition: Eduardo Araral presented on the potential of AI for complex modelling of clean energy transitions, emphasising the need for advanced computational tools to optimise renewable energy systems and grid management.

4. Policy and Governance for Digital Sustainability

The discussion also touched upon the policy and governance aspects of aligning digital and green agendas:

a) National Initiatives: Long Kai highlighted China’s efforts to promote coordination between digitalisation and green transformation, including policy frameworks that encourage the integration of digital technologies in environmental protection and resource management.

b) International Cooperation: Speakers emphasised the need for global collaboration on technology, standards, and policy to address the challenges of sustainable digital development. This includes initiatives like the Global Digital Compact and efforts to align digital standards across countries.

c) Industry Support: Zhou Chaonan discussed the importance of establishing mechanisms to support the green upgrading of industries through digital technologies, emphasising the role of public-private partnerships in driving innovation.

Thought-Provoking Insights

Several comments stood out for their ability to frame the discussion in broader terms:

1. Gong Ke posed the fundamental question: “What is the overarched goal of the digitalisation?” This prompted participants to consider the purpose of digital transformation beyond technical implementation.

2. Fredriksson highlighted the complex challenge of balancing digitalisation’s environmental costs with its potential benefits, especially for developing countries. He stated, “We need to both help countries to deal with the costs of digitalisation in terms of the environment but we also need to continue to support countries that are far behind in order for them to be able to use digital technologies to address the environmental concerns.”

3. Su Jun cautioned against becoming “slavers of technology,” emphasising the need for human-centred AI development and the importance of large-scale social experiments to understand AI’s impacts.

4. Dou Guimei brought the discussion to a practical level by considering AI’s impact on elementary education, broadening the scope to include everyday applications of digital technologies in creating sustainable learning environments.

Areas for Further Exploration

The forum identified several key areas requiring further investigation:

1. Utilising AI for complex modelling of clean energy transitions and optimising renewable energy systems

2. Further reducing the Power Usage Effectiveness (PUE) of data centres and exploring innovative cooling technologies

3. Improving integration of digital transformation and green development policies at national and international levels

4. Addressing the ‘double bind’ faced by developing countries regarding digitalisation and environmental costs

5. Transitioning towards a more circular digital economy, including improved e-waste management and product design for longevity

6. Strengthening interdisciplinary education in AI, digital technologies, and societal governance

7. Enhancing collaboration among stakeholders in digital and green development, including academia, industry, and government

8. Promoting global cooperation in technology communications, standards alignment, and financial integration for digital and green development

9. Expanding research on the social impacts of AI through initiatives like the AIC (Artificial Intelligence Social Impact Implementation)

10. Developing and implementing AI-powered solutions for smart, low-carbon campuses and educational institutions

Conclusion

The forum demonstrated a high level of consensus on the potential of digital technologies to support sustainable development and the need for coordinated approaches to digital and green transformations. While specific strategies and implementations varied among speakers, there was agreement on the general direction towards integrating digital and environmental agendas. The discussion underscored the growing recognition of the interconnectedness of digital and environmental issues, suggesting potential for more integrated policy approaches and research initiatives in the future. The emphasis on interdisciplinary collaboration, global cooperation, and human-centred technology development provides a roadmap for future efforts in harnessing digital technologies for sustainable development.

Session Transcript

Xue Lan: Yes, could you help me? Channel two, right? Can you help me? Okay. Ladies and gentlemen, good morning. Welcome to this open forum, Digital Technology Empowering Green and Low Carbon Development. My name is Fang Zhang. I’m an associate professor at the Institute for Intelligent Society Governance, Tsinghua University. It’s my honor to serve as the moderator for this forum today. First, on behalf of the organizers, I would like to extend a warm welcome to our distinguished experts and the audience here and online. This forum focuses on the critical intersection between digital technology and green, low-carbon development. We are very honored to have nine speakers with us today. Seven of them will join on site and two will join remotely. As you can see from the agenda, we have a very packed but exciting program ahead. To kick off the forum, please let me invite Mr. Long Kai, Deputy Director of the Bureau of Digitalization Development, Cyberspace Administration of China, to deliver the opening remarks. Welcome.

Long Kai: Distinguished guests, ladies and gentlemen, good morning. Welcome to the 19th IGF Open Forum on Digital Technology Empowers Green and Low-Carbon Development. It’s a great pleasure to meet all of you here in Riyadh. On behalf of the organizers, the Bureau of Digitalization Development, Cyberspace Administration of China, I would like to express our gratitude for your participation. I would also like to extend special thanks to the Kingdom of Saudi Arabia, this year’s host country of the IGF, for their extraordinary work and dedication to the success of this forum. The Chinese government places great importance on the coordination between digitalization and green transformation, and is committed to promoting the empowerment of green and low-carbon development through the adoption of digital technologies. President Xi Jinping emphasized the need to strengthen economic and technological cooperation, accelerate the coordination of digitalization and green transformation, and advance the upgrading of energy resources, industrial structures, and consumption patterns to foster green socio-economic development. The 20th National Congress of the Communist Party of China proposed supporting enterprises in upgrading traditional industries via AI and green technologies. In July 2024, the Chinese government issued the policy document Guidelines on Accelerating the Comprehensive Green Transformation of Socio-Economic Development, which further outlines the acceleration of coordination between digital transformation and green technology, and promotes digital industry development and the green transformation of traditional industries. To achieve these goals, a set of initiatives has been implemented, including pilot projects in 10 cities in China. So far, these efforts have made significant progress, providing replicable and scalable examples.
We believe that digitalization and green transformation are increasingly becoming notable trends in global development. Practical evidence has shown that digital technologies can deeply integrate with key sectors such as energy, power, industry, transportation, and construction to reduce carbon emissions. This integration effectively enhances the efficiency of energy and resource utilization, playing a pivotal role in the green transformation of traditional industry. From a global perspective, this coordination has become a key driving force of sustainable global growth. On this occasion, I would like to share a few insights. First, innovation drives development. We need to foster technological innovation and application in the coordination between digitalization and green technology. This includes strengthening fundamental research and the deployment of cutting-edge technologies, enhancing the role of enterprises as the primary driver of technological innovation, promoting the commercialization of technology, and continuously nurturing new industries and business models.
Particular focus should be placed on the development and promotion of energy-saving and carbon-reducing technologies in areas such as data centers, communication base stations, and the manufacturing of electronic products, to facilitate the green and low-carbon development of the digital industry. Second, integration leads to development. Digital technologies can provide networked, digitized, and intelligent tools for green transformation, contributing to reduced overall energy consumption and carbon emissions. Efforts should be made to accelerate the fusion of digitalization and green transformation, driving the deep integration of emerging technologies such as the Internet, big data, artificial intelligence, and 5G with green and low-carbon industries. This will enable the green transformation of key sectors such as industry, energy, construction, and transportation, shifting industrial structures from high-carbon to low-carbon and from low-end to high-end. Third, openness makes development mutually beneficial. Digitalization and green transformation are shared opportunities for countries all over the world. We should further deepen and expand bilateral and multilateral dialogue and cooperation in this field.
This includes strengthening the formulation of technical rules and standards, promoting policy coordination, technological communication, project collaboration, and talent training. We hope this will become a key element in efforts to advance a shared future for humanity. Ladies and gentlemen, dear friends, today’s forum provides an excellent platform for discussion on the coordination between digitalization and green development, gathering professors, experts, scholars, and industry representatives from around the world. Let’s further deepen communication, strengthen cooperation, and share insights into the best practices in this field, contributing to a shared green future and a beautiful earth. Finally, I wish this forum great success. Thank you all.

Xue Lan: Thank you, Director Long Kai, for your insightful remarks. Now let me invite Professor Peng Gang, Vice President and Provost of Tsinghua University, to deliver opening remarks. Please welcome. Distinguished guests and esteemed friends from all over the world, good morning. First of all, on behalf of Tsinghua University, as the organizer of this open forum,

Peng Gang: I would like to extend my warmest welcome and sincere gratitude to all the experts and guests here. It’s a great pleasure to meet all of you at the 19th IGF. This forum provides us a great chance to discuss leveraging digital technology for green and low-carbon development, a crucial topic for today’s world. Green and low-carbon development in this digital era is a focal topic for governments around the world. Chinese President Xi Jinping has emphasized that green and low-carbon transformation is a key to high-quality development. In July 2024, the Chinese government issued guidelines on accelerating the comprehensive green transformation of socio-economic development, advocating for deep integration of digitalization and green development to drive this transition. The convergence of these strategies greatly promotes sustainable growth in China and around the world. Universities, as hubs of innovation and thought leadership, should play a vital role in advancing this agenda.
At Tsinghua University, we have prioritized interdisciplinary research at the intersections of digital technology and green development. As one of China’s pioneers in AI research and education, we have established institutions such as the Low-Carbon Energy Lab and the Institute for Carbon Neutrality, laying a solid foundation for such interdisciplinary exploration. In the meantime, we also recognize the societal challenges raised by digital technology. In 2019, we established the Institute for Intelligent Society Governance, focusing on fundamental theories and policy issues related to digital technologies and low-carbon development. Over the past five years, the Institute has made significant contributions to global intelligent governance through research and policy insights. With the guidance of the Cyberspace Administration of China, we are proud to host this forum, bringing together experts from the United Nations, China, Singapore, Australia, and other nations and global organizations. This forum aims to foster high-level dialogue and collaboration, generating fresh ideas and advancing global practice in digitalized green development.
Regarding this issue, I would like to share three key thoughts with all of you. First, we should pay more attention to the cultivation of young talents. To achieve this goal, we should strengthen interdisciplinary education in AI, bridging technologies and societal governance to cultivate outstanding talents with technical expertise and societal awareness. Second, we should further enhance collaboration among stakeholders. In the coming years, we will deepen partnerships among governments, industries, academia, and research institutions to address fundamental and practical challenges, driving innovation and application. Third, we should promote global cooperation, with a focus on advancing international collaboration in technology communications, standards alignment, and financial integration. In this way, we can make the welfare brought by technological evolution more inclusive. I wish this forum great success. Thank you.

Xue Lan: Thank you, President Peng Gang, for your inspiring speech. Now let’s turn to our distinguished panel of experts to share their insights on today’s theme. Our first speaker is Professor Gong Ke, former President of the World Federation of Engineering Organizations and also Executive Director of the Chinese Institute for New Generation Artificial Intelligence Development Strategies. Professor Gong, the floor is yours.

Gong Ke: Thank you. Thank you so much. So, may I have my slides? Today I’m going to discuss two questions with you. Here at the Internet Governance Forum, we talk about the internet, about digital technology, about artificial intelligence. So the first question is: what is the overarched goal of the digitalization? What is the digitalization we want? And the second question is: how do we achieve that goal? Today the world is in the course of two transformations. The first is digitalization: digital technologies are revolutionary, general-purpose technologies that are changing our world and our lives in profound ways. But this transition should not be a technology-centered transition; it should be a people-centered one. The biggest problem facing humanity is how to sustain human life on the earth. Here are the 17 Sustainable Development Goals, which are common goals for humanity and a common task for all countries. The core issue of these goals is to balance increasing human wellbeing against decreasing the natural cost: some of these goals concern human wellbeing, some concern the natural cost, and some link the two. That is the core issue of the sustainability of the earth. This picture from the IPCC shows that since the industrial revolution in the 19th century, the average earth surface temperature has been rising. If we removed human activities such as industrialization, we would not see this change in the earth’s surface temperature. So the key issue is to preserve the conditions of the earth for humanity, and that is the goal of the digital transition.
That is the goal of the digital transition: a human-centered digitalization. This is not my personal opinion; it is the consensus of the United Nations and its member states. Here is the newly adopted Global Digital Compact, adopted by the United Nations two months ago. Its opening reads: digital technologies are dramatically transforming our world; they offer immense potential benefits for the wellbeing and advancement of people and societies and for our planet; they hold out the promise of accelerating the achievement of the Sustainable Development Goals. So the Sustainable Development Goals are the goals of digitalization, and the first objective of the Global Digital Compact is to accelerate their achievement. Next page, please. So we have the goal of digitalization; the question is how to achieve it. As mentioned by Dr. Long, the answer is a coordinated digital and green transformation. But coordinated transformation requires coordinated capacity. We have to take the key digital technologies, and I have named 17 of them, together with the 17 Sustainable Development Goals, and turn them into coordinated digital capacity and sustainability capacity. Only with this can we achieve those goals. Capacity building, as mentioned by Provost Peng, and education are the keys to achieving our goals. Finally, because of the limit of time, let me conclude my presentation in three points: understanding the digital and green transformations, setting sustainable development as the goal of digitalization, and building capacity for the digital transformation.

Xue Lan: Thank you so much. I will stop here. Thank you, Professor Gong, for your brilliant presentation. Next, I am delighted to invite Mr. Torbjörn Fredriksson of the United Nations Conference on Trade and Development (UNCTAD), who heads UNCTAD's work on e-commerce and the digital economy. Welcome.

Torbjorn Fredriksson: Thank you very much, Madam Moderator. Let me start by expressing my appreciation to the Cyberspace Administration of China and to Tsinghua University for allowing UNCTAD to contribute to this very important dialogue on the shift to a low-carbon and more digital economy and society. I would like to share with you some findings from our recent report, the Digital Economy Report 2024. I will be very brief, but I hope participants can also get access to the presentation afterwards. Let me see if I can get to the next slide. Yes, thank you. Digital technologies are rapidly transforming our economy, as has been said already, creating a much faster and more powerful interaction between people and machines in the digital space. At the same time, we have big digital divides: many countries are far behind in taking advantage of these digital opportunities. In addition, we have the parallel development of planetary boundaries being breached, from climate change but in many other areas as well. From that perspective, it is very important to ensure that the development of the ICT sector and of digital technologies is also environmentally sustainable. In fact, the ICT sector is already generating greenhouse gas emissions of a similar magnitude to the aviation or international shipping industry. China is of course very important in this context, because China is a giant in the digital economy; this slide gives just a few examples of the tremendous progress China has made in digitalization, which makes it a very important force globally. In the report, we take a lifecycle approach to exploring the environmental footprint of digitalization.
We have been talking a lot so far about how digital technologies can help with environmental concerns, but we also need to recognize that digitalization itself generates an environmental footprint. In the report, we look at the production stage, the use stage and the end-of-life stage of the lifecycle, and at the direct effects in terms of natural resource depletion, energy use, water use, greenhouse gas emissions and pollution, among others. Starting with the production phase: we often think of digitalization as something virtual, something happening in the cloud, but the reality is that it has a very heavy material footprint. For example, producing one laptop weighing around two kilograms requires the extraction of 800 kilograms of raw materials. We also see that the devices we use in the digital economy are becoming more complex. When we produced telephones about 50 years ago, we needed 10 of the elements in the periodic table; by 1990 we used 27 such elements; and today's smartphones use 63 elements, more than half of all the known elements on Earth. The minerals we use to produce digital technology are also the ones we use to build a more low-carbon economy. That means the two transitions we are witnessing, towards a low-carbon economy and towards a more digital economy, are generating tremendous demand for metals and minerals: the demand for cobalt, graphite and lithium, for example, is expected to increase by 500 percent by 2050. At the use phase, I will zero in on data centers, because the shift to computing-intensive technologies such as artificial intelligence, virtual reality and crypto mining is generating a tremendous increase in the electricity and water used by data centers.
Because of this growing demand for energy and electricity, the big data center operators can no longer keep their greenhouse gas emissions stable, let alone reduce them. This has a strong impact at the local level where the data centers are located, and more greenhouse gas emissions of course also have global implications. The third phase concerns how we deal with the waste from digitalization. Here we have seen a growing amount of waste generated over the past decade or so, an increase of more than 30 percent, and most of the waste per capita is generated in the most advanced, most digitally ready economies. Unfortunately, much of the waste is not collected or managed formally: at the global level, only one quarter of all digitalization-related waste is currently collected formally. In China that share is around 16 percent, and in Africa it is only one percent, so we have a long way to go. One of the problems is that many developing countries lack proper legislation to organize the formal collection of waste, especially digitalization-related waste. An overall recommendation of the report is that we need to move globally towards a more circular digital economy. The production of digital products is currently very linear, which increases the need for raw materials and reduces our ability to recover valuable materials from the devices and servers that reach the end of their life. This is an area with huge potential for improvement, and it requires a collective effort across stakeholders and across countries. We also need to address what we call the double bind of developing countries.
Currently, most developing countries have to bear a high share of the environmental costs of digitalization, while many are not very successful in capturing its benefits. So we need both to help countries deal with the environmental costs of digitalization and to continue supporting countries that are far behind, so that they can use digital technologies to address environmental concerns. Finally, in order to make progress here we need to work collectively, as the previous speaker said. We have come to an agreement at the global level, the Global Digital Compact, to work together. We need to better integrate what we do in the digital space with what we do in the environmental space, and we need to strengthen the capabilities of poorer countries so that they can develop the right policy responses and benefit more from digitalization. I will end there and direct those who want to learn more to the report. Thank you again for allowing me to participate in this session.
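As a rough illustration of what the mineral-demand figure above implies, the annual growth rate it corresponds to can be worked out directly. The assumptions here are mine, not the report's: that "increase by 500 percent" means demand in 2050 is six times today's level, and that 2024 is the baseline year.

```python
# Implied compound annual growth rate (CAGR) if demand for cobalt,
# graphite and lithium grows 500% (i.e. to 6x) between 2024 and 2050.
# The 6x multiple and the 2024 baseline are illustrative assumptions.

def implied_cagr(multiple: float, years: int) -> float:
    """Annual growth rate that turns 1 unit into `multiple` units in `years` years."""
    return multiple ** (1 / years) - 1

growth = implied_cagr(6.0, 2050 - 2024)
print(f"Implied annual demand growth: {growth:.1%}")  # roughly 7% per year
```

Even under these rough assumptions, the result, about 7 percent compounded every year for a quarter century, gives a sense of the sustained extraction pressure Fredriksson describes.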

Xue Lan: Thank you very much, Mr. Fredriksson, for your wonderful presentation. Now let me welcome Professor Su Jun, Dean of the Institute of Intelligent Society Governance at Tsinghua University. Welcome, the floor is yours, Professor Su. Thank you very much. Dear ladies and gentlemen, dear Professor Gong Ke, Mr. Wang Jianchao, Mr. Long Kai,

Su Jun: dear Professor Hun Gang and Ms. Zhou Chaonan, dear friends. First of all, it is my great honor to express my warm welcome to all of you. Thank you for coming. Nowadays, information technology, especially AI, is being widely deployed, bringing significant improvements in productivity and allowing people to enjoy tremendous benefits. A brand-new age, the intelligent society, is coming. But as the English novelist Charles Dickens wrote, it was the best of times, it was the worst of times: AI also brings unpredictable risks and challenges. To focus on these risks and challenges, Tsinghua University established a platform research institute, the Institute of Intelligent Society Governance (ISG), in 2019. The ISG focuses on interdisciplinary research on AI's social impact, and its mission is to help create a human-centered intelligent society. Since 2021, to investigate the social impact of AI, the Chinese government has run a nationwide initiative of AI social experiments: 92 experiment bases have been established and more than 2,000 scenarios set up, with about 30,000 people involved. To my knowledge, this initiative is the largest-scale social experiment on AI application worldwide. According to our research, the intelligent society has five key characteristics, including information richness, instant feedback, the reshaping of cognition, the deconstruction of organization, and the rise of volunteers. We have also studied concepts such as information tokens, group polarization, opinion and meaning pollution, platform power, energy consumption, online gaming and digital labor. All of these concepts are discussed in my new book, which was released yesterday morning. We have found that enabling low-carbon development through digital technology is a significant challenge.
While consuming energy, AI also has huge potential to reduce energy consumption and promote low-carbon development across society, especially in traditional high-energy-consumption sectors such as power systems, transportation, heavy industry and construction. For example, in Ordos, a western Chinese city famous for its energy industry, we have examined how AI has facilitated the transformation of the energy industry. Our research shows that the coordinated transformation of digital and green development can reduce energy consumption and promote low-carbon development. Ladies and gentlemen, the history of human society is the history of the development of science and technology. The intelligent society is an amazing age, and AI is one of the greatest inventions in our history. But in the future, people must not and shall not become slaves to technology. Albert Einstein once remarked that concern for man himself and his fate must always form the chief interest of all technical endeavors. Overcoming these challenges is tough work. As the recent UN report Governing AI for Humanity highlighted, it is not technological change itself, but how humanity responds to it, that ultimately matters. Today we gather to show our hope and ambition for the future. Let us work together to make sure that all nations and all people of the world can benefit from the coordinated transformation of digital and green development, and to build a human-centered intelligent society. We shall do it, and we must do it. Thank you.

Xue Lan: Thank you, Professor Su, for your insightful presentation. Now, let me welcome Ms. Dou Guimei, principal of Tsinghua University Primary School and also co-director of the National Experiment Base of the Intelligent Society Governance. Ms. Dou, the floor is yours. Thank you.

Du Guimei: Distinguished guests and friends from around the world, I am the principal of Tsinghua University Primary School. I would like to start from a UN slogan of 52 years ago: We Only Have One Earth. A typical case of topic-based learning practice in my class was advocating a low-carbon lifestyle in the community, which finally grew into a proposal for a healthy school. Topic-based learning is about coming out of the classroom and applying what you learn in daily practice. I have been working on topic-based learning for the past 30 years, and after years of exploration this practice has won the most important prize in China's fundamental education, the highest honor of the first-ever national educational achievement award. Chinese education emphasizes cultural confidence and cultural heritage, so over the years my team and I have studied topic-based learning, which is similar to today's concept education, as a way to cultivate students' core qualities. Over the past 10 years I have led a team to consolidate the theoretical and practical results of topic-based learning, serializing the framework for topic-based teaching and collecting a set of published works; on November 1 this year, these were officially released and recognized as a national achievement.
Entering the age of intelligence, we are using the method of topic-based teaching to incorporate the core values of digital education and ecological civilization into the education philosophy of Tsinghua University Primary School, which puts children at the center of the school, and to combine them with China's fine traditional culture. Through four mechanisms, value co-creation, class integration, compatible incentives and digital empowerment, we are trying to improve the quality of the campus in all aspects and to integrate digital literacy and environmental awareness into education. Taking the AI-powered low-carbon campus as a case of topic-based learning, we have established four main scenarios. Scenario one is a low-carbon intelligent platform: a zero-carbon booth that monitors temperature, humidity, wind, PM2.5, wind power and photovoltaic output. This allows us to track the school's environmental status dynamically, and the data become learning material for the children. Scenario two is low-carbon education application: on our campus we have installed solar flowers, and we leverage intelligent technologies for students' ecological explorations. During these explorations, students use infrared cameras, telephoto lenses and AI image recognition to classify wildlife and gain first-hand experience. Scenario three is low-carbon interactive space, a carbon-cycle science and technology experience: we use a sand table to simulate oceans, rivers, cities, forests, oil fields, farms and volcanoes, bringing environmental science and the idea of ecological balance to life. Scenario four is intelligent physical education: we have added AI rope skipping, AI 50-meter running and other intelligent applications to traditional physical education, and we have developed an intelligent system in which the data students accumulate feed a growing tree that gradually becomes a forest in the class's collective activities, so we can see the students' growth. So in the age of AI, the question is how to support the full growth of primary school students' lives in harmony with this era. At the opening ceremony I also brought a petition, which I think should be a basic consensus for all citizens of the world.
Finally, on behalf of all students at Tsinghua University Primary School, I would like to share the Low-Carbon Campus Initiative for a sustainable future with friends around the world. In short, we believe that wherever children are, there the AI-enabled low-carbon campus should be. Let's work together. Thank you.

Xue Lan: Thank you for such a wonderful speech on the role of education in AI and low-carbon development. Thank you again, Ms. Dou. Next, I would like to invite Professor Eduardo Araral, former Vice Dean and Director at the Lee Kuan Yew School of Public Policy, National University of Singapore, to deliver his speech. Welcome, Professor Araral. Please share your slides. Can you hear me? Yes. Wonderful.

Eduardo Araral: Thank you, Professor Fang, and to colleagues at Tsinghua University for this invitation. I am honored to speak to you today on the topic of modeling the clean energy transition using AI, which is my current project. AI can help improve climate models: as The Economist recently reported, artificial intelligence is helping to improve climate models, improve prediction accuracy, improve policymaking, reduce computational cost, and scale from regional to global patterns. But my focus is not on climate prediction; it is on how we can use AI to build better models of the energy transition. So I created a model of the clean energy transition in which the dependent variable, clean energy adoption, is a function of policies, infrastructure, fossil fuel risks and other variables. I am still working on this paper, but the basic idea is how to use AI to do complex modeling of the clean energy transition. The premise is that the energy transition depends on many factors, and governments, donors and UN agencies should pay attention to these variables. One important factor to monitor is policy support for the clean energy transition. Another is the readiness of national infrastructure in terms of grids, storage and offshore facilities. Then there are the fossil fuel risks, such as volatility, supply security, geopolitical risks and the risk of declining investment in fossil fuels. Then, of course, there is the availability of financing for clean energy, especially for developing countries. Then there is the electrical grid: is it ready to integrate renewables such as solar and wind? Then you need to look at market maturity: what is the level of competition and openness in the energy market? Then you have to factor in social acceptance: do people accept clean energy?
Then you have to look at carbon pricing regulations, carbon taxes, emissions limits and nationally determined contributions; those have to be in the model as well. You also have to include the rate of technological innovation, such as advances in solar panels, wind technology, battery efficiency and storage, and modular nuclear reactors. And you have to look at energy demand, which is a function of economic growth, and at the demand for clean energy over time. So what I am saying is: those of you in the audience who are interested in collaborating with me on this project, on how we can use AI to do complex modeling of the clean energy transition, please get in touch with me. That is all I wanted to say. Again, thank you to Professor Phan for this kind invitation. Thank you. Excuse me, thank you, Professor Araral. Next, I would like to invite Professor Salva from the Kansas School of Business and Curriculum at the University of Texas. Thank you. And hopefully, the more outflow you have and the less inflow, the more efficient you will be. I will not go into this, but we have modeled all of it, including the role of cognitive computing technologies in helping you become more efficient and use less energy. Let me give you just a couple of examples. We have used digital twins to analyze the inefficiencies in moving capital from one place to another, because if you can model how capital moves, you can increase efficiency and use less energy. You can also identify what is inhibiting the movement of capital. Here, in two different images, you can see that just by strategically relocating assets and using information technologies you can increase capital intensity. And here is an example of how AI is being used to regenerate capital for recycling, which goes back to the earlier comments about the circular economy.
And just one more: here is how you can use cognitive computing technologies to map where your capital needs to be upgraded or maintained before it loses all its value. I am happy to share more examples of these projects we are doing across the globe. And with that, thank you very much.
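Araral's factor list can be sketched as a toy monitoring score. Everything in this sketch, the factor names, the weights, and the two country profiles, is hypothetical and chosen only to show how such variables might be combined into a single readiness indicator; his actual model presumably estimates these relationships from data rather than fixing weights by hand.

```python
# Toy clean-energy-transition readiness score: a weighted sum of the
# monitoring factors Araral lists. Weights and profiles are hypothetical.

WEIGHTS = {
    "policy_support": 0.20,
    "grid_readiness": 0.15,
    "fossil_fuel_risk": 0.15,   # higher risk -> stronger push to transition
    "financing_access": 0.15,
    "market_maturity": 0.10,
    "social_acceptance": 0.10,
    "carbon_pricing": 0.05,
    "tech_innovation": 0.05,
    "energy_demand_growth": 0.05,
}

def readiness(profile: dict) -> float:
    """Weighted sum of factor scores, each expected in [0, 1]."""
    return sum(WEIGHTS[k] * profile[k] for k in WEIGHTS)

country_a = dict.fromkeys(WEIGHTS, 0.8)   # strong on every factor
country_b = dict.fromkeys(WEIGHTS, 0.3)   # weak on every factor
print(readiness(country_a), readiness(country_b))
```

A fixed-weight index like this is only a dashboard device; the point of bringing AI into the modeling, as Araral suggests, would be to learn how these factors actually interact from observed adoption data.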

Xue Lan: Thank you, Professor D’Souza. Now let’s welcome our last speaker, Ms. Zhou Chaonan, chairperson of Runze IDC, last but not least.

Zhou Chaonan: The floor is yours. Distinguished guests and dear friends from around the world, good morning. It is a great honor to participate in this open forum on digital technology empowering green and low-carbon development. I am from Runze Group, a high-tech enterprise dedicated to providing large-scale, green and intelligent computing power resources for the AI industry. Today I would like to briefly share some of our practices and thoughts on how digital technology contributes to green and low-carbon development. Chinese President Xi Jinping has emphasized the importance of establishing mechanisms for green and low-carbon development and supporting enterprises in upgrading traditional industries with digital and green technologies. Indeed, the integration of digital and green technologies has become a vital topic in the transformation of modern society.
Since its inception, Runze Group has adhered to the mission of serving the nation through industry, and is committed to providing green and sustainable computing infrastructure for the intelligent transformation of society in China and beyond. Over the past 15 years, we have established seven intelligent computing industry parks in China, in Hebei Province, Pinghu in the Yangtze River Delta, Foshan and Huizhou in Guangdong, Chongqing, the Hainan Free Trade Port, and Lanzhou in Gansu, contributing to the intelligent transformation of a range of industries. We also need to pay close attention to energy efficiency under the existing structure.
The power usage effectiveness (PUE) required of data centers by national green-energy standards has been continuously improving, from 1.6 to 1.3 over the past several years. As explorers at the frontier of the industry, we have kept asking whether there is still room to reduce PUE further. Through years of effort, we developed a self-designed intelligent low-carbon cooling system for data centers, which optimizes energy efficiency through modular design, integrated equipment and intelligent operations. This innovation has reduced PUE to approximately 1.15, and with commercial liquid cooling it can reach as low as 1.09. The project has had significant demonstration impact, earning us recognition as a national green data center from the Ministry of Industry and Information Technology and other government agencies. Looking ahead, we believe green development is the ultimate trend for the intelligent society, and our efforts will yield deeper socio-economic impacts. To address the deeper institutional challenges behind digital technology's role in green and low-carbon development, we have co-established a joint institute on intelligent society research with Tsinghua University, hoping to leverage both Tsinghua University's multidisciplinary strengths and Runze Group's technical and practical expertise.
We have also initiated several forward-looking projects focusing on the coordination between digital and green development, standards for the intelligent society, and other important issues. In the future, we look forward to sharing our insights on important global platforms like the IGF and engaging with you all on these crucial issues. Thank you.
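For readers unfamiliar with the metric Zhou cites: PUE (power usage effectiveness) is the ratio of a data center's total energy draw to the energy delivered to its IT equipment, so 1.0 is the theoretical ideal. A quick sketch of what her figures mean in absolute terms, using a hypothetical 10 MW IT load (the load figure is my illustrative assumption, not Runze's):

```python
# PUE = total facility power / IT equipment power (1.0 = ideal).
# Overhead saved by improving PUE, for a hypothetical 10 MW IT load.

def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total power a facility draws at the given PUE."""
    return it_load_mw * pue

IT_LOAD = 10.0  # MW, illustrative
old = facility_power_mw(IT_LOAD, 1.6)    # at the older industry figure
new = facility_power_mw(IT_LOAD, 1.15)   # at Runze's reported PUE
print(f"Overhead cut from {old - IT_LOAD:.1f} MW to {new - IT_LOAD:.1f} MW")
```

Under these assumptions, moving from a PUE of 1.6 to 1.15 cuts the non-IT overhead of such a facility from 6 MW to 1.5 MW, which is why fractions of a point of PUE matter at data center scale.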

Xue Lan: Thank you. Let us work together to contribute to the great cause of empowering green and low-carbon development through digital technology. We have two minutes for a video; please turn on its sound and switch to Channel 1. Okay. A wonderful speech, and also the video. I want to take this last chance to thank all our distinguished experts and the audience for your patience. We have really run out of time, but I think we had a wonderful discussion today. I hope we will keep in touch and keep our passion for this important issue of our era. With that, we are closing our event today. Please keep in touch; we will see you next year. Thank you.

T

Torbjorn Fredriksson

Speech speed

145 words per minute

Speech length

1202 words

Speech time

494 seconds

Digital technologies can accelerate achievement of sustainable development goals

Explanation

Fredriksson argues that digital technologies have the potential to significantly contribute to achieving sustainable development goals. This aligns with the UN’s perspective on leveraging digital advancements for global sustainability efforts.

Evidence

Reference to the recently adopted global digital compact by the United Nations

Major Discussion Point

The role of digital technology in sustainable development

Agreed with

Gong Ke

Long Kai

Eduardo Araral

Agreed on

Digital technologies can contribute to sustainable development

Differed with

Gong Ke

Differed on

Focus of digital technology implementation

ICT sector generates significant greenhouse gas emissions

Explanation

Fredriksson points out that the Information and Communication Technology (ICT) sector is a significant contributor to greenhouse gas emissions. The scale of emissions from this sector is comparable to other major industries.

Evidence

Comparison of ICT sector emissions to those of the aviation or international shipping industry

Major Discussion Point

Environmental impacts of digitalization

Agreed with

Zhou Chaonan

Agreed on

Environmental impacts of digitalization need to be addressed

Production of digital devices requires large amounts of raw materials

Explanation

Fredriksson highlights the substantial material footprint of digital device production. This emphasizes the often overlooked physical resource requirements of the digital economy.

Evidence

Example of a laptop requiring 800 kilograms of raw material resources for production, and the increasing complexity of elements used in smartphone production

Major Discussion Point

Environmental impacts of digitalization

Agreed with

Zhou Chaonan

Agreed on

Environmental impacts of digitalization need to be addressed

Data centers consume increasing amounts of energy and water

Explanation

Fredriksson discusses the growing energy and water consumption of data centers. This increase is driven by the shift towards more computing-intensive technologies.

Evidence

Mention of technologies like artificial intelligence, virtual reality, and crypto mining increasing demand for data center resources

Major Discussion Point

Environmental impacts of digitalization

Agreed with

Zhou Chaonan

Agreed on

Environmental impacts of digitalization need to be addressed

E-waste is a growing problem, especially in developing countries

Explanation

Fredriksson raises concerns about the increasing generation of electronic waste. He points out that this issue is particularly problematic in developing countries due to lack of proper management systems.

Evidence

Statistics on e-waste collection rates globally and in specific regions like China and Africa

Major Discussion Point

Environmental impacts of digitalization

Need to move towards a circular digital economy

Explanation

Fredriksson advocates for transitioning to a circular digital economy model. This approach aims to reduce raw material demand and improve the recovery of valuable materials from end-of-life digital devices.

Evidence

Mention of the current linear production model for digital products and its limitations

Major Discussion Point

Strategies for green digital development

Gong Ke

Speech speed

115 words per minute

Speech length

619 words

Speech time

321 seconds

Digital technology must be people-centered and aimed at sustainability

Explanation

Gong Ke emphasizes that digital technology should prioritize human needs and sustainability. He argues that the ultimate goal of digitalization should be to support sustainable development and improve human well-being.

Evidence

Reference to the United Nations’ Global Digital Compact and its alignment with sustainable development goals

Major Discussion Point

The role of digital technology in sustainable development

Agreed with

Torbjorn Fredriksson

Long Kai

Eduardo Araral

Agreed on

Digital technologies can contribute to sustainable development

Differed with

Torbjorn Fredriksson

Differed on

Focus of digital technology implementation

Long Kai

Speech speed

111 words per minute

Speech length

954 words

Speech time

511 seconds

Digital and green transformations need to be coordinated for sustainable growth

Explanation

Long Kai stresses the importance of aligning digital transformation with green development initiatives. He argues that this coordination is crucial for achieving sustainable economic growth and environmental protection.

Evidence

Reference to Chinese government policies and guidelines promoting the integration of digitalization and green transformation

Major Discussion Point

The role of digital technology in sustainable development

Agreed with

Torbjorn Fredriksson

Gong Ke

Eduardo Araral

Agreed on

Digital technologies can contribute to sustainable development

China is promoting coordination of digitalization and green transformation

Explanation

Long Kai highlights China’s efforts to integrate digital technologies with green development initiatives. He emphasizes the government’s commitment to this coordinated approach for sustainable growth.

Evidence

Mention of specific policy documents and initiatives, such as guidelines issued by the Chinese government in July 2024

Major Discussion Point

Policy and governance for digital sustainability

Need for international cooperation on technology, standards and policy

Explanation

Long Kai advocates for increased global collaboration in areas of technology innovation, standards development, and policy alignment. He emphasizes that this cooperation is essential for addressing global challenges in digital and green development.

Evidence

Call for strengthening bilateral coordination, multilateral dialogue, and cooperation in various fields including policy communication and technology exchange

Major Discussion Point

Policy and governance for digital sustainability

Eduardo Araral

Speech speed

130 words per minute

Speech length

715 words

Speech time

329 seconds

AI and digital technologies can improve climate models and energy transition planning

Explanation

Araral discusses the potential of AI to enhance climate modeling and energy transition planning. He proposes using AI for complex modeling of clean energy transition, considering multiple factors affecting the process.

Evidence

Reference to a model created for clean energy adoption that incorporates various factors such as policies, infrastructure, and fossil fuel risks

Major Discussion Point

The role of digital technology in sustainable development

Agreed with

Torbjorn Fredriksson

Gong Ke

Long Kai

Agreed on

Digital technologies can contribute to sustainable development

Peng Gang

Speech speed

138 words per minute

Speech length

882 words

Speech time

382 seconds

Importance of interdisciplinary research and education on AI and sustainability

Explanation

Peng Gang emphasizes the need for cross-disciplinary exploration of digital technology and green development. He highlights Tsinghua University’s efforts in promoting interdisciplinary research and education in these areas.

Evidence

Mention of Tsinghua University’s establishment of research institutes focused on AI and carbon neutrality, as well as the Institute for Intelligent Social Governance

Major Discussion Point

Strategies for green digital development

Dou Guimei

Speech speed

113 words per minute

Speech length

944 words

Speech time

501 seconds

Using AI to create smart, low-carbon campuses and education

Explanation

Dou Guimei presents the application of AI and digital technologies in creating environmentally friendly educational environments. She discusses how these technologies can be integrated into various aspects of campus life to promote sustainability awareness and practices.

Evidence

Examples of AI-powered low-carbon campus initiatives, including smart energy monitoring systems and interactive educational tools for environmental awareness

Major Discussion Point

Strategies for green digital development

Su Jun

Speech speed

96 words per minute

Speech length

511 words

Speech time

318 seconds

Importance of studying social impacts of AI through large-scale experiments

Explanation

Su Jun emphasizes the need for comprehensive research on the social implications of AI. He highlights China’s initiative to conduct large-scale social experiments to better understand and address the challenges posed by AI integration in society.

Evidence

Reference to China’s Artificial Intelligence Social Impact Implementation (AIC) initiative, involving 92 experimental bases and over 2,000 scenarios with about 30,000 participants

Major Discussion Point

Policy and governance for digital sustainability

Zhou Chaonan

Speech speed

111 words per minute

Speech length

892 words

Speech time

482 seconds

Improving energy efficiency of data centers through innovative cooling systems

Explanation

Zhou Chaonan discusses the efforts to enhance the energy efficiency of data centers. She highlights the development of advanced cooling systems that significantly reduce power usage effectiveness (PUE) in data centers.

Evidence

Description of self-designed intelligent low-carbon cooling systems that have reduced PUE to approximately 1.15, with potential to reach as low as 1.09 using commercial liquid cooling methods

Major Discussion Point

Strategies for green digital development

Agreed with

Torbjorn Fredriksson

Agreed on

Environmental impacts of digitalization need to be addressed
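
Power Usage Effectiveness (PUE), the metric cited above, is the ratio of a data center's total energy consumption to the energy delivered to IT equipment, so 1.0 is the theoretical ideal. A minimal sketch of the arithmetic behind the quoted figures (the facility energy numbers below are illustrative assumptions, not data from the session):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (>= 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility: an IT load of 10,000 kWh with 1,500 kWh of
# cooling and other overhead yields the ~1.15 figure mentioned above.
print(round(pue(11_500, 10_000), 2))
# Liquid cooling that cuts overhead to 900 kWh approaches the 1.09 target.
print(round(pue(10_900, 10_000), 2))
```

The closer the overhead (cooling, power distribution, lighting) gets to zero, the closer PUE gets to 1.0, which is why cooling-system innovation dominates data-center efficiency work.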

Establishing mechanisms to support green upgrading of industries with digital tech

Explanation

Zhou Chaonan emphasizes the importance of creating supportive mechanisms for industries to adopt digital and green technologies. This approach aims to facilitate the transformation and upgrading of traditional industries towards more sustainable practices.

Evidence

Reference to President Xi Jinping’s emphasis on strengthening green low-carbon development mechanisms to support enterprises in using digital and green technologies

Major Discussion Point

Policy and governance for digital sustainability

Agreements

Agreement Points

Digital technologies can contribute to sustainable development

Torbjorn Fredriksson

Gong Ke

Long Kai

Eduardo Araral

Digital technologies can accelerate achievement of sustainable development goals

Digital technology must be people-centered and aimed at sustainability

Digital and green transformations need to be coordinated for sustainable growth

AI and digital technologies can improve climate models and energy transition planning

Multiple speakers emphasized the potential of digital technologies to support sustainable development goals and improve environmental outcomes when properly implemented and coordinated with green initiatives.

Environmental impacts of digitalization need to be addressed

Torbjorn Fredriksson

Zhou Chaonan

ICT sector generates significant greenhouse gas emissions

Production of digital devices requires large amounts of raw materials

Data centers consume increasing amounts of energy and water

Improving energy efficiency of data centers through innovative cooling systems

Speakers acknowledged the environmental challenges posed by the digital sector, particularly in terms of emissions and resource consumption, and discussed the need for innovative solutions to mitigate these impacts.

Similar Viewpoints

Both speakers highlighted China’s efforts to integrate digital technologies with green development initiatives, emphasizing the government’s role in promoting this coordination for sustainable growth and industrial upgrading.

Long Kai

Zhou Chaonan

China is promoting coordination of digitalization and green transformation

Establishing mechanisms to support green upgrading of industries with digital tech

Both speakers emphasized the need for comprehensive, interdisciplinary research on the implications of AI and digital technologies, particularly in relation to sustainability and social impacts.

Peng Gang

Su Jun

Importance of interdisciplinary research and education on AI and sustainability

Importance of studying social impacts of AI through large-scale experiments

Unexpected Consensus

Integration of AI and sustainability in education

Peng Gang

Dou Guimei

Importance of interdisciplinary research and education on AI and sustainability

Using AI to create smart, low-carbon campuses and education

While most discussions focused on broader policy or technological aspects, these speakers unexpectedly converged on the importance of integrating AI and sustainability concepts directly into educational settings, from university research to primary school campuses.

Overall Assessment

Summary

The main areas of agreement centered around the potential of digital technologies to support sustainable development, the need to address the environmental impacts of digitalization, and the importance of coordinating digital and green transformations. There was also consensus on the need for interdisciplinary research and education in these areas.

Consensus level

The level of consensus among the speakers was relatively high, particularly on the overarching themes of leveraging digital technologies for sustainability and the need for coordinated approaches. This consensus suggests a growing recognition of the interconnectedness of digital and environmental issues, which could potentially lead to more integrated policy approaches and research initiatives in the future. However, the specific strategies and implementations varied among speakers, indicating that while there is agreement on the general direction, there is still room for diverse approaches in addressing these challenges.

Differences

Different Viewpoints

Focus of digital technology implementation

Torbjorn Fredriksson

Gong Ke

Digital technologies can accelerate achievement of sustainable development goals

Digital technology must be people-centered and aimed at sustainability

While both speakers emphasize the importance of digital technology for sustainability, Fredriksson focuses on its potential to accelerate sustainable development goals, while Gong Ke stresses the need for a people-centered approach.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the specific focus and implementation strategies for digital technology in sustainable development.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the importance of digital technology for sustainable development but differ in their emphasis on specific aspects or approaches. This suggests a general consensus on the overall direction, which is positive for advancing the topic of digital technology empowering green and low-carbon development.

Partial Agreements

Both speakers acknowledge the energy consumption issue of data centers, but while Fredriksson highlights the problem, Zhou Chaonan focuses on solutions through innovative cooling systems.

Torbjorn Fredriksson

Zhou Chaonan

Data centers consume increasing amounts of energy and water

Improving energy efficiency of data centers through innovative cooling systems


Takeaways

Key Takeaways

Resolutions and Action Items

Unresolved Issues

Suggested Compromises

Thought Provoking Comments

So the first question is, what is the overarching goal of the digitalization? What is the overarching goal? What is the digitalization we want?

speaker

Gong Ke

reason

This question frames the entire discussion by pushing participants to consider the fundamental purpose of digitalization, rather than just its technical aspects.

impact

It set the tone for subsequent speakers to address not just how to implement digital technologies, but why and to what end, particularly in relation to sustainability goals.

We need to both help countries to deal with the costs of digitalization in terms of the environment but we also need to continue to support countries that are far behind in order for them to be able to use digital technologies to address the environmental concerns.

speaker

Torbjorn Fredriksson

reason

This comment highlights the complex challenge of balancing digitalization’s environmental costs with its potential benefits, especially for developing countries.

impact

It broadened the discussion to include global equity concerns and the need for a nuanced approach to digital development that considers both environmental and economic factors.

AI is one of our greatest inventions in our history. But in the future, people must not and shall not become slaves of technology.

speaker

Su Jun

reason

This statement encapsulates both the promise and potential pitfalls of AI, emphasizing the need for human-centered development.

impact

It shifted the conversation towards the ethical implications of AI and digital technologies, prompting consideration of how to ensure technology serves humanity rather than the reverse.

So in the age of AI, how do we support the full development of elementary school students' lives and their ability to live in harmony with this technology in our era?

speaker

Dou Guimei

reason

This comment brings the discussion to a practical, grassroots level by considering how AI impacts education and child development.

impact

It introduced a new perspective on the application of AI and digital technologies in everyday life, particularly in education, broadening the scope of the discussion beyond high-level policy considerations.

Overall Assessment

These key comments shaped the discussion by expanding its scope from technical implementation to broader considerations of purpose, global equity, ethics, and practical applications in areas like education. They encouraged a more holistic view of digitalization and its impacts on society, environment, and human development. The discussion evolved from focusing solely on how to implement digital technologies to critically examining why and for whom these technologies should be developed, emphasizing the need for a human-centered, sustainable approach to digital transformation.

Follow-up Questions

How can we use AI to do complex modeling of clean energy transition?

speaker

Eduardo Araral

explanation

This is important to better understand and predict the factors influencing clean energy adoption, which could inform policy decisions and strategies for accelerating the transition.

How can we further reduce the Power Usage Effectiveness (PUE) of data centers?

speaker

Zhou Chaonan

explanation

Continuing to improve energy efficiency in data centers is crucial for reducing the environmental impact of digital infrastructure as demand for computing power grows.

How can we better integrate digital transformation and green development policies?

speaker

Torbjorn Fredriksson

explanation

Improved policy integration is necessary to ensure that digitalization supports rather than hinders environmental sustainability goals.

How can we address the ‘double bind’ faced by developing countries in terms of digitalization and environmental costs?

speaker

Torbjorn Fredriksson

explanation

This is important to ensure that developing countries can both benefit from digital opportunities and manage the environmental impacts of digitalization.

How can we move towards a more circular digital economy?

speaker

Torbjorn Fredriksson

explanation

Transitioning to a circular model is crucial for reducing the environmental footprint of digital technologies, particularly in terms of resource extraction and e-waste management.

How can we strengthen interdisciplinary education in AI, digital technologies, and societal governance?

speaker

Peng Gang

explanation

This is important for cultivating talents who can address the complex challenges at the intersection of digital technology and sustainable development.

How can we enhance collaboration among stakeholders (governments, industries, academia, and research institutions) to address challenges in digital and green development?

speaker

Peng Gang

explanation

Improved collaboration is necessary to drive innovation and practical applications in sustainable digital development.

How can we promote global cooperation in technology communications, standards alignment, and financial integration for digital and green development?

speaker

Peng Gang

explanation

International cooperation is crucial for ensuring that the benefits of digital and green technologies are inclusive and globally distributed.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

DC-DH: Health Digital Health & Selfcare – Can we replace Doctors in PHCs


Session at a Glance

Summary

This discussion focused on the potential for artificial intelligence (AI) and digital health technologies to replace or augment doctors in healthcare delivery. Participants explored how AI could address healthcare shortages, improve quality of care, and enhance patient experiences. They discussed the advantages of AI, including 24/7 availability, personalized care, and the ability to process vast amounts of medical knowledge. However, concerns were raised about the digital divide, potential loss of human touch in healthcare, and the need for proper regulation and ethical considerations.

The panel highlighted successful implementations of digital health solutions in low-resource settings, such as MomConnect in South Africa, which provides maternal health information via mobile phones. They emphasized the importance of building trust in AI systems and ensuring they are culturally appropriate and accessible to all populations. The discussion touched on how AI could liberate healthcare workers from administrative burdens and allow them to focus on more complex cases.

While some panelists believed AI could largely replace primary care visits and certain specialties like radiology, others argued for a hybrid model where AI augments rather than replaces human doctors. The importance of training future healthcare professionals to work alongside AI was stressed. The panel concluded that while AI will significantly transform healthcare delivery, human touch will likely remain valuable in certain aspects of care. They emphasized the need for continued research, careful implementation, and addressing equity concerns as AI becomes more prevalent in healthcare.

Keypoints

Major discussion points:

– The potential for AI and digital health technologies to augment or replace doctors, especially for primary care

– Challenges around trust, quality, and equity in implementing AI/digital health solutions

– The changing relationship between patients, doctors and technology in healthcare delivery

– The need to address the digital divide and ensure access to digital health tools in rural/underserved areas

– The importance of human touch and empathy in healthcare, even as technology advances

Overall purpose/goal:

The discussion aimed to explore how AI and digital health technologies are reshaping healthcare delivery, and to what extent they may replace or augment the role of doctors, especially in resource-constrained settings.

Tone:

The overall tone was optimistic about the potential of technology to improve healthcare access and quality, while also being thoughtful about challenges and ethical considerations. There was a mix of excitement about technological possibilities and caution about potential downsides or unintended consequences. The tone became more speculative towards the end when panelists were directly asked if AI would replace doctors.

Speakers

– Rajendra Pratap Gupta: Chairman of the board for HIMSS India, moderator of the discussion

– Peter Preziosi: President of CGFNS Global, a nurse by training

– Mevish P. Vaishnav: Leader of the International Patients Union

– Zaw Ali Khan: From the American University of Barbados

– Debbie Rogers: Runs Village Outreach in Africa

– May Siksik: Runs the Innovation Network Canada

Additional speakers:

– Melody Musoni: Works for a think tank called ECDPM

– Sakshi Pandita: Moderating online questions

Full session report

AI and Digital Health: Transforming Healthcare Delivery

This discussion, moderated by Rajendra Pratap Gupta, explored the potential for artificial intelligence (AI) and digital health technologies to reshape healthcare delivery, particularly in resource-constrained settings. The panel, comprising experts from various healthcare and technology backgrounds, examined how AI could address healthcare shortages, improve quality of care, and enhance patient experiences.

Historical Context and Potential Impact

Rajendra Pratap Gupta opened the discussion by reflecting on his predictions from 10 years ago about technology eliminating middlemen in healthcare and bringing services closer to users. This set the stage for exploring how AI and digital health are now poised to revolutionize healthcare delivery.

The panelists generally agreed that AI and digital health technologies have significant potential to augment and transform healthcare. Peter Preziosi highlighted his work in Rwanda, where remote patient monitoring devices and nurse-led primary care models are being implemented. May Siksik, from the Innovation Network Canada, suggested that AI could provide more personalized and empathetic care than time-constrained doctors, especially for cultural minorities like First Nations.

Debbie Rogers, who runs Village Outreach in Africa, emphasized the potential of digital health tools to improve access to care in underserved areas. She cited successful implementations such as MomConnect in South Africa, which provides maternal health information via mobile phones and has significantly improved maternal health outcomes. Another example mentioned was the use of AI-powered sonograms in Ghana.

Mevish P. Vaishnav, leader of the International Patients Union, introduced the Patient Centricity Index, a tool developed to measure and improve patient-centered care. Interestingly, both Vaishnav and Rogers noted that patients, especially youth, may trust and engage more with AI-powered health services, particularly for sensitive topics like sexual and reproductive health.

Challenges and Considerations

Despite the optimism, the panel identified several challenges in implementing AI in healthcare:

1. Digital divide: Rogers emphasized the need to address inequitable access to digital health tools in rural and underserved areas.

2. Cost and infrastructure: Preziosi highlighted barriers in low-resource settings, while Rogers provided specific cost comparisons (20 cents vs $10.20 per user per year for different AI solutions).

3. Cultural context: Rogers pointed out the need for localization of AI solutions, noting that current large language models are primarily trained on Western culture and medicine.

4. Regulatory and ethical considerations: Zaw Ali Khan stressed the importance of these factors in AI adoption.

5. Maintaining human touch: An audience member raised concerns about preserving empathy and in-person care options.

6. Misinformation: Melody Musoni, an audience member, brought up concerns about quality control in digital health information, particularly around fad diets and miracle cures.

Future Role of Healthcare Professionals

The panel had differing views on the extent to which AI might replace doctors:

– Khan suggested that AI may replace primary care centers and some specialized roles.

– Rogers proposed that doctors who use AI will replace those who don’t.

– Siksik envisioned AI as part of interdisciplinary healthcare teams.

– Preziosi maintained that human touch would remain important for some aspects of care.

– Vaishnav and Siksik believed that AI could replace the majority of doctor visits, particularly in primary care.

There was agreement on the need to train a new generation of doctors to work effectively with AI, as highlighted by Khan. This suggests a future where healthcare professionals work alongside AI technologies rather than being entirely replaced by them.

Key Takeaways and Emerging Trends

1. AI has the potential to reduce medical errors and address healthcare worker burnout.

2. The cost of implementing AI solutions is decreasing while computational power is increasing, as noted by an audience member.

3. AI may provide more transparent explanations for diagnoses and treatment recommendations compared to overworked healthcare workers.

4. There’s a need for academic institutions to adopt digital health solutions and train future healthcare professionals in their use.

5. Despite the rise of digital solutions, there may still be a continued need for human doctors, analogous to the resurgence of physical bookstores despite e-books (as pointed out by Rogers).

Conclusion

The discussion highlighted both the transformative potential of AI in healthcare and the complex challenges that must be addressed for its successful implementation. While there was general optimism about AI’s ability to improve healthcare access and quality, particularly in resource-constrained settings, the panelists also emphasized the need for careful consideration of ethical, regulatory, and cultural factors. As healthcare continues to evolve with technological advancements, finding the right balance between AI-driven efficiency and human-centered care remains a critical challenge for the future of healthcare delivery.

Session Transcript

Rajendra Pratap Gupta: conversational AI and low income and source settings. The whole idea was to check the level of awareness people have on the word conversational AI and what is the relevance. To most of us who are in this field, it is chatbots. But trust me, despite asking people, we couldn’t get a speaker from our vast network on LinkedIn. When we wrote we need speakers on conversational AI, so-called president of the associations in digital health said we do not know this field. Actually, it is chatbots. We had a very good panel with us that time, the head of AI at WHO, the president of humans.ai, Sabin Dima, Dinu, who is the CIO of the United Nations Staff Pension Fund, and Dr. Olubosa from Nigeria. We ran a good session. And a year now, we are actually talking about, will AI replace doctors? But let’s go back 10 years into history. I think I need to check. Can you move the slides, please? So as the chairman of the board for HIMSS India, 10 years ago, I told doctors that the future is tech, please, tech, IGF. Tech, please, tech, IGF. This is what the slide was. The technology eliminates middlemen. Today, if your doctors don’t need technology, technology would not need them in the future. And 10 years fast forward, we are in 2024, heading into 2025, which is just a decade after. And look at what has happened. And let me also go back to share what I presented at that point in time. I think I need more. Some issue with the presentation, we’re just going to sort it out. I think whenever we talk about doctors and AI, some technological glitch comes in, which I’m not very surprised because they are a very powerful stakeholder in this discussion. Can we manually move the slides, please? Can we? So let me talk to you, technology moves faster than I can. I’m sorry about that. That’s the way it is. Sorted out? So, no, no, no. So look at this slide. You know, those of you who have used the PCO booths to go and make calls and the long distance trunk calls. 
And what happened when the cell phone moved analog to the technology move from analog to digital? You know, we can make calls from cell phones wherever we want, whenever we want. And now you can use your watch to make phones. So effectively, technology has progressed. And what it has done is eliminated the middle person. You know, you don’t need to go to a PCO booth. You don’t need to use. Lines for a trunk call, you can use it from your cell phone. I use my cell phone. I sometimes don’t carry my phone at all. Next slide, please. Next. And this is another interesting slide on the entertainment sector. We had this huge antenna. Sometimes it was 70 feet. You need to connect to get to watch maybe seven, eight channels. What happened when it moved? Entertainment moved from analog to digital. You moved to a set-top box. And now, anything you want to watch, you just pay for it. And it’s at your convenience. So what it does, technology is taking that middleman away. Again, it took the middleman away. There is no need for antenna. I don’t need a set-top box. You don’t need a provider of that service in your area. You are connected globally through the satellite system. Next slide, please. This is my most favorite slide. At one time, Kodak and Fuji were fighting for the color of the prints. And they were talking about the quality. Digital photography came in. And Canon actually is the one which came in. And what appeared was both of them literally became bankrupt. But that was not the end of the story. We have now mobile phones which have the camera. So you can look at what has happened is we have moved a whole lot of generation moving analog to digital. Now, in everything you will see is that we eliminated the middleman. Same happened with libraries and the bookstores. Today, you have your Kindle or your e-book reader. And you can actually read whichever book you want. I carry hundreds of books on my Kindle. 
And what effectively happens is technology gets you closer to the user and eliminates the connector. Now, if I take these slides back to health, what I believe is that when a product, service, or sector undergoes the digital transition from analog to digital, the intermediary goes away. And health care has a lot of intermediaries right now. So I think the fate of clinicians is a foregone conclusion. And this is what I’ve been saying for over a decade. I’m going to stop here and put a question to my expert panel, which is with me: what do we feel about the future? Are we going to have a day where “the AI will see you now” happens instead of “the doctor will see you now”? So I’m going to put this question to Dr. Peter Preziosi. Dr. Peter, thanks for joining us. I think it’s too early in the morning or too late in the night for you to join us. I’m sorry for the odd hour. But thank you for taking our time. Peter Preziosi is the president of CGFNS Global. And he’s done extensive work in terms of trying to provide relief to people where clinicians are in short supply through nurses. Dr. Peter Preziosi, what’s your experience of working in Rwanda? Would you want to share with us?

Peter Preziosi: Sure, definitely. And I'll be very brief. But, Rajendra, I agree. I think it's already here, and more is coming. And I agree with you about the displacement, as we look at these medical brains that are coming, powered by AI engines. Just for background, I lead a 50-year-old global assessment and certification organization supporting the mobility of nurses and allied health professionals worldwide. Earlier this year, we began searching for the right partners to evaluate some technology-enabled new models of care in the primary care and public health space. Working with the Society for Family Health in Rwanda and a remote patient monitoring device manufacturer, MedWand, we set out to establish a model of nurse-led primary care that would be easily replicated across the globe, cost-effective, co-dependent on technology, and one that promotes access to care and prevention. This model will contribute to already existing task-shifting initiatives that help to bring care closer to communities. The Society for Family Health is Rwanda's premier organization for providing healthcare to rural communities throughout Rwanda by constructing and equipping health posts operated by nurses. Under the protocols we've developed, we've embarked on a journey that we believe will impact health promotion and preventive care around the globe. We see this happening by empowering nurses, community health workers, and other allied health professionals, equipping them with the right technology tools to provide care and to seek a second opinion when they have to. So this is still emerging. We believe this care solution will reduce the number of referrals transferred to upstream health facilities that are already crowded and lack adequate resources, and make the model an integral part of primary healthcare.
The model will also contribute to job creation by allowing nurses to increase their healthcare portfolio and entrepreneurship. So far, we're in five remote locations that have been identified in rural Rwanda, centered around designated health posts and district hospitals. Healthcare workers at those locations have been trained on the MedWand device, which captures all vital signs and has some additional functionalities critical to providing primary care. This remote patient monitoring device includes a thermometer, stethoscope, ECG/EKG, pulse oximeter, and a high-definition camera that can be used to view inside the ear, nose, and throat and to examine the skin, and it integrates with any blood pressure monitor, glucometer, and spirometer. The device is synced with a tablet, which is used to capture, store, and transmit information, and also makes it possible to have real-time consultations via video conference. Early indicators have been very positive. The health post workers have shown the ability to adapt to the technology and have reported the anticipated decrease in time needed for diagnosis, with the ability to send data in real time to advanced practice nurses. The community has shown increased receptiveness and indications of better compliance when diagnosis is accelerated. Travel times and distances are reduced significantly. The physician staff receiving information at hospitals in the major hubs are less burdened with having to track data, and their response time is reduced as well. So, overall, we're confident that our original intent will have the strong empirical data needed to impact the lives of many by reducing cost, improving health, increasing reach, and empowering a new, up-skilled health workforce. Over the next few months, we'll be gathering hard data to prove that the nurse-led primary healthcare model can have an impact in rural communities in Africa, and then beyond, on other continents. Thanks, Rajendra.

Rajendra Pratap Gupta: Thank you so much, Peter. I think you made a very important point about moving from a doctor-centric system to a range of health professionals like nurses. We have work we are doing with nurses and pharmacists, and we believe that the future of healthcare lies in digital health plus nurses, or digital health plus pharmacists, and not necessarily just doctors. That being said, as we move forward, are you also leveraging artificial intelligence and empowering nurses to make on-the-spot decisions with regard to patients' queries and needs, and serving them?

Peter Preziosi: We have a new initiative, actually funded through Johnson & Johnson, that will be started in Ghana. It's AI-powered sonograms to detect hypertension in pregnant women, to prevent the maternal and child deaths that are so prevalent in Ghana. So we're looking at that. There are other initiatives, some point-of-care solutions, that we want to test in other areas. And actually, we have started to talk with you and would love to work with you on looking at various primary care providers that could really help. I think nutrition, registered dieticians, will be very useful. And I'm pleased to see that the International Patients Union is on this panel, because that's an important role: to really look at this from a consumer perspective. Because I think what we are going to have to look at is helping to provide self-care solutions to the patient community before they even need to get to nurses, allied health professionals, and others.

Rajendra Pratap Gupta: Thank you so much, Peter. And that makes my job easier as I switch to the Patients Union representative here. Mevish, you run the International Patients Union. Is there something you want to share with this panel on the prospects of replacing the need for doctors in settings where patients need little more than OTC-based medications or care for minor acute problems? And what are you doing in that area for patients?

Mevish P. Vaishnav: Good morning, everyone. This is Mevish from India, and I lead the International Patients Union. In the healthcare sector, if you see, everybody is organized, be it doctors, nurses, or pharmacists, but patients are the ones who are unorganized. Nobody hears them. So, at the International Patients Union, we have a platform, the Patients Union, where patients can voice their opinions and concerns and share their views. They can share their views with other patients so that they can manage their diseases better. At the Patients Union, we are making artificial intelligence the authentic intelligence. Medical science is a science. If you say, I have a fever, I'll give you a paracetamol. If you say, I have a cold, I'll give you the medicine for a cold. If you have pain, I'll give you a painkiller. If all these things can be put into a system which is AI-backed, it can actually help doctors as well as patients. Patients don't need to travel 30 or 40 kilometers just to visit a doctor for primary care, and doctors' time can be saved for secondary care, tertiary care, and surgeries. Through this system, we at the Patients Union are developing a program. We have launched the Patient Centricity Index, into which the symptoms of diseases have been put. If you just open the app and say, I have the following symptoms, for my knee pain or something like that, it will show you the likely conditions and the right prescription for them. And this way, we can actually save time for doctors. So, yes, AI is important and can be helpful in primary care.

Rajendra Pratap Gupta: So, you're saying, Mevish, that patients in India will be able to get information about their healthcare needs, not from the net, which is unverified, but from the Patients Union, without even meeting a doctor, is that right? Yes. So, effectively, this will probably be one of the first experiments of its kind where patients can get information about primary care from an artificial intelligence-backed system.

Mevish P. Vaishnav: Yes, it is authentic data that will help them.

Rajendra Pratap Gupta: Fantastic. Zaw Ali Khan is with us from the American University of Barbados. Zaw, you run an academic organization, but you are also a tech czar in terms of bringing new tech to healthcare. Given what's going on around the world, do you think it's time for us to prioritize where we invest clinicians' time and how much of their role we replace with technology? What's your view on that?

Zaw Ali Khan: Thank you, Dr. Rajendra, for inviting me to this session. I feel that there are certainly many use cases where the clinician's role can be completely eliminated, not just for the sake of providing convenience to the patient, but also for reducing the workload of the doctors themselves. So there are plenty of use cases that would see the replacement of doctors with technology altogether. But there are certain challenges, I feel, where a softer approach will help. Because, as far as regulatory issues are concerned, you need to have doctors on board to approve these tools. And in order to have them on board, you need to make them feel safe and give them the confidence to use these tools. For that, I feel the role of academic organisations is paramount. The doctors who have already practised their whole life in a certain way: asking them to change, to adopt a new thing, means you're basically telling them that whatever they were doing so far was inadequate, that there were some limitations in it. And that's a very bitter pill to swallow. Instead, if you were to highlight their own challenges and show how tools and technologies are solving those issues, then they'd be more on board with this idea. And this is what I'm talking about for the more experienced doctors who are at the tail end of their career. But of course, they are the stalwarts: if they were to join these pioneering efforts, everyone else would follow. At the same time, you also need a new generation of doctors who are capable of navigating these digital tools. There again, the academic organisations have a role. First of all, these academic organisations and their teaching hospitals themselves need to adopt more and more digital health solutions so that they can demonstrate them to the students. Unless they adopt them themselves, they won't be able to demonstrate.
Secondly, once they have demonstrated all of that, they need a structured course. We have been fortunate enough to partner with the Digital Health Academy to provide our students with an elective in digital health, a structured course delivered by experts from around the world. And lastly: the first thing I said was more adoption of health tools, by making sure that healthcare workers don't feel threatened, at least at the get-go. And actually, it is not a threat, because the main problem you're trying to solve is the shortage of healthcare workers, and a related problem you're trying to solve is healthcare worker burnout. If you're solving burnout, every healthcare worker will help you achieve that goal. So from the perspective of healthcare workers, focus on burnout. From the perspective of patients, focus on medical errors. From the perspective of healthcare organizations or health systems, focus on the shortage of healthcare workers. It's the same problem, just from three different angles. And once you've done that, all the stakeholders will come together to adopt more of these solutions. And ultimately, once everyone starts seeing the benefits, I'm sure that one or two generations down the line, they'd be surprised that, oh, we used to have PHCs. So that's the future that I envision, and I hope that we're able to accelerate it.

Rajendra Pratap Gupta: This is very interesting, the point that you make. At the Academy of Digital Health Sciences, where we run courses for doctors, our experience has been very different. The doctors who take the courses have 30, 40 years of experience, and when they pass the course after one year, they're as excited as kids, sending what they created to their children and grandchildren. And the unfounded fear turns into a force that helps them. That's what we have seen in doctors. So we're going to see a competence divide. But what is worrying, and probably a call to action for clinicians and everyone, I guess, is the slides that I showed: technology didn't wait for the sector to evolve, it just disrupted it. So the fact is that whether doctors adopt it or not, technology is going to invade. That's a fact of life. Our studies show that some specialties will get totally replaced, which is a very, very strong statement to make. But when I look back at the statements made 10 years back, they have actually come true. I think fields like radiology and dermatology may not do well with just doctors; AI can do a phenomenal job there. But there are surgeries where robotics plus surgeons are needed, and specialists may still be needed in neurology, cardiology, and other sectors. But AI is maturing with time. I mean, that's what we've been seeing. And now let me get to Debbie Rogers, who joined us from Africa. Debbie, you run Village Outreach in Africa. Countries like India and regions like sub-Saharan Africa and other LMICs face a huge shortfall of doctors. What do we do? Do we wait for doctors to get trained as MBBS, MD, MS? Or do we bring in technology? Because there's nothing else that exists. What do you say?

Debbie Rogers: I am definitely a proponent of bringing technology into the mix to relieve some of the burden on the healthcare system. In sub-Saharan Africa, we have 14% of the world's population, but 25% of the world's disease burden and only 3% of the health workers. So if you look at those stats, it's very easy to see that if we just keep trying to train more and more health workers, we're not going to get anywhere. We have to be using technology to augment the work of health workers. And the way we think about this is moving care out of the facility. A lot of work has happened to move care into the community with community health workers, but with a mobile phone, you can move care from the facility, to the community, to somebody's own home. And so we use very simple technology, like SMS and WhatsApp, to communicate directly with citizens and to help them from a self-care perspective, but also to access the right services at the right time. An example of this is a program we've been running for 10 years in South Africa called MomConnect. Basically, every mother who goes into a clinic for her first antenatal care (ANC) visit is signed up to the platform. Throughout her pregnancy, and up until the baby is 10 years old, she receives messaging which helps her to care for herself and to better understand how to care for her baby as well. And we've seen really great results, both on self-care, things like better nutrition and uptake of breastfeeding, and on access to services, like uptake of family planning after birth and improved attendance of ANC visits. So we can really see that something as simple as an engaging platform delivered directly to a mobile phone, requiring no training because it's exactly what you already do to communicate with your friends and family, can have a huge impact on health, and it can relieve a lot of the burden on the healthcare system.
So the health workers can be doing the work that they really need to be doing, and not the work that can be taken over by technology. I don't know if we can necessarily say that we will replace doctors, but I certainly believe that doctors who use AI and technology will replace those who don't. It's going to make their work so much more efficient and effective, it's going to make the patient experience much better, and people are going to vote with their feet and go where the patient experience is better. And that's definitely where I see things going: much more task shifting, with different tasks going to different cadres of health workers. And those who use AI and technology are definitely going to replace those who don't.

Rajendra Pratap Gupta: Debbie, we have always been saying that doctors who use digital health will replace those who don't, but do you think there could be an extension of this saying: that healthcare workers who use digital health will replace doctors who don't? I can tell you, this is not coming just because I'm sitting on this dais at the IGF. There's a hospital, around for 76 years, where they segregate high-risk pregnancies from normal pregnancies, and I was told by the chairman of that hospital, who is a doctor, that in normal pregnancies the mother does not see a doctor even after delivery. It is the nurses who handle it, and they use technology. So just imagine how much precious time of the gynecologist or obstetrician is saved, because the delivery happens only through nurses. And this is not a small number. I'm talking of India, and this hospital has been doing it. So, getting to the point: India has been an evangelist of technology. I was a skeptic in the beginning, thinking it would not happen, but as I have seen over the last few decades, technology has been proving itself with great confidence, commitment, and accuracy. So do you think that other healthcare professionals, as our friend Dr. Peter Preziosi is showing in Rwanda and Ghana, other clinicians who use technology, will replace doctors? Because I see your numbers, and India has the same problem: you have 25% of the disease burden and 3% of the workers; there is no way on earth you're going to match them over the next few decades. No way. So do you think technology will be…

Debbie Rogers: I agree, there's going to be massive task shifting, and things that were previously deemed doable only by certain specialists are going to be able to be done by somebody who does not have the same amount of training, plus technology. So I do think the task shifting and moving from one cadre to another is definitely going to happen. And we see it happening already, just out of necessity; in sub-Saharan Africa, there aren't enough doctors to reach everybody, so a huge amount of the care already falls on nurses. I do think, though, we have to think carefully about the fact that we still have too few nurses and too few community health workers, and we still have a problem of burnout, which means that as fast as we're training people, we're losing them. And I do believe this is an important role of technology: removing the burden from healthcare workers so that they can do what they really need to be doing, rather than things that can be done by technology. I think that's going to help enormously to relieve the huge dearth of healthcare workers that we have worldwide, not just in sub-Saharan Africa.

Rajendra Pratap Gupta: Thanks, Debbie. I'll add to that: we are very lucky to have with us one of the leaders in India, the president of the Indian Nursing Council, Dr. Dilip. At the Academy of Digital Health Sciences, we have partnered with him to train 2 million nurses on digital health. And we have now done the same with pharmacists; we're going to train 400,000 of them on digital health. I think they could do phenomenal work, and by next year's IGF we'll be able to show the impact we have created. Between now and then, we should aim to have half a million of them trained; eventually we'll train maybe three million nurses and pharmacists to take on some of the frontline roles. Having said that, I now have with me Dr. May Siksik. She runs the Innovation Network Canada. May, you have been leading the Innovation Network and you're building a new healthcare model. Where does technology fit in that model? Is it going to be more dominant than the clinicians, or is it going to go back to the same old model? And this given the fact that Canada is among the handful of countries which pioneered digital health; my very dear friend Richard Alvarez was the President and CEO of Canada Health Infoway, which was the first government organization to implement digital health. What's your take on that?

May Siksik: Thanks, Rajendra. I want to say that Debbie has brought up really important statistics and some critical information here. The demand for healthcare is skyrocketing and supply is not keeping pace; the system is destined to crash at some point. From my perspective, going back to those stats, we really need to have technology that is able to support the healthcare system and take the load off doctors and nurses. I think that's really important. So in the system that we've developed, one of the main concerns with AI-based diagnosis and so on is making sure that large language models will not hallucinate. This is one of the things that we're working on. Can you hear me? Okay, perfect. So one of the things that we really need to address is hallucination. Right now, AI-based diagnosis cannot be qualified to work unsupervised, and that's what we're working on. We're working actively at the Innovation Network with academic organizations to address this issue, so that we can actually have an AI-based system that can be qualified to do such tasks without a human supervising it. And I think that's really important, because then we can save a massive amount of time for healthcare professionals. Doctors in Canada, for example, get paid for every 15 minutes, and most of the time it's very difficult to actually address a medical case, especially a complex one, within 15 minutes, which means that they're not really able to do their job as well as they can and should. The way I think about it is that having AI-based medical diagnosis is like having a co-pilot for doctors. You wouldn't set foot on a plane if you didn't know that there's a co-pilot there for redundancy. Yet we go to doctors, and I work with a lot of physicians, and I hear from them that mistakes are often made, and often they get buried.
So it's important to have these tools, not just to empower patients, but to empower doctors as well. Another aspect I want to bring up is that I don't think AI will ever be able to completely eliminate doctors. I think we'll always have them, but we'll save tremendously in terms of the time that's needed from doctors. There are two aspects here. One is the fact that we are human beings, and we often need that interaction and assurance from doctors. We also need contextual understanding, especially for complex medical issues where we need to look at factors like ethnic background and culture. Now, having said that, this has also been a huge source of medical errors that have literally proved fatal for patients. So it's a double-edged sword: doctors can be really good at understanding the context of medical cases, but at the same time, they can also make mistakes because of context, because they're drawing on patterns. So again, going back to having that co-pilot from technology is really important.

Rajendra Pratap Gupta: May, let me ask you: if you were to move from having your driver drive you to a car which drives autonomously, would you do that?

May Siksik: Would I use an autonomous driving vehicle? Yeah. In fact, I have actually worked in compliance for a safety standard for autonomous vehicles; I've worked very closely in this field. My job was to oversee where mistakes can happen and where the system could cause fatal issues. It was quite a complex process, and you can see that mistakes could happen, so compliance is extremely important. You really need to have a standard against which you actually check things. And this goes back to my point: we really need to qualify AI.

Rajendra Pratap Gupta: So May, this May, in the month of May, I was in the U.S. delivering the opening address at ADA, and I decided to use an autonomous vehicle, a Waymo. I didn't have a problem. And here's the thing: you go to a doctor once a year, probably twice a year, multiple times if you're a chronic patient. But in heavy traffic in the United States, in the morning or evening, you wouldn't normally drive and take the risk. I decided to do it anyway, and let me tell you, it was pretty safe. If I can trust a driverless car and reach there, then, going back to the presentation I made 10 years ago at HIMSS, what this technology is essentially doing is taking the middleman away. There's no driver in the car. I reached my destination on time. The driver doesn't fleece me by taking a longer road and charging me more; I'm charged fairly, and I get to my destination. So if we trust a driverless car, why won't we trust a doctorless surgery with a robot?

May Siksik: I totally agree with you, but driverless cars have a standard that they must comply with, and I can tell you, it's a very complex and big process. You've got auditors that come and make sure the system is compliant. And we can do that, Rajendra, with AI-based medical diagnosis. I think we're getting close; we're not 100% there yet, we can't qualify it yet, but we need to get there. We need to have a standard, and we need to be able to say we're compliant with that standard.

Rajendra Pratap Gupta: Yeah, I agree. And going to Mevish: you speak for patients. Are patients ready to accept this kind of technology? Mevish, I want to put this question to you since you run the Patients Union: are patients ready to accept AI as a doctor? If you say AI first, doctor later, would they accept it?

Mevish P. Vaishnav: Yes. There is a study which states that six out of 10 patients know about digital health, but only two out of 10 doctors do. So awareness is important. I can share an experience with you: a family member went to a doctor and asked her, "Can you prescribe me DTx?" The doctor said, "What is that?" She was not aware of digital therapeutics. That plants a doubt in a patient's mind: if the doctor doesn't know about this technology, how will she be able to help me manage my diabetes? So yes, patients are more open, and AI is more accessible to them. They would love to be part of AI where there are zero errors; I would not say zero errors, but negligible errors.

Rajendra Pratap Gupta: Thanks, Mevish. And this brings me to a very important point, May, about empathy. Today, my social media knows more about me than my doctor. It tracks what I do using my phone; it tracks what I do using my computer, what I like, even my tone when I talk on the phone. So it could be more personable to me in terms of personalizing content: the way I speak, the way I like, how many seconds I need for a response. So if I have an AI doctor, that AI doctor could be more empathetic, and it won't get angry, because I get angry very often. So do you see that technology can be better in terms of empathy than a human doctor?

May Siksik: Actually, yes. I do think that technology can be better than doctors in terms of empathy. But having said that, in certain cases a patient at a hospital will need humans. So all I'm saying is that we can't eliminate doctors on empathy grounds alone. And I want to say that using AI-based tech for medicine is critical; we are actually losing lives right now because people can go to a clinic and not have access to the right diagnosis. I talked about the tiny bit of hallucination that an AI system produces, but humans make way more errors, way more.

Rajendra Pratap Gupta: Thank you.

May Siksik: It's true. I mean, we need to talk about all aspects of this, right?

Rajendra Pratap Gupta: So that's a reality check. Going to Zaw: you're running an academic organization where you do research, and you work across multiple countries. What do you think about AI being more empathetic and more accurate? Because those are two challenges that will come in the way of creating trust in technology.

Zaw Ali Khan: Yeah. Thank you, Rajendra-ji. As far as AI knowing more about me than I know about myself, I would frame it that way: I recently got to know that if you give ChatGPT a prompt saying, "Based on what you know about me, what would my desk look like, or what would my workplace look like?", it's able to recreate that quite accurately, because of the frequency and detail of the interactions we are having with these AI models. So imagine if we were to have health-related interactions with these AI models as frequently as we are having work-related interactions. That would be, in terms of magnitude, exponentially more personalized than what any doctor or healthcare worker can provide. It's of course not physically or humanly possible for doctors to provide that level of personalization. As Ms. May said, doctors get 15 minutes with the patient, and in the UK I've heard they get 7 or 8 minutes on average. So it's impossible to even diagnose many conditions in that short amount of time, let alone give personalized, empathetic care to patients. So I agree that digital tools have that potential.

Rajendra Pratap Gupta: In some countries they get 34 seconds. If I remember correctly, it was Bangladesh where the doctor gets 34 seconds or so. Even in any other country, in 34 seconds or one minute you can't even pronounce the name and the problem of a patient; how will you diagnose? That means you have made up your mind to prescribe something, and the patient has just come in on time. That's something technology won't do, by the way, because there will be a digital footprint of what you do. So, Peter, coming to you, given your global experience, and you are a nurse by training. That's correct. I think you are closer to the patient than the doctor; that's why I say the doctor is still closest to God, and the nurse is closest to the patient. How do you see this relationship between technology, the patient, and the clinician evolving?

Peter Preziosi: I agree with many of the comments stated earlier. I think the challenge is that, throughout the world, we tend to be focused on a medical model, and the challenge we face in communities is that clinicians are not working at the top of their capabilities, for a variety of reasons. Many times, whether you're talking about physicians, nurses, or pharmacists, they're struggling with a morass of administrative paperwork that's not necessary for them and takes them away from direct clinical care. As a number of the panelists mentioned, these AI-enabled tools, digital health tools, are partners that really enable clinicians to work at the top of their capability. But there's so much more that happens, and it's very different based on the jurisdictions and countries you work in. From a regulatory and legislative perspective, there are professional turf battles. And the reason I'm so thrilled to see the Patients Union here is that we talk a lot about patient-centered care, but we've got to really get the patient involved and engaged in the services being provided, because they should be much more empowered in terms of what they need to do. Just take the issue of obesity: obesity is rampant around the world because people are not eating properly and not exercising properly. Aging is a challenge that there's no cure for, really; maybe there will be a digital cure in the future. The issue is that there are many health issues that don't succumb to a traditional medical model. So we have to start to turn some of that upside down and look at the appropriateness of care. I think it was Debbie who said it earlier: the right care at the right time, in the right place.
These are the issues that we have to get better, and I do believe digital health solutions will help to augment and assist us in moving more into those areas.

Rajendra Pratap Gupta: Thank you, Peter, so much. And to those who have joined online, I would say put your questions in the chat; we're going to dedicate a substantial amount of time to questions. But this brings me to Debbie. Debbie, you know that regions like Africa, and countries like India with billions of people, have so few resources in terms of doctors. Of course, India in the last few years has done phenomenally in adding doctors; we now add 175,000 doctors every year, which effectively means that in the next five years we would double the number of doctors we already have. Having said that, technology is taking a dominant role, and more doctors are coming up. Now going back to Africa, a region where some patients may not have seen a doctor in their lifetime: how will they understand the difference between technology and a doctor? I mean, if I present them with, hey, look, there's a doctor you're talking to, and I put a disclaimer that this doctor is an AI-based doctor, they haven't spoken to a real doctor. What do you feel would be the opportunity or the challenge in a region like Africa where someone gets addressed empathetically through technology? How will they differentiate? What will be their response? They have not seen a doctor, yet this one talks to them very well. They don't get 30 seconds, they get 30 minutes if they want; they can chat, they can tell all their problems, and the AI will tell them, look, this is your condition. And if they get the right advice, as you said, and as Dr. May and Dr. Zaw said, they would come back for more, because they would start trusting that voice that talks to them, and it would probably be the voice they like the most. What do you think about that, Debbie?

Debbie Rogers: I think you bring up an incredibly important topic of trust. We provide digital health solutions, and we work very, very hard on making sure that they are trustworthy, that people love the services, and that they understand the service is there to support them. In doing so, we are able to get them to change their behavior in a way that we wouldn't be able to if they didn't trust the source it was coming from. Now, I don't have the knowledge, or haven't done the research, on whether people will trust AI or doctors more; I think it's probably going to come down to personal perception. But I do believe that people can trust AI and can trust digital health solutions, and I think that's incredibly important when you're building these solutions. As an example, we integrated a diagnosis engine into MomConnect. It's a diagnosis engine that we didn't develop, called Ada Health, and we integrated it via WhatsApp into MomConnect. We had a higher completion rate going through diagnoses on MomConnect than on the Ada Health app, and the best explanation we have is that mothers actually trust the MomConnect service. Another example of trust: on Mother's Day we get inundated with pictures of people's babies and messages thanking us for the support we've been giving, along with good morning MomConnect and good night MomConnect messages. Whether it's a piece of technology at the end or a person at the end, you can build trust and get people to engage. Sometimes they believe there is a person on the other end, and sometimes they believe it's AI on the other end, and which you want to encourage depends on the use case and the person. For example, on our sexual and reproductive health and rights platforms, we find that youth want to speak to AI and not to a person, because they've been judged so much by people. And so in that instance, it actually is very helpful for us to have AI. So I absolutely believe that people will be able to engage with AI and trust AI, knowing that it is AI; they may even trust it more than doctors sometimes, depending on what their experience has been in the past.

Rajendra Pratap Gupta: You bring up that very important word, engagement. Technology, with no denominator of money at that point in time, and not being a human, could actually spend the time the patient wants to spend, and engagement would lead to better outcomes if the advice is right and based on standard clinical protocols. Which means, as you said, AI could be trusted more. As they send you pictures, and I've seen your work and really admire what you have done, people feel that there is someone with them when they want it. And that's what is missing in health care today. And I think Peter Preziosi would allude to this. But Peter, given that we have nurses, and there are success stories of how nurses have touched and transformed lives, do you think the future of health care lies in leveraging this workforce, nurses and technology together, and somewhere maybe technology alone? What is your take on this?

Peter Preziosi: Yeah, that’s a great way of putting it around is is liberating all clinicians, nurses, given that there are twenty nine million worldwide, the largest profession. Absolutely. We need to you know, we start with them, but there there’s so much more out there like the work that you’re doing with pharmacists and others. You know, we’ve got you know, we’re doing much work around rehabilitation care with the World Health Organization. And we’re actually in Ethiopia looking at rehab care, driving that into primary care. A quarter of the world’s population needs some form of rehab care. Yet so many clinicians are ill prepared. to tackle this. And when you’re taking a look at the challenges around war and conflict and working in these zones where you’ve got traumatic amputees and trying to reintegrate people into a quality of life, rehab care is incredibly important. But yes, liberating through, I mean, there’s no magic bullet. Are these digital health and AI solutions the magic bullet? Absolutely not. As many people have talked about here, engagement. There are a variety of issues that need to help. But there’s nothing, obviously, to be afraid of with the technology evolution and emerging that as true partners. Again, I’ll just emphasize, it’s also as important to liberate the consumer, the patient, and their families equally with these technologies.

Rajendra Pratap Gupta: Peter, I think this is a very important point, liberating the clinicians from the many tasks that can be handled otherwise, and also the consumers. Let me now open this to questions from the audience; then I have some questions I will put back to the panel. So if there are questions from the audience, we'll be very happy to take them. Can someone please provide the mic to the audience? And Sakshi, if you have online questions, please read them out to us. Sure, sir.

Audience: So everyone, that was a very nice discussion. We have a few questions online. I’ll just read them out for you. First one is from Ms. Arushi Negi. She asks, if we are planning to replace doctors with digital health tools, how do we make sure they are not compromising quality of care?

Rajendra Pratap Gupta: Peter, over to you.

Peter Preziosi: Well, this is exactly one of the reasons why we're going slow and really testing these point-of-care solutions. There are countries, high-income countries like Canada and the US, and India has done a great deal of this kind of work, where you're testing out these solutions and integrating them into clinical workflows. I don't think replacing is the right word; it's really augmenting. There are AI-powered medical brains now that are helping to augment 92% of the workloads of clinicians and practices, and most of this is administrative burden. As Rajendra was saying before, you've got a physician that has only 34 seconds and doesn't even have time to pronounce the patient's name, let alone the diagnosis. So it's looking at the work differently, testing those solutions out, and seeing what works best with the consumer to be able to optimize human potential.

Rajendra Pratap Gupta: Thank you, Peter. In fact, I would very much agree with you on that point. Recently, in our office building, we had a plumber who kept telling us that his wife had low weight and was coughing. We said, okay, why don't you get a tuberculosis check done? So he went and got a check done, and the sputum was negative, but the X-ray showed it was tuberculosis. The doctor would not start the treatment because the sputum was negative. So we did a telemedicine consultation with a tertiary care facility, and the doctor there said, just start the treatment; it is abundantly clear that there is tuberculosis. So it was technology that came to the rescue, where the conventional system of diagnosis, treatment, and consultation didn't work. Sometimes it is augmentation, sometimes it is replacement, and sometimes it would be just the doctor. Having said that, Zaw, what do you feel will create the trust that is missing in technology for now?

Peter Preziosi: I want to comment on one thing, Rajendra, because you're right: trust and engagement are so critically important. Prior to being at CGFNS International, I was at the World Health Organization, actually, during the pandemic. There was a lot of vaccine hesitancy around the world. A lot of individuals don't trust the treatments that are out there, the vaccines that will prevent the spread of the infectious diseases and pandemics that are cropping up. This is a real challenge that we have. It's not just around digital health; it's around the entire value chain that we see across the health care system. Because, as was said earlier, health care is very contextual and highly personal, and these are the things we have to focus on. And to your point earlier, AI really understands us in a lot of ways that maybe other humans might not. And as Debbie was saying around sexual and reproductive health issues, teenagers might feel more comfortable, less judged, talking to a bot. So these are really important issues. But you can't just make blanket statements around the world, because there are a lot of different cultural nuances. Over.

Rajendra Pratap Gupta: Thank you. Zaw?

Zaw Ali Khan: Yeah, Rajendra ji. So about the issue of trust: I feel that in the context of communities where doctors are not available at all, in that vacuum, of course, any solution would be taken up very spontaneously by patients. But in systems where people are used to doctors, you need to emphasize the advantages of technology that they are already seeing. Like in India, next-day delivery used to be considered very fast, and now we have same-day delivery and even 10-minute hyper-fast delivery. There is this convenience that consumers are getting used to, and if you see patients as the consumers in the health care system, then eventually they're going to demand that same kind of convenience in health care as well. Along with that, of course, they would also want high quality. For high quality, they would want to spend more time with doctors; they would want more counseling and coaching. But since that's not physically possible, the next best thing would be technology. And technology can actually build more trust, because it's capable of counseling and coaching patients in a much more personalized manner, as we already discussed. More importantly, it comes with the added advantage of being potentially very transparent about where it is getting its information from, why it is recommending a specific course of treatment, and why it has concluded a specific diagnosis. The technology can explain all of that much better than any of our overworked health care workers can, and it can also provide the evidence-based data to support its recommendations. And I'll stop right here. Just one more point: lifelong learning. Even if we train our doctors, nurses, and health care workers to be very good at lifelong learning, we can't expect them to absorb the huge oceans of knowledge that are being created continuously in medical science. Whereas AI can effortlessly absorb all of that, assimilate it, and always be up to date with the latest evidence.

Rajendra Pratap Gupta: Thank you, Zaw. And the question of how we maintain quality without compromise is very important. I was on the accreditation committee of the government of India for hospitals. Of course, we all agreed on the same parameters: the number of doctors per bed, nurses per bed, and so on. But there was no objective parameter. With technology, at least, I can say for sure that you will have a digital footprint of all that you do. I can see exactly at what readings the patient entered the system, when they exited, and when they got readmitted. So maybe we will redefine the parameters of quality. In an AI system, I think the number of hallucinations per patient's life journey would be an important quality yardstick. And if you ask me, I am not saying I'm overexcited about it, but I believe technology will be able to manage quality better than the conventional care delivery model, because it tracks everything in the process; everything you do has a digital footprint. I will definitely ask my colleague, Dr. May, what she feels about it.

May Siksik: Yeah, I completely agree, Rajendra. And this reminds me of the empathy point we brought up a little earlier. I want to mention that one of the projects we're working on is with First Nations. First Nations have an issue with the way the medical system is set up in Canada, for example, because you can only go to the doctor with one issue. But the way their culture is, when you go in, you need to talk about the full picture, and they have multiple things they talk about. It's a very cultural thing. And this is something that can be completely taken care of with an AI-based system. So I completely agree with you that this can be very helpful in this regard, and it can develop trust with populations who would normally avoid going to doctors. I know in Canada that creates big problems, because it means conditions get worse and manifest into something serious, and it costs the system, not just in dollar value but also in societal impact, and it costs lives.

Rajendra Pratap Gupta: May, I remember writing in one of my books that one of the provinces in Canada runs a lottery to be attended by doctors. A lottery system, you know, for doctors. If that's the seriousness of the shortage, it's really tough for me to justify not giving technology to such populations. Mavish, from a patient standpoint, how would you see quality in technology versus a conventional doctor?

Mevish P. Vaishnav: So if you see, I feel a patient would be happier to get time from a doctor, but the doctor should be able to understand my condition well; he should know my history. And if I don't get that much time, how will I trust the doctor? But if I go to AI and say, okay, these are my symptoms, what best options or treatment can you provide me, I would get better outcomes from that, because, as you said, it has complete standard operating protocols and treatment guidelines from which it can take the data, collate it, and share it with me. So I would trust the AI more than a doctor.

Rajendra Pratap Gupta: Debbie, what's your take on the quality that we're talking of in the technology age?

Debbie Rogers: I absolutely believe that quality from a technology perspective could be better than individuals, and in many cases it has been. One thing I do want to point out is that we need to be very careful about technology increasing the digital divide. For example, LLMs at the moment are primarily trained on Western culture, medicine, languages, intonations, and cultural context, and that is not going to be appropriate for rolling out in Rwanda, for example. We need to spend a lot more time ensuring that the quality is not just high for certain communities, but high for all communities, particularly those who are underrepresented at the moment. So while I think it's possible we will get to that point, I think we need to be conscious in how we approach improving quality, to ensure that we do it in an equitable way.

Rajendra Pratap Gupta: Thank you, Debbie, so much. We have a question from the audience, please. Please tell us about yourself and ask the question.

Audience: Can you hear me? Yes. Okay, my name is Melody Musoni. I work for a think tank called ECDPM. I'm not from the medical field at all, but I was happy with the conversation we were having here, and I learned a lot. I guess mine are more concerns. I think Debbie has already touched on one issue, the digital divide, because my fear is that the more we digitalize the healthcare system, the more people we are going to leave behind. Even to take the MomConnect example that Debbie was demonstrating and showcasing: you have to have a phone to be able to register and use it, and if you don't have that phone, you're automatically excluded. So I think as we start thinking more about how we incorporate AI in digital healthcare, we also need to bear in mind the issues of digital divide and marginalization. That's the first point I wanted to make. The second point is that perhaps we also need to think about the option of opting in to a digitalized service and opting out, because another speaker mentioned the importance of human-to-human interactions. Personally, I think I would still want to go to a human doctor if I have the opportunity to do so, instead of relying on technology. So as we advance our innovation in healthcare, we also need to make room for people who may still want access to physical doctors, real doctors. And the third point, which I think the speaker online touched on: with generative AI there is now a lot of misinformation and fake news flying around, especially around fad diets. People use Oprah a lot, for example; they have generated so many videos where you think it's Oprah saying, I was using this product, and after using it for two months I lost weight. A lot of people are falling for that misinformation and buying these products. So I think we also need to find ways to address the issues of misinformation and fake news. And I think the example you gave, of patients going to doctors and requesting a certain medication the doctor is not aware of, is a good example of that, because they are seeing all these things on social media and expect all doctors to know. So there should be a way to address fake information, especially around the use of certain healthcare services. I'll stop there. Thank you. I think he also had a question.

Rajendra Pratap Gupta: Yeah. Very important points raised. And I think this is what Peter and others were alluding to: while we implement technology at scale, we should be careful about the don'ts more than the dos, because those are the frameworks we need for implementation. Yes, you have a question.

Audience: Can you hear me? Yes. Yeah. So, more than a question, I think it's just some comments; I had written them down so that I don't forget. I think an important point we touched upon was how doctors are limited; that has been addressed, of course. But compute is growing at a larger scale than ever. Moore's law said it grows two times every two years; now it's four times, and the cost is going down with GPUs and multiple compute systems. At the same time, you have compute systems that can run AI models on hardware as cheap as $20 but give you an accurate response in six seconds. This has still not been implemented, and I think it would help a lot of places where you need instant care, because the compute cost of running these large models is high. So I see them as superpowers. Also, touching on the point of empathy that we discussed: I remember the movie Wall-E, I'm not sure if everyone has watched it, but basically they learned everything from what they saw. So I think empathy is a social model, because now what you see is what you believe in. If you start seeing an AI doctor, you probably won't go to a normal doctor eventually, thinking, oh, that's a human, they won't be as accurate as an AI, although that's very dystopian. But I think that's just how it comes to be with new technologies. And one interesting point I noted, Mr. Zaw: you addressed ChatGPT as 'him'. We have now started addressing AI models as persons instead of just something on the cloud or somewhere else. So that's something we've already started doing, and within a matter of two years. I think that's something we need to consider as well.

Rajendra Pratap Gupta: Thank you so much. It's a very important point. I keep concluding, with the way the world is changing, that this is not a technology change; it's a societal change. Society is changing. Don't think that we need to adopt technology; society has already adopted it. We are all learning without barriers today. A teacher is no longer the main custodian of the information delivered to the student; it's YouTube. Any lecture you want, on any topic, you can go and watch. In the same way, patients watch Instagram and go and ask their doctors: hey, I saw India's, the world's, top cricketer talking about a CGM. Why would someone believe their doctor? They would believe the best batsman, an icon and role model, saying, I use a CGM and my sugar is controlled. Tell me, who is making the choice? We used to say in healthcare that the doctor is the customer and the patient is the consumer. Now technology is changing that role and relationship, from customer to consumer directly, which was never thought of. For centuries it was the doctor who decided; not today. As a patient, I know more about my treatment than my doctor knows. It's a real experience I am telling you: being in healthcare, being an advisor to the health minister of my country, my doctor didn't know about digital therapeutics, so I decided not to go to her. She was a young doctor, very experienced, very famous, but she didn't know. When I used a CGM, why do I show the readings to her? She didn't prescribe it to me. You will have patients like me, and you will also have patients who will believe the doctor, so all kinds of things will exist. But I think it's a big societal change. Now, before I go back to my panelists based on what we have discussed, I would still take one online question. Sure, sir. So the next question is: how can we ensure the accessibility of digital tools in rural areas? I think I would go to Debbie first.

Debbie Rogers: There are a lot of things we still need to address, from an infrastructural perspective and from a cost perspective, to ensure that digital health can be used in rural areas. As the previous audience member mentioned, even for MomConnect you have to have a phone; there are still major barriers to accessing digital health technology. These include things like electricity: if there's no electricity where you can charge a device, you're not going to be able to use a digital health solution. If you don't have any mobile penetration in the area, it's going to be very challenging; you can go offline, but it's very challenging. The other thing is cost. Just thinking about AI, for example: on MomConnect it costs 20 US cents per user per year to run the program. If we were to shift it completely over to generative AI, it would be $10.20. And I hear the previous audience member on the cost of compute going down, but the cost at the moment is prohibitive for a lot of these technologies. Costs will go down, and access to things like electricity will go up. But we have to be more conscious about ensuring that those who need the services most are not left behind, because unfortunately market forces alone have not solved all of these problems.

Rajendra Pratap Gupta: You bring up the point of the cost of technology. Of course, what I'm saying may not apply now, but when the cell phone came in, at least in my country, I used to pay very heavily for airtime and cell phone usage. Today, you only pay for data; you don't pay for calls, it's free, and we have the cheapest data. So with technology, not only does computing power go up, the cost goes down. But you have raised an important point about the digital divide. Within the IGF, I also lead three dynamic coalitions, and one of the goals I'm putting to the overall 31 dynamic coalitions is that 2.7 billion people are still not connected to the internet. That means one out of three people on this planet has no access to the internet. We are very privileged people to be talking about all these things, and we should always understand that we are privileged and that is what we are supposed to deliver. These forums are where we talk about the real issues and do not ignore anything that matters. So I totally understand that what you're saying is very important. And as the IGF community, which is responsible for putting its views to the UN on these matters, this is a very important point, given that health is a determinant of living. It's not a choice that you have; health is a right, and technology makes that right possible. That being said, I would move this to Peter. Peter, you are working in Rwanda and now Ghana; what is your take on this?

Peter Preziosi: Yeah, building on what has been said, I would close by saying, look, technology is exciting. It's a new horizon of opportunity. But we have to be conscious of the unintended consequences, the ethical dilemmas, the digital divide issues, the challenges we continue to see worldwide as governments around the world move money from health, education, and welfare into defense spending. We're seeing a sweeping shift toward isolationism, and that will be a challenge. If we think about it, it's not just the health worker becoming mobile; the work is becoming mobile. So it's going to really democratize, in many ways, the opportunity for access to good care, if we pay attention to the digital divide discussion that was talked about.

Rajendra Pratap Gupta: Thank you, Peter. What came out of the whole discussion, including the questions from the audience, is that healthcare is going to move from a doctor-dependent model to a non-doctor-dependent model. It could be any clinician, a non-clinician, or a citizen doctor. Having said that, we all agree on a few points: cost is going to come down because of technology; it's going to be convenient, you can have it when you want; and it's going to offer better engagement, better empathy, better quality, a better experience, and a better repository of knowledge. My question to each of my panelists is to justify why technology will not replace a doctor. Starting with Dr. Peter Preziosi, please.

Peter Preziosi: Yeah, I think the human touch is critically important. I think we're going to find so many other types of technologies; I'll bring up precision medicine as well. We haven't really talked about treating people at the molecular level, the cellular level, which is going to transform care delivery. And again, looking at the divide that exists between higher-resource and lower-resource countries, how do we begin to democratize that? So I think the future health professional will succeed and evolve with technology, not without it. Those will be the individuals, and the patients, who will be able to succeed and do much better in the world.

Rajendra Pratap Gupta: Thank you, Peter. Mavish, from the patient standpoint, will technology replace doctors?

Mevish P. Vaishnav: If you see, it's been years since I've gone to a bank. So if the bank is on mobile, why not an AI doctor on mobile? So yes, I am for it. AI will actually help doctors to enable treatment. So yes, AI will replace doctors.

May Siksik: I think that AI will replace the majority of doctor visits.

Zaw Ali Khan: As far as the title of this panel discussion is concerned, I feel yes, AI will definitely replace PHCs first and foremost, and perhaps some other specialized use cases as well. But PHCs, because there it's not actually replacing doctors. If I were to rephrase this to make it more palatable for our health care workers: it's not that the technology is replacing them; rather, it's making room for them to do their job more effectively. That way, I feel it's a win-win for doctors, patients, and health care systems around the world. One more thing I'd like to add is about regulatory standards and the need to define them with the participation of all stakeholders, particularly because one of the audience members mentioned the risks of AI, especially generative AI; but I think there would be risks in other use cases as well. And there, I would emphasize the role of academic organizations, academies like the Academy of Digital Health Sciences or AUB, the American University of Barbados. We, and other organizations, need to make sure that we are training the next generation of doctors, and our faculty members, our stalwarts, in a way that they are not cynical when it comes to technology. Because if our experts are cynical, they might advocate for harsher standards, and that might decelerate this transition from doctor-centric healthcare to non-doctor-centric healthcare. Thank you.

May Siksik: I just wanted to add, to clarify, that AI will replace the majority of doctor visits, but even for the visits it cannot replace, AI will need to be at the table as part of the interdisciplinary team. And that's going to be quite important.

Rajendra Pratap Gupta: Mavish, so you have seen the AI before the doctor? Yeah. Debbie, your take?

Debbie Rogers: I'm going to use an example that you used, of the Kindle and how libraries and bookstores have supposedly been replaced by it. In recent years, there has actually been a resurgence of bookstores, and there's huge growth in that market. I personally, even though I'm a technologist, like to read physical books, because I like the way they smell and feel, and I'm quite happy to carry around a big tome to get that advantage. So I don't think that even if something is possible, even if it does make things easier, we're going to replace human touch entirely. There are definitely going to be people who prefer human touch. It may even be that for a while we see a drop in the number of people accessing doctors, but that it will resurge again when people realize the advantages of the human touch that you just cannot get from technology. So I don't believe doctors will be entirely replaced. I do believe technology is going to make the lack of health workers a far smaller problem, and I do believe it's going to be critical that AI is part of the team, as you mentioned, May. But I don't believe it's going to replace doctors entirely, despite my being a technologist.

Rajendra Pratap Gupta: Thank you so much, Debbie. It was a great discussion. Every year at the IGF Digital Health session, we discuss how technology is shaping healthcare and how healthcare is shaping technology; it works both ways. I really thank Dr. Peter Preziosi for staying up late at night to join us and share his valuable insights, which are going to shape the way we look at health. Thanks also to Zaw Ali Khan from the American University of Barbados, Dr. May Siksik, Mevish, and Debbie Rogers. Thank you all so much. And thanks to Sakshi Pandita, who has been moderating online, to all the panelists who joined us, and to all the viewers who joined us and asked those important questions. Next year, we’re going to present the findings of what AI did to healthcare. Thank you so much, and wish you a very happy holiday season. Merry Christmas and a great year ahead. Thank you. Bye, thanks. Thank you.


Peter Preziosi

Speech speed

132 words per minute

Speech length

1998 words

Speech time

906 seconds

AI and technology can augment and assist healthcare workers

Explanation

Peter Preziosi argues that AI and technology can support healthcare workers by enhancing their capabilities. He emphasizes that these tools should be seen as partners to enable clinicians to work at the top of their capabilities.

Evidence

Example of AI-powered medical brains helping to augment 92% of clinicians’ workloads, particularly in reducing administrative burdens.

Major Discussion Point

The impact of AI and technology on healthcare delivery

Agreed with

Zaw Ali Khan

Debbie Rogers

Agreed on

AI and technology can augment and assist healthcare workers

Cost and infrastructure barriers in low-resource settings

Explanation

Peter Preziosi highlights the challenges of implementing AI and digital health solutions in low-resource settings. He emphasizes the need to consider cost and infrastructure barriers when developing and deploying these technologies in underserved areas.

Major Discussion Point

Challenges and considerations in implementing AI in healthcare

Agreed with

Debbie Rogers

Agreed on

Need to address digital divide and ensure equitable access

Human touch will remain important for some aspects of care

Explanation

Peter Preziosi argues that while AI and technology will play an increasingly important role in healthcare, the human touch will remain crucial for certain aspects of care. He suggests that successful healthcare professionals will be those who can effectively combine technological tools with human empathy and expertise.

Major Discussion Point

The future role of doctors and healthcare professionals


Rajendra Pratap Gupta

Speech speed

180 words per minute

Speech length

4805 words

Speech time

1600 seconds

Technology eliminates middlemen and brings services closer to users

Explanation

Rajendra Pratap Gupta argues that technology removes intermediaries in various sectors, including healthcare. He suggests that this trend will bring healthcare services directly to users, potentially bypassing traditional healthcare providers.

Evidence

Examples from other industries like telecommunications, entertainment, and photography where technology eliminated middlemen and brought services directly to consumers.

Major Discussion Point

The impact of AI and technology on healthcare delivery


May Siksik

Speech speed

151 words per minute

Speech length

1190 words

Speech time

469 seconds

AI can provide more personalized and empathetic care than time-constrained doctors

Explanation

May Siksik contends that AI-powered systems can offer more personalized and empathetic care compared to human doctors who are often time-constrained. She suggests that AI can spend more time with patients and provide tailored responses based on individual needs.

Evidence

Example of First Nations patients who culturally need to discuss multiple health issues in one visit, which AI could accommodate better than time-limited doctor appointments.

Major Discussion Point

The impact of AI and technology on healthcare delivery

Agreed with

Zaw Ali Khan

Mevish P. Vaishnav

Agreed on

AI will play a significant role in future healthcare delivery

AI will be part of interdisciplinary healthcare teams

Explanation

May Siksik argues that AI will become an integral part of interdisciplinary healthcare teams. She emphasizes that even for medical visits that cannot be fully replaced by AI, AI systems will need to be involved as part of the care team.

Major Discussion Point

The future role of doctors and healthcare professionals


Debbie Rogers

Speech speed

161 words per minute

Speech length

1786 words

Speech time

664 seconds

Digital health tools can improve access to care in underserved areas

Explanation

Debbie Rogers argues that digital health tools can enhance access to healthcare services in underserved areas. She emphasizes the potential of these tools to bring care closer to communities and individuals who may not have easy access to traditional healthcare facilities.

Evidence

Example of MomConnect program in South Africa, which provides health information and support to mothers via mobile phones, improving access to care and health outcomes.

Major Discussion Point

The impact of AI and technology on healthcare delivery

Agreed with

Peter Preziosi

Zaw Ali Khan

Agreed on

AI and technology can augment and assist healthcare workers

Need to address the digital divide and ensure equitable access

Explanation

Debbie Rogers highlights the importance of addressing the digital divide when implementing AI and digital health solutions. She emphasizes the need to ensure equitable access to these technologies, particularly in underserved and rural areas.

Evidence

Mention of infrastructural challenges like lack of electricity and mobile penetration in some areas, as well as the high cost of implementing AI solutions compared to traditional digital health programs.

Major Discussion Point

Challenges and considerations in implementing AI in healthcare

Agreed with

Peter Preziosi

Agreed on

Need to address digital divide and ensure equitable access

Cultural context and localization of AI solutions

Explanation

Debbie Rogers emphasizes the importance of considering cultural context and localizing AI solutions for different communities. She argues that AI models trained primarily on Western data may not be appropriate for use in other cultural contexts.

Evidence

Mention of the need to ensure that AI solutions are not just high quality for certain communities, but for all communities, particularly those who are underrepresented.

Major Discussion Point

Challenges and considerations in implementing AI in healthcare

Doctors who use AI will replace those who don’t

Explanation

Debbie Rogers suggests that doctors who effectively utilize AI and technology in their practice will likely replace those who do not adopt these tools. She argues that AI-enabled healthcare will be more efficient and provide a better patient experience.

Evidence

Statement that people will ‘vote with their feet’ and choose healthcare providers who offer better patient experiences through the use of AI and technology.

Major Discussion Point

The future role of doctors and healthcare professionals


Zaw Ali Khan

Speech speed

132 words per minute

Speech length

1458 words

Speech time

662 seconds

AI and technology can reduce administrative burdens on clinicians

Explanation

Zaw Ali Khan argues that AI and technology can significantly reduce the administrative workload of healthcare professionals. This allows clinicians to focus more on direct patient care and complex medical tasks that require human expertise.

Major Discussion Point

The impact of AI and technology on healthcare delivery

Agreed with

Peter Preziosi

Debbie Rogers

Agreed on

AI and technology can augment and assist healthcare workers

Regulatory and ethical considerations in AI adoption

Explanation

Zaw Ali Khan highlights the importance of addressing regulatory and ethical considerations in the adoption of AI in healthcare. He emphasizes the need for clear standards and guidelines to ensure safe and responsible use of AI technologies.

Evidence

Mention of the need to define regulatory standards with the participation of all stakeholders, including academic organizations.

Major Discussion Point

Challenges and considerations in implementing AI in healthcare

AI may replace primary care centers and some specialized roles

Explanation

Zaw Ali Khan suggests that AI has the potential to replace primary healthcare centers and some specialized medical roles. He argues that this shift will allow healthcare professionals to focus on more complex tasks that require human expertise.

Major Discussion Point

The future role of doctors and healthcare professionals

Agreed with

May Siksik

Mevish P. Vaishnav

Agreed on

AI will play a significant role in future healthcare delivery

Need to train new generation of doctors to work with AI

Explanation

Zaw Ali Khan emphasizes the importance of training the next generation of doctors to work effectively with AI technologies. He argues that this training is crucial to ensure smooth integration of AI into healthcare practices and to prevent resistance from healthcare professionals.

Evidence

Mention of the role of academic organizations in training future doctors and current faculty members to be open to technology adoption in healthcare.

Major Discussion Point

The future role of doctors and healthcare professionals


Mevish P. Vaishnav

Speech speed

146 words per minute

Speech length

621 words

Speech time

254 seconds

Patients may trust and engage more with AI-powered health services

Explanation

Mevish P. Vaishnav suggests that patients may develop greater trust and engagement with AI-powered health services compared to traditional doctor visits. She argues that AI can provide more comprehensive and accessible health information to patients.

Evidence

Mention of the Patient Centricity Index developed by the International Patients Union, which uses AI to provide disease information and prescriptions based on symptoms.

Major Discussion Point

The impact of AI and technology on healthcare delivery

Agreed with

May Siksik

Zaw Ali Khan

Agreed on

AI will play a significant role in future healthcare delivery

Patients may prefer AI for some health interactions

Explanation

Mevish P. Vaishnav argues that patients may prefer AI-powered health services for certain types of healthcare interactions. She suggests that the convenience and accessibility of AI-based solutions could make them more appealing to patients than traditional doctor visits.

Evidence

Comparison to banking services, noting that many people now prefer mobile banking to visiting physical banks.

Major Discussion Point

The future role of doctors and healthcare professionals


Audience

Speech speed

182 words per minute

Speech length

884 words

Speech time

290 seconds

Importance of maintaining human touch and option for in-person care

Explanation

An audience member emphasizes the importance of maintaining the option for human-to-human interactions in healthcare. They argue that while AI and digital health solutions are advancing, some patients may still prefer or require in-person care from human doctors.

Evidence

Personal preference expressed for seeing a human doctor when given the choice.

Major Discussion Point

Challenges and considerations in implementing AI in healthcare

Concerns about misinformation and need for quality control

Explanation

An audience member raises concerns about the potential for misinformation in AI-generated health content, particularly with the rise of generative AI. They emphasize the need for robust quality control measures to ensure the accuracy and reliability of health information provided by AI systems.

Evidence

Example of AI-generated videos featuring celebrities promoting fake health products and diets.

Major Discussion Point

Challenges and considerations in implementing AI in healthcare

Agreements

Agreement Points

AI and technology can augment and assist healthcare workers

Peter Preziosi

Zaw Ali Khan

Debbie Rogers

AI and technology can augment and assist healthcare workers

AI and technology can reduce administrative burdens on clinicians

Digital health tools can improve access to care in underserved areas

The speakers agree that AI and technology can enhance healthcare delivery by supporting healthcare workers, reducing administrative burdens, and improving access to care in underserved areas.

Need to address digital divide and ensure equitable access

Debbie Rogers

Peter Preziosi

Need to address the digital divide and ensure equitable access

Cost and infrastructure barriers in low-resource settings

Both speakers emphasize the importance of addressing the digital divide and ensuring equitable access to AI and digital health solutions, particularly in low-resource settings.

AI will play a significant role in future healthcare delivery

May Siksik

Zaw Ali Khan

Mevish P. Vaishnav

AI can provide more personalized and empathetic care than time-constrained doctors

AI may replace primary care centers and some specialized roles

Patients may trust and engage more with AI-powered health services

These speakers agree that AI will play a significant role in future healthcare delivery, potentially replacing some traditional roles and offering more personalized care.

Similar Viewpoints

Both speakers suggest that technology and AI have the potential to disrupt traditional healthcare delivery models by eliminating intermediaries and bringing services directly to users.

Rajendra Pratap Gupta

Zaw Ali Khan

Technology eliminates middlemen and brings services closer to users

AI may replace primary care centers and some specialized roles

Both speakers emphasize the importance of healthcare professionals adapting to and effectively utilizing AI technologies in their practice.

Debbie Rogers

Zaw Ali Khan

Doctors who use AI will replace those who don’t

Need to train new generation of doctors to work with AI

Unexpected Consensus

Potential for AI to provide more empathetic care

May Siksik

Mevish P. Vaishnav

AI can provide more personalized and empathetic care than time-constrained doctors

Patients may trust and engage more with AI-powered health services

It’s somewhat unexpected that AI is seen as potentially more empathetic than human doctors, challenging the traditional view that empathy is a uniquely human trait in healthcare.

Overall Assessment

Summary

The main areas of agreement include the potential of AI and technology to augment healthcare delivery, the need to address the digital divide, and the significant role AI will play in future healthcare. There is also consensus on the importance of adapting to these technologies and the potential for AI to provide more personalized care.

Consensus level

There is a moderate to high level of consensus among the speakers on the transformative potential of AI in healthcare. This implies a shared vision for the future of healthcare that integrates AI and technology, while also recognizing the challenges that need to be addressed. The consensus suggests a likely acceleration in the adoption of AI in healthcare, but with careful consideration of equity and access issues.

Differences

Different Viewpoints

The extent to which AI will replace doctors

Peter Preziosi

Mevish P. Vaishnav

May Siksik

Debbie Rogers

I think the human touch is critically important.

AI will actually help doctors to enable treatment. So yes, AI will replace doctors.

I think that AI will replace the majority of doctor visits.

I don’t believe they’ll be entirely replaced. I do believe that it’s going to make the lack of health workers a far smaller problem.

The speakers have differing views on the extent to which AI will replace doctors. Peter Preziosi emphasizes the importance of human touch, while Mevish P. Vaishnav and May Siksik believe AI will largely replace doctors. Debbie Rogers takes a middle ground, suggesting AI will significantly reduce the burden on health workers but not entirely replace them.

Unexpected Differences

Trust in AI vs. human doctors

Mevish P. Vaishnav

Audience

Patients may trust and engage more with AI-powered health services

Importance of maintaining human touch and option for in-person care

While it might be expected that patients would prefer human doctors, Mevish P. Vaishnav suggests that patients may actually trust and engage more with AI-powered health services. This contrasts with the audience member’s emphasis on maintaining the option for human-to-human interactions in healthcare, highlighting an unexpected difference in perspectives on patient preferences.

Overall Assessment

Summary

The main areas of disagreement revolve around the extent to which AI will replace doctors, the challenges in implementing AI in healthcare, and patient preferences for AI vs. human doctors.

Difference level

The level of disagreement among the speakers is moderate. While there is general agreement on the potential benefits of AI in healthcare, there are significant differences in opinions on how much AI will replace human doctors and how to address implementation challenges. These disagreements have important implications for the future of healthcare delivery, medical education, and health policy. They highlight the need for continued research, ethical considerations, and careful planning in the integration of AI into healthcare systems.

Partial Agreements

Both speakers agree on the need to address challenges in implementing AI in healthcare, but they focus on different aspects. Debbie Rogers emphasizes the importance of addressing the digital divide and ensuring equitable access, while Zaw Ali Khan focuses on regulatory and ethical considerations. They agree on the goal of responsible AI implementation but differ on the primary challenges to address.

Debbie Rogers

Zaw Ali Khan

Need to address the digital divide and ensure equitable access

Regulatory and ethical considerations in AI adoption


Takeaways

Key Takeaways

AI and technology have significant potential to augment and transform healthcare delivery, especially in underserved areas

AI may replace or significantly change the role of doctors in some areas, particularly primary care

There are important challenges to address in AI adoption, including the digital divide, quality control, and maintaining human touch

The future of healthcare likely involves interdisciplinary teams that include AI alongside human professionals

Patients may come to trust and prefer AI-powered health services for some types of care

Resolutions and Action Items

Continue testing and implementing AI and digital health solutions, especially in underserved areas

Work on developing regulatory standards for AI in healthcare with input from all stakeholders

Focus on training the next generation of healthcare professionals to work effectively with AI

Unresolved Issues

How to fully address the digital divide and ensure equitable access to AI-powered healthcare

How to balance AI adoption with maintaining human touch in healthcare

The extent to which AI will ultimately replace human doctors versus augment their capabilities

How to effectively combat health misinformation in the age of AI and social media

Suggested Compromises

Implement AI gradually, starting with administrative tasks and primary care, while maintaining option for human doctors

Develop AI solutions that work alongside human healthcare professionals rather than fully replacing them

Focus AI adoption on areas with greatest shortages of healthcare workers

Thought Provoking Comments

Today, if your doctors don’t need technology, technology would not need them in the future.

speaker

Rajendra Pratap Gupta

reason

This comment provocatively frames the relationship between doctors and technology as potentially adversarial rather than complementary, challenging the traditional view of technology as simply a tool for doctors.

impact

This set the tone for the discussion to explore how AI and technology might replace or fundamentally change the role of doctors, rather than just augment their existing work.

We believe this care solution will reduce the number of referrals that are transferred to upstream health facilities that are already crowded and lack adequate resources, and make the model for an integral part of primary healthcare.

speaker

Peter Preziosi

reason

This comment introduces a concrete example of how AI and technology could reshape healthcare delivery, particularly in resource-constrained settings.

impact

It shifted the discussion from theoretical possibilities to practical applications, prompting others to consider specific use cases and implementations of AI in healthcare.

At International Patients Union, we have a platform, the Patients Union, where we provide a platform for the patients to voice their opinions, their concerns, and share their views.

speaker

Mevish P. Vaishnav

reason

This comment brings the patient perspective into the discussion, highlighting the importance of considering end-users in the development of healthcare technology.

impact

It broadened the conversation to include patient empowerment and engagement, leading to discussion of how AI might directly serve patients rather than just assisting doctors.

For example, in our sexual reproductive health and rights platforms, we find that youth want to speak to AI and not to a person because they’ve been judged so much by people.

speaker

Debbie Rogers

reason

This comment provides a surprising and counterintuitive example of how AI might be preferable to human interaction in some healthcare contexts.

impact

It challenged assumptions about the necessity of human touch in all aspects of healthcare, leading to a more nuanced discussion of when and how AI might be more appropriate than human doctors.

LLMs at the moment are primarily trained on Western culture, medicine, languages, intonations, cultural context. And that is not going to be appropriate for rolling out in Rwanda, for example.

speaker

Debbie Rogers

reason

This comment highlights an important limitation of current AI technology in healthcare, particularly for global applications.

impact

It introduced considerations of equity and cultural appropriateness into the discussion, prompting reflection on how to ensure AI healthcare solutions are truly global and inclusive.

Overall Assessment

These key comments shaped the discussion by moving it from abstract possibilities to concrete applications and challenges of AI in healthcare. They broadened the conversation to include patient perspectives and global equity considerations, while also challenging assumptions about the necessity of human doctors in all healthcare contexts. The discussion evolved from whether AI would replace doctors to a more nuanced exploration of how, where, and for whom AI might be most beneficial in healthcare delivery.

Follow-up Questions

How can we ensure the accessibility of digital health tools in rural areas?

speaker

Audience member

explanation

This is important to address the digital divide and ensure equitable access to healthcare technology.

How can we address issues of misinformation and fake news in digital health, especially around fad diets and miracle cures?

speaker

Audience member (Melody Musoni)

explanation

This is crucial for maintaining trust in digital health solutions and protecting patients from harmful misinformation.

How can we develop AI models that are culturally appropriate for diverse communities, particularly those underrepresented in current training data?

speaker

Debbie Rogers

explanation

This is essential for ensuring AI-based healthcare solutions are effective and equitable across different cultures and contexts.

How can we define regulatory standards for AI in healthcare with the participation of all stakeholders?

speaker

Zaw Ali Khan

explanation

This is important to ensure safe and effective implementation of AI in healthcare while addressing concerns from various perspectives.

How can we reduce the cost of implementing AI-based healthcare solutions in low-resource settings?

speaker

Debbie Rogers

explanation

This is crucial for making AI-powered healthcare accessible in developing countries and rural areas.

How can we integrate AI into medical education to prepare future healthcare professionals?

speaker

Zaw Ali Khan

explanation

This is important to ensure future healthcare workers are equipped to work alongside AI technologies.

How can we balance the use of AI in healthcare with maintaining the human touch in patient care?

speaker

Multiple speakers (Peter Preziosi, Debbie Rogers)

explanation

This is crucial for ensuring that the implementation of AI doesn’t compromise the empathetic aspects of healthcare.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #260 The paradox of inclusion in Internet governance


Session at a Glance

Summary

This panel discussion focused on the paradox of inclusion in Internet governance, exploring the challenges of creating truly inclusive processes in international cybersecurity and digital policy forums. The speakers highlighted how efforts to increase participation, such as proliferating initiatives and multi-stakeholder forums, can paradoxically create barriers due to the high resource demands of engaging in numerous processes.

Key themes included the need for better coordination between national and international levels, the importance of interdisciplinary teams in government delegations, and the challenge of balancing political control with meaningful inclusion of diverse stakeholders. Speakers discussed examples like the UN Open-Ended Working Group on cybersecurity and the Pall Mall Process on cyber intrusion tools to illustrate these dynamics.

The discussion emphasized structural inequalities that persist despite inclusive processes, such as developing countries lacking resources to participate effectively in multiple forums. Participants noted the importance of national-level coordination mechanisms and capacity building to enable more diverse and substantive engagement internationally. The need to include marginalized communities and identities in digital governance was also raised.

Speakers proposed some best practices, including creating ownership through early stakeholder consultations, fostering interdisciplinary teams within governments, and calibrating political risk to allow for more distributed leadership of initiatives. Overall, the discussion highlighted the complex challenges of achieving meaningful inclusion in Internet governance while maintaining effective processes and outcomes.

Keypoints

Major discussion points:

– The paradox of inclusion in internet governance: efforts to be more inclusive can create barriers to meaningful participation due to the proliferation of forums and initiatives

– Challenges of coordinating between different government agencies and stakeholders at both national and international levels on cyber/internet governance issues

– The need for multidisciplinary teams and better knowledge transfer between different internet governance forums and processes

– Balancing political control with genuine inclusivity and openness to diverse perspectives

– Ensuring representation of minority and underrepresented groups in internet governance processes

Overall purpose:

The goal was to explore the “paradox of inclusion” in internet governance – how efforts to be more inclusive can paradoxically create new barriers to participation – and discuss potential solutions or best practices to address this challenge.

Tone:

The tone was collaborative and constructive throughout. Panelists and participants shared insights and experiences in a collegial manner, building on each other’s points. There was a sense of shared purpose in trying to tackle a complex challenge. The tone became more solution-oriented towards the end as participants reflected on key takeaways and potential next steps.

Speakers

– James Shires: Co-director of Virtual Roots, a UK-based NGO working on cybersecurity and Internet governance research, education, and public engagement

– Yasmine Idrissi Azzouzi: Cybersecurity program officer at the ITU (International Telecommunication Union)

– Louise Marie Hurel: Associate Fellow with Virtual Roots, works in the cyber program at RUSI (Royal United Services Institute)

– Corinne Casha: Representative from Malta’s Ministry of Foreign Affairs

Additional speakers:

– Audience member: Julia Eberl from the Austrian Foreign Ministry, working at the mission in Geneva

– Audience member: Akriti Bopanna from Global Partners Digital, previously worked in India’s foreign ministry for G20

– Audience member: Natasha Nagle from the University of Prince Edward Island

Full session report

Expanded Summary: The Paradox of Inclusion in Internet Governance

This panel discussion, featuring experts from various backgrounds in cybersecurity and internet governance, explored the complex challenges of creating truly inclusive processes in international cybersecurity and digital policy forums. The central theme was the “paradox of inclusion” in internet governance, a concept introduced by James Shires, co-director of Virtual Roots.

1. The Paradox of Inclusion

The discussion began with Shires explaining that efforts to increase participation in internet governance, such as proliferating initiatives and multi-stakeholder forums, can paradoxically create barriers due to the high resource demands of engaging in numerous processes. This proliferation of initiatives makes it difficult for stakeholders, especially those with limited resources, to participate meaningfully across all forums.

Louise Marie Hurel, Associate Fellow with Virtual Roots, expanded on this concept by highlighting how the specialisation of debates leads to fragmentation of discussions. She also raised the provocative point that inclusion efforts can be weaponised for political purposes, with the proliferation of initiatives sometimes serving as a political strategy to control the scope of debates and who participates in them.

2. Challenges of Coordination

A significant portion of the discussion focused on the challenges of coordinating between different government agencies and stakeholders at both national and international levels on cyber and internet governance issues. Yasmine Idrissi Azzouzi, Cybersecurity program officer at the ITU, emphasised the importance of national-level coordination for effective international participation. She highlighted the need for creating ownership at the national level across different expertises, including various ministries and critical infrastructure providers.

Corinne Casha, representing Malta’s Ministry of Foreign Affairs, echoed this sentiment, discussing the establishment of national cybersecurity committees for better coordination. She also pointed out the difficulty in maintaining consistent representation across multiple forums, underscoring the challenge of balancing specialisation with comprehensive engagement.

The lack of communication between different UN processes, particularly those based in Geneva and New York, was raised as a concern by an audience member from the Austrian Foreign Ministry. This highlighted the need for better coordination not just within nations, but also between international organisations and processes.

3. The UN Open-Ended Working Group (OEWG) on Cybersecurity

Louise Marie Hurel provided significant details about the UN OEWG on cybersecurity, highlighting it as an example of the proliferation of international forums. She discussed the challenges associated with this process, including the difficulty of meaningful participation for smaller states and non-state actors, and the potential for forum shopping by more powerful actors.

4. The Pall Mall Process and Multi-stakeholder Initiatives

Corinne Casha discussed the Pall Mall Process on cyber intrusion tools as an example of an initiative attempting to address inclusivity challenges. This process aims to develop guidelines for the responsible development, transfer, and use of cyber intrusion tools through multi-stakeholder consultations. It illustrates efforts to balance political control with diverse participation in addressing complex cyber issues.

5. Strategies for Improving Inclusion and Representation

The speakers proposed several strategies to address the challenges of inclusion:

a) Interdisciplinary Approaches: Yasmine Idrissi Azzouzi stressed the need for interdisciplinary teams to engage in various processes, combining technical, diplomatic, and policy expertise to address complex digital issues effectively.

b) Multi-stakeholder Consultations: Azzouzi and Casha both emphasised the importance of creating ownership through multi-stakeholder consultations, involving diverse stakeholders in the decision-making process.

c) Capacity Building: Casha mentioned funding initiatives to support participation from developing countries, addressing the resource imbalance that often hinders inclusive participation. Specific examples included the Women in Cyber Fellowship and the Global Conference on Cyber Capacity Building.

d) Fostering Productive Disagreement: Hurel highlighted the importance of fostering dialogue that allows for productive disagreement, suggesting that true inclusion requires openness to challenging perspectives.

6. The Role of Ministries of Foreign Affairs

Corinne Casha and other speakers discussed the crucial role of Ministries of Foreign Affairs in coordinating cyber issues at both national and international levels. They emphasized the need for these ministries to act as bridges between various domestic stakeholders and international forums, ensuring coherent national positions and effective representation in global discussions.

7. Persistent Challenges and Unresolved Issues

Despite these proposed strategies, the discussion highlighted several persistent challenges:

a) Structural Inequalities: Hurel pointed out that structural inequalities persist despite efforts at inclusion, particularly affecting developing countries and their ability to participate effectively in multiple forums.

b) Balancing Political Control and Inclusion: There was a recognition of the tension between maintaining political control and achieving genuine inclusivity, with Casha noting that relinquishing some control is necessary for true inclusion.

c) Representation of Minority Identities: An audience member, Natasha Nagle from the University of Prince Edward Island, raised the important question of how to ensure representation of minoritised identities in digital governance spaces.

d) Circumvention of National Legislation: Akriti Bopanna from India provided an example of how international forums can be used to circumvent national legislation, illustrating another aspect of the paradox of inclusion where global processes might undermine local democratic decisions.

8. Conclusion and Future Directions

The discussion concluded with Hurel summarizing three key paradoxes: meaningful leadership, meaningful coordination, and meaningful dialogue. These encapsulate the ongoing challenges in achieving true inclusion in internet governance.

Corinne Casha suggested a follow-up session or report to further explore the issues raised during the panel. The discussion also touched on the Global Partnership for Responsible Cyber Behavior as a potential framework for addressing some of the challenges discussed.

The speakers agreed that addressing the paradox of inclusion requires careful balancing of specialisation and comprehensive engagement, political control and diverse participation, and national coordination and international representation. The ongoing nature of these challenges underscores the need for continued dialogue and innovation in approaches to internet governance.

Session Transcript

James Shires: Yes. Hi, Louise. Testing. Can we hear and see you? Can you hear and see us? Yes, all good. Can you hear me okay? Very well. Let’s get started. So, hi, everybody here in the room, and welcome, everybody, online. We’re very happy to be hosting this panel on the paradox of inclusion in Internet governance. My name is James Shires. I’m co-director of Virtual Routes. Virtual Routes is a UK-based NGO that works in cybersecurity and Internet governance research, education, and public engagement. So, we have a fantastic lineup of speakers today. We have Yasmine Idrissi Azzouzi, to my right, in person. We have Louise Marie Hurel online. And we have Corinne Casha, who, unfortunately, is in a taxi coming from a very similarly named conference center that she was accidentally taken to and will arrive soon. These things happen. So, I’ll just say a little bit about the purpose of the panel overall, and then I’ll hand over to our speakers. I will start with Louise online. Then I’ll go to Yasmine. I’ll talk a little bit about my perspective on the paradox of inclusion. And hopefully, by then, Corinne will have sorted out her travel issues. We’ll then open the floor to questions and discussion for everyone, both in person and online. We’re very much looking forward to the discussion. And thank you all for being here early on a Thursday morning. So, we put together this panel because we felt that there was a real issue with internet governance and inclusion. And we call this the paradox of inclusion. The idea here is that we see a proliferation of efforts to bring in different actors in internet governance, whether these are multi-stakeholder forums, whether these are efforts to include developing countries and smaller states or states with fewer resources, and there’s lots of different efforts to do these, through different conferences, initiatives, meetings, and so on.
In fact, there’s so many of these different efforts that actually keeping up with them all, keeping track of them all, and participating meaningfully in them all, is itself a high resource burden. And that’s what we term the paradox of inclusion. Internet governance recognizes that it has to be inclusive. It has to bring in multiple stakeholders. But few actors are really able to track the full range of internet governance forums, from this one to those of the UN, such as the OEWG on cybersecurity, through to the Global Digital Compact, through to the Cybercrime Convention, through to multi-stakeholder initiatives such as the Paris Call, et cetera, et cetera. So this is what we want to talk about today, the starting point being a recognition that inclusion matters, that there are genuine and very well-developed efforts to make internet governance inclusive, but that sometimes these efforts, as we would say in English, for want of a better phrase, shoot themselves in the foot. They actually put up barriers to participation by requiring such a thin spread of attention and resources across the internet governance portfolio. That’s the idea behind the session. We’d love to hear your thoughts on this paradox of inclusion, but before we do so, I’ll turn to some short opening remarks from each of our speakers. Our first speaker is Louise Marie Hurel, who is an Associate Fellow with Virtual Routes and is also working at the Royal United Services Institute. Louise, I’ll leave you to do a much better introduction of your own work than I can, and the floor is yours.

Louise Marie Hurel: Thank you very much, James, and thank you all. I would love to be there with all of you, but sadly, and thankfully also, because since we’re talking about inclusion, I think the fact of just being able to connect remotely, the IGF has always been great in that sense. And I think James already kind of set a very interesting tone to our conversation here. I am Louise Marie Hurel. As James said, I work in the cyber program at RUSI. And we’ve been reflecting a lot on different elements related to that, but personally, I’ve been attending the IGF for 10 years now, which is kind of like baffling. And I think there’s no other better place to actually have this conversation because being involved in the IGF throughout different cycles of maturity, and also other spaces such as ICANN, but also being increasingly involved in the cybersecurity discussions, which is the bit that I’m going to talk a little bit more about, I think you see those different communities of practice emerging and specializing. So when we look, in particular, at the proliferation of initiatives, especially when it comes to cybersecurity, if you look from 2017, or 2015, right, to today, it’s quite impressive to see how many initiatives, especially on cybersecurity, have emerged. Back then you would have the Group of Governmental Experts at the UN as the one place, and it was a very kind of multilateral, I mean, it’s still a multilateral process, but you would have 30 governments or so discussing what is state responsibility and how international law applies to cyberspace. And back then also you would have the Global Forum on Cyber Expertise, which back then was also like just called the London Process, starting to mature and to become a bigger platform. And today these initiatives have consolidated quite a bit.
I mean, obviously the negotiations at the UN have been taking place for at least 20 years when it comes to state responsibility, but the traction that these dialogues have had is quite substantive. So just to frame this a bit, right now I think we see two movements. One is we can look at the proliferation of these initiatives firstly as the specialization of the debate, right? I remember I used to attend the IGF, like, again, like 10 years ago, and I would look, where is the cyber community here? And you would have one or two panels talking about this from, let’s say, a more cyber diplomacy initiative. And you would see government representatives, and I remember in Geneva, trying to talk about the GGE at the IGF. But nowadays there’s so many other spaces. There’s the Counter Ransomware Initiative, which talks about, well, as the name says, you know, ransomware. The Pall Mall Process, which looks at commercial cyber proliferation. The OEWG, which is looking at state responsibility. And also, I mean, to some degree, the interdependence between state and non-state responsibility in cyberspace. We have the GFCE, which is looking at capacity building, cyber capacity building, and obviously Yasmine will definitely, you know, touch upon capacity building from the ITU’s perspective. We have the Tech Accord, which is an initiative that was spearheaded by Microsoft, but that tries to create this community of practice and thinking within the private sector, in different parts of the private sector, and how they see norms for their own, let’s say, sector when it comes to cybersecurity. And the Paris Call, as James already mentioned, which is kind of a mix of different stakeholder groups. So one way in which we can see that discussion, as I said, is the proliferation as the specialization of the debate, where we think that, you know, we cannot have this ethereal, broad conversation. We need to get to these smaller bits and spaces.
But obviously the other side of the coin is looking at the proliferation of the debate as also being a political strategy, which it is in many ways. So if you think about the Ad Hoc Committee on Cybercrime, that is the result of a long friction at the geopolitical level, of Russia pushing forward in some ways, and not just Russia, but in that case the presentation of the resolution to have a legally binding instrument on cybercrime. And that really just contemplates the vision of many other countries that have not been involved in the Budapest Convention, or that don’t necessarily agree that they should just subscribe to something, that they should be part of the development of it, which is, you know, an increasingly and very valid point from their standpoint. So you have those movements, such as the Ad Hoc Committee, which has ended right now, that become part of that, let’s say, political strategy. Another example of proliferation being a political strategy is precisely to specialize the debate, because then you can control a bit more what the scope is, and who is involved in this conversation. So on the other hand, if we look at the Counter Ransomware Initiative, it started out as something that was very much State Department led, right? The US spearheading that, but then it has increased throughout the last couple of years. And that requires again, kind of, how do you create a platform for a particular dialogue, but ensure that you’re still open and flexible to bring others on board. And I’m sure James will talk more about the Pall Mall Process as something that’s quite interesting as well in terms of that proliferation as a political strategy. But I’d say, from an OEWG standpoint, and James, please flag to me if I’m speaking too much, I just wanted to give a little bit of a glimpse of the OEWG as part of this paradox of inclusion, right? I think it comes as this proposed solution.
So back in 2019, when you had the start of two simultaneous processes, the last GGE and the OEWG, you had this narrative that the OEWG, as an open-ended working group, as a UN mechanism for a particular type of dialogue, would be more inclusive, firstly because it would include all member states. So it would shift the conversation to all of the GA members, the General Assembly members. So from a composition standpoint, it seems that it would probably be more inclusive, and also that it would have some kind of participation from non-state actors. So the enabler is that we’re going from 30 to 193 countries. The challenge there is obviously that enabling effective participation of member states as part of this process is a whole different ball game, right? We’ve been working quite a lot at RUSI to facilitate, like, workshops on responsible cyber behavior and just working with other governments, like let’s say small island states, talking about ransomware and trying to really kind of democratize the access to some bits of the debate, or to go deeper into some elements of the OEWG agenda. There are structural elements that are just reproduced in these spaces, which is, normally you have one person, if it is a small UN mission, you have one diplomat, one person that’s there covering a myriad of themes, right? So even if you have a process that has gone to the 193 countries, is it actually effective participation? Because again, many countries won’t see state responsibility in cyberspace as the first topic on their national priority list. And they only have one person in the UN covering these topics. And on top of that, they really don’t prioritize it, because that’s also challenging. And if they want to bring someone from the capital, right, to participate, that’s the cost of meaningful inclusion as part of that expansion of those that can participate.
So you have an enabler from a process standpoint, but that does not address the structural challenges over there. Obviously, there are some solutions, such as the Women in Cyber Fellowship, which is an initiative funded by the US State Department, the UK government, Australia, and a couple of others, that seeks to bring women diplomats, or let’s say representatives of national cybersecurity agencies, to be the representatives at the OEWG. So again, you enable a process, but then how do you make sure that that process is actually inclusive at the end of the day? So it’s more of a walking-the-talk paradox that we’re thinking about over here. The second logic of inclusion within the OEWG, so we talked about the state one, is the non-state actor inclusion. Again, the process does enable non-state actors to participate, unlike the GGE, which, as I said, was 30 states that participated, it was just them. No opportunity to look at what they were discussing, or even a webcast at the UN, and the OEWG does all of those things, which is great. But the disabler, I’d say, or let’s say the paradox of inclusion for non-state actors, is obviously that it has become a weaponized discussion. So since the start of this latest OEWG, the 2021 to 2025 one, what we’ve seen right at the start, for at least a year of the process, or more than half of the first year of the process, was a stalemate between states that wanted to promote effective modalities for stakeholder participation, so that they would be able to give a speech over at the UN, or that they would be able to listen in through the UN webcast, or that they would be able to be accredited. And there were other states that said, no, we don’t need to have those stakeholders. But they also said, well, if we have to have these stakeholders, we need to have a veto power over who gets to be in the room. So that has led to a stalemate for most of the first bit.
And after that, to the effective vetoing of different organizations, including my organization, and also, let’s say, really important technical community experts, such as the Forum of Incident Response and Security Teams (FIRST), which could effectively provide inputs into some of the conversations, but which were also vetoed by some member states. So you see that there is a process enabling participation, but that there are political challenges when it comes to the meaningful inclusion of non-state actors in these spaces. And just to finalize, because I’m sure that I’m almost done with my time, the third logic of inclusion. So we talked states, non-state actors, and I think the third one is thinking about the context where this dialogue is being held being more inclusive. So this is a First Committee process, which means that usually it’s the highest level of the conversation on international peace and security when it comes to cyber. So obviously the chair of this process has a lot of responsibility to shape that inclusion meaningfully. So the enabler there, for thinking about this broader context of the dialogue, is the chair, for example, hosting online convenings. He organized the high-level round table on cyber capacity building, where organizations from different parts of the world could effectively share their experiences in implementing cyber capacity building. You see also in this process different proposals from developing countries gaining traction, such as Kenya suggesting and tabling a recommendation for a portal on threats, so that other states that might not have as much cyber threat intelligence, or that might have less access to information, can share it.
So you also have coalitions of different states coming together, developing cross-regional representations, which is not something specific to the OEWG, but it shows that the process is enabling those types of interactions in spite of the geopolitical tension between two poles that you see effectively happening in the room. You see, for example, El Salvador working with Estonia, working with Switzerland, to think about the applicability of international law in cyberspace and tabling, let’s say, documents for a further conversation. But the outcome, or let’s say the background tension, in this third bit, the space of this dialogue, is really the question of what comes next. So I don’t know how many of you are familiar, but the OEWG is coming to an end in July 2025. And there is another proposal for a Programme of Action, so let’s say another way in which we structure the dialogue at the UN within a regular institutional dialogue for cyber. And there is this dichotomy between these two proposals. One is obviously the OEWG, which was the result of a Russian-tabled proposal, and which has effectively been, you know, successful in the past five years in actually pushing the conversation forward, at least maintaining that dialogue. But there is obviously a need for a more dynamic dialogue that can go deeper into different topics, and that, you know, can more effectively include stakeholders. And that’s the Programme of Action. Not that one is better than the other, but there are different proposals for how that regular institutional dialogue should happen, and the member states will need to decide. So within this context of these three paradoxes, in many ways, how do you take that going forward? And I think there is a very politicized tension between these two proposals. And I think right now the focus is thinking about the design of the process, right?
And I don’t think that necessarily we’re always tackling those underlying inequalities. But in any case, I just wanted to stop there. I think there are other bits in terms of the relationship between the IGF and the OEWG, or the coexistence of different UN processes, especially on cyber, but I’m very happy to talk about that afterwards. I just wanted to maybe set the scene from an OEWG standpoint of what these different logics of inclusion are and what the challenges to these three logics of inclusion are.

James Shires: Louise, thank you so much. That is an incredibly rich introduction and overview of the paradox of inclusion. And I really appreciate you breaking it down into this question of inclusion of states, of non-state actors, and also these other modalities of inclusion as well. Given that you covered so much ground there, I do just want to give the people in the room and online the chance to respond or ask a few questions while it’s fresh in their mind. And then we will turn to our next panelist. So if there is anyone who would like to come in online, please do put your question in the chat. If you’d like to come in in person, obviously just raise your hand and we will bring the mic to you. So while you’re maybe thinking of that, and if anyone is thinking of questions, I would just highlight one recent publication from Virtual Routes through our site, Binding Hook. Now, Binding Hook is a way to disseminate academic research in an accessible way to a wide audience. And there’s a new piece from last week on Pacific Island cybersecurity, how to co-design cybersecurity governance for and with Pacific Island states. So if you’re interested in that part of Louise’s remarks, please do check out that piece that has just come out on Binding Hook. If there are no questions in the chat, and everyone here seems very content, I will move on to our next panelist, who is Yasmine Idrissi Azzouzi. So I’m going to turn it over to Yasmine, who’s going to talk a little bit more about cybersecurity and how it can be used in the future.
So Yasmin, again, please do introduce yourself a little bit more, and the floor is yours.

Yasmine Idrissi Azzouzi: Thank you very much, James. Thank you, Louise, for that incredible overview. Good morning, everyone. So my name is Yasmine. I’m a cybersecurity program officer at the ITU. The ITU, as many of you know, is the UN specialized agency for information and communication technologies. We are at a pivotal moment for Internet governance: next year we have the WSIS Plus 20 review, we’re currently navigating the Global Digital Compact, and there are topic-specific processes such as, in cybersecurity, the Open-Ended Working Group and the Ad Hoc Committee on Cybercrime. In general, as was mentioned very nicely, there is a proliferation of fora, which obviously brings both opportunities and challenges. Many of these fora are addressing overlapping Internet governance issues, and this is being compounded, of course, by duplication and silos at times. I can give a specific example from the ITU. Resolutions that member states have voted on at ITU statutory meetings can sit apart from the positions the same states take at the Open-Ended Working Group on cybersecurity. And this is partly due to member state representation at the ITU being mainly ministries of ICT, of communication, of digitalization, while at the Open-Ended Working Group it is first committee diplomats at times, but also representatives of national cyber agencies. This shows basically a lack of coordination at national level: the agency speaking in one forum is often not represented in the other, which prevents countries from carrying coherent positions across both.
It’s about financial resources; at times it’s also the technical expertise and the ability to navigate the interdisciplinary nature of digital policymaking. So this, I think, is the core of why this paradox exists, because the silos that are present at national level are being reflected internationally. In fact, digital issues touch upon multiple disciplines, spanning from national security to economic development to human rights to sociological change, and this very interdisciplinarity, while enriching, also contributes to the fragmentation. So I’ll take cyber security as an example. As I mentioned, the Open-Ended Working Group addresses cyber security within the First Committee of the General Assembly, which focuses on peace and security, and the Ad Hoc Committee on Cybercrime operates in the Third Committee, yet cyber security can also have implications in the Second Committee on economic development, in which security plays a critical role. And in parallel to that, at the ITU and in WSIS processes as well, we are emphasizing technical cyber capacity building for the purpose of sustainable development and economic and social development, and this is far removed from the peace and security aspects being discussed elsewhere. The Global Digital Compact sees cyber security more as an enabler of securing the digital space in general, and it focuses very much on harms and privacy protection and calls for international cooperation in a more high-level way. This compartmentalization makes it challenging for stakeholders, particularly from low-resource nations, to align their priorities and also to maintain continuity across these different sectors, which causes, again, silos and duplication.
Paradoxically, given also the topic, the solution actually may lie in reducing this fragmentation at national level by improving, for example, inter-agency cooperation, and focusing on fostering interdisciplinary teams that are equipped to engage meaningfully in these processes. I think that this approach can offer some key advantages. So first off, countries need to establish multidisciplinary teams that combine expertise from technical, diplomatic, and also policy-making communities. For example, representation of national cybersecurity agencies or national computer incident response teams at the Open-Ended Working Group often results in practical, context-specific experiences that are a bit different compared to, let’s say, traditional career diplomats. However, this, of course, requires a pipeline of trained multidisciplinary professionals that have the expertise in technology, but also in diplomacy, so being able to operate in that nexus, in a way. Second, capacity-building with inclusivity in mind must be key. So initiatives must prioritize inclusive capacity-building that can bridge technical and policy silos. For example, at the ITU, we have programs that bring together those two communities at national level so that they are accustomed, let’s say, to working in an inter-agency manner. Programs should focus on enabling countries to engage in Internet governance fora in a holistic manner, so being equipped, again, from both the technical and the policy side.
Third, I don’t think it is realistic or useful to think of consolidating all Internet governance fora into a single forum; that is a very utopic goal. But what we can focus on is actually enhancing coordination and avoiding duplication by aligning mandates and creating, let’s say, better linkages between discussions, for example on cyber security capacity building, so that these agendas can be implemented in a more holistic way. I would like to conclude by saying that I think we need to be very, very careful about how we create these interdisciplinary teams, and this can also include having coordination mechanisms in place that regularly consult across disciplines, so that there is consistency when it comes to international negotiations and international fora. So, just to conclude, as we’re looking at the future of Internet governance, we need to be very careful about whether there is a difference between…

James Shires: Thank you, Yasmine. And just to repeat my call from earlier, if anyone does have any questions for Yasmine or Louise at this stage, then please do bring them into the discussion. We would love to hear from you whether you are in person or online. I am extremely pleased to have our third speaker here, Corinne. Corinne, please do come up to the table. You snuck in behind me and I didn’t even see you, so that is clearly operating in stealth mode. Now, hopefully, Corinne will follow on with her perspective on the paradox of inclusion and maybe also the paradox of travel in Riyadh as well. Corinne, it is a pleasure to have you here. I will hand over the mic to you.

Corinne Casha: Yes, hi. Thank you. It is a pleasure to be with you here today. I don’t have much to add, actually, to what Yasmine already said, because I think she really encompassed the discussion very well, and I noticed that she really hit the nail right on the head about the paradox of inclusion. So I just really wanted to add on to what Yasmine said. I think it’s important to avoid the fragmentation of all these processes, also through the fact that you need both the technical and the political level to work very closely together. And the issue of resources was one that really struck me. I know this is one of the main issues, that there is a lack of resources. And from our perspective as a Ministry of Foreign Affairs, we are really working hard to provide resources where necessary. So where there is, for example, a lack of representation, we fund fellows. We fund also diplomats from, let’s say, least developed countries, et cetera, so that they are able to be represented at the highest levels of decision making. One other aspect that struck me was the need to harmonize the processes and also to enhance coordination. I think this is really key. And obviously, the Global Digital Compact only came into, let’s say, adoption last September at the UN General Assembly. So we will see how it will fit in with the other processes. But that’s all from my end. I will add some closing remarks as well. But I really wanted to hear what participants think about this and what their views are on how to promote more inclusion and also to avoid the fragmentation of the processes. Because this will be really key for us as governments to factor in what needs to be done for this to be, let’s say, a more harmonized process.

James Shires: Thank you. Thank you, Corrine. And yes, we look forward to your concluding remarks as well. So before we turn to an open discussion, and at that point, I will be asking everyone what your perspective is on the paradox of inclusion. So please do get some interventions ready. What I'm going to do is reflect a little bit on one example of the paradox of inclusion that I've been working on very closely. And what I'll do is I'll use Louise's framework, because I think it's a very helpful one, of inclusion at a state level, inclusion at a non-state or multi-stakeholder level, and inclusion in terms of modalities as well, to illustrate how this paradox emerges through a particular process. And the one I'll talk about isn't the OEWG or the Global Digital Compact, these sort of very high-profile major ones. It's a little bit more niche. It's the Pall Mall process. Now, can I get a quick show of hands in the room if you've heard of or know about the Pall Mall process? If anyone does, then please put your hand up. Glad the panelists do as well. If you don't, then I think I will give a quick overview of what it is. So recently, there's been a recognition among many states that many offensive cyber tools, otherwise known as cyber intrusion capabilities, have both positive and negative uses. They have positive uses because they are necessary for cybersecurity. They help organizations test their defenses and improve their defenses through things like penetration testing within cybersecurity, so asking someone external to try and hack into your networks, so you know where the holes are and you can fix them. They have negative uses when they are used by cybercriminals, for ransomware or other theft, and also when they are misused by state actors. So this is where companies would offer cyber intrusion capabilities commercially, often known as spyware, and then states would buy that and then use it. This is a, in many cases, perfectly legitimate activity, right? 
States need such surveillance capabilities for reasons of national security, but often, in many cases, they have overstepped the line, right? They have used capabilities in ways that are not proportionate, that lead to significant human rights violations as well. This is a recognized issue in internet governance, and there have been many efforts to try and address this at both a national and a multi-stakeholder level. And so, for example, you have the US, which is a major producer of these capabilities, right? It's a major center for spyware development, research and sale. The US then imposes sanctions on particular companies that it thinks have violated the norms or boundaries that it wants to impose. So there are very high profile cases of US sanctions and indictments and other measures, such as restrictions on government procurement, so US government agencies can't buy from certain companies, in order to shape this market. There are also efforts at the multi-stakeholder level. Some of you may have heard of the Cybersecurity Tech Accord, and this is a group of tech companies, industry companies, who came together to develop their own voice on internet governance. They produced principles for what they called curbing the cyber mercenary market, right? So they put a different frame on it. But again, they're trying to intervene in this sphere. Now, again, that wasn't especially effective. And so what happened last year is a new initiative was launched called the Pall Mall process. This aimed at bringing in industry, including the spyware industry itself, including the companies worried about that, such as the big tech companies, including the states buying and using spyware and commercial cyber intrusion capabilities, and those being affected by it. So in short, it was a big tent initiative, right? It wanted to get everyone together to find a solution to this complex problem. 
And let's unpack that initiative, which has now been running for just under a year, in the three levels that Louise mentioned. Firstly, at a state level, was it inclusive? Well, in one way, yes, right? Any state that wanted to sign up to the Pall Mall declaration published in February could do so, right? They could attend the conference, they can engage in discussions, and put their name at the bottom. And now the state interaction on these discussions is getting more detailed. But it was still not completely equal, right? The sponsors of this initiative, the funders and organizers, are the British and French governments, right? They are the ones running it. They are aiming to include as many other states as possible. But ultimately, the conferences are in Europe, they're in the UK and France. Most of the attendees and organizers are from these states. So there is a tension there between having a very stereotypical sort of European perspective on the issues and making it as wide an invitation as possible. So that's at the state level. At the multi-stakeholder level, yes, there are efforts to include multi-stakeholders. So Virtual Routes is a multi-stakeholder participant in the Pall Mall process, and multi-stakeholders have the opportunity to engage at these conferences. They can submit responses to a consultation that closed a couple of months ago on these issues, and they will continue to be able to feed into the process. But again, this is clearly a two-tier system. You have in each event a day reserved for multi-stakeholder discussion and then a day reserved for state-only discussion. So while the multi-stakeholders are able to input, they are not able to observe or have any understanding of what is going on in the state negotiations. So in a way, it's a bit like the OEWG on one side and then the GGE on the other side, right, where it's just the states in a closed forum. So multi-stakeholder, yes, but also two-tier. 
And then finally, in terms of modalities, and this is where it is very interesting, the open consultation that ran for three months over the summer this year was on the declaration on the way forward for the Pall Mall process. And what was interesting was that a lot of industry coalitions and companies contributed to this consultation. Most of them were from the cyber security industry, right? They were contributing from the defensive side. But the aim was also to get the companies who build and make and sell the spyware to contribute as well, right? To make it a genuinely multi-stakeholder process. And it didn't quite succeed in that, right? They were looking for more contributions from all parts of the industry, as well as from civil society. And so, again, when you go into each of these processes, and that's just one example, you can unpack the layers in which efforts at inclusion are both very laudable, right? They do, in fact, increase inclusion. But on the other hand, they only go so far. And indeed, the barrier to entry to these processes, the amount of knowledge you have to have to enter, is far beyond maybe that of an embassy diplomat or a non-specialist, right? You need to really be engaged in these processes to contribute effectively. So with that example, I will now open the floor. And how we'd like to run this second half of the session is to ask the participants, in person or online, some very simple questions. If you're online, then please do put your question in the chat; I will read it out, and we can ask the panelists to reflect on your remarks. Firstly, does the paradox of inclusion ring true to you? Is it something that you recognize in your own work? Or is it something that you think, nah, what are you talking about, why are we even here, right? So that's question one. Question two is, where do you see it most relevant to your work? And how might you try and overcome it, right? 
So maybe first, is it something that you recognize? And then where and how do you see it happening? I'll pause. In the room, please do put your hand up if you'd like to contribute. Online, please do come in on the chat.

Audience: Yes. Okay, I can hear myself. That's good. Thank you so much. I'm Julia Abel from the Austrian Foreign Ministry, currently working at the mission in Geneva, and I've been involved in some of the processes that you've talked about. So it was very enlightening to see kind of a full overview, because I can very much resonate with the questions that you've raised: as diplomats, we tend to work on a couple of these processes, but not all of them. So I was not aware of the Pall Mall declaration, for example, because I've been on a mission abroad, and not in the capital working on the processes holistically, for example. So that was very interesting. Thank you very much for bringing that up as a question. I wanted to make three points on everything that has been discussed. One was, from my point of view, we need different expertise in all these processes. And what you've brought up, for example, cybercrime, cybersecurity: when we look at it from a national point of view, on cybercrime we had a lot of our criminal law experts involved nationally, looking at actual criminal law provisions and how they would be applied for us. When we talk about cybersecurity, we get a lot of the defense side in, we get international law questions in. So this requires quite different expertise and different people, also at a national level. But I do hear the need for coordinating better nationally, so that is something for sure. And we also always try to bring national experts into our delegations when we have these discussions on different processes and negotiations. But Austria also funds other, like, developing country diplomats to come to certain processes. We did it, for example, in the cybercrime process: we wanted to get the experts from capitals, from developing countries, to be part of the process, because we don't only want to talk among diplomats, we want to talk with the people that actually have to implement this at a national level. 
So that is something we did, and I know the Council of Europe is also very active on capacity building when it comes to that. So there are initiatives and there is a wish to bring in the people that actually work on these issues, at least when we talk about government participation. One thing that resonated with me is what Yasmin said about what the ITU does and then what the different committees in New York do, and having worked in both environments, I do see that there is a bit of a lack of communication between Geneva and New York on processes like that, and then, of course, you have the capitals in it as well. And that's a bit of my question also to the panellists: have you seen any best practices, or do you have any ideas of how to help, both from a member's perspective, from a stakeholder perspective, and also from a UN agency perspective? Like, how can we strengthen the communication between these processes to avoid duplication, which is a strain for all of us, really?

James Shires: Thank you so much for your intervention, a combination of both very insightful points and also a good question to push us to identify best practices as well. Yasmin, given we had a little bit on the ITU, maybe I could turn to you first, and then, Louise, online, to give a little response. Over to you.

Yasmine Idrissi Azzouzi: Thank you very much, and thank you for the very insightful question. Indeed, I think it's a decades-long problem, the lack of communication between Geneva and New York. But when it comes to good practices, one thing that pops into mind is the kind of work that we do at the ITU at national level in particular. So if I give you an example: when we are supporting developing countries in particular, when it comes to their development or establishment of something like national cybersecurity strategies, part of that agreed-upon methodology is to actually have consultation workshops beforehand that are inclusive of many different stakeholders. And at times we found ourselves in contexts where those same actors had never actually been in the same room together. But this is sort of a prerequisite that we put in terms of: if you need your strategy to be developed, these are the people that you need to have in the room. And so that stems from, of course, an inclusion need, but also a very practical need. I mean, if a strategy, since it's a living document, something that needs to be implemented, is developed by a small group of stakeholders and then is actually asked to be implemented by wider stakeholders, it makes it difficult. So creation of ownership, I think, at national level across different expertises, so ministries and national agencies, but also critical infrastructure providers. We've had in the same room central banks, energy representatives, but also ministries ranging from the MFA all the way to, of course, defence, interior and others, because, of course, it's extremely interdisciplinary and a national strategy also needs to have all of those elements taken into consideration. So this is, say, a model that we have seen being the start of better coordination at inter-agency level, where you didn't have it before. We've often felt a bit of resistance at times from, let's say, the lead agency at national level due to, I guess, wanting to keep ownership over things. 
But then gradually they saw that shared ownership has actually yielded better results in terms of coordination, but also in terms of effective implementation of things. And having different perspectives on the same topic has actually taught a lot to different stakeholders. So I think this could be one good practice. So having, again, it might seem like a leitmotif here, but having multi-stakeholder actors at the table prior to, let's say, major negotiations or the major establishment of strategies or policies is part of the solution. It might sound obvious or simple or easy, but I think this is really where it starts.

James Shires: Thank you. Thank you, Yasmin. And Louise, over to you.

Hurel Louise Marie: Yes, thank you so much for that question. And I mean, I 100% agree with what Yasmin just said. But just to add on top of that, there are two points that came to mind when you were asking your question. One of them is, I wonder whether we need to create a community that goes to these places, and I know that that is like spreading oneself thin. And I think that's exactly where we started this conversation: do we have to follow so many processes that we can't actually do that? So I mean, from both a government and a non-governmental perspective, take, for example, attending the OEWG and attending the IGF: that community being able to leave that privileged space. And also, I mean, you could argue that there are some privileges to being able to actually attend the IGF. But being able to have those communities going to other places means that, whenever there's not an overlap, there's a new set of the community there. But you ensure that there is that knowledge transfer, or at least that the experience from that particular room is going to the other place. So I think the question here is, how do we take these New York dynamics or New York-centric dialogues and how do we translate it, transpose it, and, let's say, provide the space for those same dialogues, even if not in the same format, to happen in other places? And I think there are some answers, or let's say best practices, that I'd like to highlight. The first one is, well, just a couple of days ago, and starting from that same productive discomfort, we, RUSI and Virtual Routes, organized the cyber policy dialogues over here at the IGF, which is a networking session, right? And the purpose is really to identify people that are doing similar research or that are engaging in those spaces or that are interested in those spaces, right? 
So creating the space for us to have those kinds of New York-centric conversations in another geographic location is absolutely fundamental. And that is one way in which we can do that from, let's say, an IGF and OEWG kind of cross-pollination, but that could obviously apply to many, many other, let's say, processes. Another possibility or example is that over at RUSI, we have launched the Global Partnership for Responsible Cyber Behavior, which is a platform for researchers from all different regions to come together and reflect on what responsible cyber behavior means from their, let's say, perspective, from their geographic location. And we organized a discussion over in Singapore during Singapore Cyber Week with researchers and government representatives, not just from the Southeast Asian countries, but other regions as well that were attending SICW, to discuss norms of responsible state behavior and the practical behaviors that they see from their regional perspective. So we got, like, small island countries saying that, for example, climate-related concerns and critical infrastructure protection are much more relevant, or that responsible state behavior is actually ensuring that the climate discussion is connected to the critical infrastructure discussion at the First Committee, right? Which is a very different interpretation, you would argue, but once you go to the regional level and once you do that cross-pollination, that can be quite useful. 
So, I mean, that is another example of a best practice of how we take those very specific bubbles and, let's say, expand those dialogues, and how we as non-governmental actors can do that, but also as governments. I think one other example that I would give here, from my other, let's say, side, is that I'm part of the National Cybersecurity Committee in Brazil, which has been established as an outcome of the National Cybersecurity Policy. And that committee is mostly government representatives, different parts of the government, of the public administration, but you do have three civil society representatives, three industry representatives and three representatives from the technical community. And one of the things we've been discussing is how to make sure that the Ministry of Foreign Affairs can better coordinate across, do more of the interagency coordination, to then take those, let's say, inputs to the international fora, right? So, for some countries, you don't need to have a formal mechanism. And for Brazil, it hasn't been a formal mechanism, but just having the conversation about how to do that better and how to calibrate, because the ministry has developed its own cyber division, right? How do you foster or facilitate that liaison role that the MFA crucially plays in collating those different views, right? Even if you can't have a more diverse delegation going with you to, let's say, New York or Geneva. So, my question, perhaps, back to the diplomats in the room is: are there best practices for the MFA to be better equipped, or best practices to think about how to facilitate that interagency coordination? Or, even if you have a shortage of resources, how can you feed back into those places, or feed in, in a better setting? Do we need to have MFAs better equipped for this scenario where there's really a spread of different cyber-related processes? Anyways, I'll stop there. 
Because as you've seen, I speak way too much. But I'd like to hear from you all.

James Shires: Thank you very much, Louise. And just to add a short footnote to what Louise and Yasmin have said. I think Louise started by saying there are two reasons, right, for this proliferation of initiatives. One is specialization. And the other one is politicization, right? And Yasmin pointed out that one of the main challenges is giving participants, whether multi-stakeholder or states, a sense of ownership in each initiative, right? So you want to engage early, engage transparently, and set out a clear roadmap for how things will go ahead and when and how people should intervene. Now, the danger there, right, is if you engage very transparently and openly early on, with people who do not share the same objectives as the initiative, who maybe even would like to sabotage or delay it or see it not exist, right, then that gives them an opportunity to do so very easily. Because you say, okay, we will not move ahead until we have got full ownership from everyone in the room. And so someone says, oh, I don't want this to happen, I'm not going to agree, and it doesn't move ahead. So the best practice that I think of here is: be as open as possible, so do engage really transparently early on, but with clear deadlines and with clear suggestions for who will take forward action after those deadlines. So you have things like the Pall Mall consultation period. That is good because it invites broad interventions, it has a deadline, and it's clear who will go on afterwards. So it's very hard to say, you know, there was no opportunity to be involved, from maybe those that don't want to see it go ahead or want to push for a delaying tactic. But still, the real nub of the issue is, how do you identify those actors that will take it forward? Is it going to be the same people? Ideally not. So in the Pall Mall process, in the absence of anyone else, they close the consultation and it's the UK and France who then take it forward. 
So you're back to the original problem of inclusion. Ideally, you would have a different set of actors, but who will be nominated and ready and funded with the resources to take it forward? So just to highlight that the politicisation happens in those processes as well. We do have a question online. So I wonder if we could enable unmuting, and Akriti, please do ask your question.

Audience: Hi, thank you so much. I guess it's more of a comment on Louise's point than a question. And I'd love to be able to unmute video if that's possible, by the way; it can't be. I think just to the point about politicisation and what Louise was saying about the MFA. So I'm Akriti, I work at Global Partners Digital, and before that I was working in India, in the foreign ministry, during the G20. And something that we noticed, especially more so when we were organising a convention on cybercrime and AI at NFT, and this was part of the G20 presidency, so it was under the aegis of technically the foreign ministry. But because there are so many departments, like for us the internal security is done by a ministry called the Ministry of Home Affairs, and then we have a Ministry of Information and Technology, and then we have the Ministry of Foreign Affairs. And I was coordinating technically on the side of the MFA, but coming from tech policy I was kind of more aware than, say, most foreign policy people were about some of those discussions. And I think what I noticed was that a lot of the times, the positions that came up were something that at least my MFA was just checking to make sure that it didn't go against, sorry, what we had said at the UN. It wasn't so much that they were making the policy as that they were just checking that it wasn't contradicting our international position on something else. So I just wondered, even if we did have diplomatic practices from the MFA, a lot of constructive input, and that's our challenge as civil society, is to make sure that we have connections, or sort of a community or interactions, with the internal political machinery, which is different ministries. Because a lot of times you can come in at the very last stage, but we're not really involved at the point where the policy is being discussed, so much as just where it's being vetted, and to me that's a very clear delineation between how you participate. 
So there's, of course, the things that the MFA can do, but I just wonder how much of the onus on how to involve civil society will then fall on, say, the MFA, versus permeating that culture within the internal politics, where you invite opinion from civil society and different departments. And I guess there is going to be one ministry that needs to lead that, and if it's internet governance, which I guess a lot of times happens in international discussions, whether that's an MFA prerogative or whether it's someone else's. But I think that's a huge challenge for us as civil society from a national point of view: how is our national engagement so strong that when we say something internationally, it really comes from kind of the local perspective, or that we're heard at the very first layer that we can be heard at? Thank you.

James Shires: Akriti, thank you so much. And yeah, it again just highlights the importance of coordination at both layers, right? You cannot have an inclusive and effective international layer without first working hard and solving problems at the national layer. We do have a representative from a Ministry of Foreign Affairs on our panel. So I was wondering, Corinne, if you would maybe say a little bit about your perspective on what the role of the Ministry of Foreign Affairs is in these issues: sort of, how technical should it be? How can it rely on other technical communities and draw on those different parts of government?

Corrine Casha: First of all, I wanted to make a point on the Pall Mall process. We received an invitation from the French and the UK governments to participate. So we received a formal invitation. And I have to say that we thought about it and we thought that it was a very important process, particularly on the point of also promoting inclusivity and on getting industry and other different factions on board. So we did participate. This was the first time for us. And I was aware also of the different factions that participated. We had one representative from the industry and one representative from a line ministry that participated. And I was happy to see that they were included in the consultation process. And I think for us, this was something that we would like to encourage other states to sign up to as well, because it's very important, not only in terms of, as I said, including the other, let's say, factions that are not always included in the decision-making, but also as a way of promoting, let's say, coordination between different states. So that was a point on the Pall Mall process. And on the points raised, I very much share the same thoughts as the Austrian colleague from the Ministry of Foreign Affairs. There are a lot of different processes, and with the different processes, you have to see which representatives are going to attend which process. Sometimes it's very difficult also from our side. I think particularly difficult was the cybercrime convention, because we had an issue of not having our delegates, our criminal lawyers, participate directly in New York; it was very difficult to get them to participate in negotiating sessions. So it was very challenging for us, because we had to rely on our delegates in New York and to coordinate back with the line ministry. 
So that was very challenging, because one other thing is, with the proliferation of different processes, I mean, even the line ministries themselves are having to keep pace with all these different processes taking place, and sometimes, you know, you don't have the necessary specialists. Especially with cybercrime, for example, there are certain technicalities where we don't have the necessary expertise to deal with them. So it's very hard to keep pace with all these different processes and to make sure that you have all these specialists on board, and I would say that the cybercrime convention for us was the most difficult, because of the fact that we didn't have these specialists who would be able to go back and forth from New York to Malta and negotiate. And I was also very struck by what Louise said about the transfer of knowledge from New York to other centers of discussion. I mean, the fact that I'm participating for the first time in the IGF is also sort of a decision based on that: the fact that I'm coming from the foreign ministry, but at the same time I'm participating in a forum where it's not just the foreign ministry but also other technical fellows, civil society, et cetera, participating. I think it's important to share knowledge in that respect. And there was another point, which I believe Louise raised, about the need to have this harmonization process going also with respect to the Cyber Security Committee, which she mentioned regarding Brazil. We also have a Cyber Security Committee in Malta. I'm a representative on that committee. And again, it brings together all the players, all the line ministries, all the representatives of the industry. 
But the foreign ministry really has a coordinating role in that, to make sure that what the representatives or the delegates say, as the online participant also mentioned, doesn't run counter to what we say in New York or in other areas. But I think the fact that this Cyber Security Committee was established was not only timely, but also very important. I think it has helped a lot to, first of all, bring to the table items or, let's say, issues that maybe not all participants or delegates are aware of. I'm thinking in particular about the issue of the application of international law in cyberspace. For example, that was an area which was being discussed at the EU level, but not every delegate, let's say from the police force to the critical infrastructure department, was aware of the discussions that were taking place. I mean, they were aware of them in a general sense, but not so much in detail. And I think it was very important that this was raised at the committee. And so, I mean, the committee has a very important role to play, because it brings together the different factions, but it also enables certain issues to be exchanged and more information to be shared. And for us, it was important. I think without the committee, certain items or issues would maybe fall through the cracks, and then line ministries or other entities would come to know about them much later than they would if there were not the committee. So I think it's a very important framework as well. For us it's a formal establishment; it's under the office of the prime minister, so there's also a prime ministerial lead over it, which gives it a certain influence and a certain weight to take decisions. 
But I believe that this is also one aspect where harmonization comes into play and where we avoid the fragmentation of cyber issues, because at the end of the day there are so many processes, so many different ministries tackling different aspects of cyber. The committee brings them together, and for us nationally it's helped a lot. Thank you.

James Shires: Thank you very much, Corinne, and that's a fantastic insight into how these national structures of coordination work as well. Akriti, I can see you've got your hand up to respond again, and then I will open the floor.

Audience: Yeah, thanks. Just to point out, because she mentioned the issue of the cybercrime convention: from our point of view, in India we had this law which was struck down by the Supreme Court, our highest court in the land, because the language was extremely vague in terms of what it marked as offensive, and the overreach was such that it was struck down. And then eventually India tried to bring the exact, like, literal, verbatim same prohibition to the cybercrime convention, trying to circumvent our national jurisdiction, to, you know, then have it become international law, and then you have to ratify it, and then, you know, get it back into our national legislation. And it was, I mean, honestly a little bit shocking to us as civil society that that route was being used to, you know, legitimize the chilling effect on freedom of speech again. But another point was that, because that was happening at the cybercrime convention, and at least in India the capacity to follow international conversations on internet governance and such is much more limited than national ones. 
So the kind of traction that we got on that was so little like if you weren’t very specifically tracking the cybercrime convention which is kind of literally maybe one organization if any in India then it didn’t get as much traction as when of course the national debate was happening and I think that was quite for us just kind of alarming to see that they were moving to the international forum from the national forum and kind of the like because the harmonization works that the way it does they thought that they could try to get away with it and eventually didn’t pass of course so it’s not a reality today but it was quite alarming to see that that kind of those kind of actions also happen and the lesser we see harmonization and even just civil society kind of input or attention on the internet governance spaces it can really come back from what people consider an elitist internet governance level to really our national legislation so just to respond to that thank you. Thank you and yeah that example of

James Shires: Thank you, and that example of states trying to circumvent civil society or popular resistance to certain measures or legislation by going through the international level is really fascinating. Is there anyone else in the room who would like to come in with their perspective on the paradox of inclusion? I see no hands. Would anyone online like to come in? Please do raise your hand or put something in the chat. If not, then I will offer the floor to Yasmin and Louise for their interventions. But there is someone already, so Sasha, please do come in. And can we give Sasha some video as well, if you'd like to have video? If you don't want it, then leave it off.

Audience: Hi, good day. Can you all hear me? Yes. All right. I can't get the video, so that's fine. My name is Natasha Nagle, and I'm here from the University of Prince Edward Island. My perspective is the digital inclusion perspective, and my inquiry concerns the way in which we fragment the internet, generally speaking, and the way in which we identify the subtext within the presentation of information. When it comes to internet governance, how do we consider the identities that are being put forward? When you look at that lateral transfer from physical inclusion to digital inclusion, what structures are in place to ensure that minority identities are presented in such a fashion that they are represented on the world stage? Just general comments on that particular space, because when it comes to our intersectionality and internet identity, it's important that we don't lose sight of those minority identities within that digital space. Thank you.

James Shires: Thank you very much, Sasha. In which case, if there are no other comments in person, I will offer the floor to Yasmin and Louise to give some concluding remarks, but also specifically to address that question of ensuring the digital inclusion of minoritized identities and communities. Louise, would you like to go first?

Hurel Louise Marie: Sure, happy to do so, and thank you for that question, Sasha. Apologies if my voice is a bit robotic at the moment; I'm just recovering from a cold. From where I'm sitting, and from the example that I gave, I think there is a reflection to be made as to how we best include or recognize minorities in the context of these very high-level processes, right? I gave a very brief example of how some member states have been trying to do that, and I know that doesn't respond very specifically to the kind of physical, offline-online representation, but from that process specifically, the OEWG, you do have member states facilitating and supporting the Women in Cyber Fellowship. I think that creates a precedent for not only having more gender balance there, but effectively having women and folks from different standpoints actually negotiate an official text. And I wouldn't underestimate that, because there are different ways of approaching the negotiation of a text, and obviously the subjectiveness of your background, where you're coming from, be it geographically or in terms of your gender, really plays into the way you navigate the world. That's no different when you enter a UN windowless room, right? And seeing that specific fellowship take place since at least 2018, sorry, 2019, with the start of the OEWG, you do see the same group of women attending these spaces. And I think that's good, because it maintains a memory of having effective representatives there.
So hopefully that responds a bit to your question. From that very specific standpoint, my concluding remarks are almost a summary, because, as you know, James, I tried to cluster things and structure my thoughts. Going back to the notion of paradox, reflecting now, I think we have three paradoxes, if that's the word. The first is the paradox of meaningful leadership. We talked about lots of different processes, and I think there is something here: there is value in spearheading certain initiatives, setting them into motion, and structuring them. But there is a very important point, and I think it's something you raised quite nicely, James: is there a moment for us to delegate some of that leadership? And if that leadership is delegated, how should it happen? There is also calibrating political risk, because that's what member states are usually doing: I don't want to lose control over this process, but I want to indicate that it's actually inclusive. So is there something about calibrating between spearheading, setting it in motion, and delegating? What does delegation look like? The Counter Ransomware Initiative has different working groups led by different countries, and I think non-state actors also share in that, but I might be wrong. So the first paradox is meaningful leadership. The second one, for me, is meaningful coordination. There, what we saw is calibrating between interagency mechanisms, developing those if that is relevant at the national level, be it a committee structure, similarly to Malta, or, as in Brazil, under the office of the president, which provides some political capital domestically, while also ensuring that you have a mixed delegation once you go externally, right?
So how do you calibrate between inter-agency coordination and international projection? The third and final paradox is meaningful dialogue. And this is a provocation, really: it's very nice and easy to say we're open to dialogue, to meaningful dialogue, we're going to bring these stakeholders in, we're going to do consultations, we'll have a very nice timeline, and it will all look very structured. But are we actually open to productive disagreement? Even for member states that are funding participation, and there were lots of examples of funding for developing countries or underrepresented communities to go to the ad hoc committee, is there, in the end, an openness to a productively uncomfortable dialogue? That's the word. And are we open to expert input from communities that we don't know, or that we haven't figured out? One example is the Global Conference on Cyber Capacity Building. There is this whole cyber capacity building community within the cyber world, but there is also a development community, so how do you bridge those? I'm not saying that the GC3B, the Global Conference on Cyber Capacity Building, is the example, but it is one case where you see an attempt to articulate that conversation, and it's still gaining its own traction, right? And I'm sure Yasmin will have some additional thoughts on that. So these are the three points I'd like to add. But thank you very much, all, for your contributions. This was great, especially from 6 a.m. until 8 a.m. here in London.

James Shires: Thank you, Louise, and impressively awake. I was really struck by this idea of calibrating political risk and maintaining control, because at the end of the day, to have a more inclusive process, to really spread ownership, there has to be some kind of letting go. States that currently lead a process, or others newly in charge of one, have to relinquish some kind of control and be ready for the process to go in different directions. And that is a really uncomfortable place to be, especially if your whole mandate as a Minister of Foreign Affairs or other official is to steer and maintain control in that way. Yasmin, minoritized identities and communities, and then concluding remarks.

Yasmine Idrissi Azzouzi: Sounds good. Thank you very much, James. I would definitely like to echo what has been said so far and really keep the focus on fostering ownership at the national level. The point on the delegation of leadership is definitely a challenging one, but I've seen, through national processes, lead agencies that relinquish that lead role a little, to a certain extent, and see the usefulness in doing so. That, of course, doesn't stop us from keeping the balance, and I think both approaches are necessary. One is definitely a focus on interdisciplinary teams that are equipped to engage meaningfully in different fora. In a perfect world, we would have something similar to what the Austrian representative here mentioned: multidisciplinary teams that bring different experts together at the international level as well. And to ensure continuity with these multidisciplinary teams, keeping the lead agency as the core is also necessary, so that the lead agency can keep tabs on the different processes and give an overview of the things that may be lacking at times, while in parallel delegating some of that power or leadership when it comes to specific, topic-focused processes. Apart from the national level, which I keep coming back to because I think it is really the key here, inclusivity at the international level is of course needed as well. Akriti's very relevant example clearly showed the need to go beyond multilateral or state-focused processes and keep that inclusivity of civil society at the international level too, but I think a lot of it actually needs to happen at the national level. Thank you.

James Shires: Thank you very much, Yasmin. And finally, I will turn to our third panellist, Corinne Casha, for your concluding remarks. We have had our five-minute warning, so we will be wrapping up after these remarks. And thank you, everyone, for your participation. Corinne, over to you.

Corinne Casha: Thanks, James. Well, there is not much to say; I think the two panellists before me wrapped everything up nicely. We've discussed a lot today, and I will definitely take home a few of the remarks that participants made, especially what the Austrian colleague mentioned about interdisciplinary teams, what Louise mentioned, for example, about the transfer of knowledge, and what Yasmin mentioned about the consultation process, which was one of the things that struck me most about best practices. So I think we have quite a checklist of things that we have gathered here today, and they were all very valid remarks. I myself am pleased to be here, not only because I shared some of my experiences, but because I took home a lot of points to consider. So maybe, I don't know, we can come up with a sort of report from this session and circulate it to participants as well. But I think we have all spoken very much about the need to reduce fragmentation, about the need for inclusivity, and about the need, as you said, to weigh political risk and relinquish some control. I personally think that what we discussed here today would be very relevant to take forward; perhaps we can have another session to follow up on this. And from my perspective, it ends there. I think we've discussed a lot today, and I'm very happy to have participated and to have listened to everybody's take here. So thank you very much.

James Shires: Thank you, Corinne. And as a quick reminder before we close, please do check out RUSI's Global Partnership on Responsible Cyber Behavior, which is online; Louise is running that. And of course, do visit Virtual Routes; we will be doing more activities in this space and engaging more, so we'd love to continue this conversation in future. Have a great last day of the IGF, and thank you, everyone. Thank you.

J

James Shires

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Proliferation of initiatives creates barriers to meaningful participation

Explanation

The increasing number of internet governance initiatives makes it difficult for stakeholders to effectively engage in all of them. This creates a paradox where efforts to increase inclusion actually lead to exclusion due to resource constraints.

Evidence

Examples of various initiatives mentioned: UN OEWG on cybersecurity, Global Digital Compact, Cybercrime Convention, Paris Call

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Agreed with

Hurel Louise Marie

Yasmine Idrissi Azzouzi

Corinne Casha

Agreed on

Proliferation of internet governance initiatives creates challenges for meaningful participation

H

Hurel Louise Marie

Speech speed

154 words per minute

Speech length

4192 words

Speech time

1629 seconds

Specialization of debates leads to fragmentation of discussions

Explanation

As internet governance discussions become more specialized, they split into separate forums and processes. This fragmentation makes it challenging to maintain a holistic view and coordinate across different areas.

Evidence

Examples of specialized initiatives: Counter Ransomware Initiative, Pall Mall Process, OEWG, GFCE, Tech Accord

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Agreed with

James Shires

Yasmine Idrissi Azzouzi

Corinne Casha

Agreed on

Proliferation of internet governance initiatives creates challenges for meaningful participation

Differed with

Yasmine Idrissi Azzouzi

Differed on

Role of specialization in internet governance debates

Inclusion efforts can be weaponized for political purposes

Explanation

Some states use the creation of new inclusive processes as a political strategy to advance their interests. This can lead to the proliferation of initiatives that may not genuinely promote inclusivity.

Evidence

Example of Russia pushing for a legally binding instrument on cybercrime through the ad hoc committee

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Structural inequalities persist despite efforts at inclusion

Explanation

Even when processes like the OEWG aim to be more inclusive by involving all UN member states, structural barriers still limit effective participation. Smaller states often lack the resources to engage meaningfully in all discussions.

Evidence

Example of small UN missions with limited staff covering multiple topics

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Importance of knowledge transfer between different forums

Explanation

Facilitating knowledge transfer between various internet governance forums is crucial for maintaining coherence and continuity. This involves creating opportunities for participants to share experiences and insights across different processes.

Evidence

Example of organizing cyber policy dialogues at the IGF to discuss New York-centric conversations in a different geographic location

Major Discussion Point

Challenges of Coordination Across Different Forums

Fostering dialogue that allows for productive disagreement

Explanation

True inclusivity in internet governance processes requires openness to productive disagreement and uncomfortable dialogues. This involves going beyond superficial consultations and being willing to engage with diverse and potentially challenging perspectives.

Major Discussion Point

Strategies for Improving Inclusion and Representation

Y

Yasmine Idrissi Azzouzi

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

National-level coordination is crucial for effective international participation

Explanation

Effective participation in international internet governance forums requires strong coordination at the national level. This involves bringing together various stakeholders and agencies to develop coherent positions and strategies.

Evidence

Example of ITU supporting developing countries in creating national cybersecurity strategies through inclusive consultation workshops

Major Discussion Point

Challenges of Coordination Across Different Forums

Agreed with

Corinne Casha

Hurel Louise Marie

Agreed on

Importance of national-level coordination for effective international participation

Creating ownership through multi-stakeholder consultations

Explanation

Engaging various stakeholders in consultations during the development of national strategies or policies can create a sense of ownership. This approach leads to better coordination and more effective implementation of internet governance initiatives.

Evidence

Example of ITU’s methodology for supporting the development of national cybersecurity strategies through inclusive consultation workshops

Major Discussion Point

Strategies for Improving Inclusion and Representation

Need for interdisciplinary teams to engage in various processes

Explanation

Effective participation in internet governance requires interdisciplinary teams that can engage meaningfully across different forums. These teams should combine expertise in technical, diplomatic, and policy-making areas to address the complex nature of digital issues.

Major Discussion Point

Challenges of Coordination Across Different Forums

Differed with

Hurel Louise Marie

Differed on

Role of specialization in internet governance debates

C

Corinne Casha

Speech speed

136 words per minute

Speech length

1619 words

Speech time

709 seconds

Relinquishing some control is necessary for true inclusion

Explanation

To achieve genuine inclusivity in internet governance processes, states and organizations leading initiatives must be willing to give up some control. This involves being open to different perspectives and allowing for outcomes that may diverge from initial expectations.

Major Discussion Point

The Paradox of Inclusion in Internet Governance

Difficulty in maintaining consistent representation across multiple forums

Explanation

The proliferation of internet governance forums makes it challenging for states to maintain consistent and expert representation across all processes. This is particularly difficult for smaller states with limited resources.

Evidence

Example of challenges in participating in the cybercrime convention negotiations due to lack of specialized expertise

Major Discussion Point

Challenges of Coordination Across Different Forums

Agreed with

James Shires

Hurel Louise Marie

Yasmine Idrissi Azzouzi

Agreed on

Proliferation of internet governance initiatives creates challenges for meaningful participation

Establishing national cybersecurity committees for better coordination

Explanation

Creating national-level cybersecurity committees can improve coordination among different government agencies and stakeholders. These committees can help ensure coherent positions across various international forums and facilitate knowledge sharing.

Evidence

Example of Malta’s Cyber Security Committee bringing together various ministries and industry representatives

Major Discussion Point

Strategies for Improving Inclusion and Representation

Agreed with

Yasmine Idrissi Azzouzi

Hurel Louise Marie

Agreed on

Importance of national-level coordination for effective international participation

Funding initiatives to support participation from developing countries

Explanation

Providing financial support for representatives from developing countries to attend international forums is crucial for improving inclusion. This helps ensure a more diverse range of perspectives in internet governance discussions.

Evidence

Mention of Austria funding developing country diplomats to participate in the cybercrime process

Major Discussion Point

Strategies for Improving Inclusion and Representation

Role of foreign ministries in coordinating national positions

Explanation

Foreign ministries play a crucial role in coordinating national positions across various internet governance forums. They need to ensure consistency in positions taken at different international venues while also facilitating input from various domestic stakeholders.

Evidence

Example of Malta’s foreign ministry coordinating with the national Cyber Security Committee to ensure coherent positions

Major Discussion Point

Challenges of Coordination Across Different Forums

Agreed with

Yasmine Idrissi Azzouzi

Hurel Louise Marie

Agreed on

Importance of national-level coordination for effective international participation

A

Audience

Speech speed

172 words per minute

Speech length

1562 words

Speech time

542 seconds

Lack of communication between Geneva and New York-based processes

Explanation

There is insufficient coordination between internet governance processes taking place in Geneva and New York. This leads to duplication of efforts and makes it difficult for stakeholders to engage effectively across all relevant forums.

Major Discussion Point

Challenges of Coordination Across Different Forums

Ensuring representation of minoritized identities in digital spaces

Explanation

It is important to consider how minoritized identities are represented in internet governance processes and outcomes. This includes addressing the transition from physical to digital inclusion and ensuring diverse perspectives are included.

Major Discussion Point

Strategies for Improving Inclusion and Representation

Agreements

Agreement Points

Proliferation of internet governance initiatives creates challenges for meaningful participation

James Shires

Hurel Louise Marie

Yasmine Idrissi Azzouzi

Corinne Casha

Proliferation of initiatives creates barriers to meaningful participation

Specialization of debates leads to fragmentation of discussions

Difficulty in maintaining consistent representation across multiple forums

All speakers agreed that the increasing number and specialization of internet governance initiatives make it difficult for stakeholders, especially those with limited resources, to participate effectively across all forums.

Importance of national-level coordination for effective international participation

Yasmine Idrissi Azzouzi

Corinne Casha

Hurel Louise Marie

National-level coordination is crucial for effective international participation

Establishing national cybersecurity committees for better coordination

Role of foreign ministries in coordinating national positions

Speakers emphasized the need for strong national-level coordination mechanisms, such as cybersecurity committees, to ensure coherent positions and effective participation in international forums.

Similar Viewpoints

Both speakers highlighted the tension between political control and genuine inclusion, suggesting that true inclusivity requires a willingness to relinquish some control and engage with diverse perspectives.

Hurel Louise Marie

Corinne Casha

Inclusion efforts can be weaponized for political purposes

Relinquishing some control is necessary for true inclusion

Both speakers emphasized the importance of meaningful engagement with diverse stakeholders, including being open to disagreement and challenging perspectives, to create genuine ownership and inclusivity in internet governance processes.

Yasmine Idrissi Azzouzi

Hurel Louise Marie

Creating ownership through multi-stakeholder consultations

Fostering dialogue that allows for productive disagreement

Unexpected Consensus

Importance of interdisciplinary approaches in internet governance

Yasmine Idrissi Azzouzi

Corinne Casha

Hurel Louise Marie

Need for interdisciplinary teams to engage in various processes

Difficulty in maintaining consistent representation across multiple forums

Importance of knowledge transfer between different forums

There was unexpected consensus on the need for interdisciplinary approaches to internet governance, with speakers from different backgrounds agreeing on the importance of combining technical, diplomatic, and policy expertise to address complex digital issues effectively.

Overall Assessment

Summary

The main areas of agreement included the challenges posed by the proliferation of internet governance initiatives, the importance of national-level coordination, the need for inclusive and diverse participation, and the value of interdisciplinary approaches.

Consensus level

There was a high level of consensus among the speakers on the key challenges and potential solutions for improving inclusion in internet governance. This consensus suggests a shared understanding of the complexities involved and a common desire to address the paradox of inclusion. The implications of this consensus are that future efforts in internet governance may focus on developing more coordinated and interdisciplinary approaches, both at national and international levels, to ensure more effective and inclusive participation across various forums.

Differences

Different Viewpoints

Role of specialization in internet governance debates

Hurel Louise Marie

Yasmine Idrissi Azzouzi

Specialization of debates leads to fragmentation of discussions

Need for interdisciplinary teams to engage in various processes

Louise Marie argues that specialization leads to fragmentation, while Yasmine emphasizes the need for interdisciplinary teams to address this fragmentation.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to specialization vs. interdisciplinary engagement, and the specific mechanisms for improving national and international coordination.

Difference level

The level of disagreement among the speakers is relatively low, with more emphasis on complementary perspectives rather than outright contradictions. This suggests a general consensus on the challenges of inclusion in internet governance, with differences primarily in the proposed solutions and areas of focus.

Partial Agreements

Partial Agreements

Both speakers agree on the challenges of inclusion, but Louise Marie focuses on structural inequalities, while Corinne emphasizes the practical difficulties of representation.

Hurel Louise Marie

Corinne Casha

Structural inequalities persist despite efforts at inclusion

Difficulty in maintaining consistent representation across multiple forums

Both speakers agree on the importance of national-level coordination, but propose different mechanisms to achieve it.

Yasmine Idrissi Azzouzi

Corinne Casha

National-level coordination is crucial for effective international participation

Establishing national cybersecurity committees for better coordination

Takeaways

Key Takeaways

The proliferation of internet governance initiatives creates barriers to meaningful participation, especially for actors with limited resources

There is a tension between specialization of debates and fragmentation of discussions across multiple forums

National-level coordination and capacity building are crucial for effective international participation

True inclusion requires relinquishing some control and being open to productive disagreement

Structural inequalities persist despite efforts at inclusion in internet governance processes

There is a lack of communication and coordination between different internet governance forums (e.g. Geneva vs New York-based)

Resolutions and Action Items

Consider creating a report summarizing the key points from this session to circulate to participants

Explore having a follow-up session to continue the discussion on inclusion in internet governance

Unresolved Issues

How to effectively balance specialization of debates with the need for coherent, coordinated governance

How to ensure meaningful inclusion of minoritized identities and communities in digital governance spaces

How to improve coordination between different internet governance forums and processes

How to address structural inequalities that persist despite inclusion efforts

Suggested Compromises

Creating interdisciplinary teams that can engage across multiple governance forums

Delegating some leadership/control to other stakeholders while maintaining overall coordination

Balancing state-led initiatives with meaningful multi-stakeholder consultation and civil society inclusion

Fostering national-level coordination mechanisms (e.g. cybersecurity committees) to inform international engagement

Thought Provoking Comments

The idea here is that we see a proliferation of efforts to bring in different actors in internet governance, whether these are multi-stakeholder forums, whether these are efforts to include developing countries and smaller states or states with fewer resources, and there’s lots of different efforts to do these, through different conferences, initiatives, meetings, and so on. In fact, there’s so many of these different efforts that actually keeping up with them all, keeping track of them all, and participating meaningfully in them all, is itself a high resource burden. And that’s what we term the paradox of inclusion.

speaker

James Shires

reason

This comment introduces the central concept of the ‘paradox of inclusion’ which frames the entire discussion. It highlights how efforts to be more inclusive can paradoxically create barriers to participation.

impact

This set the stage for the entire discussion, providing a framework for analyzing various internet governance initiatives and their inclusivity challenges.

So you have those movements, such as the ad hoc committee, which has ended right now, that becomes part of that, let’s say, political strategy. Another example of proliferation being a political strategy is precisely to specialize debate because then you can control a bit more or what the scope is, and who is involved in this conversation.

speaker

Louise Marie Hurel

reason

This comment introduces the idea that the proliferation of initiatives can be a deliberate political strategy, adding complexity to the discussion of inclusivity.

impact

It shifted the conversation to consider the political motivations behind the creation of new forums, deepening the analysis beyond just logistical challenges.

So creation of ownership, I think, at national level across different expertises, so ministries and national agencies, but also critical infrastructure providers. We’ve had in the same room central banks, energy representatives, but also ministries ranging from MFA all the way to, of course, defence, interior and others, because, of course, it’s extremely interdisciplinary and a national strategy also needs to have all of those elements be taken into consideration.

speaker

Yasmine Idrissi Azzouzi

reason

This comment highlights the importance of national-level coordination and inclusivity as a foundation for effective international participation.

impact

It broadened the discussion to consider how national-level processes impact international inclusivity, leading to a more holistic analysis of the challenges.

And I was happy to see that they were included in the consultation process. And I think for us, this was something that we would like to also encourage other states to sign up to, because it’s very important not only in terms of, as I said, including the other, let’s say, factions that are not always included in the decision-making, but also as a way of promoting, let’s say, coordination between different states.

speaker

Corinne Casha

reason

This comment provides a concrete example of how a specific initiative (the Palma process) is attempting to address inclusivity challenges.

impact

It moved the discussion from theoretical concepts to practical examples, allowing for a more grounded analysis of potential solutions.

And just an inquiry considering the way in which we look at how we fragment the internet, generally speaking, and the way in which we identify the subtext within the presentation of information. And when it comes to internet governance, how do we consider the identities that are being put forward?

speaker

Natasha Nagle

reason

This comment introduces a new dimension to the discussion by focusing on the representation of minority identities in digital spaces.

impact

It broadened the scope of the inclusivity discussion beyond just state and organizational representation to consider individual and community identities.

Overall Assessment

These key comments shaped the discussion by progressively expanding and deepening the analysis of inclusivity in internet governance. Starting with the introduction of the ‘paradox of inclusion’, the conversation moved through considerations of political motivations, national-level coordination, practical examples of inclusive initiatives, and finally to questions of identity representation. This progression allowed for a multifaceted examination of the challenges and potential solutions to achieving meaningful inclusivity in internet governance processes.

Follow-up Questions

How can we strengthen communication between different UN processes (e.g. Geneva and New York) to avoid duplication?

speaker

Austrian Foreign Ministry representative

explanation

This is important to improve coordination and efficiency in international cybersecurity discussions.

What are best practices for Ministries of Foreign Affairs to facilitate interagency coordination on cyber issues?

speaker

Louise Marie Hurel

explanation

This could help improve national-level coordination to better inform international positions.

How can we ensure meaningful inclusion of minority identities in internet governance processes?

speaker

Natasha Nagle

explanation

This is crucial for ensuring diverse perspectives are represented in digital policy discussions.

How can we calibrate between spearheading initiatives and delegating leadership in international processes?

speaker

Louise Marie Hurel

explanation

This is important for balancing control and inclusivity in multi-stakeholder initiatives.

How can we foster openness to productive disagreement in international dialogues?

speaker

Louise Marie Hurel

explanation

This is necessary for truly inclusive and meaningful discussions on complex issues.

How can we better bridge the cyber capacity building community with the broader development community?

speaker

Louise Marie Hurel

explanation

This could lead to more holistic and effective approaches to digital development.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

YCIG & DTC: Future of Education and Work with advancing tech & internet


Session at a Glance

Summary

This session, co-hosted by the Dynamic Teen Coalition and the Youth Coalition on Internet Governance, focused on addressing the evolving demands on education and the workforce shaped by AI, quantum computing, blockchain, and robotics. Participants discussed the challenges and opportunities presented by these technologies, particularly in the context of the Global South and marginalized communities.

Key points included the need for innovative educational strategies that incorporate AI and emerging technologies into curricula, while also addressing digital divides and ensuring ethical use. Speakers emphasized the importance of adapting education systems to prepare students for a rapidly changing job market, with a focus on critical thinking, problem-solving, and ethical considerations in technology use.

The discussion highlighted the complexities of implementing AI in education, including concerns about over-reliance on AI tools and the need for proper guidance on their use. Participants stressed the importance of inclusive design in educational technology and the need to consider diverse perspectives, including those of indigenous communities.

Several speakers addressed the need for regulations and ethical guidelines for AI use in education and the workforce. They emphasized the importance of involving multiple stakeholders, including students, educators, and policymakers, in developing these frameworks.

The session also touched on the challenges of digital exclusion in the Global South and the need for policies that ensure both technological inclusion and high-quality education. Participants called for more research into local needs and contexts to inform policy development.

In conclusion, the discussion underscored the need for collaborative action among various stakeholders to navigate the complex landscape of AI and education, with a focus on creating inclusive, ethical, and effective strategies for preparing youth for the future workforce.

Keypoints

Major discussion points:

– The need to adapt education systems and curricula to prepare students for AI and emerging technologies

– Challenges of digital divides and unequal access to technology, especially in the Global South

– Importance of teaching ethical use of AI and technology, not just technical skills

– Balancing technology use with human interaction and social skills

– Need for regulations and policies to guide ethical AI use in education and workforce

Overall purpose:

The goal of this discussion was to explore how education systems and workforce development need to evolve to address the rapid advancement of AI and other emerging technologies, while considering challenges like digital divides and ethical concerns.

Tone:

The tone was largely collaborative and solution-oriented, with participants building on each other’s ideas. There was a sense of urgency about the need to adapt education systems quickly. The tone became more nuanced as the discussion progressed, with increased focus on challenges faced by marginalized groups and the Global South.

Speakers

– Marko Paloski: Moderator

– Ananda Gautam: Coordinator for Asia-Pacific region from the Youth Coalition on Internet Governance

– Mohammed Kamran: Advocate from Pakistan, co-coordinator

– Marcella Canto: Representative of the Youth Brazilian Delegation

– Ethan Chung: Youth Ambassador of the Mampa Foundation from Hong Kong

– Umut Pajaro: Teacher at university and high school

– Denise Leal: From the Youth Coalition, Latin American Caribbean region

Additional speakers:

– Nirvana Lima: Facilitator and participant of Youth Programme Brazil

– Sasha: Student at University of Prince Edward Island

Full session report

AI, Education, and the Future Workforce

Introduction

This session, co-hosted by the Dynamic Teen Coalition and the Youth Coalition on Internet Governance, addressed the evolving demands on education and the workforce shaped by emerging technologies, particularly AI and robotics. The discussion brought together speakers from diverse backgrounds to explore the challenges and opportunities presented by these technologies, with a focus on the Global South and marginalized communities.

Key Discussion Points

1. Adapting Education Systems to AI and Emerging Technologies

Speakers agreed on the urgent need to adapt education systems for a rapidly changing technological landscape. Umut Pajaro emphasized interdisciplinary and project-based learning approaches, as well as teaching critical thinking and problem-solving skills. The discussion highlighted innovative educational strategies incorporating AI into curricula while balancing technology use with human interaction.

2. Digital Divides and Unequal Access

Ananda Gautam raised concerns about the digital divide, particularly in the Global South. Marcella Canto provided a perspective on technological colonialism, stating, “We are now experiencing a new configuration of colonialism. While the global north has large technology companies that employ CEOs and software engineers, we in the south are left with the worst jobs.” Gautam also noted the progression from digital divide to AI divide and highlighted the importance of community networks in providing internet access to underserved regions.

3. Ethical Use of AI and Technology in Education

Mohammed Kamran and Ethan Chung emphasized the need for digital literacy and proper use of AI tools. Kamran advocated for government regulations, while Chung focused on the lack of ethical guidelines for AI use in education. Umut Pajaro suggested involving students in creating rules for AI use, highlighting the need for diverse perspectives in policy-making.

4. Language Accessibility and Cultural Sensitivity

Denise Leal discussed the importance of language accessibility in digital education, referencing the UFLAC IGF session. Umut Pajaro raised points about addressing language barriers for indigenous populations, emphasizing cultural sensitivity and self-determination in technology adoption.

5. Challenges in AI-Assisted Learning

Mohammed Kamran discussed the challenges of detecting AI-generated homework and the need for more experience-based assignments. Ethan Chung elaborated on the proper use of AI in education and assessments, suggesting a balance between AI-assisted research and original thought.

6. Technology Overuse and Digital Detox

Ananda Gautam raised concerns about the overuse of technology and the need for digital detox, emphasizing the importance of maintaining a balance between digital and non-digital experiences in education.

7. Future of Work and AI Impact

Marko Paloski highlighted the risk of job losses due to automation, underscoring the importance of preparing students for a rapidly changing job market with a focus on critical thinking and problem-solving skills.

Proposed Solutions and Action Items

1. Include more debates about education and involve educators in future IGF discussions

2. Create standards for appropriate use of AI and technology in education

3. Implement project-based assessments that require original thought alongside AI-assisted research

4. Develop guidelines for AI use in education alongside formal regulations

5. Balance technology integration with preservation of traditional teaching methods and social interaction

6. Connect different UN spaces and regulations related to technology and internet governance

7. Promote community networks to improve internet access in underserved regions

Conclusion

The discussion emphasized the need for collaborative action among stakeholders to navigate the complex landscape of AI and education. It highlighted the importance of creating inclusive, ethical, and effective strategies for preparing youth for the future workforce while addressing challenges such as digital divides, ethical concerns, and cultural sensitivities. The session stressed a holistic approach considering technical skills, critical thinking, and the preservation of diverse cultural perspectives in the evolving digital landscape.

The Youth Coalition election registration was briefly mentioned at the end of the session, encouraging youth participation in internet governance.

Session Transcript

Marko Paloski: So, hello everyone, and welcome to our session. Just a second, I think the online participants can also hear us, yes. This session is co-hosted by the Dynamic Teen Coalition and the Youth Coalition on Internet Governance, with a will to address the evolving demands on the workforce shaped by AI, quantum computing, blockchain infrastructures and robotics. The focus will be on identifying innovative educational strategies and career paths that adapt to today’s technologies. Participants will discuss interdisciplinary learning, project-based models, micro-credentials, tech apprenticeships, lifelong learning platforms and the language divide, also the gender divide, aiming to align educational and professional development with future technological landscapes. This is a roundtable session, so we’ll guide the dialogue on educational systems and employment, also going through the digital divide and the existing gaps in countries that have less access to digital literacy, empowerment and infrastructure. According to UNESCO, around 3.6 billion people worldwide still lack reliable internet access, and in developing countries, access to digital literacy and infrastructure is limited. According to ITU reports, only 19% of individuals in the least developed countries use the internet, compared to 87% in developed countries. There is no longer a clear pathway to success through education. A report by the McKinsey Global Institute suggests that by 2030, up to 800 million jobs worldwide could be lost to automation, representing one-fifth of the global workforce. As the Youth Coalition and the Dynamic Teen Coalition on Internet Governance, we see it as part of our responsibility to promote this dialogue on education and the future, considering the advancements of technology and the internet. With this, I want to give the floor now to the participants.
First, I will go with the on-site participants, and then the online participants, giving each the floor to introduce themselves, say why their statement is important here on this panel, and also answer: what are their personal views on the current situation of education and employment, especially on the topic of youth? I mean, this is a panel for the youth, but we can also cover other areas. I would give the first question to Ananda Gautam, to introduce himself and share his personal point of view.

Ananda Gautam: Thank you, Marco, for inviting me here. And I think I’m audible, right?

Marko Paloski: Yes.

Ananda Gautam: OK, cool. So my name is Ananda Gautam. I’m from Nepal. I’m also the coordinator for the Asia-Pacific region from the Youth Coalition on Internet Governance, and I lead various other youth initiatives at the global, regional, and national levels. My major engagement is capacity building of young people. So, as Marco said, on the challenges of education and young people: we are in a situation where a lot of digitization has happened. I can reflect that if someone is from the 90s, we can say Gen Z has been very much immersed in the digitized world. We were not that connected, because we grew up alongside the technology, I believe. It was the 70s when the technology found its footing, but in the 90s it was thriving in the public sector, and we got to experience the development of technology along with the development of our livelihoods. We saw the offline part of the world as well, but now, if there is no internet, I think we barely use our electronic devices. There used to be electronic devices without internet as well, and they served their purposes, and before that there were typewriters and other things. And coming to today: we used to talk about the digital divide, and now we are in the age where we talk about the AI divide. But there is still a situation where we need to separate the global north and the global south, because we have existing traditional gaps as well. We have literacy gaps, we have digital gaps, and we have an AI gap. You need to have literacy first, then you need digital access, and then you can have access to AI and other emerging technologies, but we have all three gaps. Still, today’s young people have chances to thrive with all the technologies: they have literacy, they have access, and they have access to AI and other emerging technologies as well.
So the point here is how we actually work out what the maximum limit is to which we can use the technology, whether we are leveraging technology for good, and whether these technologies are always useful. Are there threats while teens and young people are using the internet? Along with the opportunities it provides, we are seeing so many threats that are now aided by AI and all kinds of things. Another thing relating to education is that now we are using generative AI, and some children and young people might believe that the content generated by generative AI applications is legitimate. That is another issue, because people might not know what is legitimate or what the originality of the content is. So we need that kind of literacy now. It is not that we need to regulate or ban those kinds of uses; rather, we need to teach people how they can leverage the internet and other aiding technologies like AI, and there might be another technology soon. They need to be taught, or we need to have capacity building and digital literacy programs that enable them to properly utilize these technologies. I’ll stop here for the first round, and I think we’ll definitely be going for a second round. Thank you.

Marko Paloski: Thank you, Ananda. You pointed out very important topics. I mean, there is not just the digital divide but also the digital literacy divide, and there are other divides, like the gender gap, which are very important, especially for the youth. As you mentioned, we are now growing up with those kinds of technologies every day, which is still a good thing, and there are other points we’re going to come back to later with the questions. Now I will give the floor to Mohammed Kamran to introduce himself and give a personal statement on the current situation.

Mohammed Kamran: Thank you, Marco. It is no less than an honor for me. So hello, everyone. Mohammed Kamran from Pakistan. I’m an advocate, practicing in my own province right now. I’m also a co-coordinator, and apart from that I am with other organizations, but no need to mention all of that. Coming back to the point of how we see this from Pakistan: I’ll talk about my own country, because I belong to Pakistan. So how do we see this whole situation, how do we see what is happening around us? I totally agree with my brother, with my friend, that the shift from the digital world, from digital computers to AI computers, was too fast. We couldn’t see that coming. And all the teachers, students, and parents, we are really confused about how to deal with it and how to tackle the situation. Most of the teachers nowadays in the institutions that I know are busy detecting homework done with AI: for the students that have done homework with AI, the teachers are busy detecting such homework. They do not know whether it is useful if students are using AI. I won’t go into the explanations, which we’ll be talking about in a while. Second, we lack literacy. And by literacy, I mean that students are trying to use AI, but they do not know the limits. And who are the ones teaching the limits? The adults, again. So I think we have to start with the adults, and from the adults we will move to the youth, because the youth are the future of the country. First, we have to have some regulations about AI, about the limits within which it is not dangerous. Because if we use AI too much, I think that is also going to be a bit dangerous. We’ll talk about this in detail, inshallah, in a while. That was a brief word on it. Thank you, guys.

Marko Paloski: Thank you. Thank you, Mohammed. Yes, I mean, the future is AI, so I think we’re going to use AI more and more. But we’re going to see later what we can do about it in the future, yes. Now I will give the floor to Marcella Canto, who is coming from Brazil, to introduce herself and give a personal statement on the current situation.

Marcella Canto: Thank you so much. So I’m from Brazil. I’m here representing the Youth Brazilian Delegation. And I’m from Rio de Janeiro, but not the city of Rio; I live in the state of Rio, and my city is called São Gonçalo, which is the second most populous city of the state of Rio de Janeiro, yet it remains a peripheral city. And I think we need to take two or three steps back to talk about education, and digital education specifically, thinking about the inequalities specifically of the Global South, and especially when we are talking about public policies that will make change and make a difference. But I will talk about this later in my presentation, so I will pass the mic.

Marko Paloski: For the introduction and on answering the question, now I will give the floor to Ethan Chung for the last on-site participants to introduce themselves and give a short statement.

Ethan Chung: Hi, guys. I’m Ethan Chung, and I’m a Youth Ambassador of the Mampa Foundation. I’m from Hong Kong. And I’m not actually working in a job yet, so I can only give you guys a perspective on education and a perspective on how we are educated. I think that, when it comes to relating to technology, the education system is not enough, because we’re doing paperwork right now: we’re writing papers by hand and doing normal writing and comprehension exercises on paper. But I think that we should be taught something that is related to AI or maybe other technologies. And the education system should actually educate us on how to use AI and these technologies in more depth, not only just how to use them. Because we all know how to use them, right? ChatGPT: you just input the question and it gives you an answer. But how to use it correctly? How to use it better, so that it won’t be against the law? This is the key point, and I think it should be discussed later.

Marko Paloski: Yes. Thank you very much. Now I will go to the online participants. I will give the floor to Umut. Can we just, yeah.

Umut Pajaro: Hello everyone. Good morning from my side. Well, we see that these kinds of technologies are advancing rapidly and have, as someone already said, taken us by surprise. They are actually revolutionizing industries and redefining the skills that we need in the modern workplace and the education system. This kind of transformation presents not only challenges but also opportunities for educational institutions and for individuals seeking to navigate this evolution of the workplace. So, in my personal opinion, I think that educational institutions must adapt to these changes to adequately prepare the future workforce. And I hope that we explore this a little more during this conversation, especially the educational strategies that we can adopt to adapt to these technologies, and that we examine the changes in career pathways and the different workforce that we can have in the future.

Marko Paloski: Thank you. Umut, I will pass the floor to Denise.

Denise Leal: Hello everyone. I think you can hear me well. I am Denise Leal from the Youth Coalition, from the Latin American and Caribbean region. When we first started to think about this session, this Dynamic Coalition session, we were in a dialogue with the DTC, the Dynamic Teen Coalition, and we were talking a lot about how the future is changing, and how it’s changing right now, right in front of our eyes. And we have to think about work and education in a very innovative way. We can’t think about education as traditionally as we’ve always seen it before. We need to adapt, because the future that is already here is asking for it. We’ve discussed many things during our meetings to come to this session and to have a dialogue on it, not only thinking about the youth view as the 18-to-35-year-old youth view, but also for children and teens. What is it like to be a student in a classroom where you are studying things that you won’t use, considering that you actually will need to know about AI, about technology, about social media, and lots of other stuff that you are not learning in your class? So here we are today, waiting for discussions on these topics and willing to hear you all. And I am super excited to talk about it. I will also bring some aspects on marginalized people, the Global South, and traditional peoples, because we have to include them in the discussions and talk about them, and also about infrastructure, because if we are talking about digital-era education and work, we also have to consider that there are people excluded, people that don’t have proper access to the internet.
So we have a lot to talk about, and now that I have presented myself, I will give the word to you, Marko, so we can continue the flow.

Marko Paloski: Thank you, Denise. Yes, I want to point out one thing that we discussed among ourselves in the Youth Coalition: the Youth Coalition covers ages 18 to 30, and we represent the youth, but the youth is sometimes not just those aged 18 to 30. That’s why I think the Dynamic Teen Coalition is very much needed, because nowadays, I’m, for example, 29, and when I was going to primary school and high school, I was still using technology, Facebook and others, but it’s not the same as it is now, with TikTok and much heavier usage. I know my nephew is using the tablet sometimes too; he picks it up automatically. So it’s a totally different viewpoint, and sometimes even we as youth still cannot represent their, how can I say, issues, their challenges, or how they are experiencing the technology. So that’s very important, and this coalition, the Dynamic Teen Coalition, should be established and get more and more involved, so that their voice is also heard. Okay, I will go now to one of the questions. In the beginning, Ananda mentioned that there are several divides and gaps; Kamran also mentioned that there should also be education for the adults, because they are the ones who are teaching the youth, and sometimes the youth may know more than the adults; and I think Umut mentioned that institutions should get more involved. So I would ask the question: with the current situation, we see that AI and other advanced technologies like quantum computing and robotics are evolving more and more, and there is a big demand on the workforce shaped by AI. So my question here is: considering this, which innovative educational strategies could be used so that we can adapt to these technologies? In the future, but maybe not only the future, maybe the present, because most of these technologies, depending on the country, are already taking over.
So what do you think is the innovative educational strategy that we should go with, or how should we approach that adaptation, let’s say? I don’t know who would want to answer first. Okay, Kamran.

Mohammed Kamran: Yeah, so as I’ve mentioned earlier, the jump from digital machines to AI machines was too quick. I think we didn’t see that coming, but it is what it is. So first of all, about the strategies, how we can face all of these things: there are three ways people react, and this mostly concerns adults, because the youngsters, the youth, are using AI more than the adults; it is their thing. First, some adults flee from the situation: they leave the situation and tell themselves, we are not seeing it, so it’s not happening. Second, they fight against it: you shouldn’t be doing this. As I’ve mentioned earlier, most of the adult teachers are busy detecting homework that has been done by AI. I think that is also wrong, that is an injustice. If a kid is able to solve his homework with the help of an AI, it is just another tool that is here to help us. As our vehicles are here, as other technologies are here, I think AI is also here to help us, if we do not cross the dangerous limits. We’ll talk about that as well. But that is the second reaction, and we shouldn’t be fighting it either. Then what should we do about it? I think we should adapt to it. Let’s not flee from it, let’s not run from it, let’s not fight it either; let’s adapt to it. Why? Because this is the future. We cannot say that AI is not happening and I’m not going to allow it. The adults who say, let’s do it the old-fashioned way, I think that is also wrong, because it is as if we would say: let’s not go to that place by vehicle, let’s not use aeroplanes, why not use horses? It is the same to me. So yeah, using AI is not bad, but within a certain limit.
So I think the strategy that we should adopt is adaptation to the situation, adaptation to AI, and there should also be some regulations about the limits of AI, and not only AI, but all the other technologies that you’ve mentioned earlier, of course. But as AI is the boss, we know that; that is why I keep referring to AI again and again. So yeah, we should have some regulations, we should have some limits. But then you’ll ask me: who’s going to watch over a kid with an iPad? As Marco has mentioned, our children, even the babies, know how to turn on the Wi-Fi. They go to YouTube, they search for the shark, Baby Shark. I think all of you know that. So our kids do know about all these things. When we were kids, we didn’t even know how to turn on a mobile phone; we’d ask our parents to please open the snake game for us, and we’d be playing the snake game. I don’t know if you guys have done that. But nowadays, kids have adapted to it. And I think we shouldn’t stand in their way, but we should teach them their limitations. Because if they exceed their limitations, their screen time increases, and that is going to be a bit dangerous, because we do not want machines in the future; we want humans, we want them to have feelings. And we’ll be talking about the future strategies, inshallah. I have a point on that too, but all in good time; I’ll get back to it on that question. But now we are talking about the present strategies. So I think the biggest strategy, and to me the simplest strategy, and I’ll say this in very simple words, is adaptation to the situation. We shouldn’t run from it; we should adapt to it. Because, as it is said, when we do the reconstruction of a house, let’s say we are doing a beautification or something like that.
if we want to beautify this room, we demolish one wall of the room only to reconstruct it in a better way. So if we do not run from the situation, let’s reconstruct it in a better way. Let’s beautify it; let’s make it another way. If this is not working, let’s do it another way, the modern way that is needed. So yeah, that’s from my side. Thank you, Marko.

Marko Paloski: Thank you for your in-depth answer, I would say, and thank you for pointing out that there are many more things involved here; it’s not just one thing. AI is part of everything I mentioned, robotics, every technology we are using now; as you said, AI is the main component, or the boss. I would give the floor to Umut now, if he can also share some experience on this question, and then we will come back to someone on-site.

Umut Pajaro: Okay, yeah. Well, in my case, I actually think about this question every day, because I’m a teacher not only at university but also in high school, where I have teens from 13 to 17 using AI in my classroom. Besides thinking about how they use it as a tool, I started to explain to them that many of the answers are probably not exactly correct. So I started to use the tool in a different way than everyone else. One of the things I do is put the tool to the test in the classroom, so they can know exactly what its limits are inside the classroom and learn the implications of using that technology by experience. That is one of the things I think should be incorporated into the curriculum when it comes to emerging technologies: using the technology and learning its implications by experience, so students know beforehand, at a really early stage. In my case, most of my students are 12 or 13 years old, so they are really young; from that age they are already starting to learn the implications and the limits of using these tools. Another thing I have found in my practice as a school teacher is that, beyond hands-on experience and real-world applications at their disposal, an interdisciplinary approach to these technologies is also really helpful, because it gives them a more realistic understanding of how these technologies can be used and how they can actually help improve their daily lives. Another thing I try to do in my classroom is build critical thinking and problem solving: using these different tools to improve their analytical thinking and problem-solving skills, and to give them the ability to adapt to new challenges.
Another thing I try to do is not to demonize or be against these technologies in the classroom. I just want them to understand that what is there can also help them increase their creativity, their design thinking, and their ability to generate solutions and new ideas; this can be the starting point for another solution to whatever problem they have. And another reason I use these kinds of tools in the classroom, which I think is important besides everything I already said, is that they can play an important role in creating collaborative efforts inside and outside the classroom. Because if you use these different kinds of tools to solve a problem in a real-life context, you actually have to work together and collaborate with peers. That is an important ability, not only in our educational system but for life: when we go to work on some important venture or entrepreneurship, we are going to work with people and collaborate together. So we need to bring these technologies into the classroom while thinking about the future of the workforce. Yeah, that would be all from my side for now.

Marko Paloski: Thank you. Thank you very much, Umut, for sharing your experience from the other side, from the institution. Now I will give the floor to Marcella to add more from the Global South perspective.

Marcella Canto: I think to answer your question, it’s crucial to consider that digital skills must be included in the curriculum of basic education. The changes that people used to see as a “have to”, I see as a “must”. There’s a misconception that using digital devices will make the educational process more innovative; some examples are using tablets in class or using AI to increase productivity, but that is a mistake. If education doesn’t challenge the social structures that perpetuate inequality, discrimination, and oppression, then it’s not really innovative, because it doesn’t promote significant change. We are now experiencing a new configuration of colonialism. Migration and the international division of labor continue, but in a new guise. While the Global North has large technology companies that employ CEOs and software engineers, we in the South are left with the worst jobs, whether it’s mining for processor production or removing objectionable content from the internet. Technology alone will not save us from our problems, because technology is not neutral; there is always a purpose and a bias. What I mean is that if the Global South maintains the colonial logic, the structure remains the same. Eurocentrism remains the same. The underemployment we are facing is the same underemployment found in all contexts of dependent capitalism. Our development will still be underdevelopment, because the colonial system is still operating, where the South feeds the North in an unequal relationship, not only with raw materials but also with precarious labor. So, what I believe is that the curriculum in digital education needs to focus on training technology creators, not just average users with common digital competencies and non-technical know-how. We don’t need only the competencies promoted, for example, in the UNESCO document on Global Citizenship Education.
We must have systematized, historically accumulated knowledge that enables critical thinking, because through this we will be able to criticize and transform reality. And I need to highlight the curriculum’s power. The curriculum in basic education is what a society projects of itself, and curricula produce ways of being and of interpreting the world. When a current problem of a region, or a need of a population group, is not included in the curriculum, that is also a political decision. If a technology curriculum does not address race, gender, and class, is that curriculum truly emancipatory and effective in changing the current scenario and combating online violence? Does it really achieve significant change in a way that impacts the collectivity, or will just one or another individual be able to ascend economically and socially, apart from their community? Does it empower people who are not recognized as knowledge producers, and whose very humanity has been denied to justify the subjugation of peoples in the colonial process, a process that ensured the exploitation of the Global South in every possible form and deepened North–South inequality? In addition, can we produce technology creators who will behave ethically if the curriculum does not proactively combat the diverse forms of discrimination? That said, I don’t think it’s possible for a single person, or even a single stakeholder group, to point out effective solutions to such a complex and profound issue. But I would like to use this space granted to me at the IGF to extend an invitation to those who are here, either in person or online, especially our colleagues from the Global South. How can we articulate an emancipatory digital technology curriculum that truly combats, rather than propagates, oppression? What are the needs of your country, your state, your city, and your community that should be addressed by educational policies? What are the latent concerns?
How do we encourage the different groups that are excluded to become technology producers? How can we organize this in a way that respects multiculturalism and the diversity of regions, and allows for global cooperation? Thank you so much.

Marko Paloski: Thank you very much for bringing the whole perspective. It was more than an answer; it went deep into explaining all the issues that are happening, and showed that it’s not just one issue, but that they are all connected. I would now give the floor to Ananda to give his opinion on this question.

Ananda Gautam: Am I ready? Okay. So, on this point, I think she has prompted a lot of thoughts, and coming late onto the panel means you can build on whatever has been shared already; the challenge is that I have to bring something new. My point would be this: as we have been discussing how the landscape has changed for young people and teens, with this changing landscape we are now also talking about digital detox. Some people don’t have access; some people have over-access, with so much screen time, the related health issues, and the kind of social interaction we are missing in most urban parts of the world. That is one thing. The other thing is that we need to leverage these same technologies to provide equal opportunities to people in remote locations. These are the two fundamental things that need to be covered and balanced. Children today are very tech-savvy; I think Mohammed said that they can find anything on a smartphone that I have never managed to. But they can also overuse it. What do we do? We get to the door of our own house and send a text message: can you open the door for me? We have forgotten to knock, or to call someone to open the door. If we need something, we text someone in the kitchen of our own home instead of talking with our families. At the dinner table we are looking at our smartphones, and when friends meet at a restaurant, everyone is looking at social media. So this is the kind of essence we are losing on one side; on the other side, there are people who still need to get access to these things.
So the balance would be to have the right mechanisms in place. I will share a case study. We were running the Youth IGF in Nepal, and we had a fellowship call with, I think, 150 applications for 15 spots. While reviewing the applications, we found that 90% of the applicants had used AI to draft them. And another interesting point: the 10% who didn’t use AI abstained not because they didn’t want to, but because they didn’t know how to. So sooner or later, it is not a question of whether we will use AI or generative AI; it is that we have to use it. I do it every day, but how we use it is very important. So we need to find ways to teach this. Someone was talking about integrating it into the school syllabus; yes, exactly. We need to integrate how people use digital technologies, along with the emerging technologies, so that they are aware. We also need research that I don’t think exists yet: what is the optimal time to spend with our gadgets? Because we use them from morning to night. If I describe my own routine, I wake up in the morning, get to my desk, and start using my laptop, and I think it goes on until midnight; that’s almost my routine. We need enough research in place to know the balance between having social interactions, spending time in real nature, and using these technologies to foster our work. Someone asked how these things can be done in a rural context. Maybe we can use these technologies to empower education and everything else in rural contexts, but first we need to know the nuances of how to do it. Otherwise, we will just be throwing technology at people and seeing what happens. I don’t know how popular TikTok is in other countries.
In Nepal, people started making money out of TikTok, and then some started throwing their clothes off, because that gave them money; they thought it was a good way to earn. People would go topless, and when they found out after a while that showing a nipple would get their account blocked, they simply covered their nipples and went topless on TikTok lives, because they thought this was a legitimate way to make money. So throwing technology at people without education can be disastrous, but we do need the technology in place, and we need to provide access. Along with that, empowerment is very important: we need to know the right set of skills people need before using a technology. I’ll stop here.

Marko Paloski: Thank you. Yes, thank you very much, Ananda, for pointing out several issues here, especially overconsumption and overuse of the internet while we still have places without even basic access to it. I will give the floor first to Denise and then come back to Ethan.

Denise Leal: Thank you. We are having some online participation, so I would like to mention it before speaking. We have a friend from Africa saying that we need to consider the UNESCO guidelines on the ethical use of AI as a possible answer, which is pretty interesting. I was having a dialogue in the online chat about how, in some countries, we don’t have regulations or even basic information on how to approach the use of AI and how to monitor and supervise it. So the discussion our online friends have been having is that the ethical use of AI is a major point to consider in education, but so is how we supervise its implementation, and it is a challenge to comply with the principles of internet safety. I wanted to bring that in because our friends here are telling us it’s important. And we have some questions I would like to address. A friend from the Bangladesh IGF, Charmaine, has asked: how can remote work opportunities be made accessible in underserved regions? There is another question, but I will take this one first. It is indeed a challenge, and it is important to discuss. We do have places across Latin America and the Caribbean that don’t have proper internet access, so we have some projects, some community networks, that bring the internet to these places. They also work on empowering communities, teaching them how to use the internet and the network itself, and it’s really interesting. But that shows us that we don’t only need to talk about use, this very futuristic thing of bringing AI and technology into schools; we also need to talk about bringing the internet to some places in the first place, and how we empower these people to properly use the internet, telephones, and mobile apps. Because it is very beautiful to say that we are implementing technology in our schools.
In Brazil, in some regions, we have the so-called schools of the future, named that way because they implement programming classes as part of the education, and this is impactful. But we also have the discussion of how we implement education on the very basic aspects of using the internet; we need to talk about internet literacy too. I heard some very important things about this topic at the LAC IGF this year. We had a session on indigenous peoples where a community from an organization called KIST told us that when it comes to internet education, we need to provide language-accessible internet education. So, when it comes to technology, we have to provide language-accessible education, so that people who come from different backgrounds, regions, and communities can also develop their own technologies and create data and information on the internet. I will stop here and just mention some questions we have in the chat so that the other speakers can address them if they want. One question: how can we ensure the ethical use of AI and data in both education and workforce management? That is also from the Bangladesh Women IGF. We have some comments too. Another question, from Gregory Duke: I’ve heard that regulations could affect innovation negatively, and I like this new revolution; what should be the considerations for youth leaders in policy input? And Sasha is asking: as youths, how do you envision AI being used in assessments in education? That’s it: a lot of questions, a lot of discussion. Now over to you. Thank you so much.

Marko Paloski: I will give the floor first to Ethan to answer the main question, and then we can jump into the questions that were asked. Maybe later we will open the floor to other questions from participants on-site and, of course, online.

Ethan Chung: Hey, okay, let me take this off. So first of all, before I start, I want you to keep in mind that AI is here not for us to rely on; AI should be assisting us so that we maximize its positive effect on human society. Here’s the thing: when we rely on AI, it leads to a lot of misbehavior, and that has negative effects. For example, from my own experience: last year I had a coding exam on a JavaScript system. What my friend did was open another website and use AI to generate a whole working program. We both submitted, but he failed and I passed, because his AI-generated code was written in Python, not JavaScript. From this we can tell that relying on AI can have negative effects: he did not process it in his own brain. So what we need to use AI for is to take the information it provides, process it ourselves, and make a summary of it. The education strategy I would want, so that I can adapt to these kinds of technologies, is project-based work. For example, when we do a project, the teacher should allow us to use AI instead of banning it, and by that I mean the teacher should guide us in how to use AI correctly. That way we learn how to process AI’s information, not merely rely on it to generate answers we copy, because if we just rely on AI, we cannot improve; the improvement of the whole society will stop there. So that is the kind of educational strategy I can offer to enhance the use of AI.

Marko Paloski: Yes. Thank you, Ethan. It’s a good point of view on how we use AI and what for: not just to rely on AI, but to use it to reach our maximum potential, letting it help so that we can spend our time on something else. I would now go to the two questions from online that Denise mentioned and partly answered, and after that open the floor on-site if there are questions, so we can proceed. The first question asked online was: how can remote work opportunities be made accessible in underserved regions? The second was: how can we ensure the ethical use of AI and data in both education and workforce management? We touched on some of this already. I see Ananda has the mic.

Ananda Gautam: Yeah, it is about the rural context, so I think I can talk about it. It starts with access, of course, because without access we cannot talk about anything else. Where there are no networks, there are community networks, the last-mile networks where a community manages its own network, and there are a lot of funding opportunities available for building community networks. There are also universal service funds available to bring access to underserved regions. That would be a start. But having access to the internet alone is not enough. As Denise mentioned, and I was about to mention as well, if you take the example of Nepal, only 75% of the people are literate, and if people don’t have enough literacy to access content, there can be a language barrier; in Nepal alone more than 120 languages are spoken. So after we bring access to the internet, providing meaningful access is equally challenging and equally important. That’s why TikTok thrives: you don’t need much knowledge, you just record yourself in the language you know and post it, and people don’t need to do anything but swipe to get the content they want. That is how TikTok became so popular in Nepal. So those kinds of platforms can sometimes be essential for maintaining access to meaningful content, but we need the right balance in what content is shared on them. Another thing, which I think somebody asked in the questions as well: what is the right balance against over-regulation, and how can we have ethical AI? There might be three instruments: legislation, policy, or a code of conduct.
Among these three, I think the code of conduct is very important, because the digital world is just a mirror of the real world we live in. As Umut mentioned some days back, we have biases in society, data are made by society, and biased data lead to biased decisions by AI systems, which is obvious. Similarly, with how we use technology, the misconduct we commit in society is reflected in the digital world. So we need good codes of conduct so that we use these things decently, and the thin line between decency and indecency varies from place to place; it is very contextual and very local, but we need to identify what is decent and what is indecent. These are very basic, fundamental things. Of course we also need legislation and policies that promote these technologies and provide access to people who are underserved, because there is no profit in the rural context, but it is not always about profit. There are ways to do it, and many agencies are doing it; community networks are very good examples that have been thriving in Africa and other regions. I think we could discuss this all day, but I’ll end it here. Thank you.

Marko Paloski: Thank you very much, especially for pointing out how we can make use of these technologies. I would now give the floor to Umut, because he can also speak to these questions.

Umut Pajaro: Well, there are two questions that caught my attention. One is how we can ensure the ethical use of AI in education and the workforce. My answer to that may be quite obvious: it is to involve the stakeholders who are actually part of the education system, and that means including even students in making the rules, the key rules for using these kinds of tools in education, as clear as possible. But those rules cannot remain only recommendations or best practices or something like that; we also need governments to start regulating the use of these technologies inside educational institutions, because the two can complement each other, and that is how we can enforce, and make accountable, the ethical guidelines we put into place through the discussions and consensus we reach as different stakeholders. In this case I include the students as a stakeholder group, which means including teens, children, everyone who is part of the educational process. The other question is about innovation, and my answer is that regulation doesn’t actually cut off innovation. If you make good regulation based on human rights, the innovations that respect those laws are going to last longer. Innovation made within a framework that respects human rights, and the characteristics and context of where those laws came from, makes both the innovation and the law last in time, because they actually respond to what the people need.
And it respects what the people need and what they want from society. So yeah, that’s pretty much what I wanted to say.

Marko Paloski: Thank you very much, Umut. I would now ask the audience here if they have questions. I know there are a lot of questions online, but let’s first see if there are any here. We have one question; I will try to use this mic.

Nirvana Lima: Good morning, everyone. My name is Nirvana Lima. I’m a participant of the Youth Programme Brazil; I work as a facilitator, and I’m here with the delegation. First, I would like to thank you for your panel; congratulations to all of you. I live in an indigenous territory in Brazil composed of 29 towns, in Ceará state. One of these towns is called Brejo Santo, and sited there is the third official indigenous school of Brazil. They have a curriculum based on their ancestral traditions, philosophies, and culture, and until now they have had no access to the internet. I would like to ask you about the positive and negative aspects they may face when they get connected to the internet in the very near future. Thank you.

Marko Paloski: I would ask any of the participants, also online, if they want to answer this question. Or shall we take the second one as well? Okay, you can. Okay, Umut, take this one, and we’re going to get to both.

Audience: Okay, thank you so much. My name is Mariana; I’m from Youth Brazil also. Thank you for the panel. I think everything you said is really important, but my question relates to the curriculum that was mentioned and to some points about AI. I’ll be direct with my question. Given the challenges faced by the Global South, such as the digital exclusion of many young people due to unequal access to the internet and technology, as well as the biases built into technology systems: what public policies do you think can be implemented to ensure both the technological inclusion of these young people and high-quality technology education, while ethically protecting them as individuals in a labor market increasingly driven by the technology skills on their CVs? We need to understand that, like English right now, technology skills are something you have to have to insert yourself in the market. So how do you think we can build these public policies for the future? Thank you so much.

Marko Paloski: Thank you. I will now give the floor to Umut, because he wanted to answer the first question.

Umut Pajaro: Yes. When it comes to indigenous populations wanting access to technology, one of the things I have learned in recent years is that they have to decide what they actually want to access on the internet, and how they want to access it. They are going to face a lot of challenges, as far as I can see, and until now probably the main one is the language barrier; Denise already mentioned that earlier. There are a couple of cases in the Latin American region where indigenous people are actually using the internet to protect their languages. For example, in the north of Colombia, we have an indigenous people called the Wayuu. They are trying to protect their language by creating a Wikipedia in it, and in a second stage they are trying not only to have Wikipedia in their language but also to create specific capacity building on cybersecurity and other topics in that language. That is probably the next stage: not only recognizing that they are going to face the same problems as everyone else on the internet, but addressing them in their own language and respecting the way they want to be involved in the internet, because that is the other aspect at stake here. Yeah.

Marko Paloski: Thank you very much, Umut, for answering the question. I would now ask which of the speakers would like to take the second question, or perhaps add something on the first one.

Mohammed Kamran: Okay, so I think the second question was how we can regulate toward the positive use of AI and all such things. I will go with what Ananda said, that it should come from the government, from the parliament. And I would like to add that it shouldn’t be a taboo anymore; let’s talk openly about what is right and what is wrong. Our parents tell us this is right and this is wrong, but something in our heads says don’t talk about these things, so we do not tell our youth, for example, not to go toward nudity. As he mentioned, on a lot of social media the youth are going that way in order to make more money, because they get more views and a lot of money for it; it’s just a small example. Hacking, for example, is nowadays so prevalent among us. Sorry for straying from the topic, but these are the problems we are facing, so let’s not turn a deaf ear to them; let’s discuss them. But before that, it should come from the government. In Pakistan, for example, we have PECA, the Prevention of Electronic Crimes Act, promulgated in 2016, but it only covers the prevention of electronic crimes like cybercrime. We should have acts and regulations from parliament that very specifically cover, not hacking, but things like nudity; and again, nudity is just a small example of the problems we are facing on social media, there are a lot more than that. All of them should be pointed out and legislated by the government, for the public, and the government should enforce those acts. Only then, I think, will we move to the fully positive use of AI and see the positive impact of AI and all this technology.
So yeah, that’s from my side. Thank you.

Marko Paloski: Thank you very much.

Audience: I think the first step to understanding the needs, or what we need to combat, is to research the needs of localities, states, and even countries, so that we know what we are facing. We need to do the research, ask local communities and people, so that there is interaction between government, the private sector, and civil society. That's the first step, and then we can face these questions. I don't think education alone is enough, but education is a really important step. We need to address gender issues, race issues, class issues, and any other type of violence in the curriculum of digital education. So I don't think it's the only way, but I think it's really important.

Marko Paloski: Thank you. Thank you. I will now go online to Denise to see if there are any questions or comments to raise.

Denise Leal: Yes, there are a lot of questions and discussions here in the chat. There is also a participant who wants to speak, but first I will address an important topic we've been discussing here. We were asked about regulation and innovation and what our considerations for policy input should be, and we've started a discussion on how important it is to improve our regulations. We have very old innovation policies, and our intellectual property laws are not really adapted to the reality of the digital era. So we have to improve our regulations. We also need, and this is another comment I've made here and in other spaces, the United Nations spaces to connect with each other. We have the intellectual property spaces, and the biodiversity and climate change spaces, and they are all creating treaties and shaping regulations. They are all having discussions on AI, the internet, and other topics that encompass the digital era and digitalization. But why don't we have these people and these United Nations spaces also here at the IGF, or at the United Nations Data Forum, for example, when the themes are related? There is a mechanism being developed to regulate DSI, digital sequence information, but it's being developed in the UN biodiversity forum and is not connected to other spaces. How can that be, when it will impact the internet and data specifically? So for me, the point is that we need regulations to improve, to guarantee the safe use of the internet and innovations that are inclusive. And these internet regulations and the other regulations need to be connected; these spaces need to talk to each other.
And a very important aspect is that we cannot create regulation about the internet and technology without hearing the technical community, because we would create a law without effective impact. We might say you have to supervise something, but if we don't say how and why, without hearing the technical community, it will probably not be a good regulation with effective results. But I see some comments here, and I would ask our IGF support person in the chat if we can open Sasha's mic so she can speak. She wants to make a comment. May I unmute her?

Marko Paloski: You will see now. I’m sorry.

Denise Leal: No?

Marko Paloski: I would check now, but I think let me just.

Denise Leal: Okay, so.

Marko Paloski: Now she should be able to unmute herself.

Denise Leal: Yes, thank you. We can see you.

Audience: Good day, everyone. A very lovely presentation and session so far, and I'm definitely glad to be here. If you can hear the accent in my voice, I'm from Trinidad and Tobago in the Caribbean. It has been such a riveting conversation, and I have just one or two comments based on it. When we consider inclusive education and inclusive design, we also consider, within the education spectrum, how we design technology to fit the user. The user, the student, is at the center of the process. It hasn't always been that way, but that is definitely the direction in which we hope to take adaptive and inclusive technologies: designing with the person in mind. Just like the chairs you sit on, when you think about human-centered design, that's really the direction in which education and educators hope to take technology and AI. And that's the problem in itself: you have to be able to define the limitations of AI technology, and it's such a fast-moving field that it's difficult to pinpoint. So it's also important to consider, before we even put it into the curriculum to teach students, how we regulate it, how we guide that practice and move from governing policies to practices, to curriculum, to school content, and then to the classroom environment and the way we implement it in project- or problem-based learning. It's a very complicated issue we're facing right now, and it's within my field of study here at the University of Prince Edward Island. So it's very, very complicated to really begin that process. And I would love to hear more of your perspectives on that particular space and what you have been experiencing. From the point of view of Indigenous culture as well: in the transfer from physical space to digital space, what is being lost in that process?
So many parts of their culture would be lost in that change from physical identity to digital identity, and you lose yourself in that process of digital exchange. So definitely, please, give me some more of your perspectives, from your own education, on how you see AI being used in the most proficient way when we can't really define its parameters at this time. It's such a fast-changing space. Thank you.

Marko Paloski: Thank you, Sasha. Whoever wants to answer this question, please keep it to one minute maximum, because we are close to the end of our session. So, okay.

Mohammed Kamran: Hello. Yeah. I would just add to the last point, about how we see the use of AI when there are no regulations, nothing on the floor, and yet we are using AI. As I mentioned at the very start of the session, all of us are confused: the teachers, the parents, the students. We are all confused about how to make use of it. Some say it's good to use; I keep giving the example of teachers because we are here to talk about the youth, right, but it is not specific to teachers. Others are here only to stop us from using it. So I think everyone is confused, and I myself am unable to answer the question of how we are doing with this, because we have nothing on the floor yet, no real regulations. We have traffic rules, we have general cybercrime rules and regulations, but nothing that pinpoints this very issue. So I think, using our humanity, the limits of our humanity, we should only use it in positive ways. We should teach our children, our youngsters, the positivity in everything, the way we bring positivity into our lives; we should do the same with this issue as well. So, thank you. That's all from my side.

Marko Paloski: Thank you. Because we are coming close to our time, I will give every speaker a maximum of one minute to wrap up: what are the takeaways or action points that everybody should take into consideration, or actionable things we should do, maybe for the next IGF or even sooner. So yeah, who would like to go? One point maximum, just one minute. Sorry.

Ananda Gautam: The major one would be, it is a common one, but it is collaborative action. It is not something one stakeholder can do alone; there is a role to play for every stakeholder. But navigating the landscape can be complex for people, so we need to make it easy. People don't know how to take part in some places, so we need to raise awareness about how internet governance works and how these opportunities can be taken up. And I think we need transparency in terms of the information available on how to do these things. I'll stop here for the sake of time. Thank you. Thank you so much, everyone.

Marko Paloski: I think you should maybe turn on.

Denise Leal: I think at the next IGF we should have more debates about education, more debates that specifically invite teachers and educators who want to work with the technical community, and we should try to analyze more the public policies of each country and each continent, and have more of those debates, I think.

Marko Paloski: OK, thank you. It’s OK. Yeah.

Ethan Chung: All right, so for the next IGF, first of all, I think I won't be here, because I have to go for an exam. But I'll try my best to contribute. And I think for the next IGF, we are all here together from different nations and different races to actually create a standard so people know what we are doing. And we want people to follow that standard so they know what's suitable and what's not. I think that's what the IGF is doing now and what it is made for. Yes.

Mohammed Kamran: So, being the last speaker on-site, sorry, we're just confused between online and on-site. Being the last speaker on-site, I think it's a responsibility for me, and I don't think I'll be able to wrap up as this deserves. But sticking to the topic, education should be more experience-based. For example, in school or college, teachers should test students on their experience. If they assign an essay on, let's say, Riyadh, of course students are going to use AI for that, or the internet. So the assignment should be more experience-based: how was your experience in Riyadh? Of course, the AI doesn't know about that. Also, ask about the meaning of something to them, say, what the internet means to them, so they contribute something of their own. Beyond that, teachers can ask about the application of something: if we have this mic, what do you think we can use it for? Then students will put in a lot of effort from their own minds and hearts. And being humans, I think we should also ask them about their feelings: what do you feel about Saudi Arabia? What do you feel about IGF 2024 here in Riyadh? I'm sure each one of us would write maybe a 500-page essay on that. So yes, I think our education should focus more on ourselves, instead of just a stereotypical style. Thank you so much for having us. Thank you, Marko. Thank you.

Marko Paloski: Thank you. I would just give a brief one minute to Umut.

Audience: Denise, maybe shorter. Well, in my case, it's going to be fast: include educators and students from all stages in designing, implementing, and developing the technologies that are actually affecting daily lives.

Denise Leal: Thanks, everyone, for being here. It was very interesting. I wanted to thank everyone, and especially to say that I really enjoyed hearing people from the Caribbean region. I am glad for it, because we in Latin America usually lack this contact, so it's important for us to hear from the Caribbean. I also wanted to point out that this session was conceived and created with the DTC, the Dynamic Teen Coalition. Unfortunately, they are not here; they need to receive more support in the coming years to be at the IGF. But I wanted to thank Ethan for being here representing the teens, because he is a teen with us, a very brilliant and intelligent one, so you don't even notice he's so young. And I really appreciated our debate. Thanks to everyone who joined: our speakers, not only from the Youth Coalition but also from Pakistan and the projects there, and from the Brazilian youth program and other programs. Thanks, everyone, for joining us, and we count on you for our next activities. We also want to remind you that it is our election registration period at the Youth Coalition, so if you are part of our mailing list, please register yourself. That's it.

Marko Paloski: Thank you, Denise. And I want to thank everybody: the panelists, the audience here and online. Thank you for coming and joining this discussion. Before we wrap up, let's maybe take a group picture while you are still on screen. So yeah, thank you, everyone.

Ananda Gautam

Speech speed

147 words per minute

Speech length

2176 words

Speech time

886 seconds

Digital divide and unequal access to technology

Explanation

Ananda Gautam highlights the existing gaps in digital access and literacy between the global north and south. He points out that there are multiple divides including literacy gaps, digital gaps, and AI gaps.

Evidence

Gautam mentions that according to UNESCO, around 3.6 billion people worldwide still lack reliable internet access.

Major Discussion Point

Challenges of AI and technology in education

Agreed with

Mohammed Kamran

Umut Pajaro

Ethan Chung

Agreed on

Need for digital literacy and proper use of AI tools

Balancing technology use with social interaction

Explanation

Gautam discusses the need to balance the use of technology with real-world social interactions. He points out the importance of teaching proper utilization of technology while maintaining human connections.

Evidence

He gives examples of people texting family members in the same house instead of talking, and friends looking at social media when meeting at restaurants.

Major Discussion Point

Ensuring ethical and inclusive use of technology

Mohammed Kamran

Speech speed

165 words per minute

Speech length

2163 words

Speech time

784 seconds

Need for digital literacy and proper use of AI tools

Explanation

Mohammed Kamran emphasizes the importance of teaching students and adults how to use AI tools properly. He argues that there’s a need for regulations and education on the limits and ethical use of AI.

Evidence

Kamran gives an example of teachers detecting AI-generated homework and not knowing how to handle it.

Major Discussion Point

Challenges of AI and technology in education

Agreed with

Ananda Gautam

Umut Pajaro

Ethan Chung

Agreed on

Need for digital literacy and proper use of AI tools

Need for government regulations and policies

Explanation

Kamran argues for the need for government regulations and policies to guide the ethical use of AI and technology. He suggests that these regulations should be specific and address various issues related to technology use.

Evidence

He mentions Pakistan’s Prevention of Electronic Crimes Act 2016 as an example, but notes that more specific regulations are needed.

Major Discussion Point

Ensuring ethical and inclusive use of technology

Differed with

Umut Pajaro

Differed on

Approach to AI regulation in education

Umut Pajaro

Speech speed

123 words per minute

Speech length

1333 words

Speech time

647 seconds

Importance of hands-on experience with AI in classrooms

Explanation

Umut Pajaro advocates for incorporating AI tools into classroom learning. He suggests that students should be taught how to use these tools effectively and understand their limitations through practical experience.

Evidence

Pajaro shares his experience of using AI tools in his classroom and testing them with students to demonstrate their limits.

Major Discussion Point

Challenges of AI and technology in education

Agreed with

Ananda Gautam

Mohammed Kamran

Ethan Chung

Agreed on

Need for digital literacy and proper use of AI tools

Need for interdisciplinary and project-based learning approaches

Explanation

Pajaro argues for the implementation of interdisciplinary and project-based learning approaches in education. He believes this provides a more realistic understanding of how technologies can be used in real-world situations.

Major Discussion Point

Adapting education systems to emerging technologies

Agreed with

Marcela Canto

Denise Leal

Agreed on

Adapting education systems to emerging technologies

Importance of teaching critical thinking and problem-solving skills

Explanation

Pajaro emphasizes the need to develop critical thinking and problem-solving skills in students. He suggests using AI tools to enhance these skills and prepare students for future challenges.

Major Discussion Point

Adapting education systems to emerging technologies

Agreed with

Marcela Canto

Denise Leal

Agreed on

Adapting education systems to emerging technologies

Importance of stakeholder involvement in creating guidelines

Explanation

Pajaro stresses the importance of involving all stakeholders, including students, in creating guidelines for AI use in education. He argues that this inclusive approach will lead to more effective and accountable ethical guidelines.

Major Discussion Point

Ensuring ethical and inclusive use of technology

Differed with

Mohammed Kamran

Differed on

Approach to AI regulation in education

Addressing language barriers for indigenous populations

Explanation

Pajaro discusses the challenges faced by indigenous populations in accessing digital content due to language barriers. He emphasizes the need for content in indigenous languages to ensure inclusive access to technology.

Evidence

He gives an example of indigenous people in Colombia creating Wikipedia in their language to protect and promote it.

Major Discussion Point

Ensuring ethical and inclusive use of technology

Rapid advancement of AI is redefining needed skills

Explanation

Pajaro points out that the rapid advancement of AI and other technologies is changing the skills required in the modern workplace. He argues that educational institutions must adapt to these changes to prepare the future workforce adequately.

Major Discussion Point

Future of work and education

Marcela Canto

Speech speed

135 words per minute

Speech length

781 words

Speech time

345 seconds

Curriculum should address social issues like race and gender

Explanation

Marcela Canto argues that digital education curricula should address social issues such as race, gender, and class. She emphasizes that these issues are crucial for creating an emancipatory and effective educational system.

Major Discussion Point

Adapting education systems to emerging technologies

Agreed with

Umut Pajaro

Denise Leal

Agreed on

Adapting education systems to emerging technologies

Differed with

Ethan Chung

Differed on

Focus of digital education curriculum

Focus on training technology creators, not just users

Explanation

Canto emphasizes the need for education systems in the Global South to focus on training technology creators, not just users. She argues that this approach is necessary to combat inequality and change the current scenario of technological dependence.

Major Discussion Point

Adapting education systems to emerging technologies

Agreed with

Umut Pajaro

Denise Leal

Agreed on

Adapting education systems to emerging technologies

Ethan Chung

Speech speed

168 words per minute

Speech length

663 words

Speech time

236 seconds

Lack of regulations and ethical guidelines for AI use in education

Explanation

Ethan Chung points out the lack of clear regulations and ethical guidelines for AI use in education. He emphasizes the need for education on how to use AI correctly and within legal boundaries.

Evidence

Chung gives an example of a friend who failed an exam by using AI-generated code without understanding it.

Major Discussion Point

Challenges of AI and technology in education

Agreed with

Ananda Gautam

Mohammed Kamran

Umut Pajaro

Agreed on

Need for digital literacy and proper use of AI tools

Differed with

Marcela Canto

Differed on

Focus of digital education curriculum

Marko Paloski

Speech speed

155 words per minute

Speech length

1908 words

Speech time

737 seconds

Risk of job losses due to automation

Explanation

Marko Paloski highlights the potential risk of job losses due to automation. He points out that a significant portion of the global workforce could be affected by this trend in the near future.

Evidence

Paloski cites a report by the McKinsey Global Institute suggesting that by 2030, up to 800 million jobs worldwide could be lost to automation, representing one-fifth of the global workforce.

Major Discussion Point

Future of work and education

Denise Leal

Speech speed

132 words per minute

Speech length

1732 words

Speech time

785 seconds

Need for lifelong learning and adaptability

Explanation

Denise Leal emphasizes the importance of lifelong learning and adaptability in the face of rapidly changing technology. She argues that traditional education approaches are no longer sufficient and that innovative thinking is needed.

Major Discussion Point

Future of work and education

Agreed with

Umut Pajaro

Marcela Canto

Agreed on

Adapting education systems to emerging technologies

Audience

Speech speed

138 words per minute

Speech length

791 words

Speech time

342 seconds

Importance of human-centered design in educational technology

Explanation

An audience member emphasizes the importance of human-centered design in educational technology. They argue that technology should be designed to fit the user, with students at the center of the process.

Evidence

The speaker draws a parallel with the design of chairs, highlighting the need for human-centered design in technology.

Major Discussion Point

Future of work and education

Agreements

Agreement Points

Need for digital literacy and proper use of AI tools

Ananda Gautam

Mohammed Kamran

Umut Pajaro

Ethan Chung

Digital divide and unequal access to technology

Need for digital literacy and proper use of AI tools

Importance of hands-on experience with AI in classrooms

Lack of regulations and ethical guidelines for AI use in education

Speakers agreed on the importance of teaching students and adults how to use AI tools properly, addressing the digital divide, and providing hands-on experience with AI in classrooms.

Adapting education systems to emerging technologies

Umut Pajaro

Marcela Canto

Denise Leal

Need for interdisciplinary and project-based learning approaches

Importance of teaching critical thinking and problem-solving skills

Curriculum should address social issues like race and gender

Focus on training technology creators, not just users

Need for lifelong learning and adaptability

Speakers agreed on the need to adapt education systems to include interdisciplinary approaches, critical thinking skills, and addressing social issues, while focusing on creating technology creators and promoting lifelong learning.

Similar Viewpoints

Both speakers emphasized the importance of creating guidelines and regulations for AI use in education, with Kamran focusing on government involvement and Pajaro stressing the inclusion of all stakeholders, including students.

Mohammed Kamran

Umut Pajaro

Need for government regulations and policies

Importance of stakeholder involvement in creating guidelines

Both speakers highlighted the importance of considering cultural and social aspects when implementing technology in education, with Gautam focusing on maintaining human connections and Pajaro addressing language barriers for indigenous populations.

Ananda Gautam

Umut Pajaro

Balancing technology use with social interaction

Addressing language barriers for indigenous populations

Unexpected Consensus

Importance of human-centered design in educational technology

Audience

Umut Pajaro

Ethan Chung

Importance of human-centered design in educational technology

Importance of hands-on experience with AI in classrooms

Lack of regulations and ethical guidelines for AI use in education

There was an unexpected consensus on the importance of human-centered design in educational technology, with speakers from different backgrounds agreeing on the need to prioritize user experience and ethical considerations in AI implementation.

Overall Assessment

Summary

The main areas of agreement included the need for digital literacy, adapting education systems to emerging technologies, creating regulations and guidelines for AI use, and considering cultural and social aspects in technology implementation.

Consensus level

There was a moderate to high level of consensus among the speakers on the main issues discussed. This consensus suggests a shared understanding of the challenges and potential solutions in integrating AI and emerging technologies into education systems. The implications of this consensus include the potential for collaborative efforts in developing educational strategies and policies that address the identified challenges and opportunities.

Differences

Different Viewpoints

Approach to AI regulation in education

Mohammed Kamran

Umut Pajaro

Need for government regulations and policies

Importance of stakeholder involvement in creating guidelines

While Kamran emphasizes the need for government regulations to guide AI use, Pajaro advocates for a more inclusive approach involving all stakeholders, including students, in creating guidelines.

Focus of digital education curriculum

Marcela Canto

Ethan Chung

Curriculum should address social issues like race and gender

Lack of regulations and ethical guidelines for AI use in education

Canto argues for a curriculum that addresses broader social issues, while Chung focuses more specifically on the need for education on ethical AI use within legal boundaries.

Unexpected Differences

Perspective on AI’s role in education

Mohammed Kamran

Ethan Chung

Need for digital literacy and proper use of AI tools

Lack of regulations and ethical guidelines for AI use in education

While both speakers discuss AI in education, Kamran unexpectedly takes a more positive stance, viewing AI as a tool to be adapted to, while Chung focuses more on the potential risks and need for strict guidelines.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to AI regulation in education, the focus of digital education curricula, and the balance between embracing AI and setting boundaries for its use.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of addressing AI in education, speakers differ in their specific approaches and priorities. These differences reflect the complexity of integrating AI into education systems and highlight the need for multifaceted solutions that address various concerns including regulation, curriculum design, and ethical considerations.

Partial Agreements

All speakers agree on the need for proper education on AI use, but they differ in their approaches. Gautam emphasizes balancing technology with social interaction, Kamran focuses on teaching limits and ethical use, while Pajaro advocates for hands-on experience in classrooms.

Ananda Gautam

Mohammed Kamran

Umut Pajaro

Need for digital literacy and proper use of AI tools

Importance of hands-on experience with AI in classrooms

Balancing technology use with social interaction

Takeaways

Key Takeaways

There is a need to adapt education systems to emerging technologies like AI, blockchain, and robotics

Digital divide and unequal access to technology remain major challenges, especially in developing countries

Ethical use of AI and data in education and workforce management is a key concern

Interdisciplinary, project-based learning approaches are needed to prepare students for future technological landscapes

Curriculum should address social issues like race, gender, and class alongside technical skills

Balancing technology use with social interaction and critical thinking skills is important

Regulations and policies need to be updated to address emerging technologies while fostering innovation

Resolutions and Action Items

Include more debates about education and involve educators in future IGF discussions

Analyze public policies on education and technology across different countries and continents

Create standards for appropriate use of AI and technology in education

Involve educators and students from all stages in designing and implementing educational technologies

Unresolved Issues

How to effectively regulate AI use in education when the technology is rapidly evolving

How to ensure ethical use of AI and data in both education and workforce management

How to make remote work opportunities accessible in underserved regions

How to balance innovation with regulation in technology and education policies

How to address the potential loss of indigenous cultures in the transition to digital spaces

Suggested Compromises

Use AI as a tool to assist learning rather than relying on it completely

Implement project-based assessments that require original thought alongside AI-assisted research

Develop code of conduct guidelines for AI use in education alongside formal regulations

Balance technology integration with preservation of traditional teaching methods and social interaction

Thought Provoking Comments

We are now experiencing a new configuration of colonialism. Migration and international division of labor continues, but in a new guise. While the global north has large technology companies that employ CEOs and software engineers, we in the south are left with the worst jobs, whether it’s mining for processor production or removing objectable content from the internet.

speaker

Marcela Canto

reason

This comment provides a critical perspective on how technological advancement is perpetuating global inequalities, challenging the notion that technology inherently leads to progress for all.

impact

It shifted the conversation to consider the broader socioeconomic implications of technological advancement and the need for more equitable development.

We need to have mechanisms, right set up, like I will share a case study for you. We were doing a USIGF in Nepal, we had like a fellowship calls for that, and I think there were 150 applications for 15 spots. And then while going through the reviewing the application, we found out that 90% of the applicants use AI to draft their applications.

speaker

Ananda Gautam

reason

This real-world example illustrates the pervasive use of AI in unexpected areas and raises questions about authenticity and fairness in competitive processes.

impact

It led to a discussion about the need for guidelines on ethical AI use and the importance of teaching critical thinking skills alongside technological skills.

We need to integrate how people use digital technologies along with the emerging technologies, so that they are aware. And also, we need to teach them. We need to know (and I think there is not much research done here) what is the optimal time for them to be spending on their gadgets.

speaker

Ananda Gautam

reason

This comment highlights the need for a holistic approach to digital education that goes beyond just teaching technical skills to include digital wellness and time management.

impact

It broadened the discussion to include considerations of digital well-being and the importance of balancing technology use with other aspects of life.

When it comes to indigenous populations, when they want to have access to technology, one of the things that I have learned in recent years is that they have to decide what they actually want to access on the internet and how they want to access it.

speaker

Umut Pajaro

reason

This insight emphasizes the importance of cultural sensitivity and self-determination in technology adoption, particularly for indigenous communities.

impact

It led to a discussion about the need for culturally appropriate approaches to digital inclusion and the preservation of indigenous languages and knowledge in the digital space.

Overall Assessment

These key comments shaped the discussion by broadening its scope from purely technical considerations to include critical perspectives on global inequality, ethical concerns, digital well-being, and cultural sensitivity. They challenged participants to think more holistically about the societal impacts of technology and the need for inclusive, equitable approaches to digital education and development. The discussion evolved from a focus on skills and access to a more nuanced exploration of the complex interplay between technology, society, and culture.

Follow-up Questions

How can we articulate an emancipatory digital technology curriculum that truly combats, rather than propagates, oppression?

speaker

Marcela Canto

explanation

This question addresses the need for a curriculum that tackles discrimination and promotes equality in digital education, especially in the Global South context.

What are the needs of your country, state, city, and community that should be addressed by educational policies?

speaker

Marcela Canto

explanation

This highlights the importance of understanding local contexts when developing educational policies for digital literacy and technology.

How do we encourage different groups that are excluded to become technology producers?

speaker

Marcela Canto

explanation

This question addresses the need for inclusivity in technology production, especially for marginalized groups.

How can we organize in a way that respects multiculturalism and the diversity of regions, while allowing for global cooperation?

speaker

Marcela Canto

explanation

This question explores how to create inclusive and globally cooperative approaches to digital education and technology development.

How can remote work opportunities be made accessible in underserved regions?

speaker

Charmaine (online participant)

explanation

This question addresses the need to extend digital work opportunities to areas with limited access to technology and internet.

How can we ensure ethical use of AI and data in both education and workforce management?

speaker

Bangladesh Women IGF (online participant)

explanation

This question highlights the need for ethical guidelines in the use of AI and data across educational and professional contexts.

What considerations from youth leaders should be included in policy input regarding regulations and innovation?

speaker

Gregory Duke (online participant)

explanation

This question explores how youth perspectives can be incorporated into policy-making for technology regulation and innovation.

As youths, how do you envision AI to be used in assessments in education?

speaker

Sasha (online participant)

explanation

This question addresses the potential applications and implications of AI in educational assessment from a youth perspective.

What are the positive and negative aspects that indigenous communities can face when they are connected to the internet in the very near future?

speaker

Nirvana Lima (audience member)

explanation

This question explores the potential impacts of internet access on indigenous communities, considering both benefits and challenges.

What public policies can be implemented to ensure both technology inclusion of young people and high-quality technology education, while ethically protecting these young people in a labor market increasingly driven by technology skills?

speaker

Mariana (audience member)

explanation

This question addresses the need for comprehensive policies that promote digital inclusion, education, and ethical protection for youth in the evolving job market.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #37 Her Data, Her Policies: Towards a Gender Inclusive Data Future


Session at a Glance

Summary

This discussion focused on creating gender-inclusive data policies and a more equitable data future in Africa. Panelists from various sectors explored the opportunities and challenges in achieving this goal. Key points included the need for representative data collection that considers intersecting identities, addressing biases in algorithms and data sets, and ensuring data privacy and security. Participants emphasized the importance of involving diverse communities, especially women and youth, in designing and implementing data initiatives.

The discussion highlighted the role of governments in developing inclusive policies, raising awareness about data rights and risks, and collaborating with multiple stakeholders. Tech companies were urged to prioritize inclusivity in product design and stakeholder engagement. The importance of capacity building, digital literacy, and education was stressed as crucial for empowering marginalized groups to understand and protect their data rights.

Challenges discussed included the implementation gap between policy creation and execution, data interoperability issues within Africa, and the need for greater transparency in data practices. Panelists agreed that progress is being made, with many African countries developing data protection frameworks, but emphasized that continued efforts are needed to build trust and improve policy communication.

The discussion concluded with calls for ongoing collaboration, education, and skill development to create a more inclusive data future in Africa. Participants recognized that while significant strides have been made, achieving a truly gender-inclusive data ecosystem requires sustained effort and engagement from all stakeholders.

Key Points

Major discussion points:

– The importance of gender-inclusive data policies and practices in Africa

– Challenges and opportunities in implementing data protection laws and raising awareness

– The role of different stakeholders (government, tech companies, youth, civil society) in shaping inclusive data governance

– Strategies for engaging communities and building trust around data issues

– The need for collaboration, education, and transparency in data policy implementation

The overall purpose of the discussion was to explore how to create more gender-inclusive data policies and practices in Africa, with a focus on engaging different stakeholders and addressing challenges in implementation and awareness.

The tone of the discussion was generally constructive and solution-oriented. Panelists shared insights from their various perspectives and experiences. There was an emphasis on the progress being made, while also acknowledging ongoing challenges. The tone became more urgent when discussing the need for youth involvement and practical implementation of policies. Overall, the conversation maintained a hopeful outlook on achieving more inclusive data governance in Africa.

Speakers

– Christelle Onana: Senior Policy Analyst and lead of the digitalization unit at the African Union Development Agency

– Catherine Muya: Online moderator

– Suzanne El Akabaoui: Advisor to the ICT minister in Egypt for data governance

– Victor Asila: Data manager and lead data scientist at Safaricom (a telecommunications company in Kenya)

– Emilar Gandhi: Global Head of Stakeholder Engagement and Policy Development at META

– Bonnita Nyamwire: Head of the Research department at Pollicy Uganda

– Osei Keja: IGF Riyadh representative, public interest technologist

Additional speakers:

– Melody: Audience member

– Chris Odu: Audience member from Nigeria

– Peter King: Representative of Liberia Internet Governance Forum

Full session report

Gender-Inclusive Data Policies in Africa: Challenges and Opportunities

This discussion explored the creation of gender-inclusive data policies and a more equitable data future in Africa. Panelists from government, technology companies, and civil society organisations examined the opportunities and challenges in achieving this goal.

Key Themes and Insights

1. Importance of Gender-Inclusive Data

Bonnita Nyamwire emphasised that truly inclusive data should represent all genders and their intersecting identities, including factors such as race, ethnicity, age, educational level, socioeconomic status, and geographical location. This comprehensive definition set the tone for a nuanced discussion about the complexity of achieving inclusive data practices.

Speakers highlighted the need to identify and address biases in data collection, algorithms, and technology design. Victor Asila from Safaricom specifically mentioned the importance of algorithmic audits to prevent bias.

2. Strategies for Achieving Gender-Inclusive Data

Panelists proposed various strategies to achieve more inclusive data practices:

a) Capacity Building and Education: Nyamwire advocated for transforming data collection processes through capacity building, while Suzanne El Akabaoui, ICT advisor to the Egyptian ICT minister, emphasised the need for broader digital literacy initiatives.

b) Community Engagement: Nyamwire stressed the importance of involving diverse communities in designing data initiatives. Emilar Gandhi from Meta echoed this sentiment, highlighting the value of stakeholder engagement and trust-building.

c) Transparency and Accountability: El Akabaoui emphasised the need for transparency and accountability in data practices, as well as the implementation of privacy-enhancing technologies.

d) Sharing Best Practices: Nyamwire suggested sharing good practices on collecting and reporting gender data across different regions and sectors.

3. Role of Technology Companies

Emilar Gandhi from Meta outlined several responsibilities for technology companies:

a) Ensuring inclusivity by design in products and policies

b) Hiring people from underrepresented groups

c) Engaging with stakeholders and building trust

Gandhi also highlighted Meta’s initiatives to support youth involvement, including their trusted partner program and efforts to engage with civil society organizations and academia.

4. Youth Involvement in Data Governance

Osei Keja, the IGF Riyadh representative, raised critical points about youth involvement:

a) Youth are often left out of policy conception and implementation

b) There’s a need for a shared vision and continuous learning to ensure youth buy-in from the beginning of policy development

c) Young people face challenges in accessing decision-making spaces and having their voices heard

d) The importance of creating opportunities for youth to participate in policy discussions and implementation

Christelle Onana from the African Union Development Agency also emphasised the value of youth perspectives in policy discussions, indicating broad agreement on this issue.

5. Government Roles and Responsibilities

Suzanne El Akabaoui outlined several key responsibilities for governments:

a) Developing inclusive policies and regulations

b) Implementing privacy-enhancing technologies

c) Promoting digital literacy initiatives

d) Ensuring transparency and accountability in data practices

El Akabaoui highlighted Egypt’s efforts in this area, including the establishment of the Personal Data Protection Authority and the implementation of data protection laws.

6. Challenges in Policy Implementation

While progress is being made in developing data protection frameworks across Africa, speakers identified several challenges in implementation:

a) Lack of public awareness about the importance of data protection

b) Need for improved transparency and collaboration in policy communication

c) Importance of contextualising approaches for different regions

d) Implementation gap between policy creation and execution

Specific Initiatives and Examples

1. Egypt’s Personal Data Protection Authority and data protection laws

2. Meta’s trusted partner program and engagement with civil society organizations

3. African Union Development Agency’s role in promoting youth involvement in policy discussions

4. Safaricom’s focus on algorithmic audits to prevent bias

Thought-Provoking Comments and Future Directions

1. Nyamwire’s comprehensive definition of gender-inclusive data, which broadened the conversation beyond simple gender binaries.

2. Keja’s call for a shared vision that includes youth from the conception stage of policy development, challenging typical top-down approaches.

3. Keja’s acknowledgement of male privilege in a patriarchal society, calling for men to be more engaged in supporting gender-inclusive policies.

Audience Questions and Responses

Audience members raised questions about:

1. The practical implementation of data protection policies

2. Strategies for improving data literacy among different population groups

3. The role of technology companies in supporting underrepresented groups in developing countries

Panelists emphasized the need for continued collaboration between governments, tech companies, and civil society organizations to address these challenges.

Conclusion

The discussion revealed a strong commitment to creating more gender-inclusive data policies and practices in Africa. While significant progress has been made, particularly in developing data protection frameworks, challenges remain in implementation, awareness-raising, and ensuring meaningful inclusion of diverse perspectives, especially those of youth and underrepresented groups.

Key next steps identified by panelists include:

1. Improving collaboration between governments and tech companies on data transparency

2. Developing sustainable programs to protect and empower underrepresented groups

3. Enhancing mechanisms for policy implementation

4. Exploring secure ways for African countries to share data among themselves

5. Continuing to promote digital literacy and awareness of data protection issues

Panelists’ closing “one word” summaries:

– Emilar Gandhi: “Collaboration”

– Suzanne El Akabaoui: “Awareness”

– Victor Asila: “Inclusivity”

– Osei Keja: “Action”

– Bonnita Nyamwire: “Transformation”

These summaries underscore the multifaceted approach needed to achieve a more equitable and inclusive data future for Africa.

Session Transcript

Christelle Onana: On behalf of the African Union Development Agency, I am honoured to welcome you here today for this session, which is actually a continuation of a discussion we started at the African IGF in Addis Ababa. So the African Union Development Agency is mandated to support the socio-economic development of African countries. And part of it is that this year we have been working on supporting the domestication of the AU data policy framework, because we do support the implementation of policies and strategies defined at the African Union level. So we are committed to supporting the implementation, as we just said, of the African Union data policy framework that was adopted in 2022. We are working to help the member states to develop robust national strategies and national data policies, and to build capacity in general within data governance and specifically with the data protection authorities. As we work to build data-driven economies across the continent, we must be acutely aware of the persistent gender digital divides we have been hearing about since the beginning of the forum on Sunday, and the gender gap that exists within the data governance landscape. So these gaps may pose a significant barrier to what we are trying to achieve, to the full participation of African women and marginalized groups in the digital economy. So this session will explore the importance of a gendered approach to data and digital environments. We must ensure that the unique needs of women, girls, and marginalized communities are recognized and met. This requires the intentional application of a gender lens in the implementation of the AU data policy framework and the development of national data strategies and policies. So we have with us this morning a distinguished panel, on site and online, who will be sharing with us their expertise and insight on this crucial topic. So my name is Christelle Onana. I work for the African Union Development Agency.
I’m a senior policy analyst and I also lead the digitalization unit. So with me here this morning, we have online Mrs. Suzanne El Akabaoui, who is advisor to the ICT minister in Egypt. Welcome, Suzanne, if you’re online and can hear us.

Suzanne El Akabaoui: Yes, I can hear you. Thank you so much.

Christelle Onana: We also have Mr. Victor Asila online, who’s a data manager at Safaricom. Welcome. Thank you, and good morning. On site, we have Madam Bonnita Nyamwire, who’s the director of research at Pollicy. We do have Madam Emilar Gandhi, who is Global Head of Stakeholder Engagement and Policy Development at META. And to close the loop, we have Mr. Osei Keja, who is the IGF Riyadh representative. So welcome to all of you. I think we’ll start the discussion straight away. I would like you, starting with the speakers online, to introduce yourselves and share with us in two minutes, briefly, what you do that is relevant to our topic today. Starting with Mrs. Suzanne El Akabaoui. Thank you.

Suzanne El Akabaoui: Thank you. Good morning, esteemed panelists. My name is Suzanne El Akabaoui. I am advisor to the ICT minister for data governance. My main role when I joined the ministry was to establish the Personal Data Protection Authority of Egypt, so that we can implement the personal data law that was issued back in 2020, as part of the creation of a legal context that is favorable for digital transformation.

Christelle Onana: Thank you, Suzanne. Victor?

Victor Asila: Thank you. Good morning. My name is Victor Asila. I work for Safaricom, a telecommunications company in Kenya, as a lead data scientist. So on a day-to-day basis, my work is to lead a team that builds data products using scientific methods that can be used for data protection and to give insights to the business, so that the business can work effectively. It’s a pleasure being here, and I’m glad to be part of the panel.

Christelle Onana: Thank you, Victor. Bonita?

Bonnita Nyamwire: Good morning, everyone. My name is Bonnita Nyamwire, and I work for Pollicy. Pollicy is based in Kampala in Uganda, and at Pollicy we work at the intersection of data, design and technology to ensure that the experiences and needs of women are amplified in tech and data, and in digital technology overall, on the African continent. Thank you.

Osei Keja: Hello, good morning, good afternoon, depending on where in the world you are joining us from. My name is Osei Keja, from Ghana. I’m a public interest technologist working at the intersection of society and technology, and I’m also here as an African youth representative on this panel. The topic is a very nuanced one, and with youth being at the central core of this conversation, whether it’s forming or using the Internet, we hope to be part of the discussion and get to contribute. I’m excited to be here and hope to learn more. Thank you very much.

Emilar Gandhi: Thank you so much, everyone. My name is Emilar Gandhi, and good morning to you all. I’m head of stakeholder engagement at Meta, and my role really is to ensure that we have strategies in place so that, you know, whenever we are building our products or our policies, we engage externally. We talk to people who use our products, to experts, and to people who are interested in the issues that we are dealing with. So that’s the team that I work on. And this is an important topic. And thank you so much for including us. And I’m really looking forward to having this discussion and learning from everyone on the panel.

Christelle Onana: Thank you very much. I think now that we know who we are in the room, we can kick off with the discussion. So we’ll start with Bonnita with the first question. So what, for you, is a gender-inclusive data future, specifically for Africa? And how can it be achieved?

Bonnita Nyamwire: Thank you so much, Christelle. So gender-inclusive data is data that is representative of all genders. It is also representative of their intersecting identities. By intersecting identities, I mean race, ethnicity, age, educational level, socioeconomic status, geographical location, so that everyone is captured and no one is left behind. Because intersectionality reveals injustices, inequalities, and so on and so forth. Then the other thing about gender-inclusive data is that it actively identifies biases and then addresses them. There are several biases in data, but also in technology. For instance, there is bias in algorithms. I remember this was talked about on Monday in the plenary session. There is bias in data, which can make data skewed or unevenly distributed, which means that even the outcomes of such biased data will also be unevenly distributed. And so this also affects the other processes that come after the data, where such data will be used. For instance, in decision-making, it will also be uneven, and so on and so forth. There is also bias in designing technologies, for instance, bias in languages, you know, not supporting diverse languages, for instance, dialects on the African continent, which then limits accessibility. Then the other thing about gender-inclusive data is that it ensures safety and privacy, generally protecting individuals from harm and exploitation, especially due to data misuse, but also due to the biases that come from the data. And then gender-inclusive data should be data that ensures agency and ownership, in terms of allowing individuals and communities to have control over their data: control over how the data is collected, how the data is stored, how the data is used. If there are any changes that need to be made to the data, for instance, they are involved.
So generally, it is citizens participating in the data, but especially on the gender side and other marginalized communities. And so how can this be achieved? One is to mainstream gender into national statistics, in planning and in research, because mainstreaming helps to assess gender data collection and identify gaps relating to missing gender-related indicators. So mainstreaming is one thing that can be done to achieve gender-inclusive data and gender-inclusive data initiatives. Then the other one is transforming data collection processes through capacity building. For instance, capacity building on designing data collection tools to be able to capture data across all the different gender diversities, and then training data collectors and researchers to understand what gender inclusivity is, because not everyone may be aware or have that kind of training. And then, again, under transforming data collection and capacity building, there is also equipping researchers and other stakeholders, like policymakers, with the skills to identify and mitigate the biases that I already talked about. And then the other one is to involve and engage diverse communities in designing and implementing data initiatives, for instance, collaborating with women and feminist organizations to align the goals and processes of initiatives. And then the other one is to share good practices on collecting and reporting gender data, so as to shape notions of excellence and impact. So this is very good, sharing good practices, what the different stakeholders are doing in terms of gender-inclusive data initiatives, so that we can all learn from each other. And then there is also connecting gender data to gender equality agendas, because gender equality agendas ideally are based on evidence and facts. And the facts come from the data that is collected, whether it is text data or numbers.
So connecting these two, gender data and gender equality agendas, is very important. And then, investing in research and innovation: funding interdisciplinary research focused on the intersection of gender, data, and technology is also very important. And then also sustaining this funding, in terms of the upskilling that I already talked about, and funding to maintain collaboration, and so on and so forth. Yeah, thank you so much.

Christelle Onana: Maybe I should have interrupted you earlier, because you gave quite a lot of insight towards the answer we were expecting, but it will also be good to give the responsibilities, or suggestions, as you named them. You mentioned quite a lot to answer the question. What I noticed is that you mentioned gender-inclusive data involving cultural representation within the design and the algorithms, safety and privacy, and the agency and ownership of communities and individuals. And then lastly, you were giving the answer to how it can be achieved: mostly about involving the communities to collaborate on gender-inclusive data, good practice sharing, and investment in research. I think we’ll leave it there. We need to digest it, and we’ll move to the next speaker, Mrs. Suzanne El Akabaoui. So, what are our governments, our African governments, doing currently to ensure that women and marginalized groups have control over their data in a way that respects privacy and agency?

Suzanne El Akabaoui: Thank you very much for allowing me to have a word about this matter. I think that mainly the work that is being done in African countries revolves around trying to have inclusive policies and regulations. It’s important to develop gender-inclusive policies and enforce them, policies that would expressly address the needs and rights of women and marginalized groups. These policies should include ensuring that data protection laws are inclusive and consider the unique vulnerabilities of these groups. A very important principle in this case is the principle of transparency and accountability, whereby regulations should require companies to be transparent about their data practices and hold them accountable for any issues related to misuse of data. In this case, governments should provide for regular audits and impact assessments to ensure compliance with privacy standards. Another aspect would be related to education and digital literacy, in which case providing education and training on digital literacy would empower women and marginalized groups to understand their rights, to practice them, and to understand the implications of data sharing. This would include teaching them how to protect their personal data and personal information online. Of course, there is also teaching or encouraging women and marginalized groups to pursue education and careers in fields like science, technology, engineering, and mathematics. These types of education encourage critical thinking, creativity, and problem-solving skills. So, by having more of these people thinking critically, it will help us also implement the laws in a more efficient way. On the side of businesses, tech companies should prioritize the development of technologies that are inclusive and accessible. In order to mitigate the impacts of digital illiteracy in certain instances, having systems that instate privacy and personal data protection ex ante is important.
So it’s important that the design and testing phases include personal data protection principles. As mentioned earlier by my colleague, addressing bias is key. So it’s important to implement measures to identify and mitigate bias in data sets and, more recently, in AI systems as well. It’s important to use diverse databases and involve marginalized groups in the development process to ensure fair and equitable outcomes. Community engagement is another important pillar, whereby both governments and tech companies should actively engage with communities to understand their needs and concerns. And this can obviously be done through consultations, focus groups, and partnerships with local organizations. Collaboration with civil society is important as well, because working with NGOs, advocacy groups, and other civil society organizations can help ensure that the voices of women and marginalized groups are heard and considered in policymaking and technology development. Finally, there is strengthening personal data protection through data protection laws that are robust and that provide strong protection for personal data, through the issuance of clear guidelines on how to obtain consent,
So it is part of the strategic goals and the vision, the Egypt vision 2020 and we work on achieving and guaranteeing gender equal, gender equality through the empowerment of women economically, socially and politically. We do try to give women control over their data. The Personal Data Protection Center has an important project in this case ensuring that their privacy and agency over the data are respected. So generally speaking, it is important that the visions emphasize the importance of creating inclusive digital societies where all… citizens, especially women and marginalised groups, can benefit from digital transformation and protection initiatives. Thank you.

Christelle Onana: Thank you very much, Suzanne, for your very detailed answer. Allow me to follow up with you on certain points that you mentioned. Definitely, our member states are working towards strengthening the data protection authorities and working on enforcing the data protection side for the communities overall, and I’m sure for the minorities, the marginalised groups, for the women, for our girls. But you talked about the governments that need to work with the companies to be transparent about their data processes. How does this practically happen? That’s one. How is it enforced, I would like to ask? What do our states do? That’s one. How do we track inclusive technologies? I mean, practically, how does that work at the national level? What do the governments do with research, with academia? Because you mentioned the data protection authorities, you mentioned the companies, the commercial side of it, you mentioned civil society. What happens at the academia level? And practically, does the engagement with the communities happen in the countries? How often? How is it further enforced or implemented? How do we measure the impact? How can we evaluate the impact of such engagement? Thank you.

Suzanne El Akabaoui: Thank you for your question, that is very interesting. Actually, in Egypt, the personal data protection law has established the Personal Data Protection Center and given it a set of mandates, varying from building the capacity of personal data protection officers and consultants to providing for controllers and processors to obtain licenses. In the case of Egypt, this has been an important part, because it allows the Personal Data Protection Center to review practices with the tech companies, and generally speaking with controllers and processors, on sound personal data management. The Center instates methodologies on how to handle personal data in a more secure way and, through the licensing process, reviews the methodologies, the policies and the collection practices, so that we can guarantee that the principles of personal data protection, such as minimization and purpose limitation, are respected. So in practice, through the license-granting process, we get to review with the various stakeholders their practices relevant to personal data protection, both from a legal perspective and from a technical perspective. On the other hand, the Center has another mandate, which is to raise awareness. So we work very closely with civil society and with the private sector, and we try to raise awareness through various events. This is another aspect of how we get to include all stakeholders. When issuing policies and guidelines that keep up with the fast-paced developments in technology, we also hold public consultations on these guidelines. This gives us an idea of the interests involved and of how the different stakeholders see the implementation of the law, so that we can implement it in the most efficient way. 
Academia is heavily involved, because data protection officers, who are the ones that assist in implementing the law in their respective organizations, be it a controller or a processor, need to be trained. We are also trying, through academia, to include curricula related to personal data protection in various disciplines. So the Center has a vast range of mandates that, working together and put together, should allow us to mitigate the impacts and have a more inclusive approach in the journey of digital transformation. Thank you very much. I hope I have answered your question.

Christelle Onana: Definitely. Thank you very much, Suzanne. Thank you. We’ll now move to the technology side. We’ll look at Victor, who works daily on big data and data science. So generative AI and big data analytics are shaping the future of information. You do a lot of computation, a lot of analysis, and we get insights from that. What opportunities and what risks do these technologies present for gender-inclusive data policies, looking specifically at the African context, for our women, mothers, ourselves, our girls, and the marginalized groups?

Victor Asila: Right. Thank you. So there are numerous opportunities.

Christelle Onana: Victor, maybe before you start, I would like you to imagine that my grandmother was in the room and you were to explain that to her, so we can all understand.

Victor Asila: I will try. I will try to be as basic as possible. So, having said that, I think it is imperative that I first define what generative AI and big data analytics mean for a basic person. I’ll start with big data. We generally describe big data using what we famously call the three Vs. First, we describe it by volume, that is, the amount of data we generate per unit of time. Second, we define it by velocity: how frequently we generate this data per unit of time. Third, we define it in terms of variety: what different kinds of data we generate within a specific unit of time. The different types of data generated could be classified as text, images, sound, video, and so on. I think, from a basic perspective, that’s how we describe big data. Analytics, then, is just the tools and methodologies that we use to get insights from the big data that we have generated and collected. Now, what is generative AI? Generative AI is a type of AI that can generate new content, and the new content can be text, a picture, a video, or whatever is possible within the technology. So, in that sense, my grandmother should be able to understand what generative AI and big data mean. Moving on to the potential these hold for shaping gender-inclusive data policies, I’ll start with the opportunities. From a technology point of view, having the ability to generate huge datasets at a faster rate, and having varied types of data being collected, we have an opportunity, number one, to ensure that we get granular insights. 
And these insights are not just insights for the sake of insights, but insights related to gender disparities, insights that are going to help us identify these disparities and give us a glimpse into the areas that need intervention. So, by analyzing big data, we have an opportunity to uncover nuanced patterns and trends that relate to gender. That’s the first part: we have an opportunity to collect gender-specific data, and an opportunity to analyze that data to uncover the patterns that relate to gender. Then number two, as practitioners, most of the time we help the business use data to tailor specific products that speak to the specific appetites of our customers. We can flip that and use the same technologies to come up with data-driven solutions that address gender-specific issues, and in doing so promote inclusivity and equity. The second opportunity I see from a practitioner perspective is this: when we build these models, we use algorithms, and these algorithms can propagate the biases that are inherent in the data that we, as humans, generate and collect. By propagating these biases, we inadvertently perpetuate them in the algorithms. What we can do is what we call algorithmic audits. At Safaricom, we specifically have policies and practices that each data scientist building a model or an algorithm adheres to. Part of the checks we have is to ensure that the algorithms we build do not perpetuate bias and that they are fair and equitable. From a craft perspective, that’s what we do at Safaricom. Also from a craft perspective, we try to ensure that the data we use to build these models and algorithms is as diverse as possible. 
One thing we usually do is to ensure that the data is balanced. We encourage our data scientists to ensure that their data is balanced, that it is inclusive of all groups of interest and, more so, that it does not negatively impact any group. A third opportunity that I see is policy development and implementation. Once we have these insights, we can make informed decisions, and therefore the policymakers who make those decisions can leverage these insights to craft more effective and inclusive gender policies. There’s another bit to that, which is monitoring and evaluation. Since we are collecting timely data, I think we have an opportunity to monitor on a near real-time basis. I always believe that we cannot achieve truly real-time monitoring, but we can achieve near real-time monitoring, where we continuously monitor the impacts of gender policies, providing feedback to our policymakers and enabling them to make adjustments where needed. So I’ll quickly cover the risks. One risk that I see as a practitioner is around data privacy and security. Whenever you handle gender-specific information or data, you are handling information that can be used in a negative way, and any breach could expose individuals’ sensitive information and have serious implications. Bad actors can use that as an opportunity to misuse the data. Data could also be misinterpreted, inadvertently leading to policy that harms rather than helps the gender inclusivity agenda. The other risk, which I have already spoken about, is bias and discrimination. Then we can also run into ethical and legal challenges. 
I’ve heard of cases where some companies have been penalized by regulators because of the biases that their algorithms inherently carry, and in doing so they have failed to adhere to the regulatory compliance landscape around data usage and the complexity of AI. So I think, in a nutshell, those are the risks and opportunities that I see from a practice perspective. Thank you.
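The balanced-data and "algorithmic audit" checks Victor describes can be sketched in a minimal form: compare outcome rates across a protected attribute and flag large gaps. The toy records, field names and the "approved" outcome below are illustrative assumptions, not Safaricom's actual audit tooling.

```python
from collections import Counter

# Toy dataset of model decisions, labelled with a protected attribute.
samples = [
    {"gender": "female", "approved": True},
    {"gender": "female", "approved": False},
    {"gender": "female", "approved": True},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
]

def group_rates(rows, attr="gender", outcome="approved"):
    """Positive-outcome rate per group of the protected attribute."""
    counts, positives = Counter(), Counter()
    for row in rows:
        counts[row[attr]] += 1
        if row[outcome]:
            positives[row[attr]] += 1
    return {g: positives[g] / counts[g] for g in counts}

rates = group_rates(samples)
# Demographic parity difference: gap between the best- and worst-treated group.
parity_gap = max(rates.values()) - min(rates.values())
print(rates, round(parity_gap, 2))
```

A large gap would flag the dataset or model for review; a production audit would add further metrics (equalized odds, calibration) and statistical significance tests.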

Christelle Onana: Thank you very much, Victor. Maybe Emilar has something to add there.

Emilar Gandhi: Thank you so much. Adding to what he just said, or in general?

Christelle Onana: In general, and to what he said.

Emilar Gandhi: Yeah, definitely. Thank you so much, Victor. I was writing notes. I know you were asked to describe this for our grandparents, but I was writing notes as he was talking, because we all benefited from that. Beyond just adding to what he said, I think it’s obviously important to look at inclusion for products, to look at inclusivity by design and not think about it as an afterthought. That’s something that Safaricom is doing, and I think that’s really, really important. But adding on to what he said: for us as a tech company, when we think about what inclusion means for us, and going back to what some of my colleagues have already said, diversity and inclusion in data practices obviously do not start only when you see the product out there. Inclusion for us is at the core of our mission as a company; it defines what we do. And by that I mean, let’s take a step back, because when we are thinking of just products and policies, we are forgetting that there are people behind them. I’ve seen, I think, even some research by policy as well, showing that it’s important for tech companies, even in their hiring practices, to actually hire people who come from these underrepresented groups. I’m really gratified to see, even at Safaricom, people like Victor leading this work, rather than having someone sitting somewhere trying to design products for a society that they are not in, because lived experiences are very, very important. You might read about something, you might learn about something from books, but actually having lived experience is important. 
So for us it matters: inclusion is at the core of our mission, and we hire people with that in mind. But also, when you hire someone, professional development is important, because you want to keep them, making sure that they actually stay in the company. I’ve been at Meta for eight years now, so I think prioritising professional development is important. I’ll also look at the area where I work, which is stakeholder engagement. For us, stakeholder engagement is not just outreach. For some it’s just focusing on outreach, but we know stakeholder engagement is about relationship building. And once we talk about relationships, particularly for us in these parts of the world, there is the issue of trust. I’ll be the first to acknowledge that there is a trust deficit between tech companies and the people who use our platforms. How do we ensure that we build trust? It’s not something we build just by being here at the IGF and having this conversation; it’s a marathon. One thing Madam Suzanne mentioned is that it’s important for tech companies, where there is a trust deficit, to work with local partners as intermediaries, and we do this as well. We have a trusted partner program (I’m sure some of you have heard about it) where we have 400 organizations globally that we work with, and we’d be happy, if you’re part of that program, to hear from you as well. Our inclusive stakeholder engagement strategy is anchored on three things. One is expertise. By expertise, I know in this room we have some people with PhDs, but for us, we also look at lived experience as a form of expertise, and I think that broadens the people that we talk to. I’m happy to see someone from the youth group, and I was asking him, what’s the age limit for youth groups? 
Because we know in some regions it can be as high as you can go. So expertise is one pillar that we look at. The second is diversity. When we are identifying the… that we are going to talk to about our policy development or product development, we go beyond geographical diversity, we go beyond gender diversity or even language or expertise; we really look at it from a comprehensive perspective. I liked what Bonita said about intersecting identities, because if we just talk about underrepresented groups, is it women, is it people with different needs? There are intersecting identities that we can look at when we are engaging. And the last one is transparency. We can do all these things, look at people with lived experiences, people who have the expertise, we can be as inclusive as we think we want to be, but if we are not transparent about the work that we are doing, if we are not talking about it, responding to the questions that we receive, sharing as much information as possible about the decisions that we are making, about who we are engaging, why we are engaging with them and what we have heard from them, then it is a fruitless exercise. So being transparent about all these things is important. I think I’ll just end there, and let me know if there’s anything else. I know the question was just whether I had anything to add, and I’ve added a whole lot.

Christelle Onana: We’ll come back to you. Thank you very much, Emilar. So now we’ll turn to the young man in the room representing the youth voice. So, Osei, how do you think young people in our society can play a role in shaping a more inclusive data governance ecosystem? Where do we start? Can you share any initiative where you have had a voice and it has successfully influenced a policymaker in the data governance landscape?

Osei Keja: Thank you very much. A lot has been said. While my able panelists were presenting, I picked out some words: representative, inclusive, transparent, lived experience, not just outreach, expertise. I think we need to start from the conception stage of policies, and then we talk about implementation. And also one word you used: afterthought. Oftentimes, young people are seen as an afterthought in these stakeholder engagements, or say in the formulation of policies. We are just like props for the occasion, brought in when everything has been done: hey, young people, come. But I’m very, very happy that in this discussion we have a young person on board. Throughout the whole process of designing frameworks, data frameworks, your voices are not there. I made this comment on this same topic in Ethiopia; Honorable Stanley Olagide mentioned that there was a youth forum where one young person made a comment which changed the entire perspective of parliamentarians. Young people, especially in West Africa and also in Africa generally (I will point to Mariam, coordinator for the West Africa Youth IGF, and also the Ghana IGF), have been doing amazing things. At the Ghana IGF, we did a virtual tech hackathon this year, and so many ideas were churned out. We had our report and we did push it to our policymakers. But as I said, from the conception stage to the implementation stage there is a big gap: young people are left out. In the implementation process, or say the legislative process, young people are left out. We have the expertise too. We are not saying we are a repository of all knowledge, or a monolith of knowledge, or that we know better than our fathers or our mothers. No. But we need to be included. There is a big gap there. From the conception stage, we have not had anything. So at the West Africa Youth IGF, we had good engagement with our parliamentarians, and there were outcomes. 
But we don’t know the end result of it; we have not been included in the whole process at the end. Young people have been doing the African Youth IGF; we came to it last time in Ethiopia, and we have done incredibly well and had our outcomes. But we don’t know; it’s just, okay, go, that’s it. Young people have been at the forefront of advocacy, awareness and mobilization; we have been effective mobilizers. Safaricom, Meta, AU-NEPAD and GIZ are doing amazing work. But I think we can be great conveyor belts: speaking to people with lived experience, bringing people with lived experience on board, conveying the message out there, and being part of the implementation process. I hope I have fairly answered your question.

Christelle Onana: Let me follow up. So I understood that you have been voicing your views and doing great things, but there has not been any positive outcome from your engagement or your handovers, which means there is no successful example of your work having influenced a policymaker. Is that correct?

Osei Keja: At the granular level, or say at a personal level, I personally may have worked on projects which have influenced things. But as a collective, it is very hard; it is like firefighting. Across the continent of Africa, a lot of young people feel dejected and unheard. It is like they are screaming but they are not heard, because they keep repeating the same things. On a personal level, at a granular level, people may have, or I may have, some experiences here.

Christelle Onana: Okay, may I follow up by asking: what do you recommend, two to three practical recommendations, on how we can engage and make sure that your voice is not only heard but acted upon, regarding the issues we are discussing now, by the policymakers, the private sector, the researchers, us as a development agency, and the partners?

Osei Keja: Yes: vision. I would like to quote my favorite teacher from primary school. He said, if you know the road but you don’t know where you are going, it will lead to nowhere. So we need to have a shared vision. From the conception stage to the implementation stage, we need to know where we are going, so that young people can buy into the idea: we are going here, and this is how we are going to get there. And also systems thinking. We need to continuously think things through. There may be some faults in the conceptualization, or say in the frameworks, where we can piece things together like a jigsaw, so that we know we are moving somewhere, continuously thinking through linear or diverse complexity. And also continuous learning. Whether it is policymakers or big tech, whatever the obstacles, we need to continuously learn; that leads to personal mastery. Continuously learning through things, learning and benchmarking from other experiences: we need to robustly think through the framework, learning from other people in order to benchmark appropriately. This is my answer to you.

Christelle Onana: Thank you very much. Thank you, Osei. I would like to come back to Emilar before we open the floor to the participants for the first break. We heard you complementing Victor’s answer about how you approach inclusivity at Meta. I would like to add: how is the inclusivity that you incorporate into your processes from the beginning tailored to the different contexts, looking at different perspectives? Thank you.

Emilar Gandhi: Thank you so much. That is a very important question. Before I even respond to it: I was just writing notes on what he was saying, and I think youth also need as much support as possible to get to where they are going once there is a shared vision. In terms of contextualizing our approaches, here is what we do, and it really depends on each context. What is important for us, first of all, is to ensure that the external stakeholders we engage are involved right from the beginning. So it is not only when we are going out to them, but actually understanding what their issues are and what we need to prioritize. We have the products, Facebook, Instagram, all these platforms, but what are some of the issues that you are facing in terms of our community standards? Understanding that and bringing it internally ensures that we look at our policies from that lens. So that is number one: ensuring that we are actually prioritizing issues that we hear on the ground, from local participants. The second thing is devising our engagement approach with an understanding of our stakeholders. By this I mean that not all formats of engagement can work with all stakeholders. There is Zoom fatigue, and I don’t mean the company Zoom specifically, just engaging virtually; for some, it doesn’t work, and some people prefer face-to-face. There is also language, because what we might express in English is not what it is in isiZulu or in other languages. So it is really about understanding who needs to be in the room as well. I might be the one working on the issue, but maybe I am not the one who can talk to him about it. 
Can we do it via policy, or can we do it through other organizations? So it is about understanding that, to ensure that our engagement strategy speaks to it. What I am trying to say here is that there are processes that we have to put in place before we get to the destination, as we were saying. There are many ways, I think, of slicing the cake.

Christelle Onana: You know, since you started talking, I was about to ask: what have you done in relation to the youth? Can you share that with us practically?

Emilar Gandhi: What we’ve done with the youth?

Christelle Onana: Yes, considering the point that he mentioned: being on board from the beginning, having the vision.

Emilar Gandhi: Yes, yes. So what have we done with the youth? Quite a number of initiatives. I can start with capacity building, because we also know that for youth to contribute meaningfully to our product and policy development, they have to understand the issues; otherwise just you and me talking will not be as useful. So one of the things that we have done is to put resources into capacity-building initiatives like the African Internet Governance School, the last one of which I think was in Addis; making sure that initiatives like that are well supported and well resourced. We also support some of the local youth IGFs, supporting them in terms of resources, but also ensuring that some of our internal experts take part if you invite them to your events. The other thing, and we talked about recruitment, is actually having programs where some young people work within the company and learn what is happening, through internships and placements, to ensure that they can bring in some of the things that they learn externally. We are also working with some universities to support their programs around tech degrees, apprenticeships or courses. So there are quite a few multifaceted ways in which we are working with the youth. But we also know that it is not something that we can do by ourselves, so we need to work with governments like Madam Suzanne’s department, or other organizations like policy, who are already entrenched in the processes, and NEPAD as well, who are already doing quite a lot with Agenda 2063 and all these other things.

Christelle Onana: Thank you very much, Emilar. We’d like to pause here to open the floor to the participants on site, but also online, if we have questions, before we continue. Maybe we’ll look online first with our online moderator, Catherine. Do we have questions?

Catherine Muya: So we have one question from Gahar Nye, who says: greetings from Afghanistan. Could anyone share a sample of the strategy to draw on? I think it is not particularly about the discussion, but maybe about a strategy like the one we gave in the description, the AU Data Policy Framework, and the strategies we are developing. But I am not sure if the tech support can allow him to ask his question directly.

Christelle Onana: Maybe we can try to clarify it with the participant online, and then we come back to you with a more accurate question, I would suggest. So we’ll take a question on site. Yes, Melody.

Audience: Thank you. So mine is more of a contribution. I think one of the issues you raised was that we want something more practical, something that, if you were to explain it to your grandmother, she would understand, and I think that applies when we are talking about community engagement and capacity building. Can you hear me? I am going to give an example. I don’t work for Meta, but I’ll give an example of WhatsApp. My family lives in rural Zimbabwe, so there is no form of entertainment at all. Imagine you have been working the fields the whole day, you come home, and there is no entertainment. But recently when I was talking to my mother, she told me about a WhatsApp group she had joined. Every week they post a chapter of a novel, and then she sits with her daughter-in-law and they read the novel. So I was thinking that capacity building and community engagement should not be that difficult. It is about finding something that will facilitate engagement with your community. If it means using a WhatsApp platform, for example, to reach so many people and talk about issues of privacy, gender inclusion and access to data, that would be one way: something very practical and a way of actually reaching out to the communities. I don’t work for Meta, but I think it is quite relevant, and in my context I find it quite useful as well.

Christelle Onana: Thank you very much, Melody, for your contribution. Once more, I think it highlights the need to collaborate, because you were talking about the WhatsApp group, but this has to be initiated, maybe by the local community group or the NGO on the ground, who will have to work with people like us or with the companies as such. Any other questions on site? The ladies, no questions? The men, no questions? Then we come back to our online participant’s question. Has it been refined? Maybe we will open it to the panelists if they would like to contribute to it. It has been drafted. Pete?

Catherine Muya: Greetings from Afghanistan. Could anyone share a sample of the strategy to draw on it? Thank you.

Christelle Onana: Is it an engagement strategy, or? I would suggest answering it as you understand it, because it is quite vague.

Emilar Gandhi: Okay, I will respond to it, also drawing, I think, from what Melody just said. To draw up a strategy, you have to understand, I think, you have to have a clear picture of what success looks like, what it is that you want to do, and then you start working backwards. The second thing is to know that a strategy is not something, in my opinion, that you finalise and then say, now let me go out there and do what I laid out; you might have a strategy, but you need to fine-tune it as you go. I will give you an example. Sometimes we are working on a policy at Meta, and a few weeks ago we were working on something around eating disorders. You think, let’s talk to medical professionals who deal with this issue, or psychologists. But then you realise you actually need to talk to young people who might be affected by this, or creators who are creating content around having a certain body type and a certain body image. Once you talk to a few people, you come back and say, you know what, actually I need to re-look at my strategy. So just to answer him: a strategy, of course, might have the frame: who is it that you want to talk to? So, first of all, understanding the problem, identifying who it is that you want to talk to, and laying out the different formats of the engagements. Is it going to be in person? Are you going to do virtual engagements? What resources do you need, what budget, or do you not need a budget? The people that you are talking to, are they willing to talk to you? Are they able to talk to you? Or are they willing but unable, because maybe they don’t have the time to talk to you? So I think there are quite a few things that you might look at. 
And the last thing I would mention is impact measurement: looking at how you measure the results of your engagement.

Christelle Onana: Thank you very much, Emilar. We will resume with the questions. Do you, oh, sorry.

Audience: Hi, can you hear me? Good, good. Good morning, or good afternoon for those online, in case they are tuning in from somewhere in the afternoon. I come from The Gambia, and there was an issue with the FGM bill that was called to be amended. It caused a riot in the country. A lot of people from the local communities, and from the urban areas as well, came out, and for about a week or two the country was in uproar. They didn’t want the bill amended; they wanted the bill out altogether. And it kind of goes to show how people, when they are concerned about something, when they understand what it is, actually push for it. In my country, for instance, we have had the data protection bill in draft since 2019, and in 2024 now, it has been five years. Yet we don’t have that kind of uproar or that kind of concern from civil society, from students, young people, academia or anyone else, not nearly as much as there was with the FGM bill, although this is an important issue as well. I think data is also an important issue. The context is such that people don’t understand data, its importance, why it needs to be protected, and what measures are to be put in place to ensure the inclusive data policy, the inclusive data future, that we are talking about here. So what can the various stakeholders, the policymakers, the youth, the big tech companies, academia and government, do to ensure that people have a deeper and more concise understanding of data and its importance? What can they do, collaboratively or at individual levels, to build that understanding within the community?

Christelle Onana: Thank you very much for your question. I believe part of the answer has already been given, but I will open the floor again. I'll let the panellists, starting with Suzanne, tell us what a country can do for citizens to be aware and sensitized about such issues, in such a way that they feel concerned and react when need be. Suzanne, please.

Suzanne El Akabaoui: I'm sorry, I won't turn on my video because I have an internet issue, so I'm barely hearing most of what's happening. If I understand your question well, you're asking about what governments should do. Could you please repeat the question?

Christelle Onana: Yes. The participant said it's challenging for the population to react to some issues if they are not aware of the subject, if they don't know what's going on. And we all know that data protection and data privacy are important. The question was: what can be done by the different stakeholders to ensure that citizens are sensitized and aware of the importance of data management, data protection, data privacy, and data security issues?

Suzanne El Akabaoui: Okay, thank you very much. Let me tell you that the issue with data protection is that we used to have relationships between human beings who see each other. Now, with digital transformation, we are sharing our data with people we don't know, and the fact that we don't know the risks associated with such sharing is the actual problem. If we understand the risks of misuse and the value of personal data, how it has become a very valuable asset to citizens and to businesses, then protection becomes embedded in the culture and inherent to our day-to-day actions. So the role of governments would mainly be to raise awareness on various aspects: awareness among citizens about their rights, and awareness about the risks associated with the mishandling of personal data. Putting in place a proper taxonomy of risks is important. Sharing with citizens scenarios of the risks associated with the misuse of data, so that they understand the importance and value of their personal data and know to ask what will be done with it, is important. This is mainly done through education. It takes a long time, because there will be an important cultural shift associated with it. Most African countries are warm countries, and people feel closer when they share their data with each other. Now we are putting them in a context where most services are moving to a digital space where they don't know what is happening or where their data will end up. So it's important to raise awareness. It's also important to give a lot of responsibility and accountability to companies, controllers, and processors for properly handling data. Governments should emphasize the value of data as an asset worthy of protection like any other asset a company would have, and that securing personal data gives a competitive edge.
People will be more encouraged to deal with those who have sound personal data practices. So cascading down to controllers and processors methodologies on how to handle data, how to secure it, and how to draw value out of it will encourage them to implement the practices and internal policies that allow such protection. In parallel, of course, raising awareness, including in curricula for school and university students, about the importance of personal data protection. This is also a multi-stakeholder approach, and the involvement of all stakeholders, including youth, is very important. In Egypt, we work a lot with the Ministry of Youth on finding ways to get young people interested in reading privacy policies and understanding their rights. So it's important that governments work on various pillars to achieve the purpose of raising awareness about the risks, the opportunities, and the value of data.

Christelle Onana: Thank you very much, Suzanne. Would any of the panellists like to add to the response? Yes.

Bonnita Nyamwire: Thank you. I would like to add to what Suzanne was explaining. So, raising awareness of risks and benefits, yes, but government also needs to maintain transparency throughout the whole process, because most of the time you find that citizens lack information, or some information is withheld for reasons we do not know. So transparency is very key. Then the other one is collaboration: as government raises awareness, it needs to collaborate with other stakeholders. There is academia; there are civil society organizations that work with citizens a lot. This collaboration is very important, because where government cannot reach, civil society will reach, academia will reach. Then the other one is that awareness raising should be done on platforms and channels that can reach all the citizens. For instance, I'll give the example of what the government in Uganda did when it was introducing the digital ID. We didn't know much about the digital ID; we were just told, you need to go and register, you need a national ID to be able to access services. But we were not told: what are the benefits? What are the risks? Why do we need to register? And many people misinterpreted it, because it was an exercise that came when we were nearing elections. And now they are going to do the same thing: they are going to renew our national IDs when we are nearing elections in 2026, and nothing is being done differently, just like the other time. So it's about explaining to the citizens why we are doing this. That time, many of us misunderstood the exercise as wanting to track voters within the country, because of multi-party politics. So people said, I am not registering. Others gave wrong information, and now people are suffering because of the wrong information they gave during the national ID registration exercise.
So, as I said, transparency is key, involving other stakeholders, but also using the different channels. Because if you use a radio, what about my mother in the village who doesn't have a radio? Or if you use people going around the city with megaphones, what about those people deep down in the village? How will they get the information? That's what I can add. Thanks.

Christelle Onana: Thank you very much, Bonnita. So we note transparency, collaboration with the different stakeholders, and making sure that the channels and platforms used can reach all the citizens. Anything else to add?

Osei Keja: A quick one. I think the topic is very interesting: data policies towards a gender-inclusive data future. I sit here as a man, and I would like to tell all the men here that we are in a position of privilege. The society we live in is deeply patriarchal, and given the positions we find ourselves in in our offices, we should not be dismissive when these policies are brought to us. We should not be dismissive of what we are talking about. I think that part is often neglected because of how gender and societal norms are. That's what I would say.

Christelle Onana: Thank you very much. Yes, please, sir. May I also suggest that you introduce yourself before you ask your question, so we know who you are? Could you please introduce yourself briefly and then ask your question? Thank you.

Audience: Okay, I'll be very quick. I hope I'm audible. Yes. Good morning, everyone. My name is Chris Odu from Nigeria. It's a very good thing that I'm in this space listening to these conversations; I'm here to learn, and I want to know. We've been talking about these data policies, and I think what I've found over time is that we're not lacking policies. In fact, we have a good repository of policies, but we always have issues, and I'm speaking from my own primary constituency, which is Africa. We have issues when it comes to implementation. Are there mechanisms we can start using to improve how we implement these policies? Because you come up with a good policy, yes, you want to include women and all of that, but two or three years down the line, it's the same result. We're repeating the same thing, going around the same cycle. Is there something we can start doing to improve how we implement these policies? That's one. The second one, where I have an issue, is data interoperability within Africa. How are we sharing data? How secure is it? Can we even share data amongst ourselves within the African continent? It's an issue I want to learn more about. How can you help with this, so that I can take something back home? Thank you.

Christelle Onana: Thank you very much for your question. We'll take the second one, and then we'll quickly attempt to answer them.

Audience: Okay, good morning to all, and good day wherever you are. My name is Peter King, and I am from Liberia. I represent the Liberia Internet Governance Forum. My question goes to the lady from Meta. I heard you talk about the trusted partner program and an inclusive stakeholder strategy. My issue is this: what is Meta, as one of the global brands in terms of data, doing with the data you have at your disposal? I also want to commend you for the programs you have sponsored or supported at the African School on Internet Governance. But beyond that, what program or project do you have in mind, in terms of sustainability, that looks at protection policies for underrepresented groups and underserved countries? I speak also for the Mano River Union region, that is Liberia, Sierra Leone, Guinea, and Ivory Coast. We do not see a program that affects your users in terms of data, because if you look at it, there are so many things people need to be capacitated in. And when you talk about a gender-inclusive data future, how is that future protected when a lot of the content that makes you money comes from people who are not even seen by you? That is my issue. Thank you so much.

Christelle Onana: Thank you very much for your questions. We will attempt to answer all of them before we get kicked out of the room. Yes, we have exactly five minutes to finish the session. Oh, so do you want me to jump in quickly? Yes, please.

Emilar Gandhi: So we have five minutes, and we can always discuss later. Good to hear from you. We have a team that's responsible for Anglophone countries, and I'll be happy to introduce you to that team, because, yes, local partnerships matter, and the example I gave is just one of the many things we are working on. As you say, it's difficult to just say these things in these big forums when you are not seeing something at the local level, and it would be great for you to meet with some of our local teams as well. There was also a question about the implementation of policies: one of the participants said he doesn't think we lack policies, but that we lack their implementation. Would any of the panelists like to take that?

Osei Keja: Yeah, thank you very much. Oftentimes the importance of data protection frameworks, or laws, is misconstrued because of the policy communication around them. I know for a fact that in 2022 or 2023, and I stand to be corrected, Nigeria implemented a data protection act, which is very paramount, very necessary. But the policy communication around it was such that the average person sees these data frameworks as not necessary. Still, Africa has come a long way: so far more than 30 countries have developed frameworks, and it's still a work in progress. Data interoperability is quite a big issue, but I think we are making significant strides as a continent. The issue has to be trust. We need to build on trust, and most importantly, on the policy communication around it: how do people and governments build their trust, how can we exchange data, and all that. But still, I think we are getting somewhere compared to, say, five or ten years ago. Thank you very much.

Christelle Onana: Thank you, Osei, for your answer. Any other views to add?

Bonnita Nyamwire: To add to what Osei has said, I think the AU is doing a great job of getting different African countries to comply on data protection, privacy, and all these other issues. And GIZ is also doing a great job supporting the AU in all these aspects. So, as Osei has said, we are taking baby steps, but we have moved, we are somewhere, and we are continuing to move; maybe in ten or twenty years we'll be much further along. African countries are also benchmarking and learning from each other: Rwanda is doing well, and different countries are learning from it and from one another.

Christelle Onana: Thank you very much. Just to re-emphasize what you said: the work is in progress, we are taking baby steps, but eventually we'll get there. So indeed, we may not be lacking policies and regulation, but even in the way we develop them, we now include implementation plans, which means we have the intent to have them implemented, domesticated, and potentially enforced. That's one. This is the work that we're doing; we work for a development agency, and our work is to implement the policies that are defined at the Union level. So we are making progress. And just to repeat what has been said during the week about harmonization: it may be an ideal concept, but we are looking into aligning policies regionally and continentally, so there is a projection to have the systems and the technology communicate, let's put it this way. So, before we get kicked out of the room, I would like each of my distinguished panelists to give one word to sum up our conversation today. We'll start with Suzanne and Victor online, and then we'll move back to the room.

Suzanne El Akabaoui: Thank you. It's really difficult to say just one word, but the main word I'd choose is education. I believe education is the key to understanding, to security, to thinking critically. It's important that we keep raising awareness, educating people about their rights and duties, and empowering them to act responsibly. Thank you very much.

Christelle Onana: Thank you very much, Suzanne. Victor?

Victor Asila: Yeah, thank you. We have a saying: we train for the world. But in one word? Skills. Thank you.

Christelle Onana: Emilar? Collaboration. Thank you. Multi-stakeholder. Thank you. Inclusivity. Thank you. Thank you very much for today. Thank you to our distinguished panelists for taking the time to be with us for this conversation on the topic. This is also how we raise awareness: we talk about it, we discuss it, we sometimes say things we have heard a thousand times. But you know, in French we say, la répétition est la mère de la science: repetition is the mother of science. I would also like to thank all my participants on site. Thank you for your attention and for your participation in the conversation. Have a good day. Bye. And I would like to invite the room to have a family picture before we get kicked out. Thank you.

Bonnita Nyamwire

Speech speed

123 words per minute

Speech length

1325 words

Speech time

645 seconds

Data should be representative of all genders and intersecting identities

Explanation

Gender-inclusive data should represent all genders and their intersecting identities such as race, ethnicity, age, education level, and socioeconomic status. This ensures that everyone is captured and no one is left behind in data collection and analysis.

Evidence

Intersection reveals injustices and inequalities

Major Discussion Point

Importance of Gender-Inclusive Data

Agreed with

Suzanne El Akabaoui

Emilar Gandhi

Victor Asila

Agreed on

Importance of inclusive data policies and practices

Need to identify and address biases in data and algorithms

Explanation

Gender-inclusive data actively identifies and addresses biases in data and algorithms. This is important because biases can lead to skewed or unevenly distributed data, affecting decision-making processes.

Evidence

Bias in algorithms was discussed in a previous plenary session

Major Discussion Point

Importance of Gender-Inclusive Data

Agreed with

Victor Asila

Agreed on

Addressing bias in data and algorithms

Importance of ensuring data safety, privacy and individual agency

Explanation

Gender-inclusive data should ensure safety, privacy, and agency for individuals. This involves protecting people from harm and exploitation due to data misuse and allowing individuals and communities to have control over their data.

Major Discussion Point

Importance of Gender-Inclusive Data

Transform data collection processes through capacity building

Explanation

To achieve gender-inclusive data, there is a need to transform data collection processes through capacity building. This includes training on designing data collection tools to capture diverse gender data and equipping researchers with skills to identify and mitigate biases.

Major Discussion Point

Strategies for Achieving Gender-Inclusive Data

Agreed with

Suzanne El Akabaoui

Agreed on

Importance of education and capacity building

Differed with

Suzanne El Akabaoui

Differed on

Approach to achieving gender-inclusive data

Involve diverse communities in designing data initiatives

Explanation

Achieving gender-inclusive data requires involving and engaging diverse communities in designing and implementing data initiatives. This includes collaborating with women and feminist organizations to align goals and processes of initiatives.

Major Discussion Point

Strategies for Achieving Gender-Inclusive Data

Share good practices on collecting and reporting gender data

Explanation

Sharing good practices on collecting and reporting gender data is important for shaping notions and impact of excellence. This allows stakeholders to learn from each other’s experiences in gender-inclusive data initiatives.

Major Discussion Point

Strategies for Achieving Gender-Inclusive Data

Suzanne El Akabaoui

Speech speed

91 words per minute

Speech length

1698 words

Speech time

1112 seconds

Governments should develop inclusive policies and regulations

Explanation

Governments need to develop gender-inclusive policies and enforce regulations that address the needs and rights of women and marginalized groups. This includes ensuring that data protection laws are inclusive and consider the unique vulnerabilities of these groups.

Evidence

Egypt’s Personal Data Protection Law (Law 151 of 2020) aims to protect personal data and penalize misuse

Major Discussion Point

Importance of Gender-Inclusive Data

Agreed with

Bonnita Nyamwire

Emilar Gandhi

Victor Asila

Agreed on

Importance of inclusive data policies and practices

Need for education and digital literacy initiatives

Explanation

Governments should provide education and training on digital literacy to empower women and marginalized groups. This includes teaching them about their rights, how to protect personal information online, and encouraging pursuit of STEM education.

Major Discussion Point

Importance of Gender-Inclusive Data

Agreed with

Bonnita Nyamwire

Agreed on

Importance of education and capacity building

Differed with

Bonnita Nyamwire

Differed on

Approach to achieving gender-inclusive data

Implement privacy-enhancing technologies

Explanation

There is a need to implement privacy-enhancing technologies such as encryption, anonymization, and secure data storage. These technologies protect users’ data from unauthorized access and misuse.

Major Discussion Point

Strategies for Achieving Gender-Inclusive Data

Ensure transparency and accountability in data practices

Explanation

Regulations should require companies to be transparent about their data practices and hold them accountable for any issues related to misuse of data. This includes providing for regular audits and impact assessments to ensure compliance with privacy standards.

Major Discussion Point

Strategies for Achieving Gender-Inclusive Data

Emilar Gandhi

Speech speed

162 words per minute

Speech length

2374 words

Speech time

875 seconds

Need to ensure inclusivity by design in products and policies

Explanation

Tech companies should prioritize inclusivity when designing products and policies. This means considering inclusion from the start of the development process, not as an afterthought.

Major Discussion Point

Role of Technology Companies

Agreed with

Bonnita Nyamwire

Suzanne El Akabaoui

Victor Asila

Agreed on

Importance of inclusive data policies and practices

Importance of hiring people from underrepresented groups

Explanation

Tech companies should hire people from underrepresented groups to ensure diverse perspectives in product and policy development. This is important because lived experiences are crucial in designing inclusive products and policies.

Evidence

Meta hires people with inclusion in mind and prioritizes professional development to retain diverse talent

Major Discussion Point

Role of Technology Companies

Value of stakeholder engagement and trust-building

Explanation

Stakeholder engagement is crucial for tech companies, going beyond outreach to focus on relationship and trust-building. This is particularly important in addressing the trust deficit between tech companies and users in certain parts of the world.

Evidence

Meta has a trusted partner program with 400 organizations globally

Major Discussion Point

Role of Technology Companies

Victor Asila

Speech speed

116 words per minute

Speech length

1213 words

Speech time

622 seconds

Opportunity to use big data for gender-specific insights

Explanation

Big data analytics provide an opportunity to uncover nuanced patterns and trends related to gender. This can help identify gender disparities and areas that need intervention.

Evidence

At Safaricom, data is used to tailor products that address gender-specific issues

Major Discussion Point

Role of Technology Companies

Agreed with

Bonnita Nyamwire

Suzanne El Akabaoui

Emilar Gandhi

Agreed on

Importance of inclusive data policies and practices

Need for algorithmic audits to prevent bias

Explanation

There is a need for algorithmic audits to prevent bias in AI models and algorithms. This involves implementing policies and practices to ensure that algorithms are fair and equitable.

Evidence

Safaricom has policies requiring data scientists to conduct algorithmic audits to prevent bias

Major Discussion Point

Role of Technology Companies

Agreed with

Bonnita Nyamwire

Agreed on

Addressing bias in data and algorithms

Osei Keja

Speech speed

158 words per minute

Speech length

1128 words

Speech time

427 seconds

Youth often left out of policy conception and implementation

Explanation

Young people are often excluded from the conception and implementation stages of policy development. They are often seen as an afterthought rather than being included from the beginning of the process.

Major Discussion Point

Youth Involvement in Data Governance

Need for shared vision and continuous learning

Explanation

There is a need for a shared vision and continuous learning in policy development and implementation. This involves system thinking and benchmarking from other experiences to improve policy outcomes.

Major Discussion Point

Youth Involvement in Data Governance

Audience

Speech speed

163 words per minute

Speech length

1048 words

Speech time

384 seconds

Lack of public awareness about data protection importance

Explanation

There is a lack of public awareness about the importance of data protection and privacy. This makes it challenging for the population to react to data-related issues or policies.

Evidence

Example of The Gambia where there was public uproar about an FGM bill but not about the data protection bill

Major Discussion Point

Challenges in Policy Implementation

Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Need for transparency and collaboration in policy communication

Explanation

There is a need for transparency and collaboration in communicating policies to the public. This involves working with various stakeholders and using diverse channels to reach all citizens.

Evidence

Example of Uganda’s digital ID implementation where lack of clear communication led to misunderstandings

Major Discussion Point

Challenges in Policy Implementation

Importance of contextualizing approaches for different regions

Explanation

It’s important to contextualize engagement approaches for different regions and stakeholders. This involves understanding local needs and preferences in communication and engagement strategies.

Major Discussion Point

Challenges in Policy Implementation

Progress being made but still work to be done on implementation

Explanation

While progress is being made in developing data protection frameworks in Africa, there is still work to be done on implementation. Trust-building and effective policy communication are key challenges.

Evidence

Over 30 African countries have developed data protection frameworks

Major Discussion Point

Challenges in Policy Implementation

Agreements

Agreement Points

Importance of inclusive data policies and practices

Bonnita Nyamwire

Suzanne El Akabaoui

Emilar Gandhi

Victor Asila

Data should be representative of all genders and intersecting identities

Governments should develop inclusive policies and regulations

Need to ensure inclusivity by design in products and policies

Opportunity to use big data for gender-specific insights

Speakers agreed on the need for inclusive data policies and practices that represent all genders and intersecting identities, from government regulations to product design in tech companies.

Addressing bias in data and algorithms

Bonnita Nyamwire

Victor Asila

Need to identify and address biases in data and algorithms

Need for algorithmic audits to prevent bias

Both speakers emphasized the importance of identifying and addressing biases in data and algorithms, with Victor Asila specifically mentioning algorithmic audits as a method to prevent bias.

Importance of education and capacity building

Bonnita Nyamwire

Suzanne El Akabaoui

Transform data collection processes through capacity building

Need for education and digital literacy initiatives

Both speakers highlighted the need for education and capacity building to improve data collection processes and empower marginalized groups in the digital space.

Similar Viewpoints

These speakers all emphasized the importance of engaging with diverse communities and stakeholders in the development of data policies and initiatives.

Bonnita Nyamwire

Suzanne El Akabaoui

Emilar Gandhi

Involve diverse communities in designing data initiatives

Need for education and digital literacy initiatives

Value of stakeholder engagement and trust-building

Unexpected Consensus

Recognition of progress in African data protection frameworks

Osei Keja

Bonnita Nyamwire

Progress being made but still work to be done on implementation

AU is doing a great job on getting different African countries to comply on data protection

Despite the focus on challenges, there was unexpected consensus on the progress being made in developing data protection frameworks in Africa, with both speakers acknowledging advancements while noting ongoing implementation challenges.

Overall Assessment

Summary

The main areas of agreement included the importance of inclusive data policies, addressing bias in data and algorithms, the need for education and capacity building, and the value of stakeholder engagement.

Consensus level

There was a moderate to high level of consensus among the speakers on the key issues discussed. This consensus suggests a shared understanding of the challenges and potential solutions in creating gender-inclusive data policies, which could facilitate more coordinated efforts in addressing these issues across different sectors and stakeholders.

Differences

Different Viewpoints

Approach to achieving gender-inclusive data

Bonnita Nyamwire

Suzanne El Akabaoui

Transform data collection processes through capacity building

Need for education and digital literacy initiatives

While both speakers emphasize education, Bonnita Nyamwire focuses on transforming data collection processes through capacity building, while Suzanne El Akabaoui emphasizes broader digital literacy initiatives.

Overall Assessment

Summary

The main areas of disagreement were subtle and primarily focused on different approaches to achieving similar goals in gender-inclusive data practices.

Difference level

The level of disagreement among speakers was relatively low. Most speakers shared similar views on the importance of gender-inclusive data and the need for education and awareness. The differences were mainly in the specific strategies and focus areas each speaker emphasized, which could be seen as complementary rather than contradictory approaches.

Partial Agreements

Both speakers agree on the importance of including diverse perspectives, but Bonnita Nyamwire focuses on community engagement in data initiatives, while Emilar Gandhi emphasizes hiring practices within tech companies.

Bonnita Nyamwire

Emilar Gandhi

Involve diverse communities in designing data initiatives

Importance of hiring people from underrepresented groups

Takeaways

Key Takeaways

Gender-inclusive data is crucial and should represent all genders and intersecting identities

There is a need to identify and address biases in data collection, algorithms, and technology design

Governments should develop inclusive policies and regulations while promoting digital literacy

Technology companies have a responsibility to ensure inclusivity by design in their products and policies

Youth involvement in data governance is important but often lacking in policy conception and implementation

Progress is being made on data protection policies in Africa, but implementation remains a challenge

Resolutions and Action Items

Transform data collection processes through capacity building

Involve diverse communities in designing data initiatives

Share good practices on collecting and reporting gender data

Implement privacy-enhancing technologies

Ensure transparency and accountability in data practices

Hire people from underrepresented groups in technology companies

Conduct algorithmic audits to prevent bias

Unresolved Issues

How to effectively implement existing data protection policies

How to improve data interoperability within Africa

How to ensure sustainable programs for underrepresented groups in different African regions

How to measure the impact of community engagement efforts

Suggested Compromises

Balancing the need for data collection with privacy concerns through education and transparency

Collaborating across different stakeholders (government, private sector, civil society, academia) to address data governance challenges

Contextualizing approaches for different regions while working towards continental alignment of policies
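The first compromise listed, balancing data collection with privacy, is often operationalised with privacy-enhancing technologies such as differential privacy. As a hedged illustration (the epsilon value and the simulated survey responses are invented, and this shows only the basic Laplace mechanism, not a full deployment):

```python
# Illustrative sketch: release an aggregate count with Laplace noise, the
# basic mechanism of differential privacy. A survey total can then be
# published without revealing any single respondent's answer exactly.
# Epsilon and the simulated responses are invented for this example.

import random

def dp_count(values, epsilon=1.0):
    """Differentially private count of truthy values.
    A count has sensitivity 1, so the Laplace noise scale is 1/epsilon."""
    true_count = sum(1 for v in values if v)
    # Difference of two Exponential(epsilon) draws is Laplace(scale=1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Simulate 1,000 yes/no responses and publish a noisy "yes" count.
responses = [random.random() < 0.3 for _ in range(1000)]
print(round(dp_count(responses, epsilon=0.5)))
```

Smaller epsilon means stronger privacy but noisier published statistics; choosing that trade-off is exactly the kind of policy question the discussion raised.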

Thought Provoking Comments

A gender-inclusive data is one that is representative of all genders. It also is representative of their intersecting identities. By intersecting identities, I mean like race, ethnicity, their age, educational level, socioeconomic status, geographical location, so that everyone is captured and no one is left behind.

speaker

Bonnita Nyamwire

reason

This comment provides a comprehensive definition of gender-inclusive data that goes beyond just gender to include other important demographic factors. It highlights the complexity and intersectionality involved in truly inclusive data.

impact

This set the tone for a more nuanced discussion about what gender-inclusive data really means and the many factors that need to be considered. It broadened the conversation beyond just male/female to consider multiple dimensions of identity.

We need to have a shared vision. So, from the conception stage to the implementation stage, we know where we are going so that the young people may be bought into the idea.

speaker

Osei Keja

reason

This comment emphasizes the importance of including youth from the very beginning of policy development, rather than as an afterthought. It challenges the typical top-down approach.

impact

It shifted the discussion to focus more on how to meaningfully involve youth throughout the entire process of developing and implementing data policies. Other panelists began to discuss more concrete ways to engage young people.

Considering the point that he mentioned, to be embarked from the beginning, having the vision... Yes, yes. So what have we done with the youth? Quite a number of initiatives.

speaker

Emilar Gandhi

reason

This comment directly responds to and builds on the previous point about youth involvement, demonstrating active listening and engagement between panelists.

impact

It moved the conversation from theoretical ideas about youth involvement to concrete examples of initiatives, providing more practical insights. It also modeled how panelists could engage with and build on each other’s points.

I sit here as a man, and I would like to tell all the men here that we are in a position of privilege. The society we live in is deeply patriarchal, and we should not be very dismissive in terms of the position we do find ourselves in, in our offices, when these policies are brought to us.

speaker

Osei Keja

reason

This comment brings attention to the role of men in addressing gender inequality, acknowledging privilege and calling for men to be more engaged in gender-inclusive policies. It’s a powerful statement coming from a male panelist.

impact

This shifted the conversation to consider the role and responsibility of those in positions of privilege in creating more inclusive data policies. It added a layer of self-reflection to the discussion.

Overall Assessment

These key comments shaped the discussion by broadening the understanding of gender-inclusive data beyond simple gender binaries, emphasizing the importance of youth involvement from conception to implementation of policies, providing concrete examples of initiatives, and highlighting the role of those in positions of privilege. The discussion evolved from theoretical concepts to more practical considerations and self-reflection on the roles different stakeholders play in creating inclusive data policies. The interplay between panelists, building on each other’s points, led to a richer, more nuanced conversation that touched on multiple aspects of the complex issue of gender-inclusive data policies.

Follow-up Questions

How do governments practically work with companies to ensure transparency about their data processes?

speaker

Christelle Onana

explanation

This question addresses the practical implementation of data transparency policies, which is crucial for effective data governance.

How do we track inclusive technologies at the national level?

speaker

Christelle Onana

explanation

Understanding how to measure and monitor the inclusivity of technologies is important for ensuring equitable access and use of data.

What do governments do with research from academia regarding data policies?

speaker

Christelle Onana

explanation

This question explores the connection between academic research and policy implementation, which is vital for evidence-based policymaking.

How often do engagements with communities happen, and how is their impact measured?

speaker

Christelle Onana

explanation

Understanding the frequency and effectiveness of community engagements is crucial for ensuring that data policies are responsive to community needs.

What can the various stakeholders (policymakers, youth, big tech companies, academia, and government) do to ensure that people have a deeper and more concise understanding of data, its importance, and related issues?

speaker

Audience member from The Gambia

explanation

This question addresses the need for widespread data literacy, which is essential for informed public participation in data governance.

Are there mechanisms that can be used to improve the implementation of data policies?

speaker

Chris Odu from Nigeria

explanation

This question focuses on the critical issue of policy implementation, which is often a challenge in many African countries.

How are African countries sharing data among themselves, and how secure is this data sharing?

speaker

Chris Odu from Nigeria

explanation

This question addresses the important issue of data interoperability and security within the African continent.

What sustainability programs or projects does Meta have that address data protection policies for underrepresented groups and underserved countries?

speaker

Peter King from Liberia

explanation

This question explores the role of large tech companies in ensuring data protection for vulnerable populations in developing countries.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #18 World Economic Forum – Building Trustworthy Governance

Session at a Glance

Summary

This panel discussion focused on the future of the internet and the development of digital technologies, exploring regulatory, ethical, and practical considerations. Participants emphasized the importance of building a global infrastructure to support emerging technologies like AI and the metaverse. They discussed the need for adaptable, interoperable regulations that promote digital connectivity while respecting data privacy and security.

The conversation highlighted Greece’s digital transformation journey, showcasing how investment in digital public infrastructure can lead to economic growth and improved governance. Panelists stressed the importance of creating regulatory frameworks that are flexible enough to keep pace with rapidly evolving technologies while addressing cross-border challenges and accountability issues.

Ethical considerations for the private sector were explored, with emphasis on integrating ethical principles into product development and building user trust. The discussion touched on data stewardship and sovereignty, noting the tension between maintaining national digital sovereignty and preventing internet fragmentation. Participants agreed on the need for collaborative, multi-stakeholder approaches to governance that prioritize user privacy, security, and consent.

The panel also addressed the importance of cultural engagement in new digital spaces and the challenges posed by evolving hardware standards. They concluded by emphasizing that all stakeholders have an active role in shaping the future internet, and that a principled approach focusing on user needs and economic opportunities is essential for positive development.

Keypoints

Major discussion points:

– The importance of building trust, transparency and user control into emerging internet technologies and platforms

– The need for adaptable and interoperable regulatory frameworks that can keep pace with rapid technological change

– The role of digital public infrastructure in enabling economic growth and improved governance

– Balancing data sovereignty with the need for global data flows and interoperability

– Ethical considerations and accountability in AI and other emerging technologies

Overall purpose:

The discussion aimed to explore key considerations for shaping the future of the internet and digital technologies in a way that promotes trust, economic opportunity, and good governance while addressing potential risks and challenges.

Tone:

The tone was largely collaborative and optimistic, with panelists from different sectors sharing perspectives on how to responsibly develop emerging technologies. There was a sense of shared purpose in wanting to create a better internet future, even while acknowledging complexities and challenges. The tone became more action-oriented towards the end, with calls for active participation in shaping the future of the internet.

Speakers

– Judith Espinoza: Governance Specialist, World Economic Forum (Moderator)

– Hoda Al Khzaimi: Advisor to multiple industries and companies

– Brittan Heller: Senior Fellow of Technology and Democracy, Atlantic Council

– Robin Green: Representative from Meta

– Apostolos Papadopoulos: Chief Technology Officer, Hellenic Republic of Greece

Additional speakers:

– Audience: Representative from Digital Impact Alliance (DIAL)

Full session report

The Future of the Internet: Navigating Emerging Technologies and Governance Challenges

This panel discussion, moderated by Judith Espinoza, brought together experts from various sectors to explore the future of the internet and the development of digital technologies. The conversation focused on regulatory, ethical, and practical considerations for shaping a digital landscape that promotes trust, economic opportunity, and good governance while addressing potential risks and challenges.

Key Themes and Discussion Points

1. Emerging Technologies and Their Impact

The panelists emphasized that the future internet will be shaped by a constellation of emerging technologies, including artificial intelligence (AI), extended reality (XR), blockchain, and quantum computing. Judith Espinoza highlighted that AI should be viewed as an enabler for other technologies rather than a standalone product. This perspective underscores the interconnected nature of technological advancements and their collective impact on the digital landscape.

The discussion touched upon the need for a global infrastructure to support these emerging technologies, with particular emphasis on the development of the metaverse. Panelists agreed that building trust, transparency, and user control into these platforms is crucial for their successful integration into society.

2. Regulatory Frameworks and Governance

A significant portion of the discussion centered on the need for adaptable and interoperable regulatory frameworks that can keep pace with rapid technological change. Robin Green, representing Meta, stressed the importance of technology-neutral legal frameworks that can evolve alongside innovations. This view was echoed by Brittan Heller, who emphasized the need for cross-border regulation and coordination for effective internet governance.

The panel highlighted the challenges of balancing data sovereignty with the need for global data flows and interoperability. Robin Green argued for the importance of maintaining an open, interoperable internet while respecting national digital sovereignty concerns. Hoda Al Khzaimi emphasized the importance of respecting legal sovereignty rights when developing technology regulations across different jurisdictions.

The panelists agreed on the need for adaptable and flexible governance frameworks, with Hoda Al Khzaimi suggesting sandboxing approaches for developing regulations for emerging technologies.

3. Digital Public Infrastructure and Economic Growth

Apostolos Papadopoulos, representing the Greek government, shared insights from Greece’s digital transformation journey. He provided specific examples and statistics, such as the implementation of a national digital identity system, which led to a 25% increase in digital service adoption. The country also saw a significant reduction in bureaucratic processes, with 94% of public services now available online. This real-world example illustrated the potential benefits of embracing digital technologies at a national level.

The panel agreed that digital public infrastructure, including payment systems and digital identity, serves as a crucial pathway for connection and economic opportunity. Judith Espinoza emphasized the alignment of interests between users, human rights advocates, and economic development stakeholders in building a robust digital ecosystem.

4. Trust, Ethics, and User-Centric Design

Hoda Al Khzaimi stressed the importance of incorporating ethical considerations into product functionality from the outset. She advocated for transparency and accessibility in AI algorithms, as well as the implementation of user-centric dashboards that clearly show how personal data is being used and processed. Al Khzaimi also highlighted the need for a single source of truth in trust stack guidelines.

Robin Green echoed these sentiments, highlighting Meta’s commitment to responsible innovation principles that focus on user trust and safety. He provided practical examples of how these principles are applied, such as implementing privacy-by-design features and conducting regular human rights impact assessments. Green also emphasized the importance of accessibility in technology design.

5. Challenges and Opportunities in the Digital Age

The discussion touched upon potential risks associated with emerging technologies, including increased surveillance capabilities and the erosion of privacy. Brittan Heller raised concerns about accountability and transparency in automated systems, emphasizing the need for robust safeguards.

The panel explored the evolution of consent mechanisms for new computing platforms, recognizing that traditional models may not be sufficient in immersive or AI-driven environments. Brittan Heller highlighted the potential loss of cultural engagement spaces in the next iteration of the internet and stressed the importance of hardware floor considerations in emerging technologies like XR.

Hoda Al Khzaimi pointed out the potential of government technologies as a growing industry, suggesting opportunities for innovation in this sector.

6. Multi-stakeholder Approach to Governance

A key takeaway from the discussion was the importance of a collaborative, multi-stakeholder approach to internet governance. The panelists agreed that all stakeholders—including governments, private sector entities, civil society organizations, and users—have an active role in shaping the future internet.

The discussion also touched on the challenges faced by developing nations, with Ibrahim raising a question about how African countries can develop data governance frameworks.

Unresolved Issues and Future Directions

While the panel reached consensus on many points, several unresolved issues emerged:

1. Effectively balancing data sovereignty with cross-border data flows

2. Addressing potential increased surveillance and privacy erosion in new technologies

3. Resolving hardware floor issues in emerging technologies like XR

4. Evolving consent mechanisms for new computing platforms

5. Ensuring accessibility and inclusivity in the future internet across different regions and demographics

6. Developing appropriate data governance frameworks for developing nations

The discussion concluded with a call for continued dialogue and collaboration among stakeholders to address these challenges. The panelists emphasized the need for a principled approach that focuses on user needs, economic opportunities, and ethical considerations in shaping the future of the internet.

In summary, this thought-provoking discussion highlighted the complex interplay between technology, regulation, user rights, and societal values in the digital age. It underscored the need for adaptable frameworks, trust-building mechanisms, and the preservation of cultural spaces as we navigate the evolving landscape of the internet and emerging technologies.

Session Transcript

Robin Green: changing, but in order to make this happen, it’s going to be essential to have the global infrastructure that supports it. Data centers are a great example of some of the kinds of infrastructure that we’re going to need, but in order to really, I’m so sorry, I think some people online couldn’t hear me. In order to grow that infrastructure, it’s going to be really important that we have a regulatory and legal environment that supports it. This means having globally predictable, interoperable, and adaptable regulations that promote digital connectivity and really bridge the digital divide, and that promote data flows and secure communications like encryption of data in transit.

Judith Espinoza: I really appreciate what you said about AI always being part of these technologies, right? I think it’s easy, perhaps from a consumer perspective, to look at things as siloed developments, but as we move into the next phase of the internet, we can see that none of this is developed on its own. These are things that have to go together. AI is an enabler for lots of these technologies, but it’s not a product on its own, so I think this is perfect. With that, I also want to share, part of the way that at the forum we’re envisioning the future of the internet is that these are digital intermediaries for connection, whether it be through social media, whether it be to commerce, whether it be to health, agenda AI, you name it, right? And one of those ways, one of those pathways forward is through digital public infrastructure. So the way that people can connect to each other, also economic opportunity, growth. And with that, I want to turn to Apostolos, and I want to ask you, how has Greece advanced the next iteration of the internet experience through digital public infrastructure? How are you developing DPI in Greece, and what are some of the, maybe, the governance opportunities that that presents, right? DPI as an enabler of good governance, as a means of connection.

Apostolos Papadopoulos: Thank you very much for your question, and I’m excited to be here. So in the Greek context, I think, the digital transformation journey of the country is in two stages, in two phases. We’re currently in a stage where we are doing a lot more work in AI and working with emerging technologies, and you were talking about experiences, and I think the permeating word that would delineate this would be trust and directness and transparency. So citizens would like to interact with governments and to have a direct and easy way to do that. And a way to do that in a way. So currently, we’re doing a lot of work in AI. We are doing work in LLMs, where we created a government chatbot, so citizens can interact with the government portal and figure out easy ways to interact with every service and have access to digital services. We’re doing work in AI and education with digital tutoring and homework assignments. So in this phase, we’re investing a lot in new and emerging digital public infrastructure, emerging technologies. The first phase that allowed us to do that starts in 2019 with the creation of a digital ministry, digital transformation ministry, and that was because up to that point, some of that did not exist in Greece, and that created the baseline for the second phase to be able to be executed. So from 2019 to 2023, there’s been a digital tiger leap, as people have called it, in the sense that digital adoption was very low in Greece. 2018, we had 8 point something million digital transactions in total. Greece is a country of about 10 million people, so it’s a very low number of adoption. But 2023 ended with 1.4 billion. So if you chart that, it’s exponential growth, both in terms of supply as well as demand. So this stage, this first stage, created the regulatory framework, the engineering framework, the platforms for us to be able to go in the second phase and do more work with emerging technologies. 
And the regulatory framework, speaking of that, is a crucial layer of this stack. So you have to have common sense, light touch approaches to regulation that people can trust, both internally inside the government as well as external partners. And overall, I would say, DPI in Greece currently is very much a given, and digital is considered something that is, you know, by default, what people and businesses expect of the government.

Judith Espinoza: Thank you so much. I want to follow up with one more question for you. You’re talking about exponential growth in usership, and in following this model, do you think you see this as an essential way, I guess, also for financial growth for the country, right? You’re connecting, it’s peer-to-peer, it’s also services-to-peer, and also, I guess, for businesses as well. How do you see this growth?

Apostolos Papadopoulos: Yes, very positively. One of the deliverables of this approach has been $2.5 billion in investment in FDI. We have, we are the, Greece is the only European member state, the only European member state, along with Poland, a major high-risk country. So, okay. Can you hear me better now? Perfect. Okay, great. All right, thank you. Sorry. So, FDI is a crucial part of this equation. Can you hear me better? Yes, that is fantastic. Okay. So, we had a microphone problem. So, I was just saying, FDI is crucial, and it is a direct byproduct of the strategy, and of the execution of the strategy. So, the Greek government has been working with international and local partners, and there has been a great synergy between all the stakeholders, and both in terms of job growth, as well as in terms of investments, has been a very positive story so far. Thank you so much.

Judith Espinoza: With that, you know, there is an interesting narrative that we are starting to weave here, right, which is investment, and that leads to growth, and that leads to opportunity. And that builds good governance, right? This is an opportunity to build better governance, to build better trust among stakeholders. And with that, I really want to pivot now to Brittan. You know, we are talking about the Internet evolving, and as these technologies evolve, I wonder, what do you think are the core regulatory and policy obstacles that we must overcome to really make a better Internet, right? What have we done wrong? Where can we do things better? And are there really any new risks that you think regulators should be paying attention to? Thank you.

Brittan Heller: Can you all hear me? Great. So, I teach international law and AI regulation, and have worked in emerging technologies for about eight years now. So, I’m going to give you the conclusion first. The conclusion is that emerging technologies are a constellation, and if your regulatory approach focuses on one aspect in lieu of the others, you’re going to miss the bigger picture. So, you have to think about the way that AI will be interacting with immersive technologies. We’ll look at new payment systems like blockchain. We’ll look at the new petrol of the Internet, quantum computing, and seeing how all of those systems will feed off each other, will interact with each other, and how existing law may not be a clean fit for these new technologies. There are four things that I think can be valuable when you’re trying to figure out this puzzle about if your existing law will fit, and how to determine what needs to be addressed first in a regulatory regime. The first obstacle is ensuring that these regimes, which were designed primarily in the late 1990s and early 2000s, are adaptable enough to keep pace with the rapid evolution of these technologies. One example that I work a lot on are virtual reality or XR systems. We put on a conference at Stanford Law School last year called Existing Law and Extended Reality, because you can’t just take laws formulated for 2D computing, put them into 3D spaces, and expect that they’re going to work the same way. The way that you formulate jurisdiction, the way that privacy concerns operate in a technology that is different from your laptop because it has sensors that must reach out into the environment to calibrate your devices. Your privacy looks different when it’s not just based on the words that are going in and out of servers, when it’s actually location-based and based on your biometric data. So looking at that, how adaptable is your legal system?
Second is the question of cross-border regulation, and I know I sound like I’m coming straight at you from 2006, but it’s a very important issue, and when you look at all of this, look at it with all puns intended as a second bite at the apple. All of the things that you wish could be different about the way internet governance works and manifests in your jurisdiction, in your company, in your stakeholder group, you have a chance to do it differently this time. Take that opportunity. So looking at the way that data protection laws align with regulations in other parts of the world so we don’t create another fragmentary regulatory landscape, and how do you create the coordination necessary to make this work across different countries? Third is a question of accountability and transparency. As we rely more on automated systems, the question of who is responsible when something inevitably goes wrong becomes much more complicated. So when I evaluate AI regulatory regimes, it’s not just the robustness or strength of the laws that I look at, it’s the actual enforceability of those regulations. And the way that laws that are cut and pasted from one country and placed into another legal context may not have the same impact on the ground and in the business sector based on the way your corporate laws are structured. So you can’t expect the same results by cutting and pasting. And finally, in terms of new risks, one of the most pressing concerns is the potential for increased surveillance and erosion of privacy. As these technologies are evolving, they enable more granular tracking and profiling of individuals, oftentimes without knowledge or consent. And in new technologies where AI grows legs and walks out in the world amongst us, you need this type of information to calibrate the device.
So your conception of privacy, of consent, freedom of information, all of these things need to shift in the type of understanding that you see embedded in earlier generations of laws. Overall, regulators need to think about these risks on a broad scale, focusing on fundamental rights while fostering innovation. And the nice thing about these new ecosystems is that what is good for users is also good for human rights. So they don’t have to develop at odds with each other when you’re starting to create these systems anew. Thank you.

Judith Espinoza: Thank you so much. And I think this is a perfect segue to you, Dr. Hoda. We’ve heard now what those policy gaps are. I wonder, this is governance and policy, right? But from your perspective, you’ve advised multiple industries, multiple companies. What do you think the most important ethical considerations are for the private sector when developing these technologies? How do you think that this can be built in a trustworthy way? And then also, we always talk about trust at the forum, right? We want to talk about how you build trust with users, with society at large. But what are the metrics then to know that something is trustworthy, right? We can all say that something is trustworthy, but how can we prove that there is trust there, right? Whether it’s the government level, whether it’s at the product level.

Hoda Al Khzaimi: I think one of the most important aspects that faces the private sector is how can you bring the ethical stack into the trust component, into the functionality of the final product that you’re putting into the market. We have talked about several trust frameworks that exist internationally, with the OECD, with the UN, and as well with the World Economic Forum and in TASSI, which mostly addresses accountability, transparency, security, inclusivity, and interoperability. But when you look at the technology that’s being produced in the market today, you don’t see that kind of holistic deployment of ethical components across the map and the technology stack. So how can we encourage that at the algorithmic level is very important. And I think right now in 2024, when we are trying to publish in my research group and any kind of AI top tier conferences, what I see very positive is the fact that they kind of encourage you to make sure that your algorithm is accessible and the transparency is available in the system. And that’s quite important, because then you start changing the system. And you don’t get access to publication unless you do that. And I would like to see these kinds of things not just existing on the platform level. Because when we talk about the current social media platform, for example, we don’t see the same level of transparency. I mean, I’m not talking about annual reporting or reporting that exists at specific periodic level, but that kind of dynamic, quick at the tip of your finger level of ethical transparency, that will tell you who used your data and for what purpose, that kind of end user dashboard platform that should exist for users.
And I think in the research space, we do a lot to improve security, we do a lot to make sure that we have, you know, privacy aspects, zero trust systems, homomorphic encryption, federated learning, these big tools that take us sometimes years to develop in order to bring trustworthiness and a level of reliability and security into the technology, but not necessarily always we see them used or transformed into the product cycle. So that’s kind of concerning on the map, on the holistic map. And I think this era of 2025 to 2030 would be the period where we perfect this kind of transitioning of ethical components into technology. That’s the first, I think, challenge that we see across the map. And the second challenge is for us to understand that bringing ethical and trustworthy digital solutions into the platform is a multi stack layer kind of challenge. So you’re not dealing only with the technology or with the ethical stack, but also dealing with the regulation aspects, with the harmonization efforts that exist across the globe. You’re dealing with how we should write those into policies and, as well, regulations that would bring data acts into action in different jurisdictions. Respecting the indigenous differences of those jurisdictions is very important, because, as Brittan just said, it’s quite different to bring activation of laws when you’re dealing with it in a specific jurisdiction versus another. And we should respect that. And we should allow those kinds of legal sovereignty rights of developing the law when it comes to technology to exist across different markets. So this is the second, I think, challenge that I see existing.
And it worries me at the moment that everybody is looking at the EU AI Act, for example, as the grand flagship regulation to be used across jurisdictions, which is not going to work the same way, because it's a risk-oriented framework of legislation that might not work for Asian countries, for example, where they are much more concerned with a value-and-principle-based approach, and they want that to be translated into the platform as well. So the interpretation of ethics and legislation into the platform is very important. And your second question is: what do we have to include when we are talking about the trust stack across the board? If you ask a technology-oriented person, the answers will be different than if you ask a legal entity, or a different stakeholder coming from the policy framework or from the industrial implementation framework. In my opinion, the first thing we should have is a single source of truth for this: a governance structure that would tell you what a trust stack should include, with the best guidelines across these different layers. To me, the first layer is the ability to allow users to have accessibility to their data, visibility of the data transactions that exist across the map, and authentication of who actually accesses those data transactions at different layers of the mapping. This is something we have had in conversation in research communities, as well as industrial communities, since 2009, because we had this massive technological crisis where users woke up one day and realized that they want ownership of their own data. And, as you have said, and as our colleague from Greece just highlighted, accessibility to data and to the data market is considered today an economy by itself.
The work we're seeing around DPIs and government technologies, which is the new wave of technologies we are going to see until 2030, is going to amount to over 6 trillion US dollars. So it's a huge industry being developed on the back of the data provided by citizens. How can we make sure that the first layer, providing accessibility to those data in a secure manner, is available to users? The second layer is the security stack, and this is what we already have; I think we've done quite rigorous work around it. We just have to perfect the adaptability of that security stack to different platforms, especially if we're talking about the metaverse or about real-time transactions; then we need to make sure they are fast enough and light enough in operations to be computed on different devices as well. And the third layer is the layer of legislation and regulation, because, and we have discussed this several times across the map, I just want to reiterate for people who don't understand it that legislation takes time. Legislation takes a cycle of three years, or more than three years in certain jurisdictions, to take effect. And technology development is not waiting for legislation to be passed. We see new models of AI being deployed and pushed across markets, so how can we protect users through legislation? Can we produce something faster than what we have in the current cycle?

Judith Espinoza: It's very important. I want to come back to a couple of your points, especially on data and open-source modeling, but in the interest of time I want to open this up now to the audience. I want to see if anyone has any questions. We can go ahead and pass the microphone around, and I'm also going to ask that we monitor the chat online to see if there are some questions. But we have a question over here. I can pass you the mic. Please tell us what your name is and where you're from, and please address your question to the panel. Sure, fantastic, thank you.

Audience: My name is Ibrahim, and I'm from the Digital Impact Alliance, DIAL. We work to support countries in Africa in developing data governance frameworks that are in line with global best practices. Now, with that in mind, and Dr. Hoda, I'm looking at you for this question: Brittan said this legal framework development is a second bite at the apple, which I think is quite exciting. But for countries in Africa, which are latecomers to this digital governance space, and with the fast-paced development of technologies that consume and ingest, but at the same time produce, a whole lot of data, how do you advise that they deal with private sector actors at this point in time, with enabling, supportive legal frameworks that do not stifle innovation but at the same time create the ability to drive value out of engagement with the private sector? Ibrahim, right?

Hoda Al Khzaimi: Thank you so much for the question. The first thing I would say is that Africa is not a latecomer to this conversation, because Africa itself produced the first, I would say, payment infrastructure. Within DPI infrastructure, you care about payment scales, you care about digital identity, you care about accessibility to healthcare services and other types of services on the platform, and regulation. And Africa, as I said, is not a latecomer to this conversation. With examples from Kenya, like M-PESA and its payment structure, Africa was pioneering in this space, even globally; I think it was one of the first one or two global payment systems that existed. And Rwanda, at the moment, is building a lot of good stack when it comes to government tech that is also pioneering at a government level. So I think there is a lot to learn from Africa when it comes to mass deployment of those structures. And in, I think, 2023, we talked to the Minister of Technology and Infrastructure in Rwanda, and they were also trying to pass this knowledge from Rwanda to other African countries, which is great to see. My advice, when it comes to developing legislation or regulations for government technologies in general, touching emerging technologies and not just one aspect of technology, is to try to embody what we have already seen become the global de facto standard, which is a sandboxing approach. A sandboxing approach is normally something we see mostly in financial sectors, because you're trying to de-risk the threat that might come into the financial space from adopting a new technology or adopting new emerging aspects into the mass deployment of a system. So a sandboxing approach to those technologies between the private sector and the public sector is quite important.
And this is what we have tried to push for with the World Economic Forum in the UAE as well. We have established this kind of global trade regulatory structure where countries are encouraged to come and be onboarded to understand how they can deploy specific technologies like AI into different domains, not just in government but in the public sector as well as industry. So I think learning from those global examples and building your own niche, localized example is quite important, so that you understand the current pressing needs in your markets, keep that indigenous space of solution-making, and build your own jurisdiction of regulations and policies. Because, as Brittan said, and I agree with this 100%, you should not copy-paste from a global structure. You should try to understand the nuances and the problems and the challenges that you have on the ground and that you're trying to solve for, because it's part of the sovereignty aspects of technology, the sovereignty aspects of data, and the sovereignty aspects of the infrastructure that you will be developing for these types of technologies across the map.

Judith Espinoza: Do we have anyone else in the room with a question? If not, can we pull up the chat from the Zoom room so we can also look at that? Okay, while we wait for that to come in, we've touched on some of these, but I want to touch on something that came up here in this conversation. There seems to be a tension, right, in most bodies of research and some work, between having sovereignty and making sure that the internet we develop isn't fragmented. A large part of that is the data economy that you touched on, and a lot of that is really global data stewardship, right? I mean, we're talking about tech and platforms, whether decentralized or centralized, that really span multiple physical jurisdictions: across countries, across nations, regionally. So I want to open this up. First, I want to direct it to Robin. How is Meta thinking about this data stewardship aspect of this technology, of this future internet? All of these technologies are changing the way users either produce data or interact with data. So how is Meta thinking about this? And how do you see it maybe changing or affecting, again, building on that user trust?

Robin Green: Thanks, that's such an important question. And I think it applies not only when you're thinking about the metaverse and AI and things like that, but really to the way we interact with the internet in general. I think we really need to get crisp on what we mean by sovereignty, because there are a lot of different approaches to and definitions of digital sovereignty. For some, it can mean sovereignty of government, which historically has been very territorial and physical in nature; the internet shifts all of that. But then there's also the concept of personal digital sovereignty. So I think one of the most important things is to make sure that as we are creating different governance frameworks, we're doing two things. One, making sure that they're interoperable with one another, so that we are not creating incompatible frameworks under which you can't offer more or less the same services in two separate jurisdictions at the same time. Essential to ensuring that is, as I mentioned earlier, promoting things that are foundational to an open, interoperable, and secure internet: in particular, the free flow of data across borders, digital security, and broader adoption of some of the best technologies and tools we have to augment digital security, like encryption of data in transit and data at rest. The second thing is we need to make sure that governance is adaptable. And that is a really hard needle to thread. I think we do this the best we can in every space of digital governance, but we're still really trying to get to good, and the reason is that it's really hard to know what the future is going to look like.
I think Brittan was absolutely hitting the nail on the head when she was talking about how the laws that we're often applying today, created in the 80s, 90s, and early aughts, don't necessarily fit seamlessly with the technologies of today. So let's take that as a cautionary tale: not only making sure that we are not just copy-pasting and repeating the mistakes of yesterday, but also making sure that as we're creating legal frameworks, we're building them with (sorry, this keeps going out on me) enough flexibility and adaptability, and in a way that is in some sense really technology-neutral, even though we're still talking about tech governance, so that in 20 or 30 years we're not in the same position of having a legal framework that develops more slowly than the technology being adopted and really is not fit for purpose. To that end, I think governance has to be collaborative, cooperative, and multistakeholder. One of the most essential things in how we think about not only product and service governance, but also what we think policy and legal frameworks around the world should look like, is making sure that we're collaborating with other private sector peers, not only within our sector but with companies in other sectors as well, and collaborating with government, civil society, academia, and users. That's one of the great examples of why fora like the IGF are so critical: it gives us this opportunity to come together and really promote that kind of multistakeholderism. And then the last thing is that we have responsible innovation principles, and one of the things that's really important about those principles is that we've developed them to be adaptable in just the same way that I'm suggesting our legal frameworks need to be adaptable.
They're high-level principles that we have to execute on in a way that users trust, and the way we know we're doing that right is that users are happy with it. It's exactly like Brittan said: what's good for users is good for human rights, and frankly, what's good for users and human rights is also good for economic development and digital transformation. So the first of our responsible innovation principles is "never surprise people." A good example of that is on our smart glasses, the Meta Ray-Bans: if they're turned on, you can see a little LED light, so people nearby will know if a person is using the glasses to take pictures or to livestream, and if the user tries to cover up the LED, they'll get a prompt that they have to uncover it in order to continue using the product. In addition to that, we want to provide controls that matter. This is especially important as it applies to youth using our products: not only making sure that youth have those controls and that we're starting with built-in privacy by default, but also making sure that parents have the kinds of controls they want, so that they can play a really active role in guiding the experiences their children are having online using these technologies. "Consider everybody" is our third principle, and it's really meant to ensure accessibility, to ensure that this is an internet and these are technologies for everybody. An example of how we do that is adjustable height on our Meta Horizon operating system, which means that whether you are standing up or sitting down you can have the same really comfortable experience in VR. We also have a "put people first" principle. This is all about privacy and security. Oh, I'm sorry, I'm not good at holding microphones.
You'd think I was a digital native, so some of this would be easier, but I'm not great with technology, although I guess this isn't really digital technology. Anyway: put people first, privacy, security. I could go on about that for a very long time. In the context of VR in particular, and the metaverse, well, VR and augmented reality and XR, I think we think first and foremost about the youth experience and making sure that we're building privacy and security into that, but the other aspect is making sure that adults have that same kind of control and autonomy over their experiences. We implement this through a lot of different approaches, ranging from the kinds of user controls we've talked about to privacy-enhancing techniques like processing data on device. We also try to minimize data collection as much as we can. And then safety and integrity is one of the major things; you'll notice that safety and integrity principles are woven throughout some of our other principles, but it's also its own standalone principle. We really try to live that, and to make sure our users can experience it, by fostering safe and healthy communities. We want to promote communities where people can gather with shared intentions and establish positive norms to connect online. We want to empower people, developers, creators, and users with the tools to create the experiences they want, but we also need to make sure that people with bad intentions are not able to just do whatever they want on our services. So with that in mind, we have a code of conduct for virtual experiences that makes sure we prohibit behavior that promotes illegal activity, behavior that is abusive, or behavior that could actually lead to physical harm.
And then we're also doing things to support admins and their ability to moderate their spaces. So we just want to make sure that as we're thinking about these things, those high-level values and principles are really adapted into the governance structures that governments are considering, so that we can maximize voice, safety, authenticity, dignity, and privacy in the growing adoption of these new technologies.

Judith Espinoza: Thank you, Robin. That was very comprehensive. And I want to touch on one thing that I think is really important: when you're developing these frameworks, you really do need a whole-of-society approach, but there's also something interesting here that I think we can all take away, which is that there really is an alignment of interests. And it's an alignment of interests for everyone, because trust makes things work. When a user trusts a technology, a platform, or a service, that can expand, that can grow; that's an opportunity for growth for everyone. And with that, I want to pass this on to Apostolos. You're the example of what private-public cooperation can do; it's kind of the bread and butter of what we do at the Forum. So I want to ask you: how does Greece approach this issue of data? How do you approach data stewardship? How do you come up with frameworks that work, that are trustworthy, that are interoperable, and that leverage all of these new technological innovations so that people can have better access to opportunities through digital intermediaries? And then I'm going to pass on to Brittan after that with a similar question, but I'll let Apostolos go first. Please, go ahead.

Apostolos Papadopoulos: Thank you, Judith. Fantastic question. I think in the Greek context, trust, privacy, and data security are defining axioms and characteristics of the digital transformation strategy. Everything that was done and is still being done has always put users first, citizens first, and their data, and everything happens with consent. My colleagues here mentioned a bunch of great words earlier; transparency and consent are important. So any time a digital service, whether initiated by the citizen or by another government organization, has to access data, the citizen has to consent to that data processing. Beyond that, from an institutional perspective, when the Ministry of Digital Governance was created, the minister was, by design, endowed with a CIO role, let's say. That means he, or they, had the unilateral power to connect any data set they wanted. But I think "connect" is the operative keyword here, because it's not about owning the data sets; it's not about owning the data. It's about simply connecting different registries with the intent of producing a digital service outcome for the citizen, where the citizen has explicitly asked for that. So it's not about the government going out on its own, processing data and creating new registries and things like that. It's about creating the experience and creating the trust culture, so people know: I want to do X, Y, Z; here's how I do it; here's one platform to do it; and it's being done in a way that is transparent to me and to my understanding. So trust, openness, and trustworthiness are defining characteristics of the digital transformation strategy.

Judith Espinoza: Okay, thank you so much. You know, when we talk about traditional digital public infrastructure, the things that always come up are data exchange, online payment systems, and digital identity. And across the stage, we see how people approach that in different ways, whether you're building soft digital identities and footprints through a Meta account or your Google account or whatever it is. But these all build on this aspect of connection. And I want to pass on to you now, Brittan. What do you think are the gaps, really? Because we're talking theoretically, and we see this alignment, right? This is a good alignment of incentives. But what do you think is the gap, then, to take us there? You can talk about it from a regulatory standpoint, but what do you think are the gaps to make sure that we all align and take this work forward? Three things.

Brittan Heller: Number one, if we are not deliberate about creating spaces for cultural engagement and education in the next iteration of the internet, we will not have them in the same way that we did in the first. When you look at the people who created the internet the first time, they were all professors who were trying to share information, and they really privileged that. They worked for government organizations and got their funding from government organizations. With the next iteration having extensive private investment in it, it is not a natural evolution for a cultural space to emerge if civil society does not ask for it and if governments aren't aware that that is a gap. You can look at this in the metaverse, where you saw certain countries starting to create cultural properties. Barbados created an embassy in the metaverse. South Korea had a widespread presence. And if you look at Saudi Arabia, there are actually augmented reality aspects of their cultural tours when you go to some of their UNESCO World Heritage Sites. So you have to think about how the things that make people unique, the things that your people value, the things that make you special, translate into the new mediums of computing. The second is you have to think about the hardware floor, because the hardware floor for some of these new technologies is not solidified yet. What this means is that we risk creating fragmentation via technical means when we may not intend for that to happen. The example of that is Magic Leap, which just announced that it is going to stop supporting the first edition of its XR headset. So all of the content created for it over the last eight years will no longer be accessible in a matter of weeks. This is happening again and again and again, and there are many industry groups and user groups within the XR community who are very, very concerned about the loss of their data and the loss of their creative energy, because the hardware floor is not settled.
We don't know the format. There are groups working on that now that are just starting to emerge, like the Metaverse Standards Forum. Most people are very surprised to learn that it was just this year that a file format for 3D assets to actually move and function between worlds was created by Adobe: the equivalent of a PDF-type format for digital assets. We're really at that phase in some of these new computing platforms, so you have to think about what that means and what will be lost if we don't bring it along. I think the final piece is looking at ways that concepts like consent can evolve with new computing platforms. I did a study that was published and presented at ISMAR, which is a big conference on spatial computing. It's kind of strange for an international law professor to be there, but we were looking at different ways that the notice-and-consent mechanisms you have in flat-screen traditional computing could be adapted to 3D computing, and whether the affordances of 3D technology meant you could do it differently. And we found that, yes, you could do it differently. Users liked the mechanism that we built that showed them that their eyes were being tracked and how the eye tracking was working. They responded really, really positively to that, and then they felt they were able to consent to the use of their data in more meaningfully informed ways. That's kind of anathema to what a lot of companies thought: that if you showed people their eyes were being tracked, it might freak them out, to be honest. But they liked understanding the data flows. We visualized the data flows for them and explained how the device worked. That was the basis for meaningfully informed consent that you couldn't do on a flat screen; you had to do it in 3D. I think those are the three pieces that might get overlooked if we're looking at it through a purely platform-policy or regulatory lens. That's fantastic. Thank you.

Judith Espinoza: And we now have the three-minute warning, but I want to wrap up, and I think there are some good takeaways here. First, when we think about the future internet, all of us are active participants in how we build that future together. None of us are passive users of the internet or of digital intermediaries; we all have an active role in how we shape it. And I want us all to feel empowered and walk away knowing that what we do matters, from a user standpoint or in whatever personal capacity you join us. I see Jeff from Amazon Web Services here, and we'll chat with him in a bit. The second takeaway is that, regardless of what the future internet looks like, we have to make sure that we're taking a principled approach to how we build it. We want to make sure that users are at the center, and that digital public infrastructure really is a means to further economic opportunity or connectivity, whether it's the metaverse or projects like the ones that Brittan mentioned. And there's also the Duaverse now, which the Dubai Electricity and Water Authority created, like…

Hoda Al Khzaimi: I mean, in the UAE, we have many. We have, as well, the one with MR and the Land Authority, where you can pay, and actually co-pay, for real estate assets on the spot. We have also developed a strategy that is applicable to a wide scale of industries, and we are encouraging industry to build that kind of collaborative metaverse space that reflects back into the economy and different FDI structures. So I think it is about how the leadership of this space will happen. We have advocacy across the map from the leaders of the country, which translates into building economies, building companies, and building solutions that translate across the map. But this is exactly what we talked about, right?

Judith Espinoza: So in these examples we see how the metaverse or AI is being built into DPI; this is really pushing forward how people are going to experience the future of the internet. And lastly, all of our incentives align: no one wants a bad future internet, so it's important to come together. To close, I want to thank the IGF for hosting us and allowing us to have this space. I want to thank all of you for being wonderful supporters of our work, but also really great collaborators in what we do. And the final takeaway is that this is the example of what we want moving forward: all of society represented on this panel and through the work we've been doing here for the last couple of days. So I encourage you to take that with you and be active participants in the future internet we want to create. It's not static; it's a product that keeps evolving, and we keep evolving with it. So, again, thank you so much. I'll let all of us go. Thank you for spending the last day of the forum with us; we're super grateful. And if you have questions and want to hang around, please do so; we'll be here for a couple more minutes. Round of applause for our wonderful panelists. Thank you.


Judith Espinoza

Speech speed: 211 words per minute
Speech length: 1761 words
Speech time: 500 seconds

AI as an enabler for other technologies, not a standalone product
Explanation: Judith Espinoza argues that AI is not developed in isolation but is integrated with other technologies. She emphasizes that AI acts as an enabler for various technologies rather than being a standalone product.
Major Discussion Point: The Future of the Internet and Emerging Technologies

Digital public infrastructure as a pathway for connection and economic opportunity
Explanation: Judith Espinoza highlights the importance of digital public infrastructure in facilitating connections and creating economic opportunities. She views DPI as a crucial pathway for advancing digital connectivity and fostering growth.
Major Discussion Point: The Future of the Internet and Emerging Technologies

Alignment of interests between users, human rights, and economic development
Explanation: Judith Espinoza highlights the alignment of interests between users, human rights, and economic development in building the future internet. She emphasizes that trust is crucial for the growth and expansion of technologies and platforms.
Major Discussion Point: Building the Future Internet


Brittan Heller

Speech speed: 147 words per minute
Speech length: 1453 words
Speech time: 591 seconds

Need for adaptable legal frameworks to keep pace with rapid technological evolution
Explanation: Brittan Heller emphasizes the importance of creating legal frameworks that can adapt to rapidly evolving technologies. She argues that current laws, often designed for earlier tech generations, may not fit seamlessly with new technologies.
Evidence: Example of laws from the 80s, 90s, and early 2000s not fitting well with current technologies
Major Discussion Point: The Future of the Internet and Emerging Technologies
Agreed with: Robin Green, on the need for adaptable and interoperable legal frameworks

Importance of cross-border regulation and coordination for internet governance
Explanation: Brittan Heller stresses the need for coordination in cross-border regulation for effective internet governance. She highlights the importance of aligning data protection laws globally to avoid a fragmented regulatory landscape.
Major Discussion Point: The Future of the Internet and Emerging Technologies

Constellation of emerging technologies (AI, XR, blockchain, quantum computing) shaping the future internet
Explanation: Brittan Heller describes the future internet as being shaped by a constellation of emerging technologies. She emphasizes that focusing on one technology in isolation will miss the bigger picture of how these technologies interact and influence each other.
Evidence: Mentions AI, XR, blockchain, and quantum computing as examples of interconnected emerging technologies
Major Discussion Point: The Future of the Internet and Emerging Technologies

Potential for increased surveillance and erosion of privacy with new technologies
Explanation: Brittan Heller warns about the potential for increased surveillance and privacy erosion with new technologies. She points out that emerging technologies enable more granular tracking and profiling of individuals, often without their knowledge or consent.
Major Discussion Point: Challenges and Opportunities in Digital Transformation

Importance of accountability and transparency in automated systems
Explanation: Brittan Heller emphasizes the need for accountability and transparency in automated systems. She argues that as reliance on automated systems increases, it becomes more complex to determine responsibility when things go wrong.
Major Discussion Point: Challenges and Opportunities in Digital Transformation

Need for deliberate creation of cultural engagement spaces
Explanation: Brittan Heller stresses the importance of deliberately creating spaces for cultural engagement in the next iteration of the internet. She argues that without intentional effort, these spaces may not naturally emerge as they did in the first iteration of the internet.
Evidence: Examples of countries creating cultural properties in the metaverse, such as Barbados creating an embassy and Saudi Arabia using augmented reality for cultural tours
Major Discussion Point: Building the Future Internet

Importance of addressing hardware floor issues in new technologies
Explanation: Brittan Heller highlights the need to address hardware floor issues in new technologies to prevent unintended fragmentation. She warns that unsettled hardware standards can lead to loss of content and creative energy.
Evidence: Example of Magic Leap discontinuing support for their first edition XR headset, making years of content inaccessible
Major Discussion Point: Building the Future Internet

Evolution of consent mechanisms for new computing platforms
Explanation: Brittan Heller discusses the need to evolve consent mechanisms for new computing platforms. She argues that 3D computing environments offer new possibilities for obtaining meaningful informed consent from users.
Evidence: Study presented at ISMAR showing users responded positively to visualizations of eye tracking and data flows in 3D environments
Major Discussion Point: Building the Future Internet

Robin Green

Speech speed

152 words per minute

Speech length

1506 words

Speech time

591 seconds

Importance of interoperable governance frameworks to avoid fragmentation

Explanation

Robin Green emphasizes the need for interoperable governance frameworks to prevent fragmentation of the internet. She argues that frameworks should be compatible across jurisdictions to allow consistent service offerings.

Major Discussion Point

Data Governance and Digital Sovereignty

Agreed with

Brittan Heller

Agreed on

Need for adaptable and interoperable legal frameworks

Need for technology-neutral and adaptable legal frameworks

Explanation

Robin Green stresses the importance of creating legal frameworks that are technology-neutral and adaptable. She argues that this approach will ensure the frameworks remain relevant as technology evolves rapidly.

Major Discussion Point

Data Governance and Digital Sovereignty

Agreed with

Brittan Heller

Agreed on

Need for adaptable and interoperable legal frameworks

Balancing data sovereignty with an open, interoperable internet

Explanation

Robin Green discusses the challenge of balancing data sovereignty with maintaining an open and interoperable internet. She emphasizes the need to promote free flow of data across borders while ensuring digital security.

Major Discussion Point

Data Governance and Digital Sovereignty

Differed with

Hoda Al Khzaimi

Differed on

Approach to data sovereignty and internet governance

Need for cross-border data flows and digital security measures

Explanation

Robin Green highlights the importance of promoting cross-border data flows and implementing strong digital security measures. She specifically mentions the need for encryption of data in transit and at rest.

Major Discussion Point

Data Governance and Digital Sovereignty

Importance of regulatory frameworks supporting digital infrastructure

Explanation

Robin Green emphasizes the need for regulatory frameworks that support digital infrastructure development. She argues that such frameworks are essential for the growth of technologies like AI and the metaverse.

Evidence

Mentions data centers as an example of necessary infrastructure

Major Discussion Point

Challenges and Opportunities in Digital Transformation

Need for globally predictable, interoperable, and adaptable regulations

Explanation

Robin Green stresses the importance of creating globally predictable, interoperable, and adaptable regulations. She argues that such regulations are crucial for promoting digital connectivity and bridging the digital divide.

Major Discussion Point

Challenges and Opportunities in Digital Transformation

Responsible innovation principles focusing on user trust and safety

Explanation

Robin Green discusses Meta’s responsible innovation principles that prioritize user trust and safety. She emphasizes the importance of providing controls that matter and considering everyone in the development of new technologies.

Evidence

Example of the LED light on Ray-Ban Meta smart glasses that indicates when they are recording or livestreaming

Major Discussion Point

Trust and Ethics in Technology Development

Importance of privacy, security, and user controls in new technologies

Explanation

Robin Green highlights the importance of privacy, security, and user controls in new technologies, especially for youth. She emphasizes Meta’s approach of starting with built-in privacy by default and providing parental controls.

Evidence

Mentions privacy enhancing techniques like processing data on device and minimizing data collection

Major Discussion Point

Trust and Ethics in Technology Development

Agreed with

Apostolos Papadopoulos

Hoda Al Khzaimi

Agreed on

Importance of user privacy and consent in data processing

Multi-stakeholder approach to internet governance

Explanation

Robin Green advocates for a multi-stakeholder approach to internet governance. She emphasizes the importance of collaboration between private sector, government, civil society, academia, and users in shaping policy frameworks.

Evidence

Mentions the Internet Governance Forum (IGF) as an example of a platform for multi-stakeholder collaboration

Major Discussion Point

Building the Future Internet

Apostolos Papadopoulos

Speech speed

131 words per minute

Speech length

838 words

Speech time

381 seconds

Greece’s digital transformation journey and exponential growth in digital adoption

Explanation

Apostolos Papadopoulos describes Greece’s rapid digital transformation, which he calls a ‘digital tiger leap’. He highlights the exponential growth in digital transactions and adoption in the country since 2019.

Evidence

Increase from 8 million digital transactions in 2018 to 1.4 billion in 2023

Major Discussion Point

Challenges and Opportunities in Digital Transformation

Importance of user consent and transparency in data processing

Explanation

Apostolos Papadopoulos emphasizes the importance of user consent and transparency in data processing in Greece’s digital transformation strategy. He states that all data access and processing requires explicit citizen consent.

Evidence

Mentions that citizens must consent to data processing for any digital service

Major Discussion Point

Data Governance and Digital Sovereignty

Agreed with

Robin Green

Hoda Al Khzaimi

Agreed on

Importance of user privacy and consent in data processing

Hoda Al Khzaimi

Speech speed

154 words per minute

Speech length

1843 words

Speech time

714 seconds

Incorporating ethical considerations into product functionality

Explanation

Hoda Al Khzaimi emphasizes the importance of integrating ethical considerations into the core functionality of technology products. She argues that ethical components should be deployed across the entire technology stack.

Major Discussion Point

Trust and Ethics in Technology Development

Importance of transparency and accessibility in AI algorithms

Explanation

Hoda Al Khzaimi stresses the need for transparency and accessibility in AI algorithms. She highlights the positive trend in academic conferences encouraging researchers to make their algorithms accessible and transparent.

Evidence

Mentions the requirement in top-tier AI conferences for algorithm accessibility and transparency

Major Discussion Point

Trust and Ethics in Technology Development

Need for user-centric dashboards showing data usage

Explanation

Hoda Al Khzaimi advocates for user-centric dashboards that provide real-time information about data usage. She argues for a level of transparency that allows users to easily see who used their data and for what purpose.

Major Discussion Point

Trust and Ethics in Technology Development

Agreed with

Robin Green

Apostolos Papadopoulos

Agreed on

Importance of user privacy and consent in data processing

Differed with

Robin Green

Differed on

Approach to data sovereignty and internet governance

Agreements

Agreement Points

Need for adaptable and interoperable legal frameworks

Brittan Heller

Robin Green

Need for adaptable legal frameworks to keep pace with rapid technological evolution

Need for technology-neutral and adaptable legal frameworks

Importance of interoperable governance frameworks to avoid fragmentation

Both speakers emphasize the importance of creating legal frameworks that can adapt to rapidly evolving technologies and remain interoperable across jurisdictions to prevent fragmentation.

Importance of user privacy and consent in data processing

Robin Green

Apostolos Papadopoulos

Hoda Al Khzaimi

Importance of privacy, security, and user controls in new technologies

Importance of user consent and transparency in data processing

Need for user-centric dashboards showing data usage

These speakers agree on the critical importance of user privacy, consent, and transparency in data processing, emphasizing the need for clear user controls and information about data usage.

Similar Viewpoints

These speakers share the view that transparency and accountability are crucial in the development and deployment of AI and automated systems, emphasizing the need for responsible innovation that prioritizes user trust and safety.

Brittan Heller

Robin Green

Hoda Al Khzaimi

Importance of accountability and transparency in automated systems

Responsible innovation principles focusing on user trust and safety

Importance of transparency and accessibility in AI algorithms

Unexpected Consensus

Cultural engagement in the future internet

Brittan Heller

Judith Espinoza

Need for deliberate creation of cultural engagement spaces

Digital public infrastructure as a pathway for connection and economic opportunity

While other speakers did not explicitly address this theme, both Brittan Heller and Judith Espinoza touch on the importance of cultural engagement and connection in the future internet, suggesting an unexpected consensus on the need for deliberate efforts to create spaces for cultural and social interaction in digital environments.

Overall Assessment

Summary

The speakers generally agree on the need for adaptable and interoperable legal frameworks, the importance of user privacy and consent, and the necessity of transparency and accountability in AI and automated systems. There is also a shared recognition of the interconnected nature of emerging technologies and their impact on the future internet.

Consensus level

There is a high level of consensus among the speakers on core principles such as user-centric approaches, the need for adaptable regulations, and the importance of transparency. This consensus suggests a shared vision for the future internet that prioritizes user rights, innovation, and responsible development of technologies. However, there are some variations in emphasis and specific approaches, particularly in how different countries or organizations are implementing these principles.

Differences

Different Viewpoints

Approach to data sovereignty and internet governance

Robin Green

Hoda Al Khzaimi

Balancing data sovereignty with an open, interoperable internet

Need for user-centric dashboards showing data usage

Robin Green emphasizes the need for interoperable governance frameworks and cross-border data flows, while Hoda Al Khzaimi focuses more on user-centric control and transparency in data usage.

Unexpected Differences

Cultural engagement in the future internet

Brittan Heller

Robin Green

Need for deliberate creation of cultural engagement spaces

Responsible innovation principles focusing on user trust and safety

While both speakers discuss the future of the Internet, Brittan Heller unexpectedly emphasizes the need for the deliberate creation of cultural spaces, which is not directly addressed by other speakers who focus more on technical and regulatory aspects.

Overall Assessment

Summary

The main areas of disagreement revolve around the balance between data sovereignty and internet openness, the approach to user data control and transparency, and the emphasis on cultural aspects in the future internet.

Difference level

The level of disagreement among the speakers is relatively low, with more emphasis on complementary perspectives rather than direct contradictions. This suggests a generally aligned view on the future of the internet, with differences mainly in specific focus areas and implementation strategies.

Partial Agreements

Both speakers agree on the need for adaptable legal frameworks, but Brittan Heller emphasizes the importance of considering the constellation of emerging technologies, while Robin Green focuses more on technology-neutral approaches.

Brittan Heller

Robin Green

Need for adaptable legal frameworks to keep pace with rapid technological evolution

Need for technology-neutral and adaptable legal frameworks

Takeaways

Key Takeaways

The future internet will be shaped by a constellation of emerging technologies including AI, XR, blockchain, and quantum computing.

There is a need for adaptable and interoperable legal frameworks to keep pace with rapid technological evolution.

Data governance and digital sovereignty must be balanced with maintaining an open, interoperable internet.

Incorporating ethical considerations and user trust is crucial in developing new technologies.

Digital public infrastructure and digital transformation offer significant opportunities for economic growth and improved governance.

A multi-stakeholder, collaborative approach is essential for effective internet governance.

Resolutions and Action Items

Develop governance frameworks that are interoperable across jurisdictions

Implement responsible innovation principles focusing on user trust and safety

Create user-centric dashboards showing data usage and processing

Establish sandboxing approaches for testing new technologies in regulatory environments

Deliberately create spaces for cultural engagement in new computing platforms

Unresolved Issues

How to effectively balance data sovereignty with cross-border data flows

Addressing potential increased surveillance and privacy erosion in new technologies

Resolving hardware floor issues in emerging technologies like XR

How to evolve consent mechanisms for new computing platforms

Ensuring accessibility and inclusivity in the future internet across different regions and demographics

Suggested Compromises

Adopting technology-neutral legal frameworks to allow for future adaptability

Balancing innovation with user protection through responsible development principles

Using sandboxing approaches to test new technologies within existing regulatory structures

Implementing privacy-enhancing techniques like on-device data processing to balance functionality with data protection

Thought Provoking Comments

The conclusion is that emerging technologies are a constellation, and if your regulatory approach focuses on one aspect in lieu of the others, you’re going to miss the bigger picture.

speaker

Brittan Heller

reason

This comment introduces a holistic perspective on regulating emerging technologies, emphasizing the interconnected nature of different innovations.

impact

It shifted the discussion towards considering the broader ecosystem of technologies rather than isolated innovations, setting the stage for a more comprehensive analysis of regulatory challenges.

Overall, regulators need to think about these risks on a broad scale, focusing on fundamental rights while fostering innovation. And the nice thing about these new ecosystems is that what is good for users is also good for human rights.

speaker

Brittan Heller

reason

This insight aligns user interests with human rights, suggesting a win-win approach to regulation and innovation.

impact

It reframed the discussion around finding solutions that benefit both users and broader societal interests, encouraging a more balanced approach to technology governance.

The first thing we should have is a single source of truth — a governance structure that defines what a trust stack should include, with guidelines covering these different layers.

speaker

Hoda Al Khzaimi

reason

This comment proposes a concrete solution to the complex issue of building trust in digital systems across different jurisdictions.

impact

It sparked a more detailed discussion about the specific components needed in a trust framework, moving the conversation from theoretical concerns to practical implementation.

Let’s take that as a cautionary tale, not only around making sure that we are not just copy-pasting and making the mistakes of yesterday, but also making sure that as we’re creating legal frameworks, we’re building them with enough flexibility and adaptability, and in a way that in some sense is really technology-neutral.

speaker

Robin Green

reason

This insight highlights the need for flexible, future-proof regulatory approaches that can adapt to rapid technological change.

impact

It encouraged participants to think more critically about long-term implications of current regulatory efforts and how to create more adaptable frameworks.

Number one, I think if we are not deliberate about creating spaces for cultural engagement and education in the next iteration of the internet, we will not have them in the same way that we did in the first.

speaker

Brittan Heller

reason

This comment brings attention to the often-overlooked cultural and educational aspects of internet development.

impact

It broadened the scope of the discussion beyond technical and regulatory concerns to include cultural preservation and education in the digital age.

Overall Assessment

These key comments shaped the discussion by encouraging a more holistic, user-centric, and culturally aware approach to internet governance and emerging technologies. They moved the conversation from siloed thinking about individual technologies or regulations to considering the broader ecosystem and long-term implications. The discussion evolved to emphasize the importance of adaptable frameworks, trust-building mechanisms, and the preservation of cultural spaces in the digital realm. This comprehensive perspective highlighted the complex interplay between technology, regulation, user rights, and societal values in shaping the future of the internet.

Follow-up Questions

How can we ensure that governance frameworks for new technologies are interoperable across jurisdictions while still respecting local needs?

speaker

Robin Green

explanation

This is important to avoid creating incompatible frameworks that prevent offering consistent services across different jurisdictions.

How can we make governance frameworks for digital technologies more adaptable to keep pace with rapid technological change?

speaker

Robin Green

explanation

This is crucial to avoid the problem of outdated laws not fitting new technologies, as happened with laws from the 80s-00s being applied to current tech.

How can we create spaces for cultural engagement and education in the next iteration of the internet?

speaker

Brittan Heller

explanation

This is important to ensure cultural aspects are not overlooked in the development of new internet technologies, which are largely driven by private investment.

How can we address the issue of the unsettled hardware floor in new technologies like XR?

speaker

Brittan Heller

explanation

This is crucial to prevent the loss of content and creative work due to rapid obsolescence of hardware platforms.

How can concepts like consent be evolved for new computing platforms?

speaker

Brittan Heller

explanation

This is important to ensure users can provide meaningful informed consent in new technological environments like 3D computing.

How can African countries develop supportive legal frameworks for digital governance that enable innovation while creating value from private sector engagement?

speaker

Audience member (Ibrahim)

explanation

This is important for countries that are newer to digital governance to effectively manage rapid technological development and data issues.

What metrics can be used to prove that a technology or system is trustworthy?

speaker

Judith Espinoza

explanation

This is important for building and measuring trust with users and society at large in new technologies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.